Science.gov

Sample records for large scale earthquakes

  1. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well-determined source parameters shows that these earthquakes obey a scaling law similar to large interplate earthquakes, in which M0 varies as L^2, or u = alpha L, where L is rupture length and u is slip. In contrast to interplate earthquakes, for which alpha approximately equals 1 x 10^-5, for the intraplate events alpha approximately equals 6 x 10^-5, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. It implies that intraplate faults have a higher frictional strength than plate boundaries, and hence that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.
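
    As a quick check on the quoted scaling, the two alpha values can be inserted into u = alpha L; the ratio of implied stress drops is independent of rupture length (a minimal sketch, with an arbitrary 100 km rupture for illustration, not a value from the abstract):

        # Sketch: stress-drop ratio implied by u = alpha * L with the alpha
        # values quoted above (the 100 km rupture length is hypothetical).
        ALPHA_INTERPLATE = 1e-5
        ALPHA_INTRAPLATE = 6e-5

        L = 100e3                        # rupture length, m
        u_inter = ALPHA_INTERPLATE * L   # ~1 m of slip
        u_intra = ALPHA_INTRAPLATE * L   # ~6 m of slip

        # Static stress drop scales with u / L, so the stress-drop ratio
        # equals the ratio of the alphas regardless of L.
        print(u_intra / u_inter)         # -> 6.0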

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large-scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by the USGS, together with the rupture model by Song et al., is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Laboratory. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TB of disk space, and the 3-D post-processing was done in parallel.
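
    The quoted grid size can be checked with back-of-the-envelope arithmetic; the sketch below uses the stated domain and spacing and lands at the same order of magnitude as the quoted 2.2 billion points (the exact count depends on domain padding and grid layout not given in the abstract):

        # Rough point count for a 550 km x 250 km x 40 km domain at 125 m
        # spacing; the actual WPP grid construction may differ.
        dx = 125.0
        nx = 550e3 / dx    # 4400
        ny = 250e3 / dx    # 2000
        nz = 40e3 / dx     # 320
        print(f"{nx * ny * nz:.2e} grid points")   # ~2.8e9, same order as 2.2e9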

  3. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    Nucleation in these experiments is consistent with the observations and theory of Dieterich and Kilgore (1996). Precursory strains can typically be detected after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically based models of accelerating slip. Near failure, the time to failure t is approximately inversely proportional to the precursory slip rate V. Based on a least-squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
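
    The stated predictor, t approximately proportional to 1/V, can be fit by least squares in log-log space, as the abstract describes; a minimal sketch with hypothetical (V, t) pairs, not the experimental data:

        import math

        # hypothetical (slip rate m/s, time to failure s) observations
        obs = [(1e-9, 900.0), (2e-9, 480.0), (5e-9, 170.0), (1e-8, 95.0)]
        xs = [math.log10(v) for v, _ in obs]
        ys = [math.log10(t) for _, t in obs]
        n = len(obs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        intercept = my - slope * mx      # slope near -1 for t ~ 1/V

        def predicted_failure_time(v):
            """Failure-time estimate (s) from the current slip rate (m/s)."""
            return 10 ** (intercept + slope * math.log10(v))

        # a 0.14 standard deviation in log10(t), as quoted above, is a factor
        # of 10**0.14 ~ 1.4, i.e. roughly 140 days of uncertainty one year out
        print(round(slope, 2), predicted_failure_time(3e-9))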

  4. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever-larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
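
    Strong-scaling results like the 40K-processor figure above are conventionally reported as speedup and parallel efficiency relative to a baseline run; a minimal bookkeeping sketch (the timings and core counts below are invented placeholders, not measurements from the chapter):

        def strong_scaling(t_base, p_base, t_p, p):
            """Speedup and parallel efficiency of a run on p cores versus a
            baseline run of the same problem on p_base cores."""
            speedup = t_base / t_p
            efficiency = speedup / (p / p_base)
            return speedup, efficiency

        # e.g. 1000 s on 4096 cores vs 110 s on 40960 cores (hypothetical)
        print(strong_scaling(1000.0, 4096, 110.0, 40960))
        # -> (9.09..., 0.909...): ~91% efficiency at 10x the cores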

  5. Earthquake triggering and large-scale geologic storage of carbon dioxide

    PubMed Central

    Zoback, Mark D.; Gorelick, Steven M.

    2012-01-01

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185–5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions. PMID:22711814

  6. Earthquake triggering and large-scale geologic storage of carbon dioxide.

    PubMed

    Zoback, Mark D; Gorelick, Steven M

    2012-06-26

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO(2) emissions associated with coal-based electrical power generation and other industrial sources of CO(2) [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185-5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO(2) into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO(2) repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions.

  7. Types of hydrogeological response to large-scale explosions and earthquakes

    NASA Astrophysics Data System (ADS)

    Gorbunova, Ella; Vinogradov, Evgeny; Besedina, Alina; Martynov, Vasilii

    2017-04-01

    Hydrogeological responses to anthropogenic and natural impacts are indicators of massif properties and mode of deformation. We studied uneven-aged aquifers that had been unsealed at the Semipalatinsk test site (Kazakhstan) and at the geophysical observatory "Mikhnevo" in the Moscow region (Russia). Data were collected during long-term groundwater monitoring carried out in 1983-1989, when large-scale underground nuclear explosions were conducted. Precise observations of the groundwater response to the passage of waves from distant earthquakes at GPO "Mikhnevo" have been conducted since 2008. One goal of the study was to identify the main types of dynamic and irreversible spatial-temporal groundwater responses to large-scale explosions and to compare them with the earthquake responses reported in the literature. Because the hydrogeological processes that occur at an earthquake source are poorly known, it is especially important to analyze experimental data on groundwater level variations recorded close to the epicenter in the first minutes to hours after explosions. We found that the hydrogeodynamic reaction strongly depends on the initial geological and hydrogeological conditions as well as on the parameters of the seismic impact. In the near field, post-dynamic variations can lead to the formation of either an excess pressure dome or a depression cone, resulting from aquifer drainage due to rock massif fracturing. In the far field, the explosion effect is comparable to that of a distant earthquake and produces dynamic water level oscillations. Precise monitoring at the "Mikhnevo" area was conducted under platform conditions far from active faults, so we consider it a seismically calm area far from earthquake sources. Both dynamic and irreversible water level changes appear to follow a power-law dependence on the vertical peak ground velocity of the passing waves. Further research will be aimed at the transition from the near to the far field to identify a criterion that determines either irreversible

  8. Segmentation and Large-Scale Nucleation of the 2014 Pisagua Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Lengliné, O.; Luo, Y.; Durand, V.; Ruiz, J. A.

    2014-12-01

    The 2014 Mw 8.1 Pisagua, Chile earthquake featured a remarkable foreshock sequence. Foreshock migration at a speed similar to that of the 2011 Tohoku foreshocks and slow slip events has been interpreted as driven by aseismic slip, rather than by a cascade (possibly mediated by aseismic afterslip). However, current analyses of geodetic and tiltmeter data illustrate the challenge of discriminating between pre-slip and cascade models through inland deformation observations. Here we attempt to distinguish these models via seismic observations only, identify evidence of a large nucleation process, and discuss its implications for fault friction and earthquake predictability. We analyze patterns of foreshock activity, including repeating earthquake sequences identified via waveform similarity at the nearest IPOC stations. We find that repeaters accelerate close in time and space to large foreshocks and that their recurrence times decay following Omori's law. This favors the cascade model, in which repeaters are aftershocks of foreshocks. Limitations of current catalogs impede more quantitative tests. Direct evidence of large-scale nucleation is a separate foreshock cluster north of the mainshock hypocenter whose activity is independent of the dominant southern swarm. Over almost one year, swarms occurred near the southern and northern tips of the mainshock rupture and near the southern end of the largest aftershock. These clusters are located near boundaries of relatively high seismic coupling inferred from geodesy. A picture of a segmented megathrust emerges in which regions of aseismic deformation mark the boundaries between fault segments and produce earthquake swarms. Earthquake cycle models indicate that aseismic slip slowly penetrates from segment boundaries into locked regions until a critical distance is reached and unstable rupture (seismic or aseismic) follows. This view is further supported by temporal changes in foreshock swarm migration speed. The early swarm migration distance provides
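
    The Omori-law decay invoked for the repeater recurrence times has the standard form n(t) = K / (c + t)^p; a schematic sketch with generic parameters (not values fit to the Pisagua sequence):

        def omori_rate(t_days, K=50.0, c=0.05, p=1.0):
            """Aftershock rate (events/day) at time t after a trigger."""
            return K / (c + t_days) ** p

        for t in (0.1, 1.0, 10.0):
            print(t, round(omori_rate(t), 1))
        # for p ~ 1 the rate falls roughly tenfold per decade in time, so
        # repeater recurrence intervals lengthen accordingly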

  9. Large scale dynamic rupture scenario of the 2004 Sumatra-Andaman megathrust earthquake

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Madden, Elizabeth H.; Wollherr, Stephanie; Gabriel, Alice A.

    2016-04-01

    The Great Sumatra-Andaman earthquake of 26 December 2004 is one of the strongest and most devastating earthquakes in recent history. Most of the damage and the ~230,000 fatalities were caused by the tsunami generated by the Mw 9.1-9.3 event. Various finite-source models of the earthquake have been proposed, but poor near-field observational coverage has led to distinct differences in source characterization. Even the fault dip angle and depth extent are subject to debate. We present a physically realistic dynamic rupture scenario of the earthquake using state-of-the-art numerical methods and seismotectonic data. Due to the lack of near-field observations, our setup is constrained by the overall characteristics of the rupture, including the magnitude, propagation speed, and extent along strike. In addition, we incorporate the detailed geometry of the subducting fault using Slab1.0 to the south and aftershock locations to the north, combined with high-resolution topography and bathymetry data. The possibility of inhomogeneous background stress, resulting from the curved shape of the slab along strike and the large fault dimensions, is discussed. The possible activation of thrust faults splaying off the megathrust in the vicinity of the hypocenter is also investigated. Dynamic simulation of this 1300 to 1500 km rupture is a computational and geophysical challenge. In addition to capturing the large-scale rupture, the simulation must resolve the process zone at the rupture tip, whose characteristic length is comparable to that of smaller earthquakes and which shrinks with propagation distance. Thus, the fault must be finely discretised. Moreover, previously published inversions agree on a rupture duration of ~8 to 10 minutes, suggesting an overall slow rupture speed. Hence, both long temporal scales and large spatial dimensions must be captured. We use SeisSol, a software package based on an ADER-DG scheme solving the spontaneous dynamic earthquake rupture problem with high

  10. DYNAMIC BEHAVIOR OF CONCRETE GRAVITY DAM ON JOINTED ROCK FOUNDATION DURING LARGE-SCALE EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Kimata, Hiroyuki; Fujita, Yutaka; Horii, Hideyuki; Yazdani, Mahmoud

    Dynamic cracking analysis of a concrete gravity dam during a large-scale earthquake has been carried out, considering the progressive failure of the jointed rock foundation. First, in order to take into account the progressive failure of the rock foundation, a constitutive law for jointed rock is assumed and its validity is evaluated by a simulation analysis based on a past experimental model. Finally, dynamic cracking analysis of a 100-m-high dam model is performed, using the previously proposed approach with tangent stiffness-proportional damping to express the propagation behavior of cracks, together with the constitutive law for jointed rock. The crack propagation behavior of the dam body and the progressive failure of the jointed rock foundation are investigated.

  11. Large-scale Slow Slip Event Preceding the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Koketsu, K.; Yokota, Y.

    2013-12-01

    We carried out inversions of annual GEONET data (F3 displacements) observed by the Geospatial Information Authority of Japan from the opening of GEONET in 1996 until the 2011 Tohoku earthquake. We then obtained annual backslip (slip-deficit) rate distributions, finding that the backslip had weakened and migrated at some time in 2002 or 2003 (Koketsu, Yokota, N. Kato, and T. Kato, 2012 AGU Fall Meeting). In this study, we go back to the original GEONET data and examine whether the weakening and migration of backslip included in them were stationary or not. If they are confirmed to be stationary, we can relate them to a large-scale slow slip event. Since similar phenomena occurred from 2001 to 2004 in the Tokai district of Japan, we analyze the original GEONET data using the method that Ozawa et al. (2002) applied to the Tokai phenomena. However, as the seismicity in the Tohoku and Kanto districts is higher than in the Tokai district, we corrected the original data in advance by removing not only annual sinusoidal variations but also the effects of M 6 to 8 earthquakes. We first choose four GEONET stations where large weakening is observed and derive average trends from the corrected data for 1996 to 2001, after confirming stationary displacement rates during that period. When we subtract the average trends from the corrected data, as shown in Fig. 1, we find flat lines up to some time in 2002 or 2003 and then eastward displacements in the direction opposite to the backslip. We therefore perform regression analyses to locate an inflection point between the flat lines and the eastward displacements. The result places the inflection point in May 2002. If the rates of the eastward displacements after May 2002 are stationary, a slow slip event can be considered to have occurred, as in the Tokai district. Their plots indeed show stationary rates, except for an increase at the time of the 2005 Miyagi-oki earthquake. We next perform the same analyses of the
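
    The inflection-point search described above amounts to fitting a flat segment followed by a linear trend and choosing the breakpoint that minimizes the total squared residual; a minimal sketch on a synthetic series (not the GEONET data):

        def breakpoint_fit(times, disp):
            """Return (breakpoint time, post-break rate) minimizing residual."""
            best = None
            for k in range(2, len(times) - 2):
                base = sum(disp[:k]) / k               # flat pre-event level
                xs, ys = times[k:], disp[k:]
                n = len(xs)
                mx, my = sum(xs) / n, sum(ys) / n
                s = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                    sum((x - mx) ** 2 for x in xs)     # post-break rate
                r = sum((y - base) ** 2 for y in disp[:k]) + \
                    sum((y - (my + s * (x - mx))) ** 2 for x, y in zip(xs, ys))
                if best is None or r < best[0]:
                    best = (r, times[k], s)
            return best[1], best[2]

        t = list(range(12))                        # e.g. years since 1996
        d = [0.0] * 7 + [0.5, 1.0, 1.5, 2.0, 2.5]  # flat, then eastward drift
        print(breakpoint_fit(t, d))                # -> (7, 0.5)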

  12. Parallel octree-based multiresolution mesh method for large-scale earthquake ground motion simulation

    NASA Astrophysics Data System (ADS)

    Kim, Eui Joong

    Large-scale ground motion simulation requires supercomputing systems in order to obtain reliable and useful results within a reasonable elapsed time. In this study, we develop a framework for terascale ground motion simulations in highly heterogeneous basins. As part of the development, we present a parallel octree-based multiresolution finite element methodology for the elastodynamic wave propagation problem. The octree-based multiresolution finite element method reduces memory use significantly and improves overall computational performance. The framework comprises three parts: (1) an octree-based mesh generator, Euclid, developed by Tu and O'Hallaron; (2) a parallel mesh partitioner, ParMETIS, developed by Karypis et al.; and (3) a parallel octree-based multiresolution finite element solver, QUAKE, developed in this study. Realistic earthquake parameters, soil material properties, and sedimentary basin dimensions produce extremely large meshes. The out-of-core version of the octree-based mesh generator, Euclid, overcomes the resulting severe memory limitations. By using a parallel, distributed-memory graph partitioning algorithm, ParMETIS partitions large meshes, overcoming the memory and cost problem. Despite the capability of the octree-based multiresolution mesh method (OBM3), large problem sizes necessitate parallelism to handle the large memory and work requirements. The parallel OBM3 elastic wave propagation code, QUAKE, has been developed to address these issues. The numerical methodology and the framework have been used to simulate the seismic response of both idealized systems and of the Greater Los Angeles basin to simple pulses and to a mainshock of the 1994 Northridge Earthquake, for frequencies of up to 1 Hz and a domain size of 80 km x 80 km x 30 km. In the idealized models, QUAKE shows good agreement with the analytical Green's function solutions. In the realistic models for the Northridge earthquake mainshock, QUAKE qualitatively agrees, with at most
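
    The core idea of octree multiresolution meshing is to refine elements only where the local seismic wavelength is short (slow, soft material); a toy sketch, not the Euclid mesher, with an invented two-layer velocity model:

        FREQ_HZ = 1.0        # target frequency
        PTS_PER_WL = 10.0    # points per wavelength

        def vs(z):
            """Hypothetical velocity model: slow sediments above 500 m."""
            return 1000.0 if z < 500.0 else 5000.0   # shear velocity, m/s

        def refine(x, y, z, size, cells, depth=0, max_depth=6):
            # element size needed to resolve the local wavelength, with the
            # velocity sampled at the cell corner for simplicity
            target = vs(z) / FREQ_HZ / PTS_PER_WL
            if size <= target or depth == max_depth:
                cells.append((x, y, z, size))
                return
            h = size / 2.0
            for dx in (0.0, h):
                for dy in (0.0, h):
                    for dz in (0.0, h):
                        refine(x + dx, y + dy, z + dz, h, cells,
                               depth + 1, max_depth)

        cells = []
        refine(0.0, 0.0, 0.0, 4000.0, cells)       # 4 km cube
        fine = sum(1 for c in cells if c[3] < 100.0)
        print(len(cells), "elements,", fine, "fine cells in the sediments")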

  13. Earthquake Scaling, Simulation and Forecasting

    NASA Astrophysics Data System (ADS)

    Sachs, Michael Karl

    Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand on February 22 and the magnitude 9.0 Tohoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear: very small differences in the system now produce very large differences in the future, making forecasting difficult. In spite of this, there are patterns in earthquake data. These patterns often take the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed, and in many cases they show consistent behavior over a wide range of scales. This consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of earthquake catalogs, which, especially in the case of large events, are fairly small and limited to a few hundred years of events. In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems, with a focus on understanding how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California, designed to extend the observed catalog of earthquakes in California; this simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional
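
    The frequency-magnitude relations referred to here are typically of Gutenberg-Richter form, log10 N(>=M) = a - b*M; a schematic sketch with generic a and b (b ~ 1 is a commonly cited value, not a fit to any catalog discussed above):

        def gr_count(M, a=5.0, b=1.0):
            """Expected number of events of magnitude M or larger."""
            return 10 ** (a - b * M)

        for M in (4, 5, 6, 7):
            print(M, gr_count(M))
        # each unit of magnitude cuts the expected count tenfold, which is
        # why catalogs contain few large events to constrain forecasts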

  14. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.; Ruppert, S.

    2002-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per unit seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds that the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately determining, in a consistent manner, the seismic energy radiated by earthquakes over a wide range of event sizes. We have just started a project to reexamine this issue by analyzing aftershock sequences in the Western U.S. and Turkey using two different techniques. First, we examine the observed regional S-wave spectra by fitting them with a parametric model (Walter and Taylor, 2002), with and without variable stress-drop scaling. Because the aftershock sequences have common stations and paths, we can examine the S-wave spectra of events by size to determine what type of apparent stress scaling, if any, is most consistent with the data. Second, we use regional coda envelope techniques (e.g. Mayeda and Walter, 1996; Mayeda et al., 2002) on the same events to directly measure energy and moment. The coda technique corrects for path and site effects using an empirical Green's function technique and independent calibration with surface-wave-derived moments. Our hope is that by carefully analyzing a very large number of events in a consistent manner using two different techniques we can start to resolve this apparent stress scaling issue. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.
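
    Apparent stress in this debate is defined as rigidity times radiated energy per unit moment; a minimal sketch with a generic crustal rigidity (the event numbers below are illustrative, not measurements from the study):

        MU = 3.0e10    # shear modulus, Pa (generic crustal value)

        def apparent_stress(radiated_energy_j, seismic_moment_nm):
            """sigma_a = mu * E_s / M0, in Pa."""
            return MU * radiated_energy_j / seismic_moment_nm

        # constant apparent stress means E_s/M0 is independent of size:
        print(apparent_stress(6.3e11, 1.0e16))   # ~1.9e6 Pa for a small event
        print(apparent_stress(6.3e14, 1.0e19))   # same ratio -> same ~1.9 MPa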

  15. Reconsidering earthquake scaling

    USGS Publications Warehouse

    Gomberg, Joan S.; Wech, Aaron G.; Creager, Kenneth; Obara, K.; Agnew, Duncan

    2016-01-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquake) and slow (mostly aseismic) slip populations, and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and that both slip populations have a change in scaling, where slip-area growth changes from 2-D, when the event is too small to sense the boundaries, to 1-D, when it is large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population and are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists, and that M0-T observations alone may not imply a fundamental difference between fast and slow slip.
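
    The change in scaling described above can be written schematically: moment grows like T^3 while the slip area expands in two dimensions, then linearly in T once the rupture is bounded. The exponent and crossover below are generic illustrations, not the paper's fitted values:

        def moment_from_duration(T, T_cross=10.0, k=1.0e15):
            """Piecewise-schematic M0(T) in N*m for duration T in s."""
            if T <= T_cross:
                return k * T ** 3            # unbounded, 2-D area growth
            m_cross = k * T_cross ** 3
            return m_cross * (T / T_cross)   # bounded, 1-D growth

        for T in (1.0, 10.0, 100.0):
            print(f"T = {T:6.1f} s -> M0 = {moment_from_duration(T):.2e} N*m")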

  16. Large-scale mapping of landslides in the epicentral area of the Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989, triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services, to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground that is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; and (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  17. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from systematic analysis of past earthquake impacts and associated response levels, are quite effective in communicating predicted impact and the response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries where local building practices typically lend themselves to high collapse and casualty rates, and these impacts drive prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
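
    The dual alert criteria reduce to a simple threshold lookup; a sketch of the EIS logic using the fatality and loss thresholds quoted above (an illustration, not PAGER code):

        def eis_alert(fatalities=0, losses_usd=0.0):
            """Return the alert color from estimated fatalities and losses."""
            levels = [(1000, 1e9, "red"),
                      (100, 1e8, "orange"),
                      (1, 1e6, "yellow")]
            for f_min, loss_min, color in levels:
                if fatalities >= f_min or losses_usd >= loss_min:
                    return color
            return "green"

        print(eis_alert(fatalities=150))    # -> orange (casualty criterion)
        print(eis_alert(losses_usd=2.5e9))  # -> red (economic criterion)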

  18. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  19. Simulating Large-Scale Earthquake Dynamic Rupture Scenarios On Natural Fault Zones Using the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2014-05-01

    In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method on various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults, and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models, and fault geometries. Our starting point is a large-scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well-documented and extensively studied event, we intend to understand the ground motion, including the relevant high-frequency content, generated by complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.

  1. Scaling behavior of the earthquake intertime distribution: influence of large shocks and time scales in the Omori law.

    PubMed

    Lippiello, Eugenio; Corral, Alvaro; Bottiglieri, Milena; Godano, Cataldo; de Arcangelis, Lucilla

    2012-12-01

    We present a study of the earthquake intertime distribution D(Δt) for a California catalog in temporal periods of short duration T. We compare experimental results with theoretical predictions and approximate analytical solutions. For the majority of intervals, rescaling intertimes by the average rate leads to a collapse of the distributions D(Δt) onto a universal curve whose functional form is well fitted by a Gamma distribution. The remaining intervals, exhibiting a more complex D(Δt), are all characterized by the presence of large shocks. These results can be understood in terms of the relevance of the ratio between the characteristic time c in the Omori law and T: intervals with Gamma-like behavior are indeed characterized by a vanishing c/T. The above features are also investigated by means of numerical simulations of the Epidemic Type Aftershock Sequence (ETAS) model. This study shows that the collapse of D(Δt) is also observed in numerical catalogs; however, the fit with a Gamma distribution is possible only by assuming that c depends on the main-shock magnitude m. This result confirms that the dependence of c on m, previously observed for m>6 main shocks, extends also to small main shocks with m>2.
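
    The rescaling-and-collapse step can be sketched as follows: divide the intertimes in a period by their mean and estimate Gamma parameters by the method of moments (synthetic data below; the paper analyzes a California catalog):

        import random

        random.seed(1)
        # synthetic intertimes drawn from a Gamma law with shape 0.7
        intertimes = [random.gammavariate(0.7, 1.4) for _ in range(5000)]
        mean = sum(intertimes) / len(intertimes)
        rescaled = [dt / mean for dt in intertimes]   # unit-mean collapse

        m1 = sum(rescaled) / len(rescaled)            # 1 by construction
        var = sum((x - m1) ** 2 for x in rescaled) / len(rescaled)
        shape = m1 ** 2 / var    # method-of-moments Gamma shape
        scale = var / m1         # method-of-moments Gamma scale
        print(f"Gamma shape ~ {shape:.2f}, scale ~ {scale:.2f}")
        # recovers shape ~0.7; after rescaling to unit mean, scale = 1/shape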

  2. Identification of elastic basin properties by large-scale inverse earthquake wave propagation

    NASA Astrophysics Data System (ADS)

    Epanomeritakis, Ioannis K.

    The importance of the study of earthquake response, from a social and economic standpoint, is a major motivation for the current study. The severe uncertainties involved in the analysis of elastic wave propagation in the interior of the earth increase the difficulty of estimating earthquake impact in seismically active areas. The need to recover information about the geological and mechanical properties of underlying soils motivates the attempt to apply inverse analysis to earthquake wave propagation problems. Inversion for the elastic properties of soils is formulated as a constrained optimization problem. A series of trial mechanical soil models is tested against a limited-size set of dynamic response measurements, given partial knowledge of the target model and complete information on source characteristics, both temporal and geometric. This inverse analysis gives rise to a powerful method for recovering a material model that produces the given response. The goal of the current study is the development of a robust and efficient computational inversion methodology for material model identification. Solution methods for gradient-based local optimization are combined with robustification and globalization techniques to build an effective inversion framework. A Newton-based approach deals with the complications of the highly nonlinear systems generated in the inversion solution process. Moreover, a key addition to the inversion methodology is the application of regularization techniques for obtaining admissible soil models. Most importantly, the development and use of a multiscale strategy offers globalizing and robustifying advantages to the inversion process. In this study, a collection of inversion results for different three-dimensional Lamé moduli models is presented. The results demonstrate the effectiveness of the proposed inversion methodology and provide evidence for its capabilities. They also show the path for further study of elastic property

  3. From M8 to CyberShake: Using Large-Scale Numerical Simulations to Forecast Earthquake Ground Motions (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Cui, Y.; Olsen, K. B.; Graves, R. W.; Maechling, P. J.; Day, S. M.; Callaghan, S.; Milner, K.; Scec/Cme Collaboration

    2010-12-01

    Large earthquakes cannot be reliably and skillfully predicted in terms of their location, time, and magnitude. However, numerical simulations of seismic radiation from complex fault ruptures and wave propagation through 3D crustal structures have now advanced to the point where they can usefully predict the strong ground motions from anticipated earthquake sources. We describe a set of four computational pathways employed by the Southern California Earthquake Center (SCEC) to execute and validate these simulations. The methods are illustrated using the largest earthquakes anticipated on the southern San Andreas fault system. A dramatic example is the recent M8 dynamic-rupture simulation by Y. Cui, K. Olsen et al. (2010) of a magnitude-8 “wall-to-wall” earthquake on the southern San Andreas fault, calculated to seismic frequencies of 2 Hz on a computational grid of 436 billion elements. M8 is the most ambitious earthquake simulation completed to date; the run took 24 hours on 223K cores of the NCCS Jaguar supercomputer, sustaining 220 teraflops. High-performance simulation capabilities have been implemented by SCEC in the CyberShake hazard model for the Los Angeles region. CyberShake computes over 400,000 earthquake simulations, managed through a scientific workflow system, to represent the probabilistic seismic hazard at a particular site up to seismic frequencies of 0.3 Hz. CyberShake shows substantial differences from conventional probabilistic seismic hazard analysis based on empirical ground-motion prediction. At the probability levels appropriate for long-term forecasting, these differences are most significant (and worrisome) in sedimentary basins, where the population is densest and the regional seismic risk is concentrated. The higher basin amplification obtained by CyberShake is due to the strong coupling between rupture directivity and basin-mode excitation. The simulations show that this coupling is enhanced by the tectonic branching structure of the San

  4. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the stress-drop values measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay of up to several years. PMID:25156190
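
    The order-of-magnitude logic above can be made explicit: in a poroelastic fault zone, a pore-pressure increase dP reduces the frictional strength by roughly mu_f * dP. A sketch with a generic friction coefficient (the stress scales are those quoted in the abstract):

        MU_F = 0.6   # generic static friction coefficient for faults

        def strength_drop_mpa(overpressure_mpa):
            """Reduction in frictional strength from a pore-pressure rise."""
            return MU_F * overpressure_mpa

        # an overpressure below 0.1 MPa shaves ~0.06 MPa off fault strength;
        # on a fault loaded to within that margin of its 1-10 MPa deviatoric
        # stress, this can be enough to nucleate rupture
        print(round(strength_drop_mpa(0.1), 3))   # -> 0.06 (MPa)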

  5. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-26

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the stress-drop values measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years.

  6. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  7. Large earthquakes and creeping faults

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.

    2017-03-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  8. Triggering of volcanic eruptions by large earthquakes

    NASA Astrophysics Data System (ADS)

    Nishimura, Takeshi

    2017-08-01

    When a large earthquake occurs near an active volcano, there is often concern that volcanic eruptions may be triggered by the earthquake. In this study, recently accumulated, reliable data were analyzed to quantitatively evaluate the probability of new eruptions of volcanoes located near the epicenters of large earthquakes. For volcanoes located within 200 km of large earthquakes of magnitude 7.5 or greater, the eruption occurrence probability increases by approximately 50% for the 5 years after the earthquake origin time. However, no significant increase in the occurrence probability of new eruptions was observed at more distant volcanoes or for smaller earthquakes. The present results strongly suggest that new eruptions are likely triggered by static stress changes and/or strong ground motions caused by nearby large earthquakes. This differs from previously presented evidence that volcanic earthquakes at distant volcanoes are remotely triggered by surface waves generated by large earthquakes.

  9. Quantifying Near-Field Deformation of Large Magnitude Strike-Slip Earthquakes using Optical Image Correlation: Implications for Empirical Earthquake Scaling Laws and Safeguarding the Built Environment

    NASA Astrophysics Data System (ADS)

    Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.

    2016-12-01

    Measurements of co-seismic deformation from surface-rupturing events are an important source of information for fault mechanics and seismic hazard analysis. However, direct measurement of the near-field surface deformation pattern has proven difficult. Traditional field surveys typically cannot observe the diffuse and subtle inelastic strain accommodated over wide fault zones, while InSAR data typically decorrelate close to the surface rupture due to high phase gradients, leaving 1-2 km-wide gaps in the data. Using sub-pixel optical image correlation of pre- and post-event air photos, we quantify the near-field surface deformation pattern of the 1992 Mw = 7.3 Landers and 1999 Mw = 7.1 Hector Mine earthquakes. This technique allows spatially complete measurement of the co-seismic surface slip along the entire rupture, as well as the magnitude and width of distributed deformation. For both events we find our displacement measurements are systematically larger than those from field surveys, indicating the presence of significant distributed, `off-fault' deformation. Here we show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of their displacement away from the primary rupture as off-fault deformation, over mean shear widths of 154 m and 121 m, respectively, with significant spatial variability. We also find positive, yet weak, correlations between the magnitude of distributed deformation and both the near-surface lithology and the degree of macroscopic fault zone complexity. We envision that additional measurements of future ruptures will better constrain which physical properties of the surface rupture control the distribution of strain, as needed to reliably estimate the amount of expected distributed shear along a given fault segment. Our results have basic implications for the accuracy of empirical scaling relations of earthquake surface ruptures derived from field measurements, understanding apparent discrepancies

  10. Large-scale organization of metamorphic carbon dioxide discharge in the Nepal Himalayas: an evidence for earthquake induced permeability changes?

    NASA Astrophysics Data System (ADS)

    Girault, F.; Perrier, F.; France-Lanord, C.; Bhattarai, M.; Koirala, B. P.; Bollinger, L.; Sapkota, S. N.

    2011-12-01

    Gaseous carbon dioxide discharge has been investigated at 11 locations from Mid-Western to Central Nepal along the belt of microseismic activity. Along this orogenic arc, gas fluxes reach particularly large values in the Upper Trisuli Valley (Central Nepal), with CO2 and radon fluxes larger than 100 kg m⁻² d⁻¹ and 12 Bq m⁻² s⁻¹, respectively. Integrated CO2 discharge varies from values of the order of 1 ton yr⁻¹ or smaller at two locations in Dolpo (Mid-Western Nepal) and three locations in Western Nepal between Dhaulagiri and the eastern Annapurna range, to values ranging from 500 to 2500 ton yr⁻¹ at three locations in the Upper Trisuli Valley. No gas discharge could be detected further east until Kodari in the Bhote Kosi valley. CO2 discharge thus appears coherently organized across sites separated by more than 10 km in a given region. High discharge appears restricted to the segment between the Marsyangdi valley, east of Annapurna, and the Upper Trisuli valley, and is characterized by δ13C anomalies of CO2 larger than -4‰, systematically associated with the presence of hot springs having δ13C(DIC) anomalies larger than +5‰. This segment of the Himalayan arc is also characterized by moderate microseismic activity and coincides with a segment of the arc, bordered to the east by the edge of the rupture of the 1505 earthquake and to the west by the edge of the rupture of the 1833 earthquake, where no large earthquake has been observed at least since 1800. CO2 discharge is also associated with radon-222 discharge, with a relationship that depends on the site. High radon discharge is observed in the presence of weak CO2 discharge, when the CO2 originates from the degassing of a radon-rich hot spring, or in the presence of near-surface anomalous radium-226 content. High radon flux associated with high CO2 flux can result from degassing of a thermal spring with a high flow rate, but can also appear in the absence of

  11. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
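
    Test (1) above follows from Gutenberg-Richter sampling statistics: the expected largest of N events is roughly the completeness magnitude plus log10(N)/b (a standard extreme-value sketch, not the authors' exact estimator; the b and Mc values below are generic):

        import math

        def expected_max_magnitude(n_events, b=1.0, m_c=2.0):
            """Expected largest magnitude among n_events GR-distributed events."""
            return m_c + math.log10(n_events) / b

        for n in (100, 1000, 10000):
            print(n, expected_max_magnitude(n))
        # tenfold more induced events raises the expected maximum by 1/b
        # magnitude units, with no injection-imposed ceiling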

  12. Induced earthquake magnitudes are as large as (statistically) expected

    USGS Publications Warehouse

    Van Der Elst, Nicholas; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas; Hosseini, S. Mehran

    2016-01-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.

  13. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  14. Large Rock Slope Failures Induced by Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Aydan, Ö.

    2016-06-01

    Recent earthquakes have caused many large-scale rock slope failures. The scale and impact of these failures are very large, and the form of failure differs depending upon the geological structure of the slope. First, the author briefly describes some model experiments investigating the effects of earthquake shaking or faulting on rock slopes. Then, fundamental characteristics of earthquake-induced rock slope failures are described and evaluated according to some empirical and theoretical models. Furthermore, observations of slope failures in relation to earthquake magnitude and epicentral or hypocentral distance are compared with several empirical relations available in the literature. Some major rock slope failures induced by earthquakes are selected, and their post-failure motions are simulated and compared with observations. In addition, the effects of tsunamis on rock slopes are explained and discussed in view of observations from reconnaissance of the recent mega-earthquakes.

  15. S-net project: Construction of large scale seafloor observatory network for tsunamis and earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Kanazawa, T.; Uehira, K.; Shimbo, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Shinohara, M.; Yamada, T.

    2016-12-01

    National Research Institute for Earth Science and Disaster Resilience (NIED) has launched a project to construct an observatory network for tsunamis and earthquakes on the seafloor. The network was named "S-net, Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench". The S-net consists of 150 seafloor observatories connected in line by submarine optical cables with a total length of about 5,700 km. The system extends along the Kuril and Japan trenches from north to south, covering the area between southeast off the island of Hokkaido and off the Boso Peninsula, Chiba Prefecture. The project has been financially supported by MEXT Japan. An observatory package is 34 cm in diameter and 226 cm long. Each observatory is equipped with two high-sensitivity water-depth sensors serving as tsunami meters and four sets of three-component seismometers. The water-depth sensors have a measurement resolution at the sub-centimeter level. The combination of multiple seismometers secures the wide dynamic range and robustness of observation needed for early earthquake warning. The S-net is composed of six segment networks, each consisting of about 25 observatories and 800-1,600 km of submarine optical cable. Five of the six segment networks have already been installed; the exception is the segment covering the outer-rise area of the Japan Trench. Data from the observatories on those five segments are being transferred to the data center at NIED in real time, and verification of data integrity is currently being carried out. Installation of the last segment network, the outer-rise one, is scheduled to be finished within FY2016, and full-scale operation of the S-net will start in FY2017. We will report on the construction and operation of the S-net submarine cable system as well as an outline of the obtained data in this presentation.

  16. Patterns of seismic activity preceding large earthquakes

    NASA Technical Reports Server (NTRS)

    Shaw, Bruce E.; Carlson, J. M.; Langer, J. S.

    1992-01-01

    A mechanical model of seismic faults is employed to investigate the seismic activities that occur prior to major events. The block-and-spring model dynamically generates a statistical distribution of smaller slipping events that precede large events, and the results satisfy the Gutenberg-Richter law. The scaling behavior during a loading cycle suggests small but systematic variations in space and time, with maximum activity acceleration near the future epicenter. Activity patterns inferred from data on seismicity in California demonstrate a regional aspect; increased activity in certain areas is found to precede major earthquake events. One example is the 1989 Loma Prieta earthquake, which was located near a fault section associated with increased activity levels.
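
    The paper's block-and-spring model is fully dynamic; a simpler quasi-static relative, the Olami-Feder-Christensen (OFC) slider-block automaton, is often used to show how such systems generate power-law (Gutenberg-Richter-like) event-size statistics. The sketch below implements that simpler automaton, not the authors' model; grid size, dissipation and event count are illustrative.

        import numpy as np

        def ofc_events(n=32, alpha=0.2, n_events=5000, seed=0):
            """Olami-Feder-Christensen slider-block automaton. Returns the
            size (number of block slips) of each relaxation event."""
            rng = np.random.default_rng(seed)
            f = rng.uniform(0.0, 1.0, size=(n, n))      # block forces
            sizes = []
            for _ in range(n_events):
                f += 1.0 - f.max()                      # drive to threshold
                size = 0
                unstable = np.argwhere(f >= 1.0)
                while unstable.size:
                    for i, j in unstable:
                        df = f[i, j]
                        f[i, j] = 0.0                   # block slips
                        size += 1
                        for ii, jj in ((i - 1, j), (i + 1, j),
                                       (i, j - 1), (i, j + 1)):
                            if 0 <= ii < n and 0 <= jj < n:
                                f[ii, jj] += alpha * df  # load neighbors
                    unstable = np.argwhere(f >= 1.0)
                sizes.append(size)
            return np.asarray(sizes)

        # A log-log histogram of the returned sizes approximates a straight
        # line, the analogue of the Gutenberg-Richter law in this toy model.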

  17. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations on a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  18. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
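
    The sampling-statistics argument can be made concrete with two small functions: the expected maximum of N draws from an unbounded Gutenberg-Richter distribution, and the deterministic McGarr-type cap. The b-value, moment-magnitude conversion, and example numbers below are standard or illustrative, not values from the study.

        import numpy as np

        def expected_max_magnitude(n_events, m_min=0.0, b=1.0):
            """Expected largest magnitude among n_events drawn from an
            unbounded Gutenberg-Richter (exponential) distribution; for an
            exponential the expected maximum is m_min + (ln n + gamma)/(b ln 10)
            for large n, i.e. it grows with the log of the event count."""
            gamma = 0.5772156649  # Euler-Mascheroni constant
            return m_min + (np.log(n_events) + gamma) / (b * np.log(10.0))

        def mcgarr_max_magnitude(delta_v_m3, shear_modulus_pa=3e10):
            """Deterministic cap M0_max = G * dV (McGarr, 2014), converted
            to moment magnitude via Mw = (2/3)(log10 M0 - 9.1), M0 in N m."""
            m0_max = shear_modulus_pa * delta_v_m3
            return (2.0 / 3.0) * (np.log10(m0_max) - 9.1)

        # Example: 1e4 induced events above M0 vs 1e6 m^3 injected volume
        print(expected_max_magnitude(1e4))   # ~4.25
        print(mcgarr_max_magnitude(1e6))     # ~4.9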

  19. Tectonically Induced Anomalies Without Large Earthquake Occurrences

    NASA Astrophysics Data System (ADS)

    Shi, Zheming; Wang, Guangcai; Liu, Chenglong; Che, Yongtai

    2017-06-01

    In this study, we documented a case involving large-scale macroscopic anomalies in the Xichang area, southwestern Sichuan Province, China, from May to June of 2002, after which no major earthquake occurred. During our field survey in 2002, we found that the timing of the high-frequency occurrence of groundwater anomalies was in good agreement with that of animal anomalies. Spatially, the groundwater and animal anomalies were distributed along the Anninghe-Zemuhe fault zone. Furthermore, the groundwater level was elevated in the northwest part of the Zemuhe fault and depressed in the southeast part of the Zemuhe fault zone, with a border somewhere between Puge and Ningnan Counties. Combined with microscopic groundwater, geodetic and seismic activity data, we infer that the anomalies in the Xichang area were the result of increasing tectonic activity in the Sichuan-Yunnan block. In addition, groundwater data may be used as a good indicator of tectonic activity. This case demonstrates that there is no direct relationship between such anomalies and earthquake occurrence. In most cases, the vast majority of anomalies, both microscopic and macroscopic, are caused by tectonic activity. That is, these anomalies can occur under the effects of tectonic activity, but they do not necessarily relate to the occurrence of earthquakes.

  1. Model for repetitive cycles of large earthquakes

    SciTech Connect

    Newman, W.I.; Knopoff, L.

    1983-04-01

    The theory of the fusion of small cracks into large ones reproduces certain features also observed in the clustering of earthquake sequences. By modifying our earlier model to take into account the stress release associated with the occurrence of large earthquakes, we obtain repetitive periodic cycles of large earthquakes. A preliminary conclusion is that a combination of the stress release or elastic rebound mechanism plus time delays in the fusion process are sufficient to destabilize the crack populations and, ultimately, give rise to repetitive episodes of seismicity.

  2. Unsupervised polarimetric synthetic aperture radar classification of large-scale landslides caused by Wenchuan earthquake in hue-saturation-intensity color space

    NASA Astrophysics Data System (ADS)

    Li, Ning; Wang, Robert; Deng, Yunkai; Liu, Yabo; Li, Bochen; Wang, Chunle; Balz, Timo

    2014-01-01

    A simple and effective approach for unsupervised classification of large-scale landslides caused by the Wenchuan earthquake is developed. The data sets used were obtained by a high-resolution, fully polarimetric airborne synthetic aperture radar system working at X-band. In the proposed approach, Pauli decomposition false-color RGB imagery is first transformed to the hue-saturation-intensity (HSI) color space. Then, a combination of k-means clustering and HSI imagery in different channels is used stage-by-stage for automatic landslide extraction. Two typical case studies are presented to evaluate the feasibility of the proposed scheme. Our approach is an important contribution to the rapid assessment of landslide hazards.
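
    Two ingredients named in the abstract, the RGB-to-HSI transform and k-means clustering, can be sketched as follows; the staging and channel selection of the actual classification scheme are not reproduced here, and the cluster count is an arbitrary choice.

        import numpy as np
        from sklearn.cluster import KMeans

        def rgb_to_hsi(rgb):
            """Convert an (H, W, 3) float RGB image in [0, 1] to HSI."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            intensity = (r + g + b) / 3.0
            minimum = np.minimum(np.minimum(r, g), b)
            saturation = 1.0 - minimum / np.maximum(intensity, 1e-12)
            num = 0.5 * ((r - g) + (r - b))
            den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
            theta = np.arccos(np.clip(num / den, -1.0, 1.0))
            hue = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
            return np.dstack([hue, saturation, intensity])

        def cluster_hsi(hsi, n_clusters=4, seed=0):
            """Unsupervised k-means labelling of pixels in HSI space."""
            h, w, _ = hsi.shape
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=seed).fit_predict(hsi.reshape(-1, 3))
            return labels.reshape(h, w)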

  3. Scaling in geology: landforms and earthquakes.

    PubMed

    Turcotte, D L

    1995-07-18

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior.

  4. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface, and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench, and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as a function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10^18 Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10^18 Pa s.
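
    For reference, a Burgers rheology places a Maxwell element (steady-state viscosity) in series with a Kelvin-Voigt element (transient viscosity); its creep response to a constant stress step is the standard compliance

        J(t) = \frac{1}{G_M} + \frac{t}{\eta_M}
             + \frac{1}{G_K}\left(1 - e^{-G_K t/\eta_K}\right),

    where G_M and eta_M are the Maxwell (steady-state) modulus and viscosity and G_K and eta_K the Kelvin (transient) pair. This is textbook background, not the paper's specific parametrization.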

  5. Small and large earthquakes: evidence for a different rupture beginning

    NASA Astrophysics Data System (ADS)

    Colombelli, Simona; Zollo, Aldo; Festa, Gaetano; Picozzi, Matteo

    2014-05-01

    For real-time magnitude estimation, two Early Warning (EW) parameters are usually measured within the first 3 seconds of the P-wave signal. These are the initial peak displacement (Pd) and the average period (τc). The scaling laws between EW parameters and magnitude are robust and effective up to magnitude 6.5-7, but a well-known saturation problem for both parameters is evident for larger earthquakes. The saturation is likely due to source finiteness: a few seconds of the P wave cannot capture the entire rupture process of a large event. Here we propose an evolutionary approach to the magnitude estimate, based on the progressive expansion of the P-wave time window until the expected arrival of the S waves. The methodology has already been applied to records of the 2011 Mw 9.0 Tohoku-Oki earthquake and showed that a minimum time window of 25-30 seconds is indeed needed to obtain a stable magnitude estimate for an earthquake of magnitude M ≥ 8.5. Here we extend the analysis to a larger data set of Japanese earthquakes with magnitudes between 4 and 9, using a high number of records per earthquake and spanning wide distance and azimuth ranges. We analyze the relationship between the time evolution of the EW parameters and the earthquake magnitude itself, with the purpose of understanding the evolution of these parameters during the rupture process and investigating a possible different scaling for small and large events. We show that the initial increase of P-wave motion is more rapid for small earthquakes than for larger ones, thus implying a longer and wider nucleation phase for large events. Our results indicate that earthquakes breaking in a region with a large critical slip displacement value have a larger probability to grow into a large-size rupture than those originating in a region with a smaller critical displacement value.
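
    The two EW parameters have simple standard definitions, sketched below under the assumption that displacement and velocity traces aligned on the P arrival are available: Pd is the peak absolute displacement in the window and τc is Kanamori's average period, 2π √(∫u² dt / ∫u̇² dt). The window length is a parameter, which is exactly what the evolutionary approach expands.

        import numpy as np

        def pd_and_tau_c(disp, vel, dt, window_s=3.0):
            """Early-warning parameters from the first seconds of P wave:
            Pd = peak |displacement|; tau_c = 2*pi*sqrt(int u^2 / int u'^2).
            `disp` and `vel` start at the P arrival, sampled every dt s."""
            n = int(window_s / dt)
            u, v = disp[:n], vel[:n]
            pd = np.max(np.abs(u))
            tau_c = 2.0 * np.pi * np.sqrt(np.trapz(u ** 2, dx=dt)
                                          / np.trapz(v ** 2, dx=dt))
            return pd, tau_c

        # Evolutionary use: expand the window until the expected S arrival,
        # e.g. for w in np.arange(3.0, 30.0): pd_and_tau_c(disp, vel, dt, w)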

  6. Multidimensional scaling visualization of earthquake phenomena

    NASA Astrophysics Data System (ADS)

    Lopes, António M.; Machado, J. A. Tenreiro; Pinto, C. M. A.; Galhano, A. M. S. F.

    2014-01-01

    Earthquakes are associated with negative events, such as large numbers of casualties, destruction of buildings and infrastructure, or the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived to be similar or distinct in some sense are placed nearby or distant on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn-Engdahl seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
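
    A minimal sketch of the MDS step, assuming a precomputed dissimilarity matrix; the illustrative dissimilarity used here (one minus a correlation coefficient) is a stand-in for the paper's space-time and space-frequency indices.

        import numpy as np
        from sklearn.manifold import MDS

        def mds_map(dissimilarity, seed=0):
            """Metric MDS: place objects in 2-D so that map distances
            approximate the given pairwise dissimilarities."""
            model = MDS(n_components=2, dissimilarity="precomputed",
                        random_state=seed)
            return model.fit_transform(dissimilarity)

        def correlation_dissimilarity(series):
            """Illustrative dissimilarity between regional time series
            (rows of `series`): one minus the correlation coefficient."""
            return 1.0 - np.corrcoef(series)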

  7. Sea-level changes before large earthquakes

    USGS Publications Warehouse

    Wyss, M.

    1978-01-01

    Changes in sea level have long been used as a measure of local uplift and subsidence associated with large earthquakes. For instance, in 1835, the British naturalist Charles Darwin observed that sea level dropped by 2.7 meters during the large earthquake in Concepcion, Chile. From this piece of evidence and the terraces along the beach that he saw, Darwin concluded that the Andes had grown to their present height through earthquakes. Much more recently, George Plafker and James C. Savage of the U.S. Geological Survey have shown, from barnacle lines, that the great 1960 Chile and the 1964 Alaska earthquakes caused several meters of vertical displacement of the shoreline.

  8. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes with magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated probability time series for the whole Taiwan region and for three subareas (the western, eastern, and northeastern Taiwan regions) using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually yielded by clustered events, such as events with foreshocks and events that occur within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values appear around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and the probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, whereas the probability obtained from the activation model increases as large earthquakes occur. These results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  9. The 2016 Kumamoto, Japan, earthquakes and lessons learned for large earthquakes in urban areas

    NASA Astrophysics Data System (ADS)

    Hirata, Naoshi; Kato, Aitaro; Nakamura, Kouji; Hiyama, Yohei

    2017-04-01

    A series of devastating earthquakes hit the Kumamoto districts in Kyushu, Japan, in April 2016. An M6.5 event occurred at 21:26 on April 14th (JST) and, 28 hours later, an M7.3 event occurred at 01:25 on April 16th (JST) at almost the same location, at a depth of 10 km. Both earthquakes were felt at the town of Mashiki with a seismic intensity of 7 on the Japan Meteorological Agency (JMA) scale, the highest level of that scale. Very strong accelerations were observed: 1,580 gal at the KiK-net Mashiki station during the M6.5 event and 1,791 gal at the Ohtsu City station during the M7.3 event. As a result, more than 8,000 houses totally collapsed, 26,000 were heavily damaged, and 120,000 were partially damaged. More than 170 people were killed by the two earthquakes. The important lesson from the Kumamoto earthquakes is that very strong ground motions may hit within a few days after a first large event, with serious impacts on houses already damaged by the first shock. In the 2016 Kumamoto sequence, there were also many strong aftershocks, including M5.8-5.9 events, until April 18th. More than 180,000 people had to take shelter because of the ongoing strong aftershocks. We discuss both the natural and human aspects of the Kumamoto earthquake disaster caused by shallow inland large earthquakes, and we report on the lessons learned for large earthquakes hitting the metropolitan area of Tokyo, Japan.

  10. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn; Borchardt, Glenn; Hirschfeld, Sue E.; Lienkaemper, James J.; McClellan, Patrick H.; Williams, Patrick L.; Wong, Ivan G.

    1992-01-01

    The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868 but was not well measured; witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). The creep rate seems nearly constant over many decades, with short-term variations, and mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event, as indicated by offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes that originate below the surficial creeping zone may now be ≥1.1 m along the southern (1868) segment and ≥1.4 m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large magnitude earthquakes (magnitude 6.8 in the northern (1836?) segment, 6.7 in the southern (1868) segment, and 7.0 for both). Thus despite surficial creep, the fault may be
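
    The slip-deficit arithmetic behind these estimates is simply the unreleased part of the long-term rate accumulated since the last rupture. A tiny sketch, with an illustrative creep rate (the 9 mm/yr deep rate and 1868 date are from the abstract; the creep value and evaluation year are examples):

        def slip_deficit_m(deep_rate_mm_yr, creep_rate_mm_yr, years):
            """Slip potentially available to a future earthquake: the part
            of the long-term rate not released by creep, accumulated."""
            return (deep_rate_mm_yr - creep_rate_mm_yr) * years / 1000.0

        # Southern (1868) segment, evaluated at the 1992 publication date,
        # with a typical ~5 mm/yr creep rate:
        print(slip_deficit_m(9.0, 5.0, 1992 - 1868))   # ~0.5 m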

  11. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-01-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
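
    For contrast, the self-similar end-member against which the M0 ∝ A^2 transition regime is defined follows from a constant stress drop; for a circular (Eshelby) crack the standard relation is

        M_0 = \frac{16}{7\,\pi^{3/2}}\,\Delta\sigma\,A^{3/2},

    so that a fixed stress drop Δσ implies M0 ∝ A^(3/2). This is standard background rather than the paper's derivation.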

  12. Afterslip and Viscoelastic Relaxation Model Inferred from the Large Scale Postseismic Deformation Following the 2010 Mw 8.8 Maule Earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Vigny, C.; Klein, E.; Fleitout, L.; Garaud, J. D.

    2015-12-01

    Postseismic deformation following the large subduction earthquake of Maule (Chile, Mw 8.8, February 27th 2010) has been closely monitored with GPS from 70 km up to 2000 km away from the trench. It exhibits a behavior generally similar to that already observed after the Aceh and Tohoku-Oki earthquakes. Vertical uplift is observed on the volcanic arc, and a moderate large-scale subsidence is associated with sizeable horizontal deformation in the far field (500-2000 km from the trench). In addition, near-field data (70-200 km from the trench) feature a rather complex deformation pattern. A 3D FE code (Zebulon Zset) is used to relate this deformation to slip on the plate interface and relaxation in the mantle. The mesh features a spherical shell portion from the core-mantle boundary to the Earth's surface, extending over more than 60 degrees in latitude and longitude. The overriding and subducting plates are elastic, and the asthenosphere is viscoelastic. A viscoelastic Low Viscosity Channel (LVC) is also introduced along the plate interface. Both the asthenosphere and the channel feature Burgers rheologies, and we invert for their mechanical properties and geometrical characteristics simultaneously with the afterslip distribution. The horizontal deformation pattern requires relaxation both (i) in the asthenosphere extending down to 270 km, with a 'long-term' viscosity of the order of 4.8 × 10^18 Pa s, and (ii) in the channel, which has to extend from depths of 50 to 150 km with viscosities slightly below 10^18 Pa s, to fit the vertical velocity pattern (intense and quick uplift over the Cordillera) well. Aseismic slip on the plate interface, at shallow depth, is necessary to explain all the characteristics of the near-field displacements. We then detect two main patches of high slip, one updip of the coseismic slip distribution in the northernmost part of the rupture zone, and the other one downdip, at the latitude of Constitucion (35°S). We finally study the temporal

  13. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations of seismicity in the space-time-magnitude domain encode information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows: the Alto Tiberina Near Fault Observatory (TABOO) catalog and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors that allows us to consider the overall space-time-magnitude distribution of the k earthquakes that are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest-neighbors. In essence, the main goal of this study is to verify whether different classes of target event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
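
    The pairwise metric of Zaliapin et al. on which this method builds can be sketched as below; the b-value and fractal dimension are typical literature values, the inputs are assumed to be time-sorted numpy arrays with coordinates in km, and the paper's k-nearest-neighbors generalization is not reproduced.

        import numpy as np

        def nearest_neighbor_distances(t, x, y, m, b=1.0, d_f=1.6):
            """Zaliapin-style nearest-neighbor distance for each event j:
            eta_ij = t_ij * r_ij**d_f * 10**(-b * m_i) over earlier events i,
            with t_ij the inter-event time and r_ij the epicentral distance.
            Returns (eta, parent), the minimal distance and its parent index."""
            n = len(t)
            eta = np.full(n, np.inf)
            parent = np.full(n, -1)
            for j in range(1, n):
                dt = t[j] - t[:j]                          # > 0 (time-sorted)
                r = np.hypot(x[j] - x[:j], y[j] - y[:j])
                d = dt * np.maximum(r, 1e-3) ** d_f * 10.0 ** (-b * m[:j])
                i = int(np.argmin(d))
                eta[j], parent[j] = d[i], i
            return eta, parent

        # A bimodal histogram of log10(eta) separates clustered (triggered)
        # from background events; the paper extends this to k nearest parents.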

  14. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562

  15. Scaling of seismic memory with earthquake size

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene

    2012-07-01

    It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
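
    A minimal order-1 DFA implementation of the kind used to estimate the scaling exponent α; the scale selection and linear detrending order are simplified choices, not necessarily those of the study, and the input is assumed to be at least a few hundred samples long.

        import numpy as np

        def dfa_exponent(series, scales=None):
            """Detrended fluctuation analysis: returns alpha from
            F(s) ~ s**alpha, using linear (order-1) detrending."""
            x = np.cumsum(series - np.mean(series))   # integrated profile
            n = len(x)
            if scales is None:
                scales = np.unique(
                    np.logspace(1, np.log10(n // 4), 20).astype(int))
            flucts = []
            for s in scales:
                n_seg = n // s
                segs = x[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                f2 = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                      for seg in segs]
                flucts.append(np.sqrt(np.mean(f2)))
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        # alpha ~ 0.5: uncorrelated; alpha > 0.5: power-law correlations;
        # alpha < 0.5: anticorrelations (as found for the sign series).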

  16. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2016-05-01

    The main objective of this paper was to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes the background and construction principles of the scale and presents some case studies in different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In such cases, and in unpopulated areas, ESI offers a unique way of assessing a reliable earthquake intensity. Finally, yet importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  17. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near-mean-field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
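
    The counting method converts the number of small events since the last large one into a conditional probability through a Weibull law; a sketch with placeholder parameters follows (the shape parameter and historical mean are illustrative, not values from the study).

        import math

        def large_event_probability(n_small, n_mean, k=1.5):
            """Conditional probability of a large event given n_small small
            events since the last large one, via a Weibull law:
            P = 1 - exp(-(n/n_mean)**k). n_mean is the historical mean count
            between large events; k = 1 reduces to an exponential law."""
            return 1.0 - math.exp(-((n_small / n_mean) ** k))

        # e.g. 1,200 small events observed against a historical mean of 1,000:
        print(large_event_probability(1200, 1000))   # ~0.73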

  18. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating the predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $10M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. In this analysis, we make an attempt
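
    The dual alerting criteria quoted above can be expressed as a small classifier; combining the two criteria by taking the higher alert is our assumption for illustration, not necessarily the PAGER rule.

        def eis_alert(fatalities=None, losses_usd=None):
            """Alert level from the thresholds quoted in the abstract:
            fatalities 1/100/1,000 and losses $1M/$10M/$1B for
            yellow/orange/red (green otherwise)."""
            def level(x, t1, t2, t3):
                if x is None or x < t1:
                    return 0
                return 1 + (x >= t2) + (x >= t3)
            lv = max(level(fatalities, 1, 100, 1000),
                     level(losses_usd, 1e6, 1e7, 1e9))
            return ["green", "yellow", "orange", "red"][lv]

        print(eis_alert(fatalities=250))    # orange
        print(eis_alert(losses_usd=2e9))    # red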

  19. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  1. Mw Dependence of Ionospheric Electron Enhancement Immediately Before Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2015-12-01

    Ionospheric electrons were reported to have increased ~40 minutes before the 2011 Tohoku-oki (Mw 9.0) earthquake, Japan, by observing total electron content (TEC) with GNSS receivers [e.g. Heki and Enomoto, 2013]. They further demonstrated that similar TEC enhancements preceded all the recent earthquakes with Mw of 8.5 or more. Their reality has been repeatedly questioned, due mainly to the ambiguity in the derivation of the reference TEC curves from which anomalies are defined [e.g. Masci et al., 2015]. Here we propose a numerical approach, based on Akaike's Information Criterion, to detect positive breaks (sudden increases of TEC rate) in the vertical TEC time series without using reference curves. We demonstrate that such breaks are detected 20-80 minutes before the ten recent large earthquakes with Mw 7.8-9.2. The amplitudes of the breaks were found to depend on the background absolute VTEC and Mw, i.e. Break (TECU/h) = 4.74 Mw + 0.13 VTEC - 39.86, with a standard deviation of ~1.2 TECU/h. We can invert this equation as Mw = (Break - 0.13 VTEC + 39.86)/4.74, which can tell us the Mw of an impending earthquake with an uncertainty of ~0.25. The precursor times were longer for larger earthquakes, ranging from ~80 minutes for the largest (2004 Sumatra-Andaman) to ~21 minutes for the smallest (2015 Nepal). The precursors of intraplate earthquakes (e.g. 2012 Indian Ocean) started significantly earlier than those of interplate ones. We performed the same analyses during periods without earthquakes, and found that positive breaks comparable to the one before the 2011 Tohoku-oki earthquake occur once in 20 hours. They originate from small-amplitude Large-scale Travelling Ionospheric Disturbances (LSTID), which are excited in the auroral oval and move southward with the velocity of internal gravity waves. This probability is small enough to rule out that these breaks are fortuitous, but large enough to make it a challenge to apply preseismic TEC enhancements to short-term earthquake prediction.
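
    The abstract's empirical relation inverts to a one-line magnitude estimator; the example numbers below are illustrative.

        def mw_from_tec_break(break_tecu_per_hr, vtec_background):
            """Invert the paper's empirical relation
            Break = 4.74*Mw + 0.13*VTEC - 39.86 (TECU/h) to estimate Mw
            from an observed precursory TEC-rate break; the stated
            uncertainty is about +/-0.25 magnitude units."""
            return (break_tecu_per_hr - 0.13 * vtec_background + 39.86) / 4.74

        # e.g. a break of 5 TECU/h on a background VTEC of 40 TECU:
        print(mw_from_tec_break(5.0, 40.0))   # ~8.4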

  2. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating its trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims, and suggests rules and recipes for adequate classification, comparison and optimization of earthquake predictions. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  3. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating fault models that reproduce the tsunami heights near the coasts of El Salvador and Nicaragua caused by large earthquakes in the subduction zone.

  4. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  5. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  6. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  7. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Razafindrakoto, Hoby N. T.; Mai, P. Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran K. S.

    2015-07-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for

  8. Web-Based Interrogation of Large-Scale Geophysical Data Sets and Clustering Analysis of Many Earthquake Events From Desktop and Handheld Computers

    NASA Astrophysics Data System (ADS)

    Garbow, Z. A.; Erlebacher, G.; Yuen, D. A.; Sevre, E. O.; Nagle, A. R.; Kaneko, J. Y.

    2002-12-01

    The size of datasets in the geosciences is growing at a tremendous pace due to inexpensive memory, increasingly large storage space, fast processors and constantly improving data-collection instruments. However, the available bandwidth increases at a much slower rate and consequently cannot keep up with the size of the datasets themselves. Coupled with the need to explore large datasets from a simplified point of view, the current approach of transferring full datasets from one machine to another for analysis is fast becoming impractical and obsolete. We have previously developed a web-based interactive data interrogation system that allows users to remotely analyze geophysical data over the Internet using a client-server paradigm (Garbow et al., Electronic Geosciences, Vol. 6, 2001). To further our idea of interactive data extraction, we have used this interrogative system to explore both high-resolution mantle convection data and earthquake clusters involving up to tens of thousands of earthquakes. In addition, we have ported this system to work from handheld devices via wireless connections. Our system uses a combination of Java, Python, and C for running remotely from a desktop computer, laptop, or even a handheld device, while incorporating the power and memory capacity of a large workstation server. Because of the limitations of the current generation of handheld devices in terms of processing power, screen size, memory and storage, they have not yet become practical vehicles for useful scientific work. Our aim is to successfully overcome the limitations of handheld devices to allow them in the near future to be used as portable scientific research laboratories, particularly with the new, more powerful processors (e.g. Transmeta Crusoe) just over the horizon.

  9. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales (whole country, East Turkey, and West Turkey) are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
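
    The recommended vertical-channel practice amounts to adding log10(1.8) ≈ 0.26 to the amplitude term. A sketch follows; the attenuation term is left as a placeholder, since the abstract does not give the calibrated -log10(A0) coefficients or station corrections.

        import math

        def ml_from_vertical(amp_vertical_nm, distance_term,
                             station_correction=0.0):
            """Local magnitude from a vertical-channel amplitude, following
            the paper's recommendation: add log10(1.8) to emulate the
            maximum horizontal amplitude. `distance_term` stands in for the
            calibrated -log10(A0) attenuation term (not given here)."""
            return (math.log10(amp_vertical_nm) + math.log10(1.8)
                    + distance_term + station_correction)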

  10. Earthquake nucleation scaling from laboratory to Earth

    NASA Astrophysics Data System (ADS)

    Nielsen, Stefan; Kaneko, Yoshihiro; Harbord, Chris; Latour, Soumaya; Carpenter, Brett; De Paola, Nicola

    2017-04-01

    Migrating foreshock sequences along major plate boundaries and geodetic transient anomalies have been interpreted as indicators of aseismic creep for days to months prior to the initiation of earthquakes. In other cases no significant precursory activity is detected, even at well-instrumented sites, suggesting an abrupt rupture initiation. Both the nucleation size (e.g. Rice and Ruina's hRR∗ or Andrews' Lc) and its duration can be highly variable. Here we analyse the scaling of nucleation and the controls on stick-slip instability based on a review of recent laboratory experimental results. (1) Rupture propagation experiments on smooth model faults show a two-phase nucleation process with variable size and duration depending on loading rate, normal stress and frictional parameters. These results can be reproduced by numerical models incorporating rate-and-state friction laws, and can be up-scaled to simulate the nucleation process of crustal earthquakes. We used frictional properties from samples of the San Andreas Fault Observatory at Depth (SAFOD) to model the nucleation phase for magnitude ~2 repeating earthquakes at 2.8 km depth. We predict that the nucleation could be detectable a few hours before the earthquake by strain measurements in the existing borehole. (2) An alternative set of experiments on rough model faults, instead, shows that initiation of rupture is primarily controlled by the size and the amount of heterogeneity induced by the fault topography and its interplay with the normal stress. In this case the onset of stick-slip is not predicted by the stability analysis within the rate-and-state framework, but rather by energy considerations more akin to Griffith's criterion in the presence of flaws. Although these two sets of experimental observations and their modelling are difficult to reconcile, they may be representative end members of earthquake faults with different degrees of heterogeneity.
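
    For scale, a back-of-the-envelope evaluation of the Rice-Ruina nucleation length mentioned above, commonly written h*_RR = (π/4) μ' dc / (σ (b − a)) with μ' = μ/(1 − ν), using assumed parameter values (not those of the SAFOD study):

        import math

        mu = 30e9          # shear modulus, Pa (assumed)
        nu = 0.25          # Poisson ratio; mode II correction mu' = mu/(1 - nu)
        dc = 1e-4          # state-evolution distance, m (assumed)
        sigma = 50e6       # effective normal stress, Pa (assumed)
        b_minus_a = 0.004  # rate-and-state parameter difference (assumed)

        h_rr = (math.pi / 4.0) * (mu / (1.0 - nu)) * dc / (sigma * b_minus_a)
        print(f"h*_RR ~ {h_rr:.0f} m")  # ~16 m for these illustrative values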

  12. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault length does not saturate with earthquake magnitude, while fault width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ ~90° for strike-slip faults, and δ ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M ~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy a relatively larger rupture area compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip
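
    A minimal sketch of drawing slip values from a truncated exponential law of the kind identified above (the mean and maximum slip are invented placeholders, not fits to the SRCMOD models):

        import numpy as np
        from scipy.stats import truncexpon

        mean_slip, max_slip = 1.5, 6.0        # meters (assumed)
        b = max_slip / mean_slip              # truncation point, in scale units
        slip = truncexpon(b=b, scale=mean_slip).rvs(size=10000, random_state=0)
        print(slip.mean(), slip.max())        # mean < 1.5 m because of truncation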

  13. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  14. Linking Oceanic Tsunamis and Geodetic Gravity Changes of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Fu, Yuning; Song, Y. Tony; Gross, Richard S.

    2017-08-01

    Large earthquakes at subduction zones usually generate tsunamis and coseismic gravity changes. These two independent oceanic and geodetic signatures of earthquakes can be observed individually by modern geophysical observational networks. The Gravity Recovery and Climate Experiment twin satellites can detect gravity changes induced by large earthquakes, while altimetry satellites and Deep-Ocean Assessment and Reporting of Tsunamis buoys can observe the resultant tsunamis. In this study, we introduce a method to connect the oceanic tsunami measurements with the geodetic gravity observations, and apply it to the 2004 Sumatra Mw 9.2 earthquake, the 2010 Maule Mw 8.8 earthquake and the 2011 Tohoku Mw 9.0 earthquake. Our results indicate consistent agreement between these two independent measurements. Since seafloor displacement and its formation mechanism remain the largest puzzle in assessing tsunami hazards, our study demonstrates a new approach to utilizing these two kinds of measurements for a better understanding of large earthquakes and tsunamis.

  16. Large Earthquake Potential in the Southeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Mora-Paez, H.; Bilham, R. G.; Lafemina, P.; Mattioli, G. S.; Molnar, P. H.; Audemard, F. A.; Perez, O. J.

    2015-12-01

    The axis of rotation describing relative motion of the Caribbean plate with respect to South America lies in Canada near Hudson Bay, such that the Caribbean plate moves nearly due east relative to South America [DeMets et al. 2010]. The plate motion is absorbed largely by pure strike-slip motion along the El Pilar Fault in northeastern Venezuela, but in northwestern Venezuela and northeastern Colombia, the relative motion is distributed over a wide zone that extends from offshore to the northeasterly trending Mérida Andes, with the resolved component of convergence between the Caribbean and South American plates estimated at ~10 mm/yr. Recent densification of GPS networks through COLOVEN and COCONet, including access to private GPS data maintained by Colombia and Venezuela, allowed the development of a new GPS velocity field. The velocity field, processed with JPL's GOA 6.2, JPL non-fiducial final orbit and clock products and VMF tropospheric products, includes over 120 continuous and campaign stations. This new velocity field, along with enhanced seismic reflection profiles and earthquake location analysis, strongly suggests the existence of an active oblique subduction zone. We have also been able to use broadband data from Venezuela to search for slow-slip events as an indicator of an active subduction zone. There are caveats to this hypothesis, however, including the absence of volcanism that is typically concurrent with active subduction zones and a weak historical record of great earthquakes. A single tsunami deposit dated at 1500 years before present has been identified on the southeast Yucatan peninsula. Our simulations indicate its probable origin is within our study area. We present a new GPS-derived velocity field, which has been used to improve a regional block model [based on Mora and LaFemina, 2009-2012], and discuss the earthquake and tsunami hazards implied by this model. Based on the new geodetic constraints and our updated block model, if part of the

  17. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high magnitude seismic events. In fact, since 1950, all large magnitude earthquakes have been followed by volcanic eruptions in the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7 and 2011 Japan M9.0. At a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005 following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is limited by the fact that fewer than 50% of all volcanoes are monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has, therefore, provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable
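
    The selection step described above (all M≥6.0 events within 1000 km of each volcano) reduces to a great-circle distance filter; a minimal sketch with hypothetical catalog arrays standing in for the USGS NEIC data:

        import numpy as np

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in km on a spherical Earth (R = 6371 km)."""
            p = np.pi / 180.0
            a = (np.sin((lat2 - lat1) * p / 2) ** 2
                 + np.cos(lat1 * p) * np.cos(lat2 * p)
                 * np.sin((lon2 - lon1) * p / 2) ** 2)
            return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

        volc_lat, volc_lon = -0.08, -77.66     # hypothetical volcano location
        eq_lat = np.array([1.2, -3.4, 10.0])   # hypothetical epicenters
        eq_lon = np.array([-78.0, -80.1, -60.0])
        eq_mag = np.array([6.2, 7.1, 6.5])

        near = haversine_km(volc_lat, volc_lon, eq_lat, eq_lon) <= 1000.0
        print(eq_mag[near])                    # events to compare with heat flux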

  18. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  19. Time-Dependent Earthquake Forecasts on a Global Scale

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Graves, W. R.

    2014-12-01

    We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged ("Poisson") forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. An example is the Working Group on California Earthquake Probabilities, which has been using fault-based models to compute conditional probabilities in California since 1988. Another example is the Epidemic-Type Aftershock Sequence (ETAS) forecast, which is based on the Gutenberg-Richter (GR) magnitude-frequency law, the Omori aftershock law, and Poisson statistics. The method discussed in this talk is based on the observation that GR statistics characterize seismicity for all space and time. Small magnitude event counts (quake counts) are used as "markers" for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes, one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In previous research, we used conditional Weibull statistics to convert event counts into a temporal probability for a given fixed region. In the present paper, we move beyond a fixed region and develop a method to compute these Natural Time Weibull (NTW) forecasts on a global scale, using an internally consistent method, in regions of arbitrary shape and size. We develop and implement these methods on a modern web-service computing platform, which can be found at www.openhazards.com and www.quakesim.org. We also discuss constraints on the User Interface (UI) that follow from practical considerations of site usability.
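
    A minimal sketch of the natural-time idea: small-event counts play the role of time, and a conditional Weibull law turns the count since the last large event into a forecast probability. The shape value below is an illustrative assumption, not the calibrated NTW parameter; the scale of 1000 follows from b = 1, i.e. ~1000 M>3 events per M>6 event:

        import numpy as np

        def ntw_probability(n_elapsed, n_horizon, scale=1000.0, shape=1.4):
            """P(next M>=6 within n_horizon further M>=3 counts, given that
            n_elapsed counts have already accumulated), via the conditional
            Weibull survival ratio 1 - S(n+h)/S(n)."""
            surv = lambda n: np.exp(-((n / scale) ** shape))
            return 1.0 - surv(n_elapsed + n_horizon) / surv(n_elapsed)

        print(ntw_probability(n_elapsed=800.0, n_horizon=100.0))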

  20. Teleseismic search for slow precursors to large earthquakes.

    PubMed

    Ihmlé, P F; Jordan, T H

    1994-12-02

    Some large earthquakes display low-frequency seismic anomalies that are best explained by episodes of slow, smooth deformation immediately before their high-frequency origin times. Analysis of the low-frequency spectra of 107 shallow-focus earthquakes revealed 20 events that had slow precursors (95 percent confidence level); 19 were slow earthquakes associated with the ocean ridge-transform system, and 1 was a slow earthquake on an intracontinental transform fault in the East African Rift system. These anomalous earthquakes appear to be compound events, each comprising one or more ordinary (fast) ruptures in the shallow seismogenic zone initiated by a precursory slow event in the adjacent or subjacent lithosphere.

  1. Size dependent rupture growth at the scale of real earthquake

    NASA Astrophysics Data System (ADS)

    Colombelli, Simona; Festa, Gaetano; Zollo, Aldo

    2017-04-01

    When an earthquake starts, the rupture process may evolve in a variety of ways, resulting in the occurrence of earthquakes of different magnitudes, with variable areal extent and slip, and this may produce an unpredictable damage distribution around the fault zone. The cause of the observed diversity of rupture process evolution is unknown. There are studies supporting the idea that all earthquakes arise in the same way, while the mechanical conditions of the fault zone determine whether small or large earthquakes are generated. Other studies show that small and large earthquakes differ from the initial stage of the rupture onward. Among them, Colombelli et al. (2014) observed that the initial slope of the P-wave peak displacement could be a discriminant for the final earthquake size, so that small and large ruptures show a different behavior in their initial stage. In this work we perform a detailed analysis of the time evolution of the P-wave peak amplitude for a set of a few co-located events of the 2008 Iwate-Miyagi (Japan) earthquake sequence. The events have magnitudes between 3.2 and 7.2 and their epicentral coordinates vary in a narrow range, with a maximum distance among the epicenters of about 15 km. After applying a refined technique for data processing, we measured the initial Peak Displacement (Pd) as the absolute value of the vertical component of the displacement records, starting from the P-wave arrival time and progressively expanding the time window. For each event, we corrected the observed Pd values at different stations for the distance effect and computed the average logarithm of Pd as a function of time. The overall shape of the Pd curves (in log-lin scale) is consistent with what has been previously observed for a larger dataset by Colombelli et al. (2014). The initial amplitude begins with small values and then increases with time, until a plateau level is reached. However, we observed essential differences in the
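
    A minimal numpy sketch of the expanding-window measurement described above (distance correction and multi-station averaging omitted; the array names are hypothetical):

        import numpy as np

        def pd_curve(disp, p_index, dt, max_window=30.0):
            """|Pd| over progressively expanding windows after the P arrival;
            disp is a vertical-component displacement record."""
            n_max = int(max_window / dt)
            n = np.arange(1, n_max + 1)
            pd = np.array([np.abs(disp[p_index:p_index + k]).max() for k in n])
            return n * dt, np.log10(pd)

        # times, log_pd = pd_curve(displacement, p_index=1200, dt=0.01)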

  2. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

    The mega-thrust Mw 9.0 2011 Tohoku earthquake has re-opened the discussion in the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on the real-time measurement of ground motion parameters in a few-second window after the P-wave arrival. Currently, we are using the initial Peak Displacement (Pd) and the Predominant Period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well-known problem with the real-time estimation of magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high-quality KiK-net and K-NET networks provided a large quantity of strong motion records of the mainshock, with wide azimuthal coverage both along the Japan coast and inland. More than 300 3-component accelerograms were available, with epicentral distances ranging from about 100 km up to more than 500 km. This earthquake thus presents an optimal case study for testing the physical bases of early warning and for investigating the feasibility of a real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the main shock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude
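
    For reference, the two early-warning parameters named above can be computed from a P-wave window roughly as follows (a minimal sketch; windowing, filtering, and distance corrections omitted):

        import numpy as np

        def tau_c(disp, vel, dt):
            """Predominant period: tau_c = 2*pi / sqrt(r), where
            r = integral(velocity^2) / integral(displacement^2) over the window."""
            r = np.trapz(vel ** 2, dx=dt) / np.trapz(disp ** 2, dx=dt)
            return 2.0 * np.pi / np.sqrt(r)

        def pd(disp):
            """Initial peak displacement over the same window."""
            return np.abs(disp).max()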

  3. Absence of remotely triggered large earthquakes beyond the mainshock region

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2011-01-01

    Large earthquakes are known to trigger earthquakes elsewhere. Damaging large aftershocks occur close to the mainshock and microearthquakes are triggered by passing seismic waves at significant distances from the mainshock. It is unclear, however, whether bigger, more damaging earthquakes are routinely triggered at distances far from the mainshock, heightening the global seismic hazard after every large earthquake. Here we assemble a catalogue of all possible earthquakes greater than M 5 that might have been triggered by every M 7 or larger mainshock during the past 30 years. We compare the timing of earthquakes greater than M 5 with the temporal and spatial passage of surface waves generated by large earthquakes using a complete worldwide catalogue. Whereas small earthquakes are triggered immediately during the passage of surface waves at all spatial ranges, we find no significant temporal association between surface-wave arrivals and larger earthquakes. We observe a significant increase in the rate of seismic activity at distances confined to within two to three rupture lengths of the mainshock. Thus, we conclude that the regional hazard of larger earthquakes is increased after a mainshock, but the global hazard is not.
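
    The temporal association test described above hinges on a predicted surface-wave passage window; a rough sketch (the group velocity and padding are assumed round numbers, not values from the study):

        def surface_wave_window(dist_km, group_vel_kms=3.5, pad_s=600.0):
            """Candidate dynamic-triggering window (seconds after origin time):
            from the approximate surface-wave arrival to pad_s seconds later."""
            t_arr = dist_km / group_vel_kms
            return t_arr, t_arr + pad_s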

  4. A physical interpretation of field observations that precede large earthquakes

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sacks, S. I.; Rydelek, P. A.; Smith, D. E.; Takanami, T.

    2016-12-01

    A cellular automaton model of earthquake faulting adopting Coulomb's failure criterion, developed by Sacks and Rydelek (1995), successfully generates catalogs that satisfy the Gutenberg-Richter law and reproduce the observed decreases in b-value before large events, as well as the propagation of the rupture front. Model runs indicate that redistributed stresses remain on the ruptured area and that some slips recur on the same cells, forming dynamic asperities of high slip. We found that the observed magnitude-dependent seismicity quiescence can be explained by the introduction of dilatancy hardening into the model. Only a few per cent of the total number of model cells need be strengthened, by a small amount, which indicates the difficulty of detecting their presence using seismic imaging. However, the observed long-term (years) temporal changes in seismicity, gravity, and electrical resistivity may be causally linked to the volume change from microfractures and the effect of pore-pressure changes on fault strength. Our model predicts that the process occurs at sparsely distributed points. Water migration into unfilled microfractures acts to lower the strength, thus promoting the occurrence of seismic slip. These slips may expel water that will influence aquifer levels, which may be observed at regional water wells. Drilling in seismic fault zones, such as at the 1995 Kobe earthquake fault, has revealed that the permeability on the main fault plane was many orders of magnitude higher than in the surrounding rocks. We suggest that the same water-migration process in such a highly permeable zone can also occur on a short time scale, growing into a large-magnitude slip or manifesting as slow slip. The aftershock sequence of the 1978 Izu-Oshima earthquake overlaps the inferred slow slip on the fault following the main shock, suggesting that a fault can slip in various ways in the same time interval. We propose new observations that are sensitive to crustal water migration, such as vertical

  5. Surface slip during large Owens Valley earthquakes

    USGS Publications Warehouse

    Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  6. Time-predictable recurrence model for large earthquakes

    SciTech Connect

    Shimazaki, K.; Nakata, T.

    1980-04-01

    We present historical and geomorphological evidence of a regularity in earthquake recurrence at three different sites of plate convergence around the Japan arcs. The regularity shows that the larger an earthquake is, the longer is the following quiet period. In other words, the time interval between two successive large earthquakes is approximately proportional to the amount of coseismic displacement of the preceding earthquake, not of the following earthquake. The regularity enables us, in principle, to predict the approximate occurrence time of earthquakes. The data set includes 1) a historical document describing repeated measurements of water depth at Murotsu near the focal region of Nankaido earthquakes, 2) precise levelling and ¹⁴C dating of Holocene uplifted terraces in the southern Boso Peninsula facing the Sagami trough, and 3) similar geomorphological data on exposed Holocene coral reefs on Kikai Island along the Ryukyu arc.
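
    A worked example of the time-predictable rule stated above, where the quiet period T scales as the preceding event's slip divided by the loading rate (the numbers are illustrative, not the Murotsu, Boso, or Kikai data):

        u_prev = 2.0    # coseismic slip of the preceding earthquake, m (assumed)
        v_load = 0.05   # long-term plate loading rate, m/yr (assumed)
        print(f"expected quiet period ~ {u_prev / v_load:.0f} years")  # ~40 yr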

  7. Detection of hydrothermal precursors to large northern California earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-04

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes.

  8. Examining Earthquake Scaling Via Event Ratio Levels

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Yoo, S.; Mayeda, K. M.; Gok, R.

    2013-12-01

    A challenge with using corner frequency to interpret stress parameter scaling is that stress drop and apparent stress are related to the cube of the corner frequency. In practice this leads to high levels of uncertainty in measured stress, since the uncertainty in measuring the corner frequency is cubed to determine the uncertainty in the stress parameters. We develop a new approach using the low- and high-frequency levels of spectral ratios between two closely located events recorded at the same stations. This approach has a number of advantages over more traditional corner frequency fitting, either in spectral ratios or individual spectra. First, if the bandwidth of the spectral ratio is sufficient, the levels can be measured at many individual frequency points and averaged, reducing the measurement error. Second, the apparent stress (and stress drop) are related to the high-frequency level to the 3/2 power, so the measurement uncertainty is not as amplified as when using the corner frequency. Finally, if the bandwidth is sufficiently broad to determine both the low- and high-frequency levels of the spectral ratio, the apparent stress (or stress drop) ratio can be determined without the need for any other measurements (e.g., moment, fault area), which of course have their own measurement uncertainties. We will show a number of examples taken from a wide variety of crustal earthquake sequences. (Figure caption: Example of the sigmoid formed by a spectral ratio between two hypothetical events for two different cases of stress scaling using the models described in this paper. Event 1 is an Mw 6.0 event and event 2 is an Mw 4.0 event. In the self-similar case both have an apparent stress of 3 MPa; in the non-self-similar case the large-event apparent stress is 3 MPa and the smaller one is 1 MPa. Note that the ratio reaches different constant levels. The low-frequency level (LVL) is the ratio of the moments and the high-frequency level (HFL) depends on the stress parameters.) In this paper we derive the
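
    A minimal illustration of the sigmoid and its two levels using omega-square (Brune) spectra; the moments and corner frequencies below are illustrative values with self-similar corner scaling assumed, not the paper's models:

        import numpy as np

        def brune(f, m0, fc):
            """Omega-square source spectrum."""
            return m0 / (1.0 + (f / fc) ** 2)

        f = np.logspace(-2, 2, 500)
        m01, fc1 = 1.3e18, 0.2   # ~Mw 6.0 (illustrative)
        m02, fc2 = 1.3e15, 2.0   # ~Mw 4.0 (illustrative)
        ratio = brune(f, m01, fc1) / brune(f, m02, fc2)
        print(ratio[0])          # low-frequency level -> m01/m02 = 1000
        print(ratio[-1])         # high-frequency level -> (m01/m02)*(fc1/fc2)**2 = 10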

  9. Large scale scientific computing

    SciTech Connect

    Deuflhard, P.; Engquist, B.

    1987-01-01

    This book presents papers on large scale scientific computing. It includes: Initial value problems of ODE's and parabolic PDE's; Boundary value problems of ODE's and elliptic PDE's; Hyperbolic PDE's; Inverse problems; Optimization and optimal control problems; and Algorithm adaptation on supercomputers.

  10. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  11. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters, and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. Since the largest part of the existing Italian catalogues is based on macroseismic intensities, a rough estimate of the error in peak values of ground motion can

  12. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  13. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  14. Can observations of earthquake scaling constrain slip weakening?

    NASA Astrophysics Data System (ADS)

    Abercrombie, Rachel E.; Rice, James R.

    2005-08-01

    We use observations of earthquake source parameters over a wide magnitude range (Mw ~0-7) to place constraints on constitutive fault weakening. The data suggest a scale dependence of apparent stress and stress drop; both may increase slightly with earthquake size. We show that this scale dependence need not imply any difference in fault zone properties for different sized earthquakes. We select 30 earthquakes well recorded at 2.5 km depth at Cajon Pass, California. We use individual and empirical Green's function spectral analysis to improve the resolution of source parameters, including static stress drop (Δσ) and total slip (S). We also measure radiated energy ES. We compare the Cajon Pass results with those from larger California earthquakes, including aftershocks of the 1994 Northridge earthquake, and confirm the results of Abercrombie (1995): μES/M0 << Δσ (where μ = rigidity) and both ES/M0 and Δσ increase as M0 (and S) increases. Uncertainties remain large due to model assumptions and variations between possible models, and earthquake scale independence is possible within the resolution. Assuming that the average trends are real, we define a quantity G' = (Δσ − 2μES/M0)S/2, which is the total energy dissipation in friction and fracture minus σ1S, where σ1 is the final static stress. If σ1 = σd, the dynamic shear strength during the last increments of seismic slip, then G' = G, the fracture energy in a slip-weakening interpretation of dissipation. We find that G' increases with S, from ~10³ J m⁻² at S = 1 mm (M1 earthquakes) to 10⁶-10⁷ J m⁻² at S = 1 m (M6). We tentatively interpret these results within slip-weakening theory, assuming G' ~ G. We consider the common assumption of a linear decrease of strength from the yield stress (σp) with slip (s), up to a slip Dc. In this case, if either Dc, or more generally (σp − σd)Dc, increases with the final slip S, we can match the observations, but this implies the unlikely result that the early weakening behaviour of
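
    A worked example of the quantity defined above (the parameter values are illustrative, chosen near the paper's M6 end where S ~ 1 m):

        mu = 30e9           # rigidity, Pa (assumed)
        stress_drop = 3e6   # static stress drop, Pa (assumed)
        es_over_m0 = 2e-5   # scaled energy ES/M0 (assumed)
        slip = 1.0          # total slip S, m

        g_prime = (stress_drop - 2.0 * mu * es_over_m0) * slip / 2.0
        print(f"G' ~ {g_prime:.1e} J/m^2")  # ~9e5 J/m^2, in the quoted 10^6 range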

  15. Ultralow-Frequency Magnetic Fields Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Fraser-Smith, Antony C.

    2008-06-01

    The Great Alaska Earthquake (M 9.2) of 27 March 1964 was the largest earthquake ever to strike the United States in modern times and one of the largest ever recorded anywhere. Later that year, Moore [1964], in a surprisingly rarely cited paper, reported the occurrence of strong ultralow-frequency (ULF; ≤10 hertz) magnetic field disturbances at Kodiak, Alaska, in the 1.2 hours before the earthquake. That report has since been followed by others [Fraser-Smith et al., 1990; Kopytenko et al., 1993; Hayakawa et al., 1996; see also Molchanov et al., 1992] similarly describing the occurrence of large-amplitude ULF magnetic field fluctuations before other large earthquakes ("large" describes earthquakes with magnitudes M ~ 7 or greater). These reports, involving four separate large earthquakes, were made by four different groups, and the results were published in well-known, refereed scientific journals, so there is no doubt that there is evidence for the existence of comparatively large ULF magnetic field fluctuations preceding large earthquakes.

  16. A Large Scale Automatic Earthquake Location Catalog in the San Jacinto Fault Zone Area Using An Improved Shear-Wave Detection Algorithm

    NASA Astrophysics Data System (ADS)

    White, M. C. A.; Ross, Z.; Vernon, F.; Ben-Zion, Y.

    2015-12-01

    UC San Diego's ANZA network began archiving event-triggered data in 1982. As a result of improved recording technology, continuous waveform data archives are available starting in 1998. This continuous dataset, from 1998 to the present, represents a wealth of potential insight into spatio-temporal seismicity patterns, earthquake physics and the mechanics of the San Jacinto Fault Zone. However, the volume of data renders manual analysis costly. In order to investigate the characteristics of the data in space and time, an automatic earthquake location catalog is needed. To this end, we apply standard earthquake signal processing techniques to the continuous data to detect first-arriving P-waves, in combination with a recently developed S-wave detection algorithm. The resulting dataset of arrival time observations is processed using a grid association algorithm to produce initial absolute locations, which are refined using a location inversion method that accounts for 3-D velocity heterogeneities. Precise relative locations are then derived from the refined absolute locations using the HypoDD double-difference algorithm. Moment magnitudes for the events are estimated from multi-taper spectral analysis. A >650% increase in the S:P pick ratio is achieved using the updated S-wave detection algorithm, when compared to the currently available catalog for the ANZA network. The increased number of S-wave observations leads to improved earthquake location accuracy and reliability (i.e., fewer false event detections). Various aspects of spatio-temporal seismicity patterns and size distributions are investigated. Updated results will be presented at the meeting.
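
    As a generic stand-in for the "standard earthquake signal processing techniques" mentioned above (not the network's production picker), a classic STA/LTA detector:

        import numpy as np

        def sta_lta(trace, dt, sta_win=0.5, lta_win=10.0):
            """Short-term/long-term average ratio of the squared trace,
            computed here with centered moving averages for simplicity."""
            e = trace ** 2
            box = lambda n: np.ones(n) / n
            sta = np.convolve(e, box(int(sta_win / dt)), mode="same")
            lta = np.convolve(e, box(int(lta_win / dt)), mode="same")
            return sta / np.maximum(lta, 1e-20)

        # candidate P picks: np.where(sta_lta(trace, dt=0.01) > 4.0)[0]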

  17. Large earthquake rupture process variations on the Middle America megathrust

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  18. Large Scale Nonlinear Programming.

    DTIC Science & Technology

    1978-06-15

    KEY WORDS: large scale optimization; applications of nonlinear programming. LARGE SCALE NONLINEAR PROGRAMMING, by Garth P. McCormick. 1. Introduction. The general mathematical programming (optimization) problem can be stated in the following form... because the difficulty in solving a general nonlinear optimization problem has as much to do with the nature of the functions involved as it does with the

  19. Modeling long-term hillslope denudation due to large earthquakes: Example from the Wenchuan 2008 earthquake, Longmen Shan, China

    NASA Astrophysics Data System (ADS)

    Gallen, S. F.; Clark, M. K.

    2013-12-01

    Most moderate to large earthquakes in high-relief terrain trigger landslides. Much research has focused on identifying hazards associated with hillslopes that are potentially seismically unstable. Recently, there has been greater recognition of the role of seismically-induced landsliding in drainage basin-to-orogen scale denudation, particularly over geologic timescales. The 2008 Mw 7.9 Wenchuan earthquake in the Longmen Shan Mountains in western Sichuan Province, China, provided an unprecedented opportunity to collect geophysical and geomorphological data sets related to earthquake-driven landsliding. Based on landslide inventories, some suggest that this single event resulted in the displacement of hillslope mass equal to or greater than the coseismic displacement. The magnitude of the event and the availability of data make the Wenchuan earthquake an ideal testing ground for new mechanistic models of earthquake-driven landsliding and landscape evolution. We develop and apply a mechanistic slope stability model that allows estimates of the volume of hillslope mass removed during an earthquake at the regional scale. In our model, slope performance and landslide potential during seismic accelerations are determined using the Newmark sliding block model. In Newmark analysis, surface displacement occurs when ground acceleration exceeds a critical or yield acceleration determined by the shear strength of the material. Site-specific characteristics, such as local slope and rock type, provide the basis for estimating static slope stability conditions, and a given peak ground acceleration (PGA) scenario is used to model potential slope failures. Predicted spatial distributions, statistical characteristics and net volume of landslide mass were generally similar to measured inventories of coseismic landsliding. To extend the coseismic mass-wasting model to geological time scales, we applied a simple stochastic model of earthquake nucleation and empirical PGA attenuation equations to simulate
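
    A minimal rigid sliding-block (Newmark) sketch of the displacement calculation described above; the yield acceleration would in practice come from the static factor of safety and local slope (commonly a_crit ≈ (FS − 1) g sin α), and the input record here is hypothetical:

        import numpy as np

        def newmark_displacement(acc, dt, a_crit):
            """Permanent downslope displacement of a rigid block: the block
            accelerates only while ground acceleration exceeds a_crit and
            decelerates back to rest otherwise (one-way sliding)."""
            v, d = 0.0, 0.0
            for a in acc:
                if v > 0.0 or a > a_crit:
                    v = max(v + (a - a_crit) * dt, 0.0)
                d += v * dt
            return d

        # d = newmark_displacement(ground_acc, dt=0.01, a_crit=1.0)  # SI units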

  20. Detection capability of global earthquakes influenced by large intermediate-depth and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2011-12-01

    This study examined the detection capability of the global CMT catalogue immediately after a large intermediate-depth (70 < depth ≤ 300 km) or deep (300 km < depth) earthquake. Iwata [2008, GJI] revealed that the detection capability is remarkably lower than the ordinary level for several hours after the occurrence of a large shallow (depth ≤ 70 km) earthquake. Since the global CMT catalogue plays an important role in studies of global earthquake forecasting and seismicity patterns [e.g., Kagan and Jackson, 2010, Pageoph], this characteristic of the catalogue should be investigated carefully. We stacked global shallow earthquake sequences, taken from the global CMT catalogue from 1977 to 2010, after a large intermediate-depth or deep earthquake. Then, we utilized a statistical model representing an observed magnitude-frequency distribution of earthquakes [e.g., Ringdal, 1975, BSSA; Ogata and Katsura, 1993, GJI]. The applied model is a product of the Gutenberg-Richter law and a detection rate function q(M). Following previous studies, the cumulative distribution of the normal distribution was used as q(M). This model enables us to estimate μ, the magnitude at which the detection rate of earthquakes is 50 per cent. Finally, a Bayesian approach with a piecewise linear approximation [Iwata, 2008, GJI] was applied to the stacked data to estimate the temporal change of μ. Consequently, we found a significantly lowered detection capability after an intermediate-depth or deep earthquake of magnitude 6.5 or larger. The lowered detection capability lasts for several hours to half a day. During this period of low detection capability, a few per cent of M ≥ 6.0 earthquakes and a few tens of per cent of M ≥ 5.5 earthquakes go undetected in the global CMT catalogue, while the magnitude completeness threshold of the catalogue has been estimated to be around 5.5 [e.g., Kagan, 2003, PEPI].
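
    A sketch of the magnitude-frequency model described above, the Gutenberg-Richter law multiplied by a normal-CDF detection rate q(M), fit by maximum likelihood for μ (the variable names and starting values are placeholders; the actual study uses a Bayesian piecewise-linear treatment of the temporal change in μ):

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def neg_log_like(params, mags):
            """Density f(M) proportional to exp(-beta*M) * Phi((M - mu)/sigma)."""
            beta, mu, sigma = params
            grid = np.linspace(mags.min() - 2.0, mags.max() + 2.0, 2000)
            c = np.trapz(np.exp(-beta * grid) * norm.cdf((grid - mu) / sigma), grid)
            ll = (np.sum(-beta * mags + norm.logcdf((mags - mu) / sigma))
                  - mags.size * np.log(c))
            return -ll

        # res = minimize(neg_log_like, x0=[2.3, 5.0, 0.3], args=(observed_mags,),
        #                method="L-BFGS-B", bounds=[(0.5, 5), (3, 7), (0.05, 2)])
        # mu_hat = res.x[1]  # magnitude at 50 per cent detection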

  1. Scaling of intraplate earthquake recurrence interval with fault length and implications for seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Marrett, Randall

    1994-12-01

    Consensus indicates that faults follow power-law scaling, although significant uncertainty remains about the values of important parameters. Combining these scaling relationships with power-law scaling relationships for earthquakes suggests that intraplate earthquake recurrence interval scales with fault length. Regional scaling data may be locally calibrated to yield a site-specific seismic hazard assessment tool. Scaling data from small faults (those that do not span the seismogenic layer) suggest that recurrence interval varies as a negative power of fault length. Due to uncertainties regarding the recently recognized changes in scaling for large earthquakes, it is unclear whether recurrence interval varies as a negative or positive power of fault length for large faults (those that span the seismogenic layer). This question is of critical importance for seismic hazard assessment.

  2. Large Earthquakes Disrupt Groundwater System by Breaching Aquitards

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.; Manga, M.; Liao, X.; Wang, L. P.

    2016-12-01

    Changes to groundwater systems caused by large earthquakes are widely recognized. Some changes have been attributed to increases in the vertical permeability, but basic questions remain: How do increases in the vertical permeability occur? How frequently do they occur? How fast does the vertical permeability recover after the earthquake? Is there a quantitative measure for detecting the occurrence of aquitard breaching? Here we attempt to answer these questions by examining data accumulated in the past 15 years. Analyses of increased stream discharges and their geochemistry after large earthquakes show evidence that the excess water originates from groundwater released from high elevations by a large increase in the vertical permeability. Water-level data from a dense network of clustered wells in a sedimentary basin near the epicenter of the 1999 M7.6 Chi-Chi earthquake in western Taiwan show that, while most confined aquifers remained confined after the earthquake, about 10% of the clustered wells show evidence of coseismic breaching of aquitards and a great increase in the vertical permeability. Water levels in wells without evidence of coseismic breaching of aquitards show similar tidal responses before and after the earthquake; wells with evidence of coseismic breaching of aquitards, on the other hand, show distinctly different tidal responses before and after the earthquake, and their aquifers became hydraulically connected for many months thereafter. Breaching of aquitards by large earthquakes has significant implications for a number of societal issues, such as the safety of water resources, the security of underground waste repositories, and the production of oil and gas. The method demonstrated here may be used for detecting the occurrence of aquitard breaching by large earthquakes in other seismically active areas.
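
    A change in tidal response is the quantitative measure invoked above; a minimal least-squares estimate of the M2 (12.42 h) amplitude and phase in a well record, to be compared before and after the earthquake (the array names are hypothetical):

        import numpy as np

        def m2_response(level, t_hours):
            """Amplitude and phase of the M2 tidal constituent in a water-level
            series, via linear least squares on cos/sin terms plus an offset."""
            w = 2.0 * np.pi / 12.42
            G = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours),
                                 np.ones_like(t_hours)])
            (a, b, _), *_ = np.linalg.lstsq(G, level, rcond=None)
            return np.hypot(a, b), np.arctan2(b, a)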

  3. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  4. Great Earthquakes With and Without Large Slip to the Trench

    NASA Astrophysics Data System (ADS)

    Mori, J. J.

    2013-12-01

    The 2011 Tohoku-oki earthquake produced a huge amount of slip (40 to 60 meters) on the shallow portion of the subduction zone close to the trench. This large displacement was largely unexpected for this region and caused the very large and damaging tsunami along the northeast coast of Honshu. For other subduction zones around the world, we examine the possibility of large slip to the trench in past large and great earthquakes. Since the trench region is generally far offshore, it is often difficult to resolve the amount of slip there from onland geodetic and strong-motion data. We use a variety of observations, including slip distribution models, aftershock locations, local coastal deformation, and tsunami heights, to determine which events likely had large amounts of slip close to the trench. Tsunami earthquakes, such as 1992 Nicaragua and 2006 Java, likely had large shallow slip. Some typical subduction earthquakes, such as 1968 Tokachi-oki and 2003 Tokachi-oki (located in regions north of the source area of the 2011 Tohoku-oki earthquake), likely did not. We will discuss possible factors that influence the slip distribution on the shallow area of subduction megathrusts. Using results from the Japan Trench Fast Drilling Project (JFAST), which sampled the fault in the region of large slip, we can begin to understand the conditions of very large fault slip. Are there characteristic features in the material properties of faults that have large slip? Can we determine if these regions have high plate coupling and accumulate stress?

  5. Tremor and the Depth Extent of Slip in Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Beroza, G. C.; Brown, J. R.; Ide, S.

    2013-05-01

    We survey the evidence for the distribution of tremor and mainshock slip. In Southwest Japan, where tremor is well located, it outlines the down-dip edge of slip in the 1944 and 1946 Nankai earthquakes. In Alaska and the Aleutians, tremor locations and slip distributions are subject to greater uncertainty, but within that uncertainty they are consistent with the notion that tremor outlines the down-dip limit of mainshock slip. In Mexico, tremor locations and the extent of rupture in large (M > 7) earthquakes are also uncertain, but show a similar relationship. Taken together, these observations suggest that tremor may provide important information on the depth extent of rupture in regions where there have been no large earthquakes with which to test that hypothesis. If applied to the Cascadia subduction zone, it suggests slip will extend farther inland than previously assumed. If applied to the San Andreas Fault, it suggests slip will extend deeper than has previously been assumed.

  6. Outline of the 2016 Kumamoto, Japan, Earthquakes and lessons for a large urban earthquake in Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Hirata, N.

    2016-12-01

    A series of devastating earthquakes hit the Kumamoto district of Kyushu, Japan, in April 2016. The M6.5 event occurred at 21:26 on April 14th (JST) and, 28 hours later, the M7.3 event occurred at 01:25 on April 16th (JST) at almost the same location, with a depth of 10 km. Both earthquakes were felt with a seismic intensity of 7 on the Japan Meteorological Agency (JMA) scale at Mashiki Town; intensity 7 is the highest level by definition. Very strong accelerations were observed, with 1,580 gal at the KiK-net Mashiki station for the M6.5 event and 1,791 gal at the Ohtsu City station for the M7.3 event. As a result, more than 8,000 houses totally collapsed, 26,000 heavily collapsed, and 120,000 were partially damaged. Forty-nine people were killed directly by the quakes and 32 died indirectly. The most important lesson from the Kumamoto earthquakes is that very strong ground motion may hit again immediately after the first large event, within a few days; this has serious implications for houses already damaged by the first quake. The 2016 Kumamoto sequence also included many strong aftershocks, among them four M5.8-5.9 events by April 18th. At the peak, more than 180,000 people took shelter for fear of the many strong aftershocks. I will discuss both the natural and human aspects of the Kumamoto earthquake disaster caused by these shallow inland large earthquakes, drawing lessons for large metropolitan earthquakes in Tokyo, Japan.

  7. The velocity effects of large historical earthquakes in Chinese mainland

    NASA Astrophysics Data System (ADS)

    Tan, Weijie; Dong, Danan; Wu, Bin

    2016-04-01

    Accompanying the collision between the Indian and Eurasian plates, China has experienced many large earthquakes over the past 100 years. These large earthquakes are mainly located along several seismic belts in Tien Shan, the Tibetan Plateau, and northern China. The postseismic deformation and stress accumulation induced by these historical earthquakes are important for assessing contemporary seismic hazard. The postseismic deformation induced by historical large earthquakes also influences the observed present-day velocity field. The relaxation of the viscoelastic asthenosphere is modeled on a layered, spherically symmetric earth with Maxwell rheology. The layer thicknesses, densities ρ, and P-wave velocities Vp are taken from PREM; the shear moduli are derived from ρ and Vp. The viscosity between the lower crust and upper mantle adopted in this study is 1 × 10^19 Pa·s. Viscoelastic relaxation contributions due to 34 historical large earthquakes in China from 1900 to 2001 are calculated using the VISCO1D-v3 program developed by Pollitz (1997). We calculated the model-predicted velocity field in China in 2015 caused by these historical large earthquakes. The pattern of the predicted velocity field is consistent with the present-day motion of the crust, with peak velocities reaching 6 mm/yr. Southwestern China moves northeastward, and a significant rotation occurs at the edge of the Tibetan Plateau. The velocity field caused by historical large earthquakes provides a basis for isolating the velocity field caused by contemporary tectonic movement from the geodetic observations. It also provides critical information for investigating regional stress accumulation and for assessing mid-term to long-term earthquake risk.
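
    As a rough check on the timescale implied by the adopted rheology, the sketch below (Python) computes the Maxwell relaxation time τ = η/μ from the quoted viscosity; the shear modulus is derived from density and Vp under an assumed Poisson-solid ratio (Vp/Vs = √3), and the density and velocity values are illustrative PREM-like numbers, not the paper's.

        # Maxwell relaxation time implied by the adopted viscosity; a minimal
        # sketch. Assumption: Poisson solid (Vp/Vs = sqrt(3)), so mu = rho*Vp**2/3.
        eta = 1.0e19   # viscosity, Pa*s (value adopted in the abstract)
        rho = 3300.0   # density, kg/m^3 (illustrative PREM-like value)
        vp = 8000.0    # P-wave velocity, m/s (illustrative PREM-like value)

        mu = rho * vp ** 2 / 3.0   # shear modulus, Pa (~7e10)
        tau = eta / mu             # Maxwell relaxation time, s
        print(f"mu = {mu:.2e} Pa, tau = {tau / 3.156e7:.1f} yr")   # ~4.5 yr

    With these placeholder values the relaxation time comes out to a few years, which sets the scale of the postseismic transients the study accumulates over a century of events.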

  8. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
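
    In the spirit of the tests described (though not the author's exact procedure), the Python sketch below compares a simple clustering statistic, the variance-to-mean ratio of event counts in time bins, against catalogs simulated under a constant-rate Poisson null; the "observed" times are random placeholders, not a real catalog.

        import random

        def dispersion_index(times, t_span, n_bins=50):
            """Variance-to-mean ratio of counts in equal bins (1.0 for Poisson)."""
            counts = [0] * n_bins
            for t in times:
                counts[min(int(t / t_span * n_bins), n_bins - 1)] += 1
            mean = sum(counts) / n_bins
            return sum((c - mean) ** 2 for c in counts) / n_bins / mean

        random.seed(1)
        T = 115.0                                              # years of catalog
        observed = [random.uniform(0, T) for _ in range(80)]   # placeholder times
        obs_stat = dispersion_index(observed, T)

        exceed, n_sims = 0, 5000
        for _ in range(n_sims):
            sim = [random.uniform(0, T) for _ in range(len(observed))]
            if dispersion_index(sim, T) >= obs_stat:
                exceed += 1
        print(f"p = {exceed / n_sims:.3f}")   # large p: cannot reject the null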

  9. Deeper penetration of large earthquakes on seismically quiescent faults

    NASA Astrophysics Data System (ADS)

    Jiang, Junle; Lapusta, Nadia

    2016-06-01

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  10. Deeper penetration of large earthquakes on seismically quiescent faults.

    PubMed

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  11. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
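
    The combinatorial explosion mentioned for multi-hypothesis trackers can be made concrete with a small counting sketch (Python); the enumeration below, in which each measurement is assigned to at most one track or declared a false alarm, is a simplified stand-in for a real MHT hypothesis tree.

        import math

        def assignment_hypotheses(n_tracks, n_meas):
            """One-frame assignment hypotheses: choose k measurements and map
            them to an ordered selection of k distinct tracks; the remaining
            measurements are treated as false alarms."""
            return sum(math.comb(n_meas, k) * math.perm(n_tracks, k)
                       for k in range(min(n_tracks, n_meas) + 1))

        for n in (2, 5, 10):
            print(n, assignment_hypotheses(n, n))
        # 2 -> 7, 5 -> 1546, 10 -> ~2.3e8: growth is factorial-like,
        # so aggressive pruning is mandatory as targets multiply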

  12. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  13. Local observations of the onset of a large earthquake: 28 June 1992 Landers, California

    USGS Publications Warehouse

    Abercrombie, Rachel; Mori, Jim

    1994-01-01

    The Landers earthquake (MW 7.3) of 28 June 1992 had a very emergent onset. The first large-amplitude arrivals are delayed by about 3 sec with respect to the origin time and are preceded by smaller-scale slip. Other large earthquakes have been observed to have similar emergent onsets, but the Landers event is one of the first to be well recorded on nearby stations. We used these recordings to investigate the spatial relationship between the hypocenter and the onset of the large energy release, and to determine the slip function of the 3-sec nucleation process. Relative location of the onset of the large energy release with respect to the initial hypocenter indicates its source was between 1 and 4 km north of the hypocenter and delayed by approximately 2.5 sec. Three-station array analysis of the P wave shows that the large-amplitude onset arrives with a faster apparent velocity compared to the first arrivals, indicating that the large-amplitude source was several kilometers deeper than the initial onset. An ML 2.8 foreshock, located close to the hypocenter, was used as an empirical Green's function to correct for path and site effects in the first 3 sec of the mainshock seismogram. The resultant deconvolution produced a slip function that showed two subevents preceding the main energy release, an MW 4.4 followed by an MW 5.6. These subevents do not appear anomalous in comparison to simple moderate-sized earthquakes, suggesting that they were normal events which just triggered or grew into a much larger earthquake. If small and moderate-sized earthquakes commonly “detonate” much larger events, this implies that the dynamic stresses during earthquake rupture are at least as important as long-term static stresses in causing earthquakes, and the prospects of reliable earthquake prediction from premonitory phenomena are not improved.
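
    The empirical Green's function step can be sketched as a water-level spectral division (Python with NumPy). This is one common implementation of EGF deconvolution, not necessarily the authors' exact procedure; the two-pulse "mainshock" below is synthetic.

        import numpy as np

        def egf_deconvolve(mainshock, egf, water_level=0.01):
            """Water-level spectral division: relative source time function."""
            n = len(mainshock)
            M, G = np.fft.rfft(mainshock), np.fft.rfft(egf, n)
            floor = water_level * np.abs(G).max()   # stabilize small spectral values
            G = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
            return np.fft.irfft(M / G, n)

        dt = 0.01
        t = np.arange(0.0, 6.0, dt)
        egf = np.exp(-((t - 1.0) / 0.05) ** 2)       # impulsive small event
        main = 0.3 * egf + 4.0 * np.roll(egf, 250)   # two subevents, 2.5 s apart
        stf = egf_deconvolve(main, egf)
        print(f"largest subevent at {stf.argmax() * dt:.2f} s")   # ~2.50 s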

  14. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  15. Large earthquakes create vertical permeability by breaching aquitards

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Yuen; Liao, Xin; Wang, Lee-Ping; Wang, Chung-Ho; Manga, Michael

    2016-08-01

    Hydrologic responses to earthquakes and their mechanisms have been widely studied. Some responses have been attributed to increases in vertical permeability. However, basic questions remain: How do increases in vertical permeability occur? How frequently do they occur? Is there a quantitative measure for detecting the occurrence of aquitard breaching? We try to answer these questions by examining data from a dense network of ~50 monitoring stations of clustered wells in a sedimentary basin near the epicenter of the 1999 M7.6 Chi-Chi earthquake in western Taiwan. While most stations show evidence that confined aquifers remained confined after the earthquake, about 10% of the stations show evidence of coseismic breaching of aquitards, creating vertical permeability as high as that of aquifers. The water levels in wells without evidence of coseismic breaching of aquitards show tidal responses similar to that of a confined aquifer before and after the earthquake. Those wells with evidence of coseismic breaching of aquitards, on the other hand, show distinctly different postseismic tidal responses. Furthermore, the postseismic tidal responses of different aquifers became strikingly similar, suggesting that the aquifers became hydraulically connected and the connection was maintained for many months thereafter. Breaching of aquitards by large earthquakes has significant implications for a number of societal issues, such as the safety of water resources, the security of underground waste repositories, and the production of oil and gas. The method demonstrated here may be used for detecting the occurrence of aquitard breaching by large earthquakes in other seismically active areas.
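
    A minimal version of the tidal-response diagnostic is a least-squares fit of the M2 constituent to a well water-level record (Python with NumPy); comparing the fitted amplitude and phase before and after the earthquake then amounts to fitting the two segments separately. The record below is synthetic, and this is an assumed simplification of the authors' analysis.

        import numpy as np

        M2_HOURS = 12.4206012                        # M2 tidal period, hours

        def m2_amp_phase(hours, level):
            """Least-squares fit of the M2 constituent: amplitude and phase."""
            w = 2.0 * np.pi / M2_HOURS
            A = np.column_stack([np.cos(w * hours), np.sin(w * hours),
                                 np.ones_like(hours)])
            (a, b, _), *_ = np.linalg.lstsq(A, level, rcond=None)
            return np.hypot(a, b), np.degrees(np.arctan2(b, a))

        rng = np.random.default_rng(0)
        hours = np.arange(0.0, 24 * 30)              # one month, hourly samples
        level = (0.05 * np.cos(2.0 * np.pi / M2_HOURS * hours - 0.6)
                 + 0.01 * rng.standard_normal(hours.size))
        amp, phase = m2_amp_phase(hours, level)
        print(f"M2 amplitude {amp:.3f} m, phase {phase:.1f} deg")   # ~0.05 m, ~34 deg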

  16. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension and analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
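
    For reference, the point-source stress drop estimates compiled in studies like this one typically come from the circular-crack formula; a worked sketch (Python) with one common choice of the Brune constant and illustrative parameter values:

        # Circular-crack (Eshelby) stress drop from a Brune-type source radius:
        # delta_sigma = (7/16) * M0 / a**3, with a = k * beta / fc.
        k = 0.372      # Brune-model constant for S waves (one common choice)
        beta = 3500.0  # shear-wave speed, m/s
        M0 = 1.0e17    # seismic moment, N*m (Mw ~ 5.3); illustrative
        fc = 1.0       # corner frequency, Hz; illustrative

        a = k * beta / fc                       # source radius, m
        stress_drop = 7.0 / 16.0 * M0 / a ** 3  # Pa
        print(f"a = {a:.0f} m, stress drop = {stress_drop / 1e6:.1f} MPa")  # ~20 MPa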

  17. Large earthquake processes in the northern Vanuatu subduction zone

    NASA Astrophysics Data System (ADS)

    Cleveland, K. Michael; Ammon, Charles J.; Lay, Thorne

    2014-12-01

    The northern Vanuatu (formerly New Hebrides) subduction zone (11°S to 14°S) has experienced large shallow thrust earthquakes with Mw > 7 in 1966 (MS 7.9, 7.3), 1980 (Mw 7.5, 7.7), 1997 (Mw 7.7), 2009 (Mw 7.7, 7.8, 7.4), and 2013 (Mw 8.0). We analyze seismic data from the latter four earthquake sequences to quantify the rupture processes of these large earthquakes. The 7 October 2009 earthquakes occurred in close spatial proximity over about 1 h in the same region as the July 1980 doublet. Both sequences activated widespread seismicity along the northern Vanuatu subduction zone. The focal mechanisms indicate interplate thrusting, but there are differences in waveforms that establish that the events are not exact repeats. With an epicenter near the 1980 and 2009 events, the 1997 earthquake appears to have been a shallow intraslab rupture below the megathrust, with strong southward directivity favoring a steeply dipping plane. Some triggered interplate thrusting events occurred as part of this sequence. The 1966 doublet ruptured north of the 1980 and 2009 events and also produced widespread aftershock activity. The 2013 earthquake rupture propagated southward from the northern corner of the trench with shallow slip that generated a substantial tsunami. The repeated occurrence of large earthquake doublets along the northern Vanuatu subduction zone is remarkable considering the doublets likely involved overlapping, yet different combinations of asperities. The frequent occurrence of large doublet events and rapid aftershock expansion in this region indicate the presence of small, irregularly spaced asperities along the plate interface.

  18. 1/f and the Earthquake Problem: Scaling constraints to facilitate operational earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Yoder, M. R.; Rundle, J. B.; Glasscoe, M. T.

    2013-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or '1/f', nature of seismic sequences. Specifically, the rate of occurrence of earthquakes decreases exponentially with magnitude m or, equivalently, as a power law in scalar moment M. With respect to this '1/f problem,' it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture-length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area), in combination with a metric quantifying rate trends in local seismicity, to the local earthquake magnitude potential - the magnitudes of earthquakes the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning-parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents.
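
    The two scaling ingredients named above have compact closed forms; the Python sketch below states them with illustrative placeholder parameters, not the authors' calibration:

        def gr_rate(m, a=6.0, b=1.0):
            """Gutenberg-Richter: expected events with magnitude >= m per year."""
            return 10.0 ** (a - b * m)

        def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
            """Omori-Utsu: aftershock rate at t days after the mainshock."""
            return K / (c + t_days) ** p

        print(f"N(>=5)/yr = {gr_rate(5.0):.1f}")                # 10.0 with a=6, b=1
        print(f"rate at t = 1 day: {omori_rate(1.0):.1f}/day")  # ~95/day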

  19. Crack fusion dynamics: A model for large earthquakes

    SciTech Connect

    Newman, W.I.; Knopoff, L.

    1982-07-01

    The physical processes of the fusion of small cracks into larger ones are nonlinear in character. A study of the nonlinear properties of fusion may lead to an understanding of the instabilities that give rise to clustering of large earthquakes. We have investigated the properties of simple versions of fusion processes to see if instabilities culminating in repetitive massive earthquakes are possible. We have taken into account such diverse phenomena as the production of aftershocks, the rapid extension of large cracks to overwhelm and absorb smaller cracks, the influence of anelastic creep-induced time delays, healing, the genesis of 'juvenile' cracks due to plate motions, and others. A preliminary conclusion is that the time delays introduced by anelastic creep may be responsible for producing the catastrophic instabilities characteristic of large earthquakes as well as aftershock sequences. However, it seems that nonlocal influences, i.e., the spatial diffusion of cracks, may play a dominant role in producing episodes of seismicity and clustering.

  20. Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults

    USGS Publications Warehouse

    Harris, R.A.

    2004-01-01

    Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation, at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of the elements, geometry, material, and stress, can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.

  1. Power-law time distribution of large earthquakes.

    PubMed

    Mega, Mirko S; Allegrini, Paolo; Grigolini, Paolo; Latora, Vito; Palatella, Luigi; Rapisarda, Andrea; Vinciguerra, Sergio

    2003-05-09

    We study the statistical properties of the time distribution of seismicity in California by means of a new method of analysis, the diffusion entropy. We find that the distribution of time intervals between a large earthquake (the main shock of a given seismic sequence) and the next one does not obey Poisson statistics, as assumed by current models. We prove that this distribution is an inverse power law with an exponent mu = 2.06 +/- 0.01. We propose a long-range model that reproduces the main properties of the diffusion entropy and describes the seismic triggering mechanisms induced by large earthquakes.
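
    Diffusion entropy analysis itself is compact; below is a minimal sketch (Python with NumPy) applied to a synthetic catalog with power-law waiting times. The slope of S(t) versus ln t estimates the scaling index δ, and for an inverse-power-law waiting-time distribution with exponent μ between 2 and 3 one expects δ = 1/(μ - 1); the catalog and parameters are synthetic stand-ins for the California data.

        import numpy as np

        def diffusion_entropy(series, t):
            """Shannon entropy of window sums x(t) over a 0/1 event series."""
            csum = np.concatenate(([0], np.cumsum(series)))
            xs = csum[t:] - csum[:-t]                 # diffusion variable x(t)
            _, counts = np.unique(xs, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log(p)).sum())

        rng = np.random.default_rng(0)
        mu = 2.5                                           # synthetic exponent
        waits = rng.random(20000) ** (-1.0 / (mu - 1.0))   # Pareto waiting times
        events = np.cumsum(waits.astype(int) + 1)          # integer event times
        series = np.zeros(events[-1] + 1, dtype=int)
        series[events] = 1

        for t in (10, 100, 1000):
            print(t, round(diffusion_entropy(series, t), 3))
        # fit S(t) = A + delta*ln(t): delta ~ 1/(mu - 1) = 0.67 here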

  2. An earthquake strength scale for the media and the public

    USGS Publications Warehouse

    Johnston, A.C.

    1990-01-01

    A local engineer, E.P. Hailey, pointed this problem out to me shortly after the Loma Prieta earthquake. He felt that three problems limited the usefulness of magnitude in describing an earthquake to the public: (1) most people don't understand that it is not a linear scale; (2) of those who do realize the scale is not linear, very few understand that adjacent points on the scale differ by a factor of ten in ground motion and 32 in energy release; and (3) even those who understand the first two points have trouble putting a given magnitude value into terms they can relate to. In summary, Mr. Hailey wondered why seismologists can't come up with an earthquake scale that doesn't confuse everyone and that conveys a sense of true relative size. Here, then, is an attempt to construct such a scale.
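
    Hailey's point (2) is pure arithmetic, spelled out in the short Python sketch below:

        # One magnitude unit corresponds to a factor of 10 in ground-motion
        # amplitude and 10**1.5 ~ 32 in radiated energy.
        def amplitude_ratio(dm):
            return 10.0 ** dm

        def energy_ratio(dm):
            return 10.0 ** (1.5 * dm)

        # an M7 shakes ~10x stronger and releases ~32x the energy of an M6;
        # an M8 releases ~1000x the energy of an M6
        print(amplitude_ratio(1.0), round(energy_ratio(1.0)))   # 10.0 32
        print(round(energy_ratio(2.0)))                         # 1000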

  3. Automated Determination of Magnitude and Source Extent of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun

    2017-04-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time is still a challenge. Mw is an accurate measure for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions, such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude by considering P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus
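
    The flavor of the estimate, magnitude from the logarithms of peak P-wave displacement, epicentral distance, and a back-projection-derived source duration, is sketched below in Python. The regression coefficients are illustrative placeholders, not the values published by Hara [2007].

        import math

        A, B, C, D = 0.8, 0.7, 0.8, 7.0   # hypothetical regression coefficients

        def estimate_magnitude(p_disp_m, distance_deg, duration_s):
            """Empirical magnitude from log displacement, distance, duration."""
            return (A * math.log10(p_disp_m)
                    + B * math.log10(distance_deg)
                    + C * math.log10(duration_s)
                    + D)

        # hypothetical event: 2 mm peak P displacement at 60 deg, 120 s duration
        print(round(estimate_magnitude(0.002, 60.0, 120.0), 1))  # ~7.7 here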

  4. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
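
    The core of such a decomposition can be sketched in a few lines (Python with NumPy): at each frequency, observed log spectra are iteratively partitioned into per-event and per-station terms by alternating conditional averages. Path terms, the formal uncertainty estimates, and the non-self-similar parameterization of the actual method are omitted, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_eq, n_sta = 200, 30
        true_src = rng.normal(0.0, 1.0, n_eq)     # per-event source terms
        true_rcv = rng.normal(0.0, 0.5, n_sta)    # per-station receiver terms
        obs = (true_src[:, None] + true_rcv[None, :]
               + rng.normal(0.0, 0.1, (n_eq, n_sta)))  # log spectra, one frequency

        src, rcv = np.zeros(n_eq), np.zeros(n_sta)
        for _ in range(20):                       # alternate conditional means
            src = (obs - rcv[None, :]).mean(axis=1)
            rcv = (obs - src[:, None]).mean(axis=0)
            rcv -= rcv.mean()                     # pin the constant trade-off

        print(f"source-term correlation: {np.corrcoef(src, true_src)[0, 1]:.3f}")

    The constant shift between source and receiver terms is the usual non-uniqueness of such decompositions; pinning the mean receiver term is the simplest way to resolve it.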

  5. Large Earthquakes in Developing Countries: Estimating and Reducing their Consequences

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2003-12-01

    Recent efforts to reduce the risk of earthquakes in developing countries have been diverse, earnest, and inadequate. The earthquake risk in developing countries is large and growing rapidly. It is largely ignored. Unless something is done - quickly - to reduce it, both developing and developed countries will suffer human and economic losses far greater than have been experienced in the past. GeoHazards International (GHI) is a nonprofit organization that has attempted to reduce the death and suffering caused by earthquakes in the world's most vulnerable communities, through preparedness, mitigation and prevention. Its approach has included raising awareness, strengthening local institutions and launching mitigation activities, particularly for schools. GHI and its partners around the world have achieved some success: thousands of school children are safer, hundreds of cities are aware of their risk, tens of cities have been assessed and advised, and some local organizations have been strengthened. But there is disturbing evidence that what is being done is insufficient. The problem outpaces the cure. A new program is now being considered that would attempt to improve earthquake-resistant construction of schools, internationally, by publicizing well-managed programs around the world that design, construct and maintain earthquake-resistant schools. While focused on schools, this program might have broader applications in the future.

  6. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
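
    The exterior penalty idea the report builds on is easy to state; below is a minimal one-variable sketch (Python), with a made-up test problem and plain gradient descent standing in for the production minimizer:

        def f(x):          # objective: minimize (x - 3)^2
            return (x - 3.0) ** 2

        def g(x):          # constraint g(x) <= 0, i.e. x <= 1
            return x - 1.0

        def penalized_grad(x, r, h=1e-6):
            """Central-difference gradient of f(x) + r*max(0, g(x))**2."""
            phi = lambda y: f(y) + r * max(0.0, g(y)) ** 2
            return (phi(x + h) - phi(x - h)) / (2.0 * h)

        x, r = 0.0, 1.0
        for _ in range(5):             # outer loop: raise the penalty weight
            step = 0.5 / (1.0 + r)     # keep gradient descent stable
            for _ in range(5000):      # inner loop: unconstrained minimization
                x -= step * penalized_grad(x, r)
            r *= 10.0
        print(round(x, 3))             # -> ~1.0, the constrained optimum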

  7. Large historical earthquakes and tsunamis in a very active tectonic rift: the Gulf of Corinth, Greece

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Ioanna; Papadopoulos, Gerassimos

    2014-05-01

    The Gulf of Corinth is an active tectonic rift controlled by E-W trending normal faults, with an uplifted footwall in the south and a subsiding hangingwall with antithetic faulting in the north. Regional geodetic extension rates up to about 1.5 cm/yr have been measured, among the highest for tectonic rifts on Earth, while seismic slip rates up to about 1 cm/yr have been estimated. Large earthquakes with magnitudes, M, up to about 7 have been historically documented and instrumentally recorded. In this paper we compile historical documentation of earthquake and tsunami events occurring in the Gulf of Corinth from antiquity up to the present. The completeness of the reported events improves with time, particularly after the 15th century. The majority of tsunamis were caused by earthquake activity, although aseismic landsliding is a relatively frequent agent of tsunami generation in the Gulf of Corinth. We focus on better understanding the process of tsunami generation by earthquakes. To this aim we have considered the elliptical rupture zones of all the strong (M ≥ 6.0) historical and instrumental earthquakes known in the Gulf of Corinth. We have taken into account rupture zones determined by previous authors. However, the magnitudes, M, of historical earthquakes were recalculated from a set of empirical relationships between M and seismic intensity established for earthquakes occurring in Greece during the instrumental era of seismicity. For this application, the macroseismic field of each of the earthquakes was identified and seismic intensities were assigned. Another set of empirical relationships, M/L and M/W, for instrumentally recorded earthquakes in the Mediterranean region was applied to calculate rupture zone dimensions, where L = rupture zone length and W = rupture zone width. The positions of the rupture zones were decided on the basis of the localities of the highest seismic intensities and co-seismic ground failures, if any, while the orientation of the maximum
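
    The rupture-dimension step can be illustrated with the usual log-linear form of such empirical relations (Python). The coefficients below are the familiar Wells and Coppersmith (1994) all-slip-type values, quoted from memory and used only as stand-ins for the Mediterranean-specific regressions the authors applied.

        def rupture_length_km(M):   # log10 L = -2.44 + 0.59*M (subsurface length)
            return 10.0 ** (-2.44 + 0.59 * M)

        def rupture_width_km(M):    # log10 W = -1.01 + 0.32*M (downdip width)
            return 10.0 ** (-1.01 + 0.32 * M)

        for M in (6.0, 6.5, 7.0):
            print(M, round(rupture_length_km(M), 1), round(rupture_width_km(M), 1))
        # e.g. M 6.5 -> L ~ 25 km, W ~ 12 km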

  8. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    NASA Astrophysics Data System (ADS)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks the symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area versus slip scaling exponent is derived. The relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and for induced, hydraulic-fracturing seismicity are explained in terms of their different triggering mechanisms: applied stress increase and fault strength reduction, respectively.
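
    The sampling ingredient named in the abstract is easy to demonstrate; the Python sketch below uses von Neumann acceptance-rejection to draw magnitudes from a Gutenberg-Richter density p(m) proportional to 10^(-b*m) on [m_min, m_max], with illustrative parameters, and checks the recovered b-value with Aki's estimator.

        import math
        import random

        def sample_gr_magnitude(b=1.0, m_min=2.0, m_max=8.0):
            """Rejection sampling: propose uniformly, accept with p(m)/p_max."""
            p_max = 10.0 ** (-b * m_min)          # density maximum at m_min
            while True:
                m = random.uniform(m_min, m_max)
                if random.random() < 10.0 ** (-b * m) / p_max:
                    return m

        random.seed(0)
        sample = [sample_gr_magnitude() for _ in range(20000)]
        mean_m = sum(sample) / len(sample)
        b_hat = math.log10(math.e) / (mean_m - 2.0)   # Aki's estimator, m_min = 2
        print(round(b_hat, 2))                        # ~1.0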

  9. Variation of large elastodynamic earthquakes on complex fault systems

    NASA Astrophysics Data System (ADS)

    Shaw, B. E.

    2004-12-01

    One of the biggest assumptions, and a source of some of the biggest uncertainties in earthquake hazard estimation, is the role of fault segmentation in controlling large earthquake ruptures. Here we apply a new model which produces sequences of elastodynamic earthquake events on complex segmented fault systems, and use these simulations to quantify the variation of large events. We find a number of important systematic effects of segment geometry on the slip variation and the repeat-time variation of large events, including an increase in variation at the ends of segments and a decrease in variation for the longest segments. We find both quantitative and qualitative differences between slip variation and time variation, so slip variation and time variation are not simple proxies for each other. The model both generates self-consistent complex fault geometries and generates self-consistent elastodynamic events on those geometries. This geometrical self-consistency is important in ensuring that strain is compatibly accommodated in the long run over many earthquake cycles. The self-consistency also reduces the number of things which must be specified, by allowing the fault system to self-organize from a simple physics. Because of the numerical efficiency of the model, we can generate long sequences of events and study the statistics of the populations. The long sequences are critical here in that the stresses left over by previous events form the setting for subsequent events. With this model, we can thus begin to address the fundamental questions of the interaction of geometry and dynamics over many earthquake cycles.

  10. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity in the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity.

  11. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity in the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  12. Earthquake Hazard and Risk Assessment Based on Unified Scaling Law for Earthquakes: State of Gujarat, India

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Nekrasova, Anastasia; Kossobokov, Vladimir

    2017-03-01

    The Gujarat state of India is one of the most seismically active intercontinental regions of the world. Historically, it has experienced many damaging earthquakes, including the devastating 1819 Rann of Kachchh and 2001 Bhuj earthquakes. The effect of the latter is grossly underestimated by the Global Seismic Hazard Assessment Program (GSHAP). To assess a more adequate earthquake hazard for the state of Gujarat, we apply the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter recurrence relation by taking into account the naturally fractal distribution of earthquake loci. USLE has evident implications, since any estimate of seismic hazard depends on the size of the territory considered and, therefore, may differ dramatically from the actual one when scaled down to the proportion of the area of interest (e.g. of a city) from the enveloping area of investigation. We cross-compare seismic hazard maps compiled for the same standard regular grid of 0.2° × 0.2° (1) in terms of design ground acceleration based on the neo-deterministic approach, (2) in terms of probabilistic exceedance of peak ground acceleration by GSHAP, and (3) the one resulting from the application of USLE. Finally, we present maps of seismic risk for the state of Gujarat, integrating the obtained seismic hazard, population density based on India's Census 2011 data, and a few model assumptions of vulnerability.
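
    The USLE itself is a one-line generalization of Gutenberg-Richter; the sketch below (Python) states the commonly quoted form with illustrative placeholder coefficients, not the values estimated for Gujarat.

        import math

        # log10 N(M, L) = A + B*(5 - M) + C*log10(L): expected annual number of
        # events of magnitude M or larger in a territory of linear size L (km).
        def usle_annual_rate(M, L_km, A=-1.0, B=1.0, C=1.2):
            return 10.0 ** (A + B * (5.0 - M) + C * math.log10(L_km))

        for L_km in (20.0, 200.0):      # city-sized vs region-sized territory
            print(L_km, f"{usle_annual_rate(6.0, L_km):.3f}")
        # the C term makes the rate depend on territory size, which is the
        # abstract's point about downscaling hazard to the area of interest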

  13. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ≤ ~3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance less than ~2 km) tend to have very similar focal mechanisms, implying that nearly identical earthquakes occur contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  14. Fast rupture propagation for large strike-slip earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Mori, Jim; Koketsu, Kazuki

    2016-04-01

    Studying rupture speeds of shallow earthquakes is of broad interest because it has a large effect on the strong near-field shaking that causes damage during earthquakes, and it is an important parameter that reflects stress levels and energy on a slipping fault. However, resolving rupture speed is difficult in standard waveform inversion methods due to limited near-field observations and the tradeoff between rupture speed and fault size for teleseismic observations. Here we applied back-projection methods to estimate the rupture speeds of 15 Mw ≥ 7.8 dip-slip and 8 Mw ≥ 7.5 strike-slip earthquakes for which direct P waves are well recorded in Japan on Hi-net, or in North America on USArray. We found that all strike-slip events had very fast average rupture speeds of 3.0-5.0 km/s, which are near or greater than the local shear wave velocity (supershear). These values are faster than for thrust and normal faulting earthquakes that generally rupture with speeds of 1.0-3.0 km/s.

  15. Firebrands and spotting ignition in large-scale fires

    Treesearch

    Eunmo Koo; Patrick J. Pagni; David R. Weise; John P. Woycheese

    2010-01-01

    Spotting ignition by lofted firebrands is a significant mechanism of fire spread, as observed in many large-scale fires. The role of firebrands in fire propagation and the important parameters involved in spot fire development are studied. Historical large-scale fires, including wind-driven urban and wildland conflagrations and post-earthquake fires, are given as...

  16. The spatial distribution of earthquake stress rotations following large subduction zone earthquakes

    NASA Astrophysics Data System (ADS)

    Hardebeck, Jeanne L.

    2017-05-01

    Rotations of the principal stress axes due to great subduction zone earthquakes have been used to infer low differential stress and near-complete stress drop. The spatial distribution of coseismic and postseismic stress rotation as a function of depth and along-strike distance is explored for three recent M ≥ 8.8 subduction megathrust earthquakes. In the down-dip direction, the largest coseismic stress rotations are found just above the Moho depth of the overriding plate. This zone has been identified as hosting large patches of large slip in great earthquakes, based on the lack of high-frequency radiated energy. The large continuous slip patches may facilitate near-complete stress drop. There is seismological evidence for high fluid pressures in the subducted slab around the Moho depth of the overriding plate, suggesting low differential stress levels in this zone due to high fluid pressure, also facilitating stress rotations. The coseismic stress rotations have similar along-strike extent as the mainshock rupture. Postseismic stress rotations tend to occur in the same locations as the coseismic stress rotations, probably due to the very low remaining differential stress following the near-complete coseismic stress drop. The spatial complexity of the observed stress changes suggests that an analytical solution for finding the differential stress from the coseismic stress rotation may be overly simplistic, and that modeling of the full spatial distribution of the mainshock static stress changes is necessary.

  17. The spatial distribution of earthquake stress rotations following large subduction zone earthquakes

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2017-01-01

    Rotations of the principal stress axes due to great subduction zone earthquakes have been used to infer low differential stress and near-complete stress drop. The spatial distribution of coseismic and postseismic stress rotation as a function of depth and along-strike distance is explored for three recent M ≥ 8.8 subduction megathrust earthquakes. In the down-dip direction, the largest coseismic stress rotations are found just above the Moho depth of the overriding plate. This zone has been identified as hosting large patches of large slip in great earthquakes, based on the lack of high-frequency radiated energy. The large continuous slip patches may facilitate near-complete stress drop. There is seismological evidence for high fluid pressures in the subducted slab around the Moho depth of the overriding plate, suggesting low differential stress levels in this zone due to high fluid pressure, also facilitating stress rotations. The coseismic stress rotations have similar along-strike extent as the mainshock rupture. Postseismic stress rotations tend to occur in the same locations as the coseismic stress rotations, probably due to the very low remaining differential stress following the near-complete coseismic stress drop. The spatial complexity of the observed stress changes suggests that an analytical solution for finding the differential stress from the coseismic stress rotation may be overly simplistic, and that modeling of the full spatial distribution of the mainshock static stress changes is necessary.

  18. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  19. Nonlinear elastic effects on permanent deformation due to large earthquakes

    NASA Astrophysics Data System (ADS)

    Bataille, Klaus; Contreras, Marcelo

    2009-06-01

    Large earthquakes generate significant deformations near fault zones, and it is known that under such conditions rocks do not behave linearly. This nonlinearity is generally assumed to be due to ductile processes; however, some nonlinear elastic behavior is possible as well, and this should have an effect on the ground deformation near fault zones which might be observable seismically or geodetically. We calculate the difference in the permanent displacement field due to an earthquake when we consider Hooke's law versus a general nonlinear law. Under the simplified assumption that the nonlinear part of the constitutive law has a small effect, we use a perturbation approach and keep only the first term. This first term depends on Mo/r^3, implying that for large moments (Mo) and small distances (r), the seismic moment (as obtained far away from the source and within the linear regime) will always be greater than the geodetic moment (obtained within the nonlinear regime). This result is in principle against our intuition, since seismic moments relate to the rupture process of the main event, which normally takes place during minutes, while geodetic moments relate to the rupture process of the main event plus foreshocks (if any), aftershocks, and other slow afterslip processes, which normally occur during hours and days. Aftershocks normally share the same mechanism as the main event, thus increasing the total moment of the sequence; therefore, the geodetic moment should intuitively be greater than the seismic moment. Interestingly, the opposite is observed for large earthquakes such as Chile 1960 (Mw=9.5) and Alaska 1964 (Mw=9.2), with some evidence for Sumatra 2004 (Mw=9.3) as well as other smaller earthquakes. If the difference between geodetic and seismic moments is a real phenomenon, it could be due to several factors. We suggest here that one of these factors could be a nonlinear elastic effect.

  20. Scaling Relations of Source Parameters of Earthquakes Occurring on Inland Crustal Mega-Fault Systems

    NASA Astrophysics Data System (ADS)

    Murotani, Satoko; Matsushima, Shinichi; Azuma, Takashi; Irikura, Kojiro; Kitagawa, Sadayuki

    2015-05-01

    We examined a new scaling relation between source area S and seismic moment M0 for large crustal earthquakes on "mega-fault" systems, including earthquakes with magnitudes larger than Mw 7.4. We focused on earthquakes that occurred on inland crustal mega-fault systems, such as the 2008 Wenchuan and 2002 Denali earthquakes, and compiled the source parameters of 11 inland crustal earthquakes for which both analyses of the source rupture process by waveform inversion and investigations of surface ruptures via geomorphological surveys are available. We found that the maximum surface rupture displacement is two to three times larger than the average slip on the source fault, and the length of the surface rupture is equivalent to the length of the source fault. Furthermore, our compiled data show that the displacement of the surface rupture D saturates around 10 m when the length of the surface rupture L reaches 100 km. Assuming that the average width of the source fault is W = 18 km (for Japanese inland crustal earthquakes) and the saturated surface displacement is D = 10 m, we found that the scaling relation between rupture area S and seismic moment M0 has three stages. For the first stage, S is proportional to M0^(2/3) for earthquakes smaller than M0 = 7.5 × 10^18 Nm. For the second stage, S ranges from M0^(1/2) to M0^(2/3), depending on the thickness of the seismogenic zone. For the third stage, S is proportional to M0 because of the saturation of the slip on the fault. From our compiled data, we derived the third-stage scaling relation between source area S and seismic moment M0 for inland crustal mega-fault systems to be S (km^2) = 1.0 × 10^-17 M0 (Nm), where M0 > 1.8 × 10^20 (Nm).
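
    The three stages can be written as a single piecewise function; the sketch below (Python) uses the two breakpoints and the stage-3 relation quoted in the abstract, with stage-1/2 prefactors chosen only to make the curve continuous (they are illustrative, not the paper's fitted values).

        M0_BREAK_1 = 7.5e18    # Nm, stage 1 -> 2 boundary (from the abstract)
        M0_BREAK_2 = 1.8e20    # Nm, stage 2 -> 3 boundary (from the abstract)

        def rupture_area_km2(M0):
            if M0 >= M0_BREAK_2:                       # stage 3: S ~ M0
                return 1.0e-17 * M0
            S2 = 1.0e-17 * M0_BREAK_2                  # area at the 2 -> 3 break
            if M0 >= M0_BREAK_1:                       # stage 2: S ~ M0**(1/2),
                return S2 * (M0 / M0_BREAK_2) ** 0.5   # one end of the quoted range
            S1 = S2 * (M0_BREAK_1 / M0_BREAK_2) ** 0.5
            return S1 * (M0 / M0_BREAK_1) ** (2.0 / 3.0)   # stage 1: S ~ M0**(2/3)

        for M0 in (1e18, 1e19, 1e20, 1e21):
            print(f"M0 = {M0:.0e} Nm -> S = {rupture_area_km2(M0):.0f} km^2")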

  1. Calibration of magnitude scales for earthquakes of the Mediterranean

    NASA Astrophysics Data System (ADS)

    Gardini, Domenico; di Donato, Maria; Boschi, Enzo

    In order to provide the tools for uniform size determination for Mediterranean earthquakes over the last 50-year period of instrumental seismology, we have regressed the magnitude determinations for 220 earthquakes of the European-Mediterranean region over the 1977-1991 period, reported by three international centres, 11 national and regional networks and 101 individual stations and observatories, using seismic moments from the Harvard CMTs. We calibrate M(M0) regression curves for the magnitude scales commonly used for Mediterranean earthquakes (ML, MWA, mb, MS, MLH, MLV, MD, M); we also calibrate static corrections or specific regressions for individual observatories and we verify the reliability of the reports of different organizations and observatories. Our analysis shows that the teleseismic magnitudes (mb, MS) computed by international centers (ISC, NEIC) provide good measures of earthquake size, with low standard deviations (0.17-0.23), allowing one to regress stable regional calibrations with respect to the seismic moment and to correct systematic biases such as the hypocentral depth for MS and the radiation pattern for mb; while mb is commonly reputed to be an inadequate measure of earthquake size, we find that the ISC mb is still today the most precise measure to use to regress MW and M0 for earthquakes of the European-Mediterranean region; few individual observatories report teleseismic magnitudes requiring specific dynamic calibrations (BJI, MOS). Regional surface-wave magnitudes (MLV, MLH) reported in Eastern Europe generally provide reliable measures of earthquake size, with standard deviations often in the 0.25-0.35 range; the introduction of a small (±0.1-0.2) static station correction is sometimes required. While the Richter magnitude ML is the measure of earthquake size most commonly reported in the press whenever an earthquake strikes, we find that ML has not been computed in the European-Mediterranean in the last 15 years; the reported local

  2. Exploring the uncertainty range of coseismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2017-01-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average coseismic stress drop (Δτ_E) of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same data set exhibits no sensitivity to the upper bound of Δτ_E because there is limited resolution of the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that (1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated and (2) the upper bound of the average fracture energy E_G cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.
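
    For concreteness, the energy-based average stress drop is a slip-weighted average over the fault rather than a plain mean; the sketch below (Python with NumPy) applies that definition, which we take as an assumption here, to a synthetic heterogeneous slip model.

        import numpy as np

        # dtau_E = sum(dtau_i * slip_i * dA_i) / sum(slip_i * dA_i)
        rng = np.random.default_rng(2)
        n = 50 * 25                              # subfaults on a 50 x 25 grid
        slip = np.abs(rng.normal(2.0, 1.0, n))   # slip, m
        dtau = rng.normal(4.0, 3.0, n)           # local stress drop, MPa
        dA = np.full(n, 4.0)                     # subfault area, km^2

        dtau_E = np.sum(dtau * slip * dA) / np.sum(slip * dA)
        print(f"slip-weighted: {dtau_E:.2f} MPa, unweighted: {dtau.mean():.2f} MPa")

    Because the weighting emphasizes high-slip patches, smooth and rough slip models that fit the data equally well can give different values of Δτ_E, which is the trade-off the inversion strategy explores.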

  3. Premonitory patterns of seismicity months before a large earthquake: Five case histories in Southern California

    PubMed Central

    Keilis-Borok, V. I.; Shebalin, P. N.; Zaliapin, I. V.

    2002-01-01

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with “intermediate” lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1–7 months five large (M ≥ 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns. PMID:12482945

  4. Premonitory patterns of seismicity months before a large earthquake: five case histories in Southern California.

    PubMed

    Keilis-Borok, V I; Shebalin, P N; Zaliapin, I V

    2002-12-24

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with "intermediate" lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1-7 months five large (M > or = 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns.

  5. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than on trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; the full analysis is then performed only periodically. Problem-dependent software can be separated from the generic code using a systems programming technique, so that it embodies the definitions of design variables, objective function, and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
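
    A schematic of the "full analysis outside the optimization loop" idea; the toy objective, linear surrogate, and trust bounds are all assumptions for illustration, not the paper's method:

        import numpy as np
        from scipy.optimize import minimize

        def full_analysis(x):                  # expensive "truth" model (toy)
            return (x[0] - 3.0)**2 + 10 * np.sin(x[1])**2 + x[1]**2

        def build_surrogate(x0, f0, h=1e-4):
            # Cheap linear (first-order Taylor) approximation around x0.
            g = np.array([(full_analysis(x0 + h * e) - f0) / h
                          for e in np.eye(2)])
            return lambda x: f0 + g @ (np.asarray(x) - x0)

        x = np.array([0.0, 1.0])
        for cycle in range(5):                 # full analysis once per cycle
            f = full_analysis(x)
            surrogate = build_surrogate(x, f)
            # Trust-region-like bounds keep each surrogate step local.
            res = minimize(surrogate, x,
                           bounds=[(xi - 0.5, xi + 0.5) for xi in x])
            x = res.x
        print("approximate optimum:", x)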

  6. Large-scale circuit simulation

    NASA Astrophysics Data System (ADS)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration), and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) the modified Gauss-Seidel method, and (4) latency criteria and timestep control schemes. The developed methods have been implemented in a simulation program, PREMOS, which can be used as a design verification tool for MOS circuits.
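
    A toy illustration of Gauss-Seidel relaxation with a latency test, in the spirit of items (3) and (4) above; an RC ladder stands in for a MOS circuit, and this is not the PREMOS algorithm itself:

        import numpy as np

        # RC ladder: N nodes, series conductance g between nodes, capacitance
        # c from each node to ground, node 0 tied through g to a 1 V step
        # source. Backward Euler in time, Gauss-Seidel sweeps within each
        # step; nodes whose update falls below tol are frozen (latency).
        N, g, c, dt, tol = 50, 1.0, 1.0, 0.1, 1e-10
        v = np.zeros(N)

        for step in range(200):
            v_prev = v.copy()
            latent = np.zeros(N, dtype=bool)
            for sweep in range(100):
                converged = True
                for i in range(N):
                    if latent[i]:
                        continue                     # latency: skip inactive node
                    left = 1.0 if i == 0 else v[i - 1]
                    gsum, inj = g, g * left
                    if i < N - 1:
                        gsum += g
                        inj += g * v[i + 1]
                    # Backward Euler node equation solved for v_i.
                    v_new = (c / dt * v_prev[i] + inj) / (c / dt + gsum)
                    if abs(v_new - v[i]) < tol:
                        latent[i] = True             # freeze until next step
                    else:
                        converged = False
                    v[i] = v_new
                if converged:
                    break

        print(f"end-of-line voltage after {200 * dt:.0f} s: {v[-1]:.3f} V")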

  7. Large Scale Dynamos in Stars

    NASA Astrophysics Data System (ADS)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  8. Evaluation of factors controlling large earthquake-induced landslides by the Wenchuan earthquake

    NASA Astrophysics Data System (ADS)

    Chen, X. L.; Ran, H. L.; Yang, W. T.

    2012-12-01

    During the 12 May 2008 Wenchuan earthquake in China, more than 15,000 landslides were triggered. Among these, 112 large landslides with a plane area greater than 50,000 m² were generated. These large landslides were concentrated in a narrow belt along the surface rupture zone and were mainly located on the hanging-wall side; more than 85% of them occur within 10 km of the rupture. Statistical analysis shows that more than 50% of the large landslides occurred in hard and moderately hard rock, such as migmatized metamorphic rock and carbonate rock, which crop out in the southern part of the damaged area, where elevations are higher and landforms steeper than in the northeast part. All large landslides occurred in the region with seismic intensity ≥ X, except a few landslides in the Qingchuan region with seismic intensity IX. Spatially, the large landslides cluster into four segments, namely the Yingxiu, Gaochuan, Beichuan, and Qingchuan segments, from southwest to northeast along the surface rupture. This is in good accordance with the coseismic displacements. With the change of fault type from reverse-dominated slip to dextral slip from southwest to northeast, the largest distance between the triggered large landslides and the rupture decreases from 15 km to 5 km. The critical acceleration ac for four typical large landslides in these four segments was estimated with the Newmark model in this paper. Our results demonstrate that, given the same strength values and slope angles, the characteristics of the slope mass are important for slope stability, and deeper landslides are less stable than shallower ones. Comprehensive analysis reveals that the large catastrophic landslides are specifically tied to particular geological settings where fault type and geometry change abruptly. This feature may dominate the occurrence of large landslides.
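
    A minimal Newmark rigid-block sketch, with an assumed factor of safety, slope angle, and toy ground motion rather than the paper's landslide parameters:

        import numpy as np

        # Critical acceleration for an infinite slope with static factor of
        # safety FS and slope angle alpha: a_c = (FS - 1) * g * sin(alpha).
        g = 9.81
        FS, alpha = 1.3, np.radians(35.0)
        a_c = (FS - 1.0) * g * np.sin(alpha)

        # Toy ground-motion record: decaying 1 Hz sine train (m/s^2).
        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        a = 4.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)

        # Integrate sliding velocity: the block accelerates while a > a_c
        # and decelerates at a_c until it stops (downslope sliding only).
        v, d = 0.0, 0.0
        for ai in a:
            if ai > a_c or v > 0.0:
                v = max(v + (ai - a_c) * dt, 0.0)
                d += v * dt
        print(f"a_c = {a_c:.2f} m/s^2, Newmark displacement = {d * 100:.1f} cm")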

  9. Galaxy clustering on large scales.

    PubMed Central

    Efstathiou, G

    1993-01-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  10. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

    To construct a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process using teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with Mw 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, by which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied with the one-third power of M0. That is, M0 is proportional to t³, and the relationship between them is M0 = 0.76 × 10²³ t³, where M0 is in dyne-cm and t is in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further check the validity of the derived relationship, we used it to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake with M0 = 2-5 × 10²⁷ dyne-cm (corresponding to Mw = 7.5-7.7) to be approximately 29-40 sec, in agreement with many previous studies of source duration (28-42 sec).
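
    The quoted relation is easy to invert for duration; the sketch below reproduces the Chi-Chi check quoted at the end of the abstract:

        import numpy as np

        # Duration from the study's relation M0 = 0.76e23 * t**3
        # (M0 in dyne-cm, t in seconds), solved for t.
        def source_duration(m0_dyne_cm):
            return (m0_dyne_cm / 0.76e23) ** (1.0 / 3.0)

        # Example: 1999 Chi-Chi earthquake, M0 = 2e27 to 5e27 dyne-cm.
        for m0 in (2e27, 5e27):
            print(f"M0 = {m0:.0e} dyne-cm -> t ~ {source_duration(m0):.1f} s")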

  11. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

    Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with existing physics-based simulations to improve seismic hazard assessment.
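
    The finite-source summation step can be sketched as a delay-and-sum over subfaults; all arrays below are synthetic stand-ins for the interpolated Green's functions and source model:

        import numpy as np

        dt, nt = 0.1, 2048
        rng = np.random.default_rng(1)
        n_sub = 25
        gf = rng.standard_normal((n_sub, nt))           # stand-in Green's functions
        dist_from_hypo = rng.uniform(0.0, 40.0, n_sub)  # km along the fault
        moment = np.full(n_sub, 1.0 / n_sub)            # uniform slip (normalized)

        def simulate(v_rupture_km_s):
            # Each subfault contributes its Green's function delayed by the
            # rupture time from the hypocenter, scaled by its moment.
            u = np.zeros(nt)
            for k in range(n_sub):
                delay = int(round(dist_from_hypo[k] / v_rupture_km_s / dt))
                u[delay:] += moment[k] * gf[k, :nt - delay]
            return u

        for vr in (2.0, 2.5, 3.0):       # try several rupture velocities
            print(vr, np.abs(simulate(vr)).max())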

  12. Failure of self-similarity for large (Mw > 8¼) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Teleseismic P-wave records for earthquakes in the magnitude range 6.0-9.5 are compared with synthetics for a self-similar, ω² source model, and it is concluded that the energy radiated by very large earthquakes (Mw > 8¼) is not self-similar to that radiated by smaller earthquakes (Mw < 8¼). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of ω^-1.5. This spectral decay rate is consistent with a previously noted tendency of the ω² model to overestimate Ms for large earthquakes. -Authors

  13. Mechanical model of precursory source processes for some large earthquakes

    SciTech Connect

    Dmowska, R.; Li, V.C.

    1982-04-01

    A mechanical model is presented of precursory source processes for some large earthquakes along plate boundaries. It is assumed that the pre-seismic period consists of the upward progression of a zone of slip from lower portions of the lithosphere towards the Earth's surface. The slip front is blocked by local asperities of different size and strength; these asperities may be zones of real alteration of inherent strength, or instead may be zones which are currently stronger due to a local slowdown of a basically rate-dependent frictional response. Such blocking by a single large asperity, or an array of asperities, produces quiescence over a segment of the plate boundary until gradual increase of the stress concentration forces the slip zone through the blocked region at one end of the gap, thus nucleating a seismic rupture that propagates upwards and towards the other end. This model is proposed to explain certain distinctive seismicity patterns that have been observed before large earthquakes, notably quiescence over the gap zone followed by clustering at its end prior to the main event. A discussion of mechanical factors influencing the process is presented, and some introductory modelling, performed with the use of a generalized Elsasser model for lithospheric plates and the 'line spring' model for part-through flaws (slip zones) at plate boundaries, is outlined briefly.

  14. Possibility of short-term probabilistic forecasts for large earthquakes making good use of the limitations of existing catalogs

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Iwayama, Koji; Aihara, Kazuyuki

    2016-10-01

    Earthquakes are quite hard to predict. One possible reason is that existing catalogs of past earthquakes extend at most to the order of 100 years, while the characteristic time scale of large earthquakes is sometimes greater than that span. Here we instead use these limitations positively and characterize some large earthquake events as abnormal events that are not represented in those catalogs. When we constructed probabilistic forecasts for large earthquakes in Japan based on similarity and difference to their past patterns—which we call known and unknown abnormalities, respectively—our forecast achieved probabilistic gains of 5.7 and 2.4 against a time-independent model for main shocks with magnitudes of 7 or above. Moreover, the two abnormal conditions covered 70% of days whose maximum magnitude was 7 or above.

  15. Spatial correlation of large historical earthquakes and moderate shocks >10 km deep in eastern North America

    SciTech Connect

    Acharya, H.

    1980-12-01

    A good spatial correlation is noted between historical earthquakes with epicentral intensity ≥ VIII (MM) and recent moderate-size earthquakes with focal depth > 10 km, suggesting that large historical earthquakes in eastern North America may be associated with deep-seated faults.

  16. Earthquake Apparent Stress Scaling for the 1999 Hector Mine Sequence

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.

    2003-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of studies finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Other studies find that the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities, and bandwidth, and of determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We try to improve upon earlier results by using consistent techniques over common paths for a wide range of sizes and seismic phases. We have examined about 130 earthquakes from the Hector Mine earthquake sequence in Southern California. These earthquakes range in size from the October 16, 1999 Mw = 7.1 mainshock down to ML = 3.0 aftershocks extending into 2000. The mainshock has unclipped Pg and Lg phases at a number of high-quality regional stations (e.g. CMB, ELK, TUC) where we can use the common path to examine apparent stress scaling relations directly. We are careful to avoid any event selection bias that would be related to apparent stress values. We fix each station's path correction using the independent moment and energy estimates for the mainshock. We then use those corrections to determine the seismic energy for each event based on regional Lg spectra. We use a modeling technique (MDAC) based on a modified Brune (1970) spectral shape but without any assumptions of corner-frequency scaling (Walter and Taylor, 2002). We perform a similar analysis using the Pg spectra. We find the energy estimates for the same events are consistent across Lg estimates, Pg estimates, and estimates using the independent regional coda envelope technique (Mayeda and Walter, 1996; Mayeda et al
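
    Apparent stress itself is simple to compute once radiated energy and seismic moment are known; the values below are rough assumed numbers for a Hector Mine-sized event, not the study's estimates:

        # Apparent stress: tau_a = mu * E_R / M0, with rigidity mu, radiated
        # energy E_R, and seismic moment M0.
        MU = 3.0e10                      # Pa, typical crustal rigidity

        def apparent_stress(e_radiated_j, m0_nm):
            return MU * e_radiated_j / m0_nm

        # Roughly Mw 7.1: M0 ~ 5.6e19 N*m; assumed E_R values for illustration.
        m0 = 5.6e19
        for er in (1e15, 3e15):
            tau = apparent_stress(er, m0) / 1e6
            print(f"E_R = {er:.0e} J -> tau_a = {tau:.1f} MPa")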

  17. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Thompson, T. Ben; Meade, Brendan J.

    2017-03-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational costs of large-scale viscoelastic earthquake cycle models. Computationally intensive viscoelastic codes must be evaluated at thousands of times and locations, and as a result, studies tend to adopt a few fixed rheological structures and model geometries and examine the predicted time-dependent deformation over short (<10 years) time periods at a given depth after a large earthquake. Training a deep neural network to learn a computationally efficient representation of viscoelastic solutions, at any time, location, and for a large range of rheological structures, allows these calculations to be done quickly and reliably, with high spatial and temporal resolutions. We demonstrate that this machine learning approach accelerates viscoelastic calculations by more than 50,000%. This magnitude of acceleration will enable the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible.
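
    As a schematic of the surrogate-modeling idea, and not the study's network or training data, one can fit a small fully connected regressor to an inexpensive stand-in for the viscoelastic forward model; scikit-learn is assumed to be available:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def viscoelastic_toy(x):
            # x = (log10 time, distance, log10 "viscosity" scale);
            # a toy relaxation curve standing in for the expensive code.
            t, r, eta = 10 ** x[:, 0], x[:, 1], 10 ** x[:, 2]
            return np.exp(-t / eta) * np.exp(-r / 50.0)

        X = np.column_stack([rng.uniform(0, 3, 20000),    # 1-1000 time units
                             rng.uniform(0, 200, 20000),  # km
                             rng.uniform(1, 3, 20000)])   # relaxation scale
        y = viscoelastic_toy(X)

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300,
                           random_state=0).fit(X[:16000], y[:16000])
        print("holdout R^2:", net.score(X[16000:], y[16000:]))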

  18. Potential for geophysical experiments in large scale tests

    SciTech Connect

    Dieterich, J.H.

    1981-07-01

    Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special purpose low stress (<40 MPa) bi-axial apparatus demonstrate that a minimum fault length is required to generate confined shear instabilities along pre-existing faults. Experimental analysis of source interactions for simulated earthquakes consisting of confined shear instabilities on a fault with gouge appears to require large specimens (approx.1m) and high confining pressures (>100 MPa).

  19. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
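
    A fatality-rate model of this empirical type can be written as a two-parameter lognormal function of shaking intensity; the functional form is the commonly cited one for PAGER-style models, and the coefficients and exposure numbers below are invented for illustration:

        import numpy as np
        from scipy.stats import norm

        theta, beta = 14.05, 0.17   # assumed parameters, hypothetical country

        def fatality_rate(mmi):
            # nu(S) = Phi(ln(S / theta) / beta)
            return norm.cdf(np.log(np.asarray(mmi) / theta) / beta)

        # Expected deaths: rate times exposed population, summed over bins.
        mmi_bins = np.array([6.0, 7.0, 8.0, 9.0])
        exposure = np.array([2e6, 5e5, 1e5, 1e4])   # people per bin (assumed)
        print("expected fatalities:",
              round(float((fatality_rate(mmi_bins) * exposure).sum())))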

  20. Large scale biomimetic membrane arrays.

    PubMed

    Hansen, Jesper S; Perry, Mark; Vogel, Jörg; Groth, Jesper S; Vissing, Thomas; Larsen, Marianne S; Geschke, Oliver; Emneús, Jenny; Bohr, Henrik; Nielsen, Claus H

    2009-10-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO(2) laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 microm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnological and physiological relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays, and furthermore demonstrate that the design can conveniently be scaled up to support planar lipid bilayers in large square-centimeter partition arrays.

  1. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  2. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones host temporally and spatially varying seismogenic activity including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measured rupture durations and scaled these by the cube root of seismic moment to calculate a normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results of this study show that large earthquakes just updip from SSE and NVT areas have slower rupture characteristics than other events along the subduction zone not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will help aid understanding of the seismogenic behavior that occurs along subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.
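
    A minimal sketch of the cube-root normalization described above; the reference magnitude and the standard moment-magnitude conversion are choices made here for illustration:

        import numpy as np

        def m0_from_mw(mw):
            return 10 ** (1.5 * mw + 9.1)          # N*m

        def normalized_duration(duration_s, mw, mw_ref=6.0):
            # Durations scale as M0**(1/3) for self-similar sources, so
            # dividing by the cube root of moment removes the size effect.
            return duration_s * (m0_from_mw(mw_ref) / m0_from_mw(mw)) ** (1/3)

        # Example: a 12 s rupture for a hypothetical Mw 6.8 event.
        print(f"{normalized_duration(12.0, 6.8):.1f} s (normalized to Mw 6.0)")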

  3. Source scaling relationships of small earthquakes estimated from the inversion method using stopping phases

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Takeo, M.; Ito, H.; Ellsworth, W.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.

    2002-12-01

    attenuation in the crust. This is consistent with the conclusion of Stork et al. (2002) inferred from spectral analysis using the 800 m deep borehole data. The average values of rupture velocity do not depend on earthquake size and are similar to those reported for moderate and large earthquakes. We then calculate the seismic energy following Sato and Hirasawa (1973). The apparent stress is almost constant with magnitude across the analyzed events, ranging from 0.05 to 1 MPa. Since most apparent stresses for large earthquakes are in the range of 0.1 to 10 MPa, there may be small differences in apparent stress between large and small earthquakes. However, it is likely that earthquakes are self-similar over a wide range of earthquake sizes and that the dynamics of small and large earthquakes are similar from a macroscopic viewpoint.

  4. Earthquake Source Scaling and Wave Propagation in Eastern North America: The Au Sable Forks, NY, Earthquake

    NASA Astrophysics Data System (ADS)

    Viegas, G.; Abercrombie, R.; Baise, L.; Kim, W.

    2005-12-01

    The 2002 M5 Au Sable Forks, NY earthquake and its aftershocks form the best recorded sequence in the northeastern USA. We use the local and regional recordings to investigate the characteristics of intraplate seismicity, focusing on source scaling relationships and regional wave propagation. A portable local network of 11 stations recorded 74 aftershocks of M < 3.2. We relocate the mainshock and early aftershocks using a master event technique. We then apply the double-difference relocation method, using differential travel times measured from waveform cross-correlation, to relocate the aftershocks recorded by the local network. Both the master-event and double-difference location methods produce consistent results suggesting complex conjugate faulting during the sequence. We identify a number of highly clustered groups of earthquakes suitable for EGF analysis. We use the EGF method to calculate the stress drop and radiated energy of the larger aftershocks to determine how they compare to moderate-magnitude earthquakes, and also whether they differ significantly from interplate earthquakes. We consider the 9 largest aftershocks (M3.7 to M2), which were recorded on the regional network, as potential EGFs for the mainshock, but they have focal mechanisms and locations that are sufficiently different that we cannot resolve the mainshock source time function well. They are good enough to enable us to place constraints on the shape and duration of the source pulse to use in modeling the regional waveforms. We investigate the crustal structure in New York (Grenville) and New England (Appalachian) through forward modeling of the Au Sable Forks regional broadband records. We compute synthetic records of wave propagation in a layered medium, using published crustal models of the two regions as a starting point. We identify differences between the recorded data and synthetics for the Grenville and Appalachian regions and improve the crustal models to better fit the recorded data.
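
    The EGF logic, in which path and site terms cancel in a spectral ratio between a large event and a colocated smaller one, can be sketched as follows (synthetic records; the water-level value is an assumption):

        import numpy as np

        def spectral_ratio(main, egf, dt, water_level=0.01):
            # Amplitude spectrum of the mainshock divided by that of the
            # EGF; water-level regularization avoids division by ~0.
            M = np.abs(np.fft.rfft(main))
            G = np.abs(np.fft.rfft(egf))
            floor = water_level * G.max()
            G_safe = np.where(G < floor, floor, G)
            freqs = np.fft.rfftfreq(len(main), dt)
            return freqs, M / G_safe

        # Toy usage with random stand-ins for real waveforms.
        rng = np.random.default_rng(2)
        main = rng.standard_normal(4096)
        egf = rng.standard_normal(4096)
        f, ratio = spectral_ratio(main, egf, dt=0.01)
        print(ratio[:5])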

  5. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large-scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvements in hardware. Equally impressive is the performance gain due to new algorithms, as I will illustrate using some recently developed algorithms. At the same time, modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond; a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large-scale computing machines; tools to analyze the large volume of data obtained from such simulations; and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational science.

  6. Access Time of Emergency Vehicles Under the Condition of Street Blockages after a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Hirokawa, N.; Osaragi, T.

    2016-09-01

    Previous studies of accessibility have mainly addressed daily life; improving the accessibility of emergency vehicles after a large earthquake, however, is an important issue. In this paper, we analyzed the accessibility of firefighters using a microscopic simulation model of the period immediately after a large earthquake. More specifically, we constructed a simulation model that describes property damage, such as collapsed buildings, street blockages, outbreaks of fire, and fire spreading, as well as the movement of firefighters from fire stations to the locations of fires in a large-scale earthquake. Using this model, we analyzed the influence of street blockages on the access time of firefighters. When streets are blocked according to the property-damage simulation, the average access time exceeds 10 minutes in the outskirts of the 23 wards of Tokyo, and some firefighters take more than 20 minutes to arrive. Additionally, we focused on alternative routes and proposed that volunteers collect information on street blockages to improve the accessibility of firefighters. Finally, we demonstrated that the access time of firefighters can be reduced to the level of the no-blockage case if 0.3% of residents collect information within 10 minutes.
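
    The core effect, longer shortest paths once edges are removed, can be sketched on a toy grid network; the blockage fraction, travel weights, and networkx-based formulation are assumptions for illustration, not the authors' model:

        import random
        import networkx as nx

        random.seed(0)
        G = nx.grid_2d_graph(30, 30)                 # toy street grid
        for u, v in G.edges:
            G.edges[u, v]["time"] = 1.0              # minutes per block (assumed)

        station, fire = (0, 0), (25, 27)
        t0 = nx.shortest_path_length(G, station, fire, weight="time")

        # Randomly block 15% of street segments.
        blocked = random.sample(list(G.edges),
                                k=int(0.15 * G.number_of_edges()))
        H = G.copy()
        H.remove_edges_from(blocked)
        if nx.has_path(H, station, fire):
            t1 = nx.shortest_path_length(H, station, fire, weight="time")
            print(f"access time: {t0:.0f} min (clear) vs {t1:.0f} min (blocked)")
        else:
            print("destination unreachable after blockages")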

  7. Large Scale Coordination of Small Scale Structures

    NASA Astrophysics Data System (ADS)

    Kobelski, Adam; Tarr, Lucas A.; Jaeggli, Sarah A.; Savage, Sabrina

    2017-08-01

    Transient brightenings are ubiquitous features of the solar atmosphere across many length and energy scales, the most energetic of which manifest as large-class solar flares. Often, transient brightenings originate in regions of strong magnetic activity and create strong observable enhancements across wavelengths from X-ray to radio, with notable dynamics on timescales of seconds to hours. The coronal aspects of these brightenings have often been studied by way of EUV and X-ray imaging and spectra. These events are likely driven by photospheric activity (such as flux emergence), with the coronal brightenings originating largely from chromospheric ablation (evaporation). Until recently, chromospheric and transition region observations of these events have been limited. However, new observational capabilities have become available which significantly enhance our ability to understand the bi-directional flow of energy through the chromosphere between the photosphere and the corona. We have recently obtained a unique data set with which to study this flow of energy through the chromosphere via the Interface Region Imaging Spectrograph (IRIS), Hinode EUV Imaging Spectrometer (EIS), Hinode X-Ray Telescope (XRT), Hinode Solar Optical Telescope (SOT), Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA), SDO Helioseismic and Magnetic Imager (HMI), Nuclear Spectroscopic Telescope Array (NuStar), Atacama Large Millimeter Array (ALMA), and Interferometric BIdimensional Spectropolarimeter (IBIS) at the Dunn Solar Telescope (DST). This data set targets a small active area near disk center which was tracked simultaneously for approximately four hours. Within this region, many transient brightenings were detected through multiple layers of the solar atmosphere. In this study, we combine the imaging data and use the spectra from EIS and IRIS to track flows from the photosphere (HMI, SOT) through the chromosphere and transition region (AIA, IBIS, IRIS, ALMA) into the corona.

  8. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
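
    A full 3D dispersion analysis is beyond a snippet, but the band-gap mechanism can be illustrated in one dimension with a transfer-matrix calculation; the layer thicknesses and material properties below are illustrative, not the paper's optimized designs:

        import numpy as np

        layers = [(10.0, 300.0, 1800.0),    # d (m), c (m/s), rho (kg/m3): soft
                  (10.0, 1500.0, 2500.0)]   # stiff layer

        def layer_matrix(d, c, rho, w):
            # Propagates the state (displacement u, stress sigma) across
            # one homogeneous layer at angular frequency w.
            k = w / c
            z = rho * c * w
            return np.array([[np.cos(k * d), np.sin(k * d) / z],
                             [-z * np.sin(k * d), np.cos(k * d)]])

        for f in np.arange(1.0, 31.0, 1.0):            # Hz
            w = 2.0 * np.pi * f
            M = np.eye(2)
            for d, c, rho in layers:
                M = layer_matrix(d, c, rho, w) @ M
            # Bloch waves propagate only when |trace(M)|/2 <= 1.
            if abs(np.trace(M)) / 2.0 > 1.0:
                print(f"{f:4.0f} Hz: inside a band gap")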

  9. Earthquake Monitoring at Different Scales with Seiscomp3

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Engels, F.

    2013-12-01

    In the last few years, the French National Network of Seismic Survey (BCSF-RENASS) has had to modernize its old and aging earthquake monitoring system, which came from an in-house development. After intensive tests of several real-time frameworks, such as EarthWorm and Seiscomp3, we finally adopted Seiscomp3 in 2012. Our current system runs two pipelines in parallel: the first is tuned at a global scale to monitor the world seismicity (for events of magnitude > 5.5), and the second is tuned at a national scale for monitoring metropolitan France. The seismological stations used for the "world" pipeline come mainly from the Global Seismographic Network (GSN), whereas for the "national" pipeline the stations come from the RENASS short-period network and from the RESIF broadband network. More recently, we have started to tune Seiscomp3 at a smaller scale to monitor in real time a geothermal project (an R&D program in deep geothermal energy) in the northeastern part of France. Besides the real-time monitoring capabilities of Seiscomp3, we have also used a very handy feature to play back a 4-month dataset at a local scale for the Rambervillers earthquake (22/02/2003, Ml = 5.4), leading to roughly 2000 aftershock detections and locations.

  10. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building, and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  11. An evaluation of Health of the Nation Outcome Scales data to inform psychiatric morbidity following the Canterbury earthquakes.

    PubMed

    Beaglehole, Ben; Frampton, Chris M; Boden, Joseph M; Mulder, Roger T; Bell, Caroline J

    2017-06-01

    Following the onset of the Canterbury, New Zealand earthquakes, there were widespread concerns that mental health services were under severe strain as a result of adverse consequences for mental health. We therefore examined Health of the Nation Outcome Scales data to see whether this could inform our understanding of the impact of the Canterbury earthquakes on patients attending local specialist mental health services. Health of the Nation Outcome Scales admission data were analysed for Canterbury mental health services prior to and following the Canterbury earthquakes. These findings were compared to Health of the Nation Outcome Scales admission data from seven other large District Health Boards to delineate local from national trends. Percentage changes in admission numbers were also calculated before and after the earthquakes for Canterbury and the seven other large district health boards. Admission Health of the Nation Outcome Scales scores in Canterbury increased after the earthquakes for adult inpatient and community services, old age inpatient and community services, and Child and Adolescent inpatient services compared to the seven other large district health boards. Admission Health of the Nation Outcome Scales scores for Child and Adolescent community services did not change significantly, while admission Health of the Nation Outcome Scales scores for Alcohol and Drug services in Canterbury fell compared to other large district health boards. Subscale analysis showed that the majority of Health of the Nation Outcome Scales subscales contributed to the overall increases found. Percentage changes in admission numbers for the Canterbury District Health Board and the seven other large district health boards before and after the earthquakes were largely comparable, with the exception of admissions to inpatient services for the group aged 4-17 years, which showed a large increase. The Canterbury earthquakes were thus followed by an increase in Health of the Nation Outcome Scales scores at admission across most local specialist mental health services.

  12. EVIDENCE FOR THREE MODERATE TO LARGE PREHISTORIC HOLOCENE EARTHQUAKES NEAR CHARLESTON, S. C.

    USGS Publications Warehouse

    Weems, Robert E.; Obermeier, Stephen F.; Pavich, Milan J.; Gohn, Gregory S.; Rubin, Meyer; Phipps, Richard L.; Jacobson, Robert B.

    1986-01-01

    Earthquake-induced liquefaction features (sand blows), found near Hollywood, S.C., have yielded abundant clasts of humate-impregnated sand and sparse pieces of wood. Radiocarbon ages for the humate and wood provide sufficient control on the timing of the earthquakes that produced the sand blows to indicate that at least three prehistoric liquefaction-producing earthquakes (mb approximately 5.5 or larger) have occurred within the last 7,200 years. The youngest documented prehistoric earthquake occurred around 800 A.D. A few fractures filled with virtually unweathered sand, but no large sand blows, can be assigned confidently to the historic 1886 Charleston earthquake.

  13. Local near instantaneously dynamically triggered aftershocks of large earthquakes.

    PubMed

    Fan, Wenyuan; Shearer, Peter M

    2016-09-09

    Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by earthquakes with magnitudes of 7 to 8, within a few fault lengths (approximately 300 kilometers) and during the times that high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks.

  14. Earthquake triggering by slow earthquake propagation: the case of the large 2014 slow slip event in Guerrero, Mexico.

    NASA Astrophysics Data System (ADS)

    Radiguet, M.; Perfettini, H.; Cotte, N.; Gualandi, A.; Kostoglodov, V.; Lhomme, T.; Walpersdorf, A.; Campillo, M.; Valette, B.

    2015-12-01

    Since their discovery nearly two decades ago, the importance of slow slip events (SSEs) in the processes of strain accommodation in subduction zones has been revealed. Nevertheless, the influence of slow aseismic slip on the nucleation of large earthquakes remains unclear. In this study, we focus on the Guerrero region of the Central American subduction zone in Mexico, where large SSEs have been observed since 1998, with a recurrence period of about 4 years, and produce aseismic slip in the Guerrero seismic gap. We investigate the large 2014 SSE (equivalent Mw = 7.7), which initiated in early 2014 and lasted until the end of October 2014. During this time period, the 18 April Papanoa earthquake (Mw 7.2) occurred on the western limit of the Guerrero gap. We invert the continuous GPS time series using PCAIM (Principal Component Analysis Inversion Method) to assess the space and time evolution of slip on the subduction interface. To focus on the aseismic processes, we correct the cGPS time series for the co-seismic offsets. Our results show that the slow slip event initiated in the Guerrero gap region, as already observed during previous SSEs. The Mw 7.2 Papanoa earthquake occurred on the western limit of the region that was slipping aseismically before the earthquake. After the Papanoa earthquake, the aseismic slip rate increased. This geodetic signal consists of both the ongoing SSE and the postseismic (afterslip) response to the Papanoa earthquake. The majority of the post-earthquake aseismic slip is concentrated downdip from the main earthquake asperity, but significant slip is also observed in the Guerrero gap region. Compared to previous SSEs in that region, the 2014 SSE produced larger aseismic slip, and the maximum slip is located downdip from the main brittle asperity corresponding to the Papanoa earthquake, a region that was not identified as active during previous SSEs. Since the Mw 7.2 Papanoa earthquake occurred about 2 months after the onset of the 2014 SSE, these observations suggest that slow slip propagation may have contributed to triggering the earthquake.

  15. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models.

    PubMed

    Landes, François P; Lippiello, E

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.

  16. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models

    NASA Astrophysics Data System (ADS)

    Landes, François P.; Lippiello, E.

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.
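
    For readers who want to experiment with the class of models described in the two records above, here is a minimal, illustrative Olami-Feder-Christensen automaton (parameter values are arbitrary; this is a toy version, not the authors' code):

        import numpy as np

        # OFC cellular automaton: uniform loading, threshold toppling that
        # transfers a fraction alpha of the stress to each of the four
        # neighbors (alpha < 0.25 makes the model non-conservative), with
        # open boundaries that dissipate stress.
        L, alpha, f_th = 32, 0.20, 1.0
        rng = np.random.default_rng(0)
        f = rng.uniform(0.0, f_th, (L, L))
        sizes = []

        for event in range(3000):
            f += f_th - f.max()            # load until one site reaches f_th
            unstable = list(zip(*np.where(f >= f_th)))
            size = 0
            while unstable:
                i, j = unstable.pop()
                if f[i, j] < f_th:
                    continue
                df = alpha * f[i, j]
                f[i, j] = 0.0
                size += 1
                for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        f[ni, nj] += df
                        if f[ni, nj] >= f_th:
                            unstable.append((ni, nj))
            sizes.append(size)

        print("largest avalanche:", max(sizes))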

  17. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  18. Cosmology with Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Ho, Shirley; Cuesta, A.; Ross, A.; Seo, H.; DePutter, R.; Padmanabhan, N.; White, M.; Myers, A.; Bovy, J.; Blanton, M.; Hernandez, C.; Mena, O.; Percival, W.; Prada, F.; Ross, N. P.; Saito, S.; Schneider, D.; Skibba, R.; Smith, K.; Slosar, A.; Strauss, M.; Verde, L.; Weinberg, D.; Bahcall, N.; Brinkmann, J.; da Costa, L. A.

    2012-01-01

    The Sloan Digital Sky Survey I-III surveyed 14,000 square degrees and delivered over a trillion pixels of imaging data. I present cosmological results from this unprecedented data set, which contains over a million galaxies distributed between redshifts of 0.45 and 0.70. With such a large data volume, high-precision cosmological constraints can be obtained, given careful control and understanding of observational systematics. I present a novel treatment of observational systematics and its application to the clustering signals from the data set. I will present cosmological constraints on the dark components of the Universe and the tightest constraints to date on the non-Gaussianity of the early Universe, utilizing large-scale structure.

  19. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  20. The precursory fault width formation of large earthquakes

    NASA Astrophysics Data System (ADS)

    Takeda, Fumihide; Takeo, Makoto

    2010-03-01

    We collect earthquake (EQ) events for regions of about a 5-degree mesh from a hypocenter catalog of Japan, with a regionally dependent magnitude window of M ≥ 3-3.5. The time history of the events draws a zigzagged trajectory in a five-dimensional space of EQ epicenter (two coordinates), focal depth (DEP), inter-event interval (INT), and magnitude (MAG). Its components are the time series of the EQ source parameters, for which time is the chronological event index. Each series has long-term memory and evidence of deterministic chaos. We thus use physical wavelets (P-Ws) to find the process producing large EQs. The P-Ws convert the moving average of each series and its first- and second-order differences at any interval into displacement, velocity, and acceleration (A) in a selected frequency region, respectively. The process starts with two distinct triple phase couplings of A on the source parameters DEP, INT, and MAG, precursory to every large EQ (M > about 6) throughout Japan. Each coupling then creates a linear DEP variation (W) in its series, which becomes comparable to the fault width of large EQs. This suggests that the variation exerts a corresponding shear stress on a local plane in the Earth's crust, forming the fault plane of width W that ruptures in a large EQ.
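
    The moving-average-and-differences construction can be sketched directly; the window and lag choices below are arbitrary, and the depth series is synthetic:

        import numpy as np

        def smooth(x, n):
            # Moving average plays the role of "displacement".
            return np.convolve(x, np.ones(n) / n, mode="valid")

        rng = np.random.default_rng(3)
        depth_series = np.cumsum(rng.standard_normal(500))   # toy DEP series

        w, lag = 25, 10
        disp = smooth(depth_series, w)                       # "displacement"
        vel = (disp[lag:] - disp[:-lag]) / lag               # first difference
        acc = (vel[lag:] - vel[:-lag]) / lag                 # second difference
        print(disp.shape, vel.shape, acc.shape)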

  1. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    PubMed

    duPont, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge. First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. This negative impact is found especially in the central areas closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts, in spite of short-run negative effects of the earthquake. Much of this is associated with the movement of people to East Kobe and the consequent movement of jobs to the metropolitan center of Osaka, which is located immediately to the east of Kobe. Third, the areas farthest from Kobe in its vicinity seem to have been insulated from the large direct and indirect impacts of the earthquake.
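
    A minimal synthetic-control sketch with random stand-in data; the constrained least-squares formulation is the standard one, not the authors' exact implementation:

        import numpy as np
        from scipy.optimize import minimize

        # Choose nonnegative weights summing to one over donor cities so the
        # weighted donor average tracks the treated city before the event;
        # the post-event weighted average is then the counterfactual.
        rng = np.random.default_rng(4)
        pre, donors = 20, 8
        X0 = rng.normal(100, 10, (pre, donors)).cumsum(axis=0)  # donor outcomes
        true_w = np.array([.3, .3, .2, .1, .1, 0, 0, 0])
        x1 = X0 @ true_w + rng.normal(0, 2, pre)                # treated city

        def loss(w):
            return np.sum((x1 - X0 @ w) ** 2)

        w0 = np.full(donors, 1.0 / donors)
        res = minimize(loss, w0, bounds=[(0, 1)] * donors,
                       constraints=[{"type": "eq",
                                     "fun": lambda w: w.sum() - 1}])
        print("donor weights:", np.round(res.x, 2))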

  2. A bilinear source-scaling model for M-log a observations of continental earthquakes

    USGS Publications Warehouse

    Hanks, T.C.; Bakun, W.H.

    2002-01-01

    The Wells and Coppersmith (1994) M-log A data set for continental earthquakes (where M is moment magnitude and A is fault area) and the regression lines derived from it are widely used in seismic hazard analysis for estimating M, given A. Their relations are well determined, whether for the full data set of all mechanism types or for the subset of strike-slip earthquakes. Because the coefficient of the log A term is essentially 1 in both their relations, they are equivalent to constant stress-drop scaling, at least for M ≤ 7, where most of the data lie. For M > 7, however, both relations increasingly underestimate the observations with increasing M. This feature, at least for strike-slip earthquakes, is strongly suggestive of L-model scaling at large M. Using constant stress-drop scaling (Δσ = 26.7 bars) for M ≤ 6.63 and L-model scaling (average fault slip ū = αL, where L is fault length and α = 2.19 × 10⁻⁵) at larger M, we obtain the relations M = log A + 3.98 ± 0.03 for A ≤ 537 km² and M = 4/3 log A + 3.07 ± 0.04 for A > 537 km². These prediction equations of our bilinear model fit the Wells and Coppersmith (1994) data set well in their respective ranges of validity, the transition magnitude corresponding to A = 537 km² being M = 6.71.
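
    The two regression branches quoted above join continuously at A = 537 km²; a direct transcription:

        import numpy as np

        def moment_magnitude(area_km2):
            # Bilinear M(A) with the crossover at A = 537 km^2 (M = 6.71).
            a = np.asarray(area_km2, dtype=float)
            return np.where(a <= 537.0,
                            np.log10(a) + 3.98,
                            (4.0 / 3.0) * np.log10(a) + 3.07)

        for area in (100.0, 537.0, 5000.0):
            print(f"A = {area:7.0f} km^2 -> M = {moment_magnitude(area):.2f}")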

  3. Methane emissions on large scales

    NASA Astrophysics Data System (ADS)

    Beswick, K. M.; Simpson, T. W.; Fowler, D.; Choularton, T. W.; Gallagher, M. W.; Hargreaves, K. J.; Sutton, M. A.; Kaye, A.

    Two separate studies have been undertaken to improve estimates of methane emissions on a landscape scale. The first study took place over a palsa mire in northern Finland in August 1995. A tethered balloon and a tunable diode laser were used to measure profiles of methane in the nocturnal boundary layer. Using a simple box method or the flux-gradient technique, fluxes ranging from 18.5 to 658 μmol m⁻² h⁻¹ were calculated. The large fluxes may be caused by advection of methane pockets across the measurement site, reflecting the heterogeneous nature of methane source strengths in the surrounding area. Under suitable conditions, comparison with nearby ground-based eddy-correlation results suggested that the balloon techniques could successfully measure fluxes on field scales. The second study was carried out by the NERC Scientific Services Atmospheric Research Airborne Support Facility using the Hercules C130 operated by the United Kingdom Meteorological Research Flight. A flight path around the northern coastline of Britain under steady west-east wind conditions enabled the measurement of methane concentrations up- and down-wind of northern Britain. Using a simple one-dimensional, constant-source diffusion model, the difference between the upwind and downwind concentrations was accounted for by methane emission from the surface. The contribution to methane emissions from livestock was also modelled. Modelled non-agricultural methane emissions decreased with increasing latitude, with fluxes in northern England being a factor of 4 greater than those in northern Scotland. Since the only major methane source in northern Scotland was peat bogs, these results indicated that emissions over northern England were dominated by anthropogenic sources. Emissions from livestock accounted for 12% of the total flux over northern England, decreasing to 4% in southern Scotland and becoming negligible in northern Scotland. The total methane flux over northern Scotland was consistent with emission from peat bogs alone.
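
    The box method mentioned above amounts to a one-line flux estimate from the concentration difference across a source region; all numbers below are illustrative assumptions:

        # One-box estimate: F = (C_down - C_up) * u * h / L, with wind speed
        # u, mixing height h, and fetch L over the source region.
        def box_flux(c_up_ppb, c_down_ppb, wind_m_s, mix_height_m, fetch_m):
            # Convert ppb CH4 to mol m^-3 at ~1 atm, 288 K
            # (n/V = P/RT ~ 42.3 mol m^-3).
            mol_per_m3_per_ppb = 42.3e-9
            dc = (c_down_ppb - c_up_ppb) * mol_per_m3_per_ppb
            return dc * wind_m_s * mix_height_m / fetch_m   # mol m^-2 s^-1

        f = box_flux(1850.0, 1885.0, 8.0, 800.0, 200e3)
        print(f"{f * 1e6 * 3600:.1f} umol m^-2 h^-1")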

  4. Unusual behaviour of cows prior to a large earthquake

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano; Freund, Friedemann; Grant, Rachel

    2013-04-01

    Unusual behaviour of domestic cattle before earthquakes has been reported for centuries, and often relates to cattle becoming excited, vocal, aggressive or attempting to break free of tethers and restraints. Cattle have also been reported to move to higher or lower ground before earthquakes. Here, we report unusual movements of domestic cows 2 days prior to the Marche-Umbria (M=6) earthquake in 1997. Cows moved down from their usual summer pastures in the hills and were seen in the streets of a nearby town, a highly unusual occurrence. We discuss this in the context of positive holes and air ionisation as proposed by Freund's unified theory of earthquake precursors.

  5. A scaling relationship between AE and natural earthquakes

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, N.; Kawakata, H.; Takahashi, N.

    2013-12-01

    seismic moments and the corner frequencies by grid search. The magnitudes of the AE events were estimated to be between -8 and -7. As a result, the relationship between the seismic moment and the corner frequency of the AE events satisfied the same scaling relationship as shown for natural earthquakes. This indicates that AE events in rock samples can be regarded as micro-sized earthquakes, which suggests the possibility of understanding the development processes of natural earthquakes from laboratory experiments.
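
    For context, the scaling test described above can be written down compactly: for a Brune-type source, constant stress drop implies M0 ∝ fc⁻³. The Python sketch below evaluates the standard Brune stress-drop formula; the AE moment, corner frequency, and shear-wave speed are illustrative assumptions, not values from this study.

        import math

        def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
            """Brune-model stress drop (Pa) from moment (N m) and corner frequency (Hz)."""
            r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)   # Brune source radius (m)
            return 7.0 * m0_nm / (16.0 * r ** 3)

        m0 = 10.0 ** (1.5 * (-7.5) + 9.1)    # moment for Mw ~ -7.5, mid-range of the AE estimates
        print(brune_stress_drop(m0, 3.0e5))  # Pa; constant stress drop means M0 scales as fc**-3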

  6. Direct and array observations for near source dynamic strain during large earthquakes

    NASA Astrophysics Data System (ADS)

    Huang, Bor-Shouh; Huang, Win-Gee; Lin, Chin-Jen

    2017-04-01

    The seismic ground motions from the 1999 Chi-Chi, Taiwan, earthquake (ML = 7.6) and its large aftershocks were well recorded at near-source distances by a dense seismic array (the Hualien Large Scale Seismic Test, HLSST). The HLSST site, located at the Hualien Veteran's Marble Plant, included a quarter-scale reinforced concrete cylindrical containment model with a radius of about 5.4 m. The instrumentation consisted of forty-two stations: fifteen surface accelerometers, twelve downhole accelerometers, and fifteen containment-structure response accelerometers. The fifteen free-field stations were installed along three arms, and the twelve downhole accelerometers were distributed beneath the array. A delta ground strain gauge was also installed at the site and recorded the same events. In this study, we inferred ground strains by a least-squares fit to the array translational ground-motion data, using the method proposed by Spudich et al. (1995) and Spudich and Fletcher (2008), and compared them with the strain gauge records. We analyzed the mainshock and the November 1 offshore event for both ground rotations and ground strains. We discuss the relationship of the observed spatial ground deformations to the source rupture processes and on-site ground translations, as well as implications for earthquake engineering applications.
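
    The strain-estimation step can be illustrated with a toy version of the least-squares approach cited above: station displacements are fit by a uniform displacement-gradient model whose symmetric part is the horizontal strain tensor. The Python sketch below uses synthetic station coordinates and displacements, not HLSST data, and omits the uncertainty treatment of the published method.

        import numpy as np

        # Station coordinates (m, relative to a reference point) and observed
        # horizontal displacements (m); all values are synthetic.
        xy = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [40.0, 40.0]])
        u = np.array([[0.0, 0.0], [5e-4, 1e-4], [1e-4, -2e-4], [4.8e-4, -0.8e-4]])

        # Design matrix: unknowns are [u0x, u0y, Gxx, Gxy, Gyx, Gyy]
        rows = []
        for (x, y) in xy:
            rows.append([1, 0, x, y, 0, 0])   # x-component equation
            rows.append([0, 1, 0, 0, x, y])   # y-component equation
        A = np.array(rows, dtype=float)
        b = u.ravel()

        m, *_ = np.linalg.lstsq(A, b, rcond=None)
        G = m[2:].reshape(2, 2)               # displacement-gradient tensor
        strain = 0.5 * (G + G.T)              # infinitesimal strain (symmetric part)
        print(strain)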

  7. A search for long-term periodicities in large earthquakes of southern and coastal central California

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1990-01-01

    It has been occasionally suggested that large earthquakes may follow the 8.85-year and 18.6-year lunar-solar tidal cycles and possibly the approximately 11-year solar activity cycle. From a new study of earthquakes with magnitudes greater than 5.5 in southern and coastal central California during the years 1855-1983, it is concluded that, at least in this selected area of the world, no statistically significant long-term periodicities in earthquake frequency occur. The sample size used is about twice that used in comparable earlier studies of this region, which concentrated on large earthquakes.
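
    One standard tool for this kind of period search is the Schuster test, sketched in Python below with randomly generated event times; the abstract does not specify the statistical test used, so this generic approach is an assumption.

        import numpy as np

        rng = np.random.default_rng(0)
        # Random catalog over the study window: no periodicity expected.
        event_times_yr = np.sort(rng.uniform(1855.0, 1983.0, size=60))

        def schuster_p(times, period):
            """Probability that the observed phase clustering arises by chance."""
            phases = 2.0 * np.pi * (times % period) / period
            r2 = np.sum(np.cos(phases)) ** 2 + np.sum(np.sin(phases)) ** 2
            return float(np.exp(-r2 / times.size))

        for period in (8.85, 11.0, 18.6):       # the tidal and solar cycles discussed above
            print(period, schuster_p(event_times_yr, period))  # p of order 1: not significant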

  8. Large-Scale Sequence Comparison.

    PubMed

    Lal, Devi; Verma, Mansi

    2017-01-01

    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. There are various methods available for comparing sequences, alignment being first and foremost, both for sequences with a small number of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences. The best-known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.
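
    As a concrete starting point, pairwise global and local alignment of the kind discussed above can be run with Biopython's PairwiseAligner; the minimal example below uses arbitrary scoring parameters and toy sequences (BLAST and FASTA themselves are separate programs).

        from Bio import Align

        aligner = Align.PairwiseAligner()
        aligner.mode = "global"            # Needleman-Wunsch-style global alignment
        aligner.match_score = 1.0
        aligner.mismatch_score = -1.0
        aligner.open_gap_score = -2.0
        aligner.extend_gap_score = -0.5

        alignment = aligner.align("ACCGTTA", "ACGGTA")[0]
        print(alignment.score)
        print(alignment)

        aligner.mode = "local"             # Smith-Waterman-style local alignment
        print(aligner.align("ACCGTTA", "ACGGTA")[0])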

  9. Hydrologic changes induced by large earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chia, Y.; Liu, C. Y.

    2016-12-01

    Hydrological changes induced by earthquakes have been observed worldwide. Because a dense network of monitoring wells and stream gauges has been established in Taiwan, it is possible to acquire comprehensive data on co-seismic and post-seismic hydrological changes during earthquakes, including distant earthquakes. The spatial distribution of co-seismic changes in groundwater level induced by 10 earthquakes of magnitude greater than or equal to 6.4 between 1999 and 2016 revealed crustal deformation during earthquakes. Generally, most changes induced by local earthquakes were co-seismic rises, indicating the dominance of tectonic compression as a result of convergent plate movement. Co-seismic falls often appeared in the southwestern plain and in areas adjacent to the seismogenic fault. In many cases, a co-seismic rise and a co-seismic fall in groundwater level were recorded in wells of different depths at the same monitoring station; this phenomenon reveals the variation of strain in the vertical direction. The post-seismic changes, resulting primarily from groundwater flow, can be an indicator of the hydrogeological characteristics of the aquifer tapped by the monitoring wells; in rare cases, they were caused by elastic response to post-seismic deformation. While groundwater-level changes occurred co-seismically, streamflow changes usually happened shortly after major earthquakes. Streamflow changes have generally been attributed to changes in permeability due to rock fracturing by seismic shaking. These changes were mostly increases; however, an abrupt streamflow decrease lasting for more than 8 months was recorded near the epicenter of the 1999 Chi-Chi earthquake. Pre-earthquake hydrological anomalies have also been recorded; however, further studies in earthquake hydrology are needed to enhance our understanding of these anomalies.

  10. Oceanic transform fault earthquake nucleation process and source scaling relations - A numerical modeling study with rate-state friction (Invited)

    NASA Astrophysics Data System (ADS)

    Liu, Y.; McGuire, J. J.; Behn, M. D.

    2013-12-01

    We use a three-dimensional strike-slip fault model in the framework of rate- and state-dependent friction to investigate earthquake behavior and scaling relations on oceanic transform faults (OTFs). Gabbro friction data under hydrothermal conditions are mapped onto OTFs using temperatures from (1) a half-space cooling model, and (2) a thermal model that incorporates a visco-plastic rheology, non-Newtonian viscous flow, and the effects of shear heating and hydrothermal circulation. Without introducing small-scale frictional heterogeneities on the fault, our model predicts that an OTF segment can transition between seismic and aseismic slip over many earthquake cycles, consistent with the multimode hypothesis for OTF ruptures. The average seismic coupling coefficient χ is strongly dependent on the ratio of seismogenic zone width W to earthquake nucleation size h*; χ increases by four orders of magnitude as W/h* increases from ~1 to 2. Specifically, the average χ = 0.15 ± 0.05 derived from global OTF earthquake catalogs is reached at W/h* ≈ 1.2-1.7. The modeled largest earthquake rupture area is less than the total seismogenic area, and we predict a deficiency of large earthquakes on long transforms, which is also consistent with observations. Earthquake magnitude and distribution on the Gofar (East Pacific Rise) and Romanche (equatorial Mid-Atlantic) transforms are better predicted using the visco-plastic model than the half-space cooling model. We will also investigate how fault gouge porosity variation during an OTF earthquake nucleation phase may affect the seismic wave velocity structure, for which a drop of up to 3% was observed prior to the 2008 Mw6 Gofar earthquake.

  11. Determining the Uncertainty Range of Coseismic Stress Drop of Large Earthquakes Using Finite Fault Inversion

    NASA Astrophysics Data System (ADS)

    Adams, M.; Ji, C.; Twardzik, C.; Archuleta, R. J.

    2015-12-01

    A key component in understanding the physics of earthquakes is the resolution of the state of stress on the fault before, during, and after the earthquake. A large earthquake's average stress drop is the first-order parameter for this task but is still poorly constrained, especially for intermediate and deep events. Classically, the average stress drop is estimated using the corner frequency of observed seismic data. However, a simple slip distribution is implicitly assumed, and this assumed distribution is often not appropriate for large earthquakes. Alternatively, the average stress drop can be calculated from an inverted finite-fault slip model. However, conventional finite-fault inversion methods do not directly invert for on-fault stress change; thus it is unclear whether models with significantly different stress drops can match the observations equally well. We developed a new nonlinear inversion to address this concern. The algorithm searches for the solution matching the observed seismic and geodetic data under the condition that the average stress drop is close to a pre-assigned value. We perform inversions with different pre-assigned stress drops to obtain the relationship between the average stress drop of the inverted slip model and the minimum waveform misfit. As an example, we use P and SH displacement waveforms recorded at teleseismic distances from the 2014 Mw 7.9 Rat Islands intermediate-depth earthquake to determine its average stress drop. Earth responses up to 2 Hz are calculated using an FK algorithm and the PREM velocity structure. Our preliminary analysis illustrates that with this new approach, we are able to define the lower bound of the average stress drop but fail to constrain its upper bound. The waveform misfit associated with the inverted model increases quickly as the pre-assigned stress drop decreases from 3 MPa to 0.5 MPa, but the misfit varies negligibly when the pre-assigned stress drop increases from 4.0 MPa to 50 MPa. We notice that the fine-scale

  12. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a Picture Archiving and Communications System (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interfaces, user approval, networking, workstation deployment, and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training, and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas, such as the operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary under certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desktop (personal) computer and thus reduces the number of dedicated PACS review workstations. This session

  13. Systematic Detection of Remotely Triggered Seismicity in Africa Following Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Ayorinde, A. O.; Peng, Z.; Yao, D.; Bansal, A. R.

    2016-12-01

    It is well known that large distant earthquakes can trigger micro-earthquakes/tectonic tremor during or immediately following their surface waves. Globally, triggered earthquakes have mostly been found in active plate boundary regions. It is not clear whether they can also occur within the stable intraplate regions of Africa or in the active East African Rift Zone. In this study we conduct a systematic survey of remote triggering in Africa following recent large earthquakes, including the 2004 Mw9.1 Sumatra and 2012 Mw8.6 Indian Ocean earthquakes. In particular, the 2012 Indian Ocean earthquake is the largest known strike-slip earthquake and triggered a global increase of earthquakes larger than magnitude 5.5 as well as numerous micro-earthquakes/tectonic tremors around the world. The entire African region was examined for possible remotely triggered seismicity using seismic data downloaded from the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) and the GFZ German Research Center for Geosciences. We apply a 5-Hz high-pass filter to the continuous waveforms and visually identify high-frequency signals during and immediately after the large-amplitude surface waves. Spectrograms are computed as additional tools to identify triggered seismicity, and we further confirm candidates by statistical analysis comparing the high-frequency signals before and after the distant mainshocks. So far we have identified possible triggered seismicity in Botswana and northern Madagascar. This study could help us understand dynamic triggering in the diverse tectonic settings of the African continent.
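
    The filtering step described above is straightforward to reproduce; the Python sketch below applies a zero-phase 5 Hz high-pass filter to a synthetic trace in which a small high-frequency burst rides on a large-amplitude, long-period surface wave (all signal parameters are invented for illustration).

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 100.0                                    # sampling rate (Hz)
        t = np.arange(0.0, 600.0, 1.0 / fs)
        trace = 1e3 * np.sin(2.0 * np.pi * 0.05 * t)  # large-amplitude "surface wave"
        burst = (np.abs(t - 300.0) < 1.0) * 5.0 * np.sin(2.0 * np.pi * 15.0 * t)
        trace += burst                                # small local event at t = 300 s

        b, a = butter(4, 5.0 / (0.5 * fs), btype="highpass")
        filtered = filtfilt(b, a, trace)              # zero-phase 5 Hz high-pass

        print(np.abs(filtered[np.abs(t - 300.0) < 1.0]).max())   # burst survives
        print(np.abs(filtered[np.abs(t - 100.0) < 1.0]).max())   # surface wave removed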

  14. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction: Research and Risk Mitigation by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over both long and short terms. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties; however, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  15. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  16. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  17. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large-scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems that were uncovered during development testing; reviews test data and evaluates weaknesses of the design; and discusses areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.

  18. Very short-term earthquake precursors from GPS signal interference: Case studies on moderate and large earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, Yu-Lien; Cheng, Kai-Chien; Wang, Wei-Hau; Yu, Shui-Beih

    2016-04-01

    We set up a GPS network with 17 Continuous GPS (CGPS) stations in southwestern Taiwan to monitor real-time crustal deformation. We found that systematic perturbations in GPS signals occurred just a few minutes prior to the occurrence of several moderate and large earthquakes, including the recent 2013 Nantou (ML = 6.5) and Rueisuei (ML = 6.4) earthquakes in Taiwan. The anomalous pseudorange readings were several millimeters higher or lower than those in the background time period. These systematic anomalies were found as a result of interference of GPS L-band signals by electromagnetic emissions (EMs) prior to the mainshocks. The EMs may occur in the form of harmonic or ultra-wide-band radiation and can be generated during the formation of Mode I cracks at the final stage of earthquake nucleation. We estimated the directivity of the likely EM sources by calculating the inner product of the position vector from a GPS station to a given satellite and the vector of anomalous ground motions recorded by the GPS. The results showed that the predominant inner product generally occurred when the satellite was in the direction either toward or away from the epicenter with respect to the GPS network. Our findings suggest that the GPS network may serve as a powerful tool to detect very short-term earthquake precursors and presumably to locate a large earthquake before it occurs.
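
    The directivity measure described above reduces to a projection onto the station-to-satellite line of sight; a minimal Python sketch with synthetic vectors (not actual CGPS data) follows.

        import numpy as np

        def los_projection(sat_pos, sta_pos, anomaly):
            """Project an anomalous-motion vector onto the station-to-satellite line of sight."""
            los = sat_pos - sta_pos
            los = los / np.linalg.norm(los)          # unit line-of-sight vector
            return float(np.dot(los, anomaly))       # the inner product used above

        station = np.array([0.0, 0.0, 0.0])
        satellite = np.array([1.2e7, 8.0e6, 2.1e7])  # m, illustrative GPS-like geometry
        anomaly = np.array([3e-3, -2e-3, 1e-3])      # m, assumed anomalous displacement

        print(los_projection(satellite, station, anomaly))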

  19. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  20. The 2002 Denali fault earthquake, Alaska: a large magnitude, slip-partitioned event.

    PubMed

    Eberhart-Phillips, Donna; Haeussler, Peter J; Freymueller, Jeffrey T; Frankel, Arthur D; Rubin, Charles M; Craw, Patricia; Ratchkovski, Natalia A; Anderson, Greg; Carver, Gary A; Crone, Anthony J; Dawson, Timothy E; Fletcher, Hilary; Hansen, Roger; Harp, Edwin L; Harris, Ruth A; Hill, David P; Hreinsdóttir, Sigrun; Jibson, Randall W; Jones, Lucile M; Kayen, Robert; Keefer, David K; Larsen, Christopher F; Moran, Seth C; Personius, Stephen F; Plafker, George; Sherrod, Brian; Sieh, Kerry; Sitar, Nicholas; Wallace, Wesley K

    2003-05-16

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  1. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks, while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock, and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be
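
    The fact sheet does not spell the equations out; one widely used form is the Reasenberg-Jones aftershock-rate model, λ(t, M) = 10^(a + b(Mm - M)) / (t + c)^p, sketched below in Python. The parameter values and mainshock magnitude are generic placeholders, not values from this fact sheet.

        import math

        a, b, c, p = -1.67, 0.91, 0.05, 1.08    # assumed generic parameters (t in days)
        Mm = 7.0                                 # assumed mainshock magnitude

        def expected_aftershocks(M, t1, t2):
            """Expected number of aftershocks >= M between t1 and t2 days after the mainshock."""
            k = 10.0 ** (a + b * (Mm - M))
            # closed-form integral of the rate for p != 1
            return k * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)

        n = expected_aftershocks(5.0, 0.0, 7.0)
        print(n, 1.0 - math.exp(-n))   # expected count; Poisson chance of at least one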

  2. Calibration of the landsliding numerical model SLIPOS and prediction of the seismically induced erosion for several large earthquakes scenarios

    NASA Astrophysics Data System (ADS)

    Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark

    2016-04-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. Although the scaling between earthquake magnitude and the volume of sediment eroded is well established, the geomorphic consequences, such as divide migration or valley infilling, remain poorly understood, and predicting the locations of landslide sources and deposits is a challenging issue. Progress on this topic requires algorithms that correctly resolve the interaction between landsliding and ground shaking. Peak Ground Acceleration (PGA) has been shown to control landslide density at first order, but it can trigger landslides through two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of these two effects on slope stability is not well understood. We use SLIPOS, a bedrock-landsliding algorithm based on a simple stability analysis applied at the local scale; the model reproduces the area/volume scaling and area distribution of natural landslides. We include the effects of earthquakes in SLIPOS by simulating the PGA effect as a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine Fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand case that simulating ground acceleration by cohesion decrease leads to a realistic scaling between sediment volume and earthquake magnitude.
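
    The cohesion-decrease mechanism can be illustrated with a standard infinite-slope stability calculation; this is a simplification, not the actual SLIPOS analysis, and all parameter values in the Python sketch below are assumed.

        import math

        def factor_of_safety(c_pa, slope_deg, phi_deg, gamma=26000.0, depth_m=10.0):
            """Infinite-slope FS; shaking is represented as a drop in effective cohesion c."""
            th = math.radians(slope_deg)
            gz = gamma * depth_m                       # overburden stress (Pa)
            resisting = c_pa + gz * math.cos(th) ** 2 * math.tan(math.radians(phi_deg))
            driving = gz * math.sin(th) * math.cos(th)
            return resisting / driving

        c_static = 50e3                                # assumed static cohesion (Pa)
        print(factor_of_safety(c_static, 35.0, 30.0))        # > 1: stable before shaking
        print(factor_of_safety(0.4 * c_static, 35.0, 30.0))  # < 1: fails after a 60% cohesion drop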

  3. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~8.0 have occurred in the Indian peninsula, most of them associated with old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bounded by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~2 mGal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian and Eurasian plates along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust of this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  4. Potential for a large earthquake near Los Angeles inferred from the 2014 La Habra earthquake

    PubMed Central

    Grant Ludwig, Lisa; Parker, Jay W.; Rundle, John B.; Wang, Jun; Pierce, Marlon; Blewitt, Geoffrey; Hensley, Scott

    2015-01-01

    Tectonic motion across the Los Angeles region is distributed across an intricate network of strike-slip and thrust faults that will be released in destructive earthquakes similar to or larger than the 1933 M6.4 Long Beach and 1994 M6.7 Northridge events. Here we show that Los Angeles regional thrust, strike-slip, and oblique faults are connected and move concurrently with measurable surface deformation, even in moderate magnitude earthquakes, as part of a fault system that accommodates north-south shortening and westerly tectonic escape of northern Los Angeles. The 28 March 2014 M5.1 La Habra earthquake occurred on a northeast striking, northwest dipping left-lateral oblique thrust fault northeast of Los Angeles. We present crustal deformation observation spanning the earthquake showing that concurrent deformation occurred on several structures in the shallow crust. The seismic moment of the earthquake is 82% of the total geodetic moment released. Slip within the unconsolidated upper sedimentary layer may reflect shallow release of accumulated strain on still-locked deeper structures. A future M6.1-6.3 earthquake would account for the accumulated strain. Such an event could occur on any one or several of these faults, which may not have been identified by geologic surface mapping. PMID:27981074

  5. Potential for a large earthquake near Los Angeles inferred from the 2014 La Habra earthquake.

    PubMed

    Donnellan, Andrea; Grant Ludwig, Lisa; Parker, Jay W; Rundle, John B; Wang, Jun; Pierce, Marlon; Blewitt, Geoffrey; Hensley, Scott

    2015-09-01

    Tectonic motion across the Los Angeles region is distributed across an intricate network of strike-slip and thrust faults that will be released in destructive earthquakes similar to or larger than the 1933 M6.4 Long Beach and 1994 M6.7 Northridge events. Here we show that Los Angeles regional thrust, strike-slip, and oblique faults are connected and move concurrently with measurable surface deformation, even in moderate magnitude earthquakes, as part of a fault system that accommodates north-south shortening and westerly tectonic escape of northern Los Angeles. The 28 March 2014 M5.1 La Habra earthquake occurred on a northeast striking, northwest dipping left-lateral oblique thrust fault northeast of Los Angeles. We present crustal deformation observation spanning the earthquake showing that concurrent deformation occurred on several structures in the shallow crust. The seismic moment of the earthquake is 82% of the total geodetic moment released. Slip within the unconsolidated upper sedimentary layer may reflect shallow release of accumulated strain on still-locked deeper structures. A future M6.1-6.3 earthquake would account for the accumulated strain. Such an event could occur on any one or several of these faults, which may not have been identified by geologic surface mapping.

  6. The quest for better quality-of-life - learning from large-scale shaking table tests

    NASA Astrophysics Data System (ADS)

    Nakashima, M.; Sato, E.; Nagae, T.; Kunio, F.; Takahito, I.

    2010-12-01

    Earthquake engineering has its origins in the practice of "learning from actual earthquakes and earthquake damage." That is, we recognize serious problems by witnessing the actual damage to our structures, and then we develop and apply engineering solutions to solve these problems. This tradition, i.e., "learning from actual damage," was an obvious engineering response to earthquakes and arose naturally in a civil and building engineering discipline that traditionally places more emphasis on experience than do other engineering disciplines. But with the rapid progress of urbanization, as society becomes denser and as the many components that form our society interact with increasing complexity, the potential damage with which earthquakes threaten society also increases. In such an era, the approach of "learning from actual earthquake damage" becomes unacceptably dangerous and expensive. One practical alternative to the old practice is to "learn from quasi-actual earthquake damage." A tool for experiencing earthquake damage without the attendant catastrophe is the large shaking table. E-Defense, the largest such table, was developed in Japan after the 1995 Hyogoken-Nanbu (Kobe) earthquake. Since its inauguration in 2005, E-Defense has conducted over forty full-scale or large-scale shaking table tests on a variety of structural systems. The tests supply detailed data on the actual behavior and collapse of the tested structures, offering the earthquake engineering community opportunities to experience and assess the actual seismic performance of structures and to help society prepare for earthquakes. Notably, the data were obtained without having to wait for the aftermath of actual earthquakes. Earthquake engineering has always been about life safety, but in recent years maintaining the quality of life has also become a critical issue. Quality-of-life concerns include nonstructural

  7. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    PubMed Central

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual—i.e., the Kobe economy without the earthquake—we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact is found especially in the central areas, which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and the consequent movement of jobs to the metropolitan center of Osaka, which is located immediately to the east of Kobe. Third, the areas in the vicinity of Kobe that are furthest from the epicenter seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998
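
    A compact sketch of the synthetic control idea in Python: non-negative donor weights summing to one are chosen to reproduce the treated unit's pre-event trajectory, and the weighted donors then serve as the counterfactual. The data below are synthetic; this is not the study's estimation code.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        pre = 20                                     # pre-earthquake years
        donors = rng.normal(100.0, 5.0, (8, pre)).cumsum(axis=1)   # 8 donor cities
        treated = 0.6 * donors[0] + 0.4 * donors[3] + rng.normal(0.0, 1.0, pre)

        def loss(w):
            # squared misfit of the weighted donor average to the treated series
            return float(np.sum((treated - w @ donors) ** 2))

        n = donors.shape[0]
        res = minimize(loss, np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        print(np.round(res.x, 2))                    # weights concentrate on donors 0 and 3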

  8. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal, as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change |Δτ| ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake, where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  9. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, Tom

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7–11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
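
    For reference, the modified Omori rate n(t) = K / (t + c)^p used in such analyses integrates in closed form, so expected triggered-event counts in any window follow directly; the Python sketch below uses placeholder parameters, not values fitted in this study.

        import math

        K, c, p = 12.0, 0.1, 1.08                  # assumed Omori parameters (t in years)

        def omori_count(t1, t2):
            """Expected triggered-event count between t1 and t2 years after the mainshock."""
            if p == 1.0:
                return K * math.log((t2 + c) / (t1 + c))
            return K * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)

        for t in (0.5, 2.0, 9.0):
            print(t, K / (t + c) ** p)             # instantaneous rate (events/yr)
        print(omori_count(0.0, 10.0))              # total expected in the first decade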

  10. Scaling earthquake ground motions in western Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    Akinci, Aybige; D'Amico, Sebastiano; Malagnini, Luca; Mercuri, Alessia

    In this study, we provide a complete description of the ground-motion characteristics of the western Anatolia region of Turkey. The attenuation of ground motions with distance and the variability in excitation with magnitude are parameterized using three-component 0.25-10.0 Hz earthquake ground motions at distances of 15-250 km. The data set comprises more than 11,600 three-component seismograms from 902 regional earthquakes of local magnitude (ML) 2.5-5.8, recorded during the Western Anatolia Seismic Recording Experiment (WASRE) between November 2002 and October 2003. We used regression analysis to relate the logarithm of measured ground motion to excitation, site, and propagation effects. Instead of trying to reproduce the details of the high-frequency ground motion in the time domain, we use a source model and a regional scaling law to predict the spectral shape and amplitudes of ground motion at various source-receiver distances. We fit a regression to the peak values of narrow bandpass-filtered ground-velocity time histories, and to root-mean-square and RMS-average Fourier spectral amplitudes over a range of frequencies, to define regional attenuation functions characterized by piecewise-linear geometric spreading (in log-log space) and a frequency-dependent crustal Q(f). An excitation function is also determined, which contains the competing effects of an effective stress parameter Δσ and a high-frequency attenuation term exp(-πκf). The anelastic attenuation coefficient for the entire region is given by Q(f) = 180 f^0.55. The duration of motion for each record is defined as the value that yields the observed relationship between time-domain and spectral-domain amplitudes, according to random process theory. Anatolian excitation spectra are calibrated against our empirical results using a Brune model with a stress drop of 10 MPa for the largest event in our data set (Mw 5.8) and a near-surface attenuation parameter of κ = 0.045 s. These quantities
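
    The pieces of the spectral model quoted above combine multiplicatively; the Python sketch below evaluates the anelastic term with Q(f) = 180 f^0.55 and the site term exp(-πκf) with κ = 0.045 s, and, as a simplifying assumption, uses plain 1/R geometric spreading in place of the study's piecewise-linear form (the shear-wave speed is also assumed).

        import numpy as np

        beta = 3.5        # crustal shear-wave speed (km/s), an assumed value
        kappa = 0.045     # near-surface attenuation parameter (s), from the abstract

        def amplitude_factor(f_hz, r_km):
            q = 180.0 * f_hz ** 0.55                                # Q(f) = 180 f^0.55
            anelastic = np.exp(-np.pi * f_hz * r_km / (q * beta))   # crustal anelastic loss
            site = np.exp(-np.pi * kappa * f_hz)                    # kappa high-frequency rolloff
            return anelastic * site / r_km                          # simplified 1/R spreading

        f = np.array([0.25, 1.0, 5.0, 10.0])
        print(amplitude_factor(f, 100.0))    # relative spectral amplitudes at R = 100 km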

  11. Long-Term Prediction of Large Earthquakes: When Does Quasi-Periodic Behavior Occur?

    NASA Astrophysics Data System (ADS)

    Sykes, L. R.

    2003-12-01

    I argue that the prediction of large earthquakes on time scales of a few decades is possible for a number of fault segments along transform and subduction plate boundaries. A key parameter in ascertaining whether forecasting is feasible is the size of the coefficient of variation, CV: the standard deviation of the inter-event times of large earthquakes that rupture all or most of a given fault segment, divided by T, the average repeat time. I address only large events, ones that rupture all or most of the downdip width of the seismogenic zone where velocity-weakening behavior occurs. Historic and paleoseismic data indicate that the segment that ruptured in the great 1946 Nankaido, Japan, earthquake broke 9 times in the previous 1060 years, yielding T = 118 years and CV = 0.16. The adjacent zone that broke in 1944 exhibits similar behavior, as does the Copper River delta, the site of 8 paleoseismic events dated by Plafker and Rubin (1994) above the rupture zone of the 1964 Alaska earthquake. Lindh (preceding abstract) finds that many fault segments in California have similarly small values of CV. Paleoseismic data on inter-event times at Pallett Creek and Wrightwood, however, indicate a large CV. Those sites are situated along the San Andreas fault near the end of the 1857 rupture zone, where slip was much smaller than in the Carrizo Plain, where ruptures of large events to the northwest and southeast overlap, and where deformation is multibranched as plate motion is transferred in part to the San Jacinto fault. Plate boundary slip is confined to narrow zones along the 1944 and 1946 segments of the Nankai trough but is more diffuse in the Tokai-Suruga Bay region, where the Izu Peninsula is colliding with the rest of Honshu and repeat times appear to be longer (and CV perhaps larger). Dates of uplifted terraces likely give repeat times of inter-plate thrust events that are too long, and large estimates of CV, since imbricate faults within the upper plate that generate terraces do not rupture in
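
    The statistic at the heart of the argument is easy to compute; the Python lines below form CV from a set of event dates. The dates are invented to mimic the Nankaido numbers quoted above (9 intervals spanning 1060 years), since the abstract does not list the actual historical dates.

        import numpy as np

        dates = np.array([886.0, 1004, 1120, 1245, 1361, 1470, 1596, 1707, 1831, 1946])
        intervals = np.diff(dates)            # inter-event times (years)
        T = intervals.mean()                  # average repeat time
        CV = intervals.std(ddof=1) / T        # coefficient of variation
        print(T, CV)    # a small CV (quasi-periodic behavior) favors decade-scale forecasting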

  12. The characteristic of the building damage from historical large earthquakes in Kyoto

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2016-04-01

    The city of Kyoto, located in the northern part of the Kyoto basin in Japan, has a long history of more than 1,200 years since it was first constructed. The city has been a populated area with many buildings and the center of Japanese politics, economy and culture for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. Kyoto has experienced six damaging large earthquakes during the historical period, in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes that occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. It is therefore necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the years elapsed since construction or last repair, in order to estimate seismic intensity distributions from historical earthquakes in Kyoto more accurately and reliably. The resulting seismic intensity maps would be helpful for reducing and mitigating damage from future large earthquakes.

  13. Scaling relation between earthquake magnitude and the departure time from P-wave similar growth: Application to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Noda, S.; Ellsworth, W. L.

    2016-12-01

    The magnitude (M) of an earthquake, for earthquake early warning (EEW), is typically estimated from the P-wave displacement using a relation between M and displacement amplitude (that is, a ground-motion prediction equation). In the conventional approach, the final M cannot be estimated until the peak amplitude is observed. To overcome this technical limitation, we introduce a new scaling relation between M and a characteristic of the initial P-wave displacement. We use Japanese K-NET data from 150 events with 4.5 ≤ Mw ≤ 9.0 and hypocentral distance (R) less than 200 km. The data are binned by 0.1 magnitude units and by 25 km in hypocentral distance. We retain bins with at least 5 observations and measure the average of absolute displacement (AAD) in each bin as a function of time. We find that there is no statistical difference in AAD between smaller and larger earthquakes at early times (< 0.2 s), suggesting that the observed P wave begins in a similar way. However, AAD for smaller events departs from this similarity sooner than for large events. Consequently, we define the departure time (Tdp) as the time of the first decrease in absolute displacement after the P onset. For the K-NET data, the relation between Mw and Tdp is Mw = 2.29 log Tdp + 5.95 in the magnitude range 4.5 ≤ Mw ≤ 7. Note that Tdp is much shorter than the typical source duration. This suggests that it is unnecessary to wait for the arrival of the peak amplitude to estimate the final M, because the displacement scales with the final M after Tdp. Based on this observation, we derive a new estimator for M based on AAD measurements made up to time Tdp(M). Retrospective application of this equation provides faster determination of M than the conventional approach, without loss of accuracy. We conclude that the proposed approach is useful for reducing the blind zone for EEW.
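
    The proposed scaling is directly usable once Tdp is measured; a minimal Python version of the quoted relation follows (the example departure times are arbitrary).

        import math

        def mw_from_departure_time(tdp_s):
            """Mw = 2.29 log10(Tdp) + 5.95, valid for 4.5 <= Mw <= 7 per the abstract."""
            return 2.29 * math.log10(tdp_s) + 5.95

        for tdp in (0.3, 1.0, 3.0):                # example departure times (s)
            print(tdp, round(mw_from_departure_time(tdp), 2))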

  14. Quiet zone within a seismic gap near western Nicaragua: Possible location of a future large earthquake

    USGS Publications Warehouse

    Harlow, D.H.; White, R.A.; Cifuentes, I.L.; Aburto, Q.A.

    1981-01-01

    A 5700-square-kilometer quiet zone occurs in the midst of the locations of more than 4000 earthquakes off the Pacific coast of Nicaragua. The region is indicated by the seismic gap technique to be a likely location for an earthquake of magnitude larger than 7. The quiet zone has existed since at least 1950; the last large earthquake originating from this area occurred in 1898 and was of magnitude 7.5. A rough estimate indicates that the magnitude of an earthquake rupturing the entire quiet zone could be as large as that of the 1898 event. It is not yet possible to forecast a time frame for the occurrence of such an earthquake in the quiet zone. Copyright © 1981 AAAS.

  15. Some Considerations on a Large Landslide at the Left Bank of the Aratozawa Dam Caused by the 2008 Iwate-Miyagi Intraplate Earthquake

    NASA Astrophysics Data System (ADS)

    Aydan, Ömer

    2016-06-01

    The scale and impact of rock slope failures are very large, and the form of failure differs depending upon the geological structure of the slope. The 2008 Iwate-Miyagi intraplate earthquake induced many large-scale slope failures, despite the intermediate magnitude of the earthquake. Among these large-scale slope failures, the landslide at the left bank of the Aratozawa Dam site is of great interest to specialists in rock mechanics and rock engineering. Although the slope failure was of planar type, the direction of sliding was fortunately towards the sub-valley, so the landslide did not cause great tsunami-like motion of the reservoir fluid. In this study, the author describes the characteristics of the landslide and of the strong motion and permanent ground displacement induced by the 2008 Iwate-Miyagi intraplate earthquake, which had great effects on the triggering and evolution of the landslide.

  16. Gravity Wave Disturbances in the F-Region Ionosphere Above Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bruff, Margie

    The direction of propagation, duration, and wavelength of gravity waves in the ionosphere above large earthquakes were studied using data from the Super Dual Auroral Radar Network. Ground scatter data were plotted versus range and time to identify gravity waves as alternating focused and de-focused regions of radar power in wave-like patterns. The wave patterns before and after earthquakes were analyzed to determine the directions of propagation and the wavelengths. Conditions were examined 48 hours before and after each identified disturbance to exclude waves caused by geomagnetic activity. Gravity waves were found travelling away from the epicenter before all six earthquakes for which data were available, and after four of the six earthquakes. Gravity waves travelled in at least two directions away from the epicenter in all cases, and even stronger patterns were found for two earthquakes. Waves appeared, on average, 4 days before earthquakes (persisting 2-3 hours) and 1-2 days after (persisting 4-6 hours). Most wavelengths were between 200 and 300 km. We show a possible correlation between the magnitude and depth of earthquakes and gravity wave patterns, but study of more earthquakes is required. This study provides a better understanding of the causes of ionospheric gravity wave disturbances and has potential applications for predicting earthquakes.

  17. Depths of large earthquakes determined from long-period Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Kanamori, Hiroo

    1988-05-01

    The depths and source mechanisms of nine large shallow earthquakes were determined from long-period (150 to 300 s) Rayleigh waves recorded by the Global Digital Seismograph Network (GDSN) and International Deployment of Accelerometers (IDA) networks. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and excitation. We tested various models of source finiteness, Q, group velocity, and excitation in the determination of earthquake depths. In order to determine the depth of large earthquakes from long-period surface waves, source-finiteness effects must be corrected using adequate models. The depth estimates obtained using the Q model of Dziewonski and Stein (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Stein's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate. Our depth estimates are in general consistent with the Harvard centroid-moment tensor (CMT) solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses) determined by the Student's t test are: Colombia-Ecuador earthquake (December 12, 1979), d=11 km (9, 24 km); Santa Cruz Islands earthquake (July 17, 1980), d=36 km (18, 46 km); Samoa earthquake (September 1, 1981), d=15 km (9, 26 km); Playa Azul, Mexico, earthquake (October 25, 1981), d=41 km (28, 49 km); El Salvador earthquake (June 19, 1982), d=49 km (41, 55 km); New

  18. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  19. The AD 365 earthquake: high resolution tsunami inundation for Crete and full scale simulation exercise

    NASA Astrophysics Data System (ADS)

    Kalligeris, N.; Flouri, E.; Okal, E.; Synolakis, C.

    2012-04-01

    In the eastern Mediterranean, historical and archaeological records document major earthquake and tsunami events in the past 2000 years (Ambraseys and Synolakis, 2010). The 1200 km long Hellenic Arc has allegedly caused the strongest reported earthquakes and tsunamis in the region. Among them, the AD 365 and AD 1303 tsunamis have been extensively documented. They are likely due to ruptures of the Central and Eastern segments of the Hellenic Arc, respectively. Both events had widespread impact due to ground shaking, and triggered tsunami waves that reportedly affected the entire eastern Mediterranean. The seismic mechanism of the AD 365 earthquake, located in western Crete, has recently been assigned a magnitude ranging from 8.3 to 8.5 by Shaw et al. (2008), using historical, sedimentological, geomorphic and archaeological evidence. Shaw et al. (2008) have inferred that such large earthquakes occur in the Arc every 600 to 800 years, the last known being the AD 1303 event. We report on a full-scale simulation exercise that took place in Crete on 24-25 October 2011, based on a scenario sufficiently large to overwhelm the emergency response capability of Greece, necessitating the invocation of the Monitoring and Information Centre (MIC) of the EU and triggering help from other nations. A repeat of the AD 365 earthquake would likely overwhelm the civil defense capacities of Greece. Immediately following the rupture initiation, it would cause substantial damage even to well-designed reinforced concrete structures in Crete. Minutes after initiation, the tsunami generated by the rapid displacement of the ocean floor would strike nearby coastal areas, inundating great distances in areas of low topography. The objective of the exercise was to help managers plan search and rescue operations and to identify measures useful for inclusion in the coastal resiliency index of Ewing and Synolakis (2011). For the scenario design, the tsunami hazard for the AD 365 event was assessed for

  20. Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes

    NASA Astrophysics Data System (ADS)

    Itaba, S.

    2016-12-01

    The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (of order 10^-7), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. By contrast, the prompt magnitude that the Japan Meteorological Agency (JMA) announced just after the earthquake, based on seismic waves, was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method was checked against the earthquake's largest aftershock, which occurred 29 minutes after the mainshock: the prompt report issued by JMA assigned this aftershock a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, much closer to the actual moment magnitude of 7.7. Several methods are now being proposed to estimate the magnitude of a great earthquake more quickly and thereby reduce earthquake disasters, including tsunami. Our simple method using static strain changes is a robust approach for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
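
    A minimal sketch of the final conversion step, assuming the fault model has already yielded a geodetic seismic moment M0 in N·m; the standard Hanks & Kanamori (1979) relation Mw = (2/3)(log10 M0 - 9.1) is used, and the moment values below are round numbers chosen only to reproduce the magnitudes quoted in the abstract:

```python
# Convert a geodetic seismic moment (N*m) to moment magnitude.
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_magnitude(1.4e22), 1))  # ~8.7, like the strain-based estimate
print(round(moment_magnitude(4.0e22), 1))  # ~9.0, the accepted Tohoku-Oki Mw
```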

  1. Characterizing Landslide and River Bed Sediments Grain Size after a Large Earthquake with Insight into Post-earthquake Sediment Dynamics

    NASA Astrophysics Data System (ADS)

    Li, G.; West, A. J.; Densmore, A.; Hammond, D. E.; Jin, Z.; Zhang, F.; Wang, J.; Hilton, R. G.

    2015-12-01

    Landslides are important fluvial sediment sources, often characterized by distinct grain size distributions that may be useful for tracking sediment transport and channel bed evolution and understanding associated size fractionation. Landslide grain sizes may also leave an imprint of past triggering events in sedimentary archives. Here we investigate the grain size characteristics of landslide and river bed sediments in the Min Jiang river basin draining the Longmen Shan range, in Sichuan, China. The large number of landslides triggered by the 2008 Mw7.9 Wenchuan earthquake provides a natural experiment for exploring sediment transport and grain size evolution. We report new grain size information for landslide and river bed sediments from field work (pit digging, sieving and photographing) and lab work (wet sieving, photo analysis and statistical processing). We constrain the grain size of sediment sources by combining previously published grain size data from >100 earthquake-triggered landslides with new data collected from river bed sediments from several low-order, small catchments. We track the spatial evolution of grain size in river sediment across the Longmen Shan range, from the upper Min Jiang with little influence from co-seismic landslides to the epicentral region with intensive landsliding, and we explore how landslides modulate the grain size signal carried by river sediment. Overall, this work contributes a key dataset for modeling post-earthquake sediment transport and sheds light on how sediment grain sizes evolve in response to a large earthquake.
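
    A hedged sketch of one routine step behind such datasets: deriving percentile grain sizes (e.g., the median size D50) from sieve data by interpolating the cumulative mass curve in log-size space. The sieve sizes and retained masses are invented for illustration, and treating the cumulative mass at each sieve as "percent finer" is a simplification:

```python
# Interpolate percentile grain sizes from sieve classes (illustrative).
import numpy as np

def percentile_size_mm(sieve_mm, mass_retained_g, p=50.0):
    order = np.argsort(sieve_mm)                       # fine -> coarse
    sizes = np.asarray(sieve_mm, float)[order]
    mass = np.asarray(mass_retained_g, float)[order]
    percent_finer = 100.0 * np.cumsum(mass) / mass.sum()
    return 10 ** np.interp(p, percent_finer, np.log10(sizes))

sieves = [0.063, 0.25, 1.0, 4.0, 16.0, 64.0]   # mm, hypothetical classes
masses = [5.0, 10.0, 20.0, 30.0, 25.0, 10.0]   # g retained per class
print(f"D50 = {percentile_size_mm(sieves, masses):.1f} mm")  # 2.0 mm here
```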

  2. Lake deposits record evidence of large post-1505 AD earthquakes in western Nepal

    NASA Astrophysics Data System (ADS)

    Ghazoui, Z.; Bertrand, S.; Vanneste, K.; Yokoyama, Y.; Van Der Beek, P.; Nomade, J.; Gajurel, A.

    2016-12-01

    According to historical records, the last large earthquake that ruptured the Main Frontal Thrust (MFT) in western Nepal occurred in 1505 AD. Since then, no evidence of other large earthquakes has been found in historical records or geological archives. In view of the catastrophic consequences to millions of inhabitants of Nepal and northern India, intense efforts currently focus on improving our understanding of past earthquake activity and on complementing the historical data on Himalayan earthquakes. Here we report a new record, based on earthquake-triggered turbidites in lakes. We use lake sediment records from Lake Rara, western Nepal, to reconstruct the occurrence of seismic events. The sediment cores were studied using a multi-proxy approach combining radiocarbon and 210Pb chronologies, physical properties (X-ray computerized axial tomography scan, Geotek multi-sensor core logger), high-resolution grain size, inorganic geochemistry (major elements by ITRAX XRF core scanning) and bulk organic geochemistry (C, N concentrations and stable isotopes). We identified several sequences of dense and layered fine sand, mainly composed of mica, which we interpret as earthquake-triggered turbidites. Our results suggest the presence of a synchronous event between the two lake sites correlated with the well-known 1505 AD earthquake. In addition, our sediment records reveal five earthquake-triggered turbidites younger than the 1505 AD event. By comparison with historical archives, we relate one of those to the 1833 AD MFT rupture. The others may reflect successive ruptures of the Western Nepal Fault System. Our study sheds light on events that have not been recorded in historical chronicles. These five MMI > 7 earthquakes permit addressing the problem of missing slip on the MFT in western Nepal and reevaluating the risk of a large earthquake affecting western Nepal and northern India.

  3. Coseismic Slip Distributions of Great or Large Earthquakes in the Northern Japan to Kurile Subduction Zone

    NASA Astrophysics Data System (ADS)

    Harada, T.; Satake, K.; Ishibashi, K.

    2011-12-01

    Slip distributions of great and large earthquakes since 1963 along the northern Japan and Kuril trenches are examined to study the recurrence of interplate, intraslab and outer-rise earthquakes. The main findings are that the large earthquakes in 1991 and 1995 reruptured the 1963 great Urup earthquake source, and the 2006, 2007 and 2009 Simushir earthquakes were all different types. We also identify three seismic gaps. The northern Japan to southern Kurile trenches have been regarded as a typical subduction zone with spatially and temporally regular recurrence of great (M>8) interplate earthquakes. The source regions were grouped into six segments by Utsu (1972; 1984). The Headquarters for Earthquake Research Promotion of the Japanese government (2004) divided the southern Kurile subduction zone into four regions and evaluated future probabilities of great interplate earthquakes. Besides great interplate events, however, many large (M>7) interplate, intraslab, outer-rise and tsunami earthquakes have also occurred in this region. Harada, Ishibashi, and Satake (2010, 2011) depicted the space-time pattern of M>7 earthquakes along the northern Japan to Kuril trench, based on the relocated mainshock-aftershock distributions of all types of earthquakes that occurred since 1913. The space-time pattern is more complex than had conventionally been considered. Each region has been ruptured by a M8-class interplate earthquake or by multiple M7-class events. In this study, in order to examine in more detail the spatial pattern, or rupture areas, of M>7 earthquakes since 1963 (the year from which WWSSN waveform data are available), we estimated coseismic slip distributions by Kikuchi and Kanamori's (2003) teleseismic body wave inversion method. The WWSSN waveform data were used for earthquakes before 1990, and digital teleseismic waveform data compiled by the IRIS were used for events after 1990. Main-shock hypocenters that had been relocated by our previous study were used as

  4. Interseismic Coupling Models and their interactions with the Sources of Large and Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Chlieh, M.; Perfettini, H.; Avouac, J. P.

    2009-04-01

    Recent observations of heterogeneous strain build-up reported from subduction zones and seismic sources of large and great interplate earthquakes indicate that seismic asperities are probably persistent features of the megathrust. The Peru megathrust recurrently produces large seismic events like the 2001 Mw 8.4 Arequipa earthquake or the 2007 Mw 8.0 Pisco earthquake. The Peruvian subduction zone provides an exceptional opportunity to understand the possible relationship between interseismic coupling, large megathrust ruptures and the frictional properties of the megathrust. An emerging concept is a megathrust with strong locked fault patches surrounded by aseismic slip. The 2001 Mw 8.4 Arequipa earthquake ruptured only the northern portion of the patch that had already ruptured during the great 1868 Mw~8.8 earthquake and that had remained locked in the interseismic period. The 2007 Mw 8.0 Pisco earthquake ruptured the southern portion of the 1746 Mw~8.5 event. The moment released in 2007 amounts to only a small fraction of the moment deficit that had accumulated since the 1746 great earthquake. Thus, the potential for future large megathrust events in the Central and Southern Peru area remains large. These recent earthquakes indicate that the same portion of a megathrust can rupture in different ways depending on whether asperities break as isolated events or jointly to produce a larger rupture. The spatial distribution of the frictional properties of the megathrust could be the cause of a more complex earthquake sequence from one seismic cycle to another. The subduction of geomorphologic structures like the Nazca Ridge could be the cause of lower coupling there.

  5. The evolution of hillslope strength following large earthquakes

    NASA Astrophysics Data System (ADS)

    Brain, Matthew; Rosser, Nick; Tunstall, Neil

    2017-04-01

    Earthquake-induced landslides play an important role in the evolution of mountain landscapes. Earthquake ground shaking triggers near-instantaneous landsliding, but has also been shown to weaken hillslopes, preconditioning them for failure during subsequent seismicity and/or precipitation events. The temporal evolution of hillslope strength during and following primary seismicity, and if and how this ultimately results in failure, is poorly constrained due to the rarity of high-magnitude earthquakes and limited availability of suitable field datasets. We present results obtained from novel geotechnical laboratory tests to better constrain the mechanisms that control strength evolution in Earth materials of differing rheology. We consider how the strength of hillslope materials responds to ground-shaking events of different magnitude and if and how this persists to influence landslide activity during interseismic periods. We demonstrate the role of stress path and stress history, strain rate and foreshock and aftershock sequences in controlling the evolution of hillslope strength and stability. Critically, we show how hillslopes can be strengthened rather than weakened in some settings, challenging conventional assumptions. On the basis of our laboratory data, we consider the implications for earthquake-induced geomorphic perturbations in mountain landscapes over multiple timescales and in different seismogenic settings.

  6. What controls the location where large earthquakes nucleate along the North Anatolian Fault ?

    NASA Astrophysics Data System (ADS)

    Bouchon, M.; Karabulut, H.; Schmittbuhl, J.; Durand, V.; Marsan, D.; Renard, F.

    2012-12-01

    We review several sets of observations which suggest that the location of the epicenters of the 1939-1999 sequence of large earthquakes along the NAF obeys some mechanical logic. The 1999 Izmit earthquake nucleated in a zone of localized crustal extension oriented N10E (Crampin et al., 1985; Evans et al., 1987), nearly orthogonal to the strike of the NAF, thus releasing the normal stress on the fault in the area and facilitating rupture nucleation. The 1999 Duzce epicenter, located about 25 km from the end of the Izmit rupture, is precisely near the start of a simple linear segment of the fault (Pucci et al., 2006) where supershear rupture occurred (Bouchon et al., 2001; Konca et al., 2010). Aftershock locations of the Izmit earthquake in the region (Gorgun et al., 2009) show that Duzce, at its start, was the first significant Izmit aftershock to occur on this simple segment. The rupture nucleated on the part of this simple segment which had been most loaded in Coulomb stress by the Izmit earthquake. Once rupture of this segment began, it seems logical that the whole segment would break, as its simple geometry suggests that no barrier was present to arrest rupture. Rupture of this segment, in turn, led to the rupture of adjacent segments. Like the Izmit earthquake, the 1943 Tosya and the 1944 Bolu-Gerede earthquakes nucleated near a zone of localized crustal extension. The long-range delayed triggering of extensional clusters observed after the Izmit/Duzce earthquakes (Durand et al., 2010) suggests a possible long-range delayed triggering of the 1943 shock by the 1942 Niksar earthquake. The 1942, 1957 Abant and 1967 Mudurnu earthquake nucleation locations further suggest that, as observed for the Duzce earthquake, previous earthquake ruptures stopped when encountering geometrically complex segments, and later ruptures nucleated past these segments.

  7. Preliminary investigation of some large landslides triggered by the 2008 Wenchuan earthquake, Sichuan Province, China

    USGS Publications Warehouse

    Wang, F.; Cheng, Q.; Highland, L.; Miyajima, M.; Wang, Hongfang; Yan, C.

    2009-01-01

    The Ms 8.0 Wenchuan earthquake or "Great Sichuan Earthquake" occurred at 14:28 local time on 12 May 2008 in Sichuan Province, China. Damage by earthquake-induced landslides was an important part of the total earthquake damage. This report presents preliminary observations on the Hongyan Resort slide located southwest of the main epicenter, shallow mountain surface failures in Xuankou village of Yingxiu Town, the Jiufengchun slide near Longmenshan Town, the Hongsong Hydro-power Station slide near Hongbai Town, the Xiaojiaqiao slide in Chaping Town, two landslides in Beichuan County town which destroyed a large part of the town, and the Donghekou and Shibangou slides in Qingchuan County, which formed the second-largest landslide lake created by this earthquake. The influences of seismic, topographic, geologic, and hydro-geologic conditions are discussed. © 2009 Springer-Verlag.

  8. Three-dimensional distribution of ionospheric anomalies prior to three large earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    He, Liming; Heki, Kosuke

    2016-07-01

    Using regional Global Positioning System (GPS) networks, we studied the three-dimensional spatial structure of ionospheric total electron content (TEC) anomalies preceding three recent large earthquakes in Chile, South America: the 2010 Maule (Mw 8.8), the 2014 Iquique (Mw 8.2), and the 2015 Illapel (Mw 8.3) earthquakes. Both positive and negative TEC anomalies, with areal extent dependent on the earthquake magnitudes, appeared simultaneously 20-40 min before the earthquakes. For the two midlatitude earthquakes (2010 Maule and 2015 Illapel), positive anomalies occurred to the north of the epicenters at altitudes of 150-250 km. The negative anomalies occurred farther to the north at higher altitudes of 200-500 km. This geometry places the epicenter and the positive and negative anomalies along a line parallel to the local geomagnetic field, which is a typical structure of ionospheric anomalies occurring in response to positive surface electric charges.
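
    For background, a sketch of the standard dual-frequency combination underlying GPS-TEC studies of this kind (not code from this paper): the differential group delay of the L1 and L2 carriers is proportional to the slant TEC along the ray. The 3 m pseudorange difference is a made-up input:

```python
# Slant TEC from a dual-frequency GPS pseudorange difference (standard formula).
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def slant_tec_tecu(p2_minus_p1_meters: float) -> float:
    """Return slant TEC in TECU (1 TECU = 1e16 electrons/m^2)."""
    coeff = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))  # electrons/m^2 per meter
    return coeff * p2_minus_p1_meters / 1e16

print(f"{slant_tec_tecu(3.0):.1f} TECU")  # ~29 TECU for a 3 m differential delay
```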

  9. Investigation of the Seismic Nucleation Phase of Large Earthquakes Using Broadband Teleseismic Data

    NASA Astrophysics Data System (ADS)

    Burkhart, Eryn Therese

    The dynamic motion of an earthquake begins abruptly, but is often initiated by a short interval of weak motion called the seismic nucleation phase (SNP). Ellsworth and Beroza [1995, 1996] concluded that the SNP was detectable in near-source records of 48 earthquakes with moment magnitude (Mw) ranging from 1.1 to 8.1. They found that the SNP accounted for approximately 0.5% of the total moment and 1/6 of the duration of the earthquake. Ji et al. [2010] investigated the SNP of 19 earthquakes with Mw greater than 8.0 using teleseismic broadband data. That study concluded that roughly half of the earthquakes had detectable SNPs, inconsistent with the findings of Ellsworth and Beroza [1995]. Here 69 earthquakes of Mw 7.5-8.0 from 1994 to 2011 are further examined. The SNP is clearly detectable using teleseismic data in 32 events, with 35 events showing no nucleation phase and 2 events having insufficient data to perform stacking, consistent with the previous analysis. Our study also reveals that the percentage of SNP events is correlated with focal mechanism and hypocenter depth. Strike-slip earthquakes are more likely to exhibit a clear SNP than normal or thrust earthquakes: eleven of 14 strike-slip earthquakes (78.6%) have detectable SNPs, whereas only 16 of 40 (40%) thrust earthquakes do. This percentage is also smaller for deep events (33% for events with hypocenter depths > 250 km). To understand why certain thrust earthquakes have a visible SNP, we examined the sediment thickness, age, and angle of the subducting plate for all thrust earthquakes in the study. We found that thrust events with shallow sediment cover (600 m) on the subducting plate tend to have clear SNPs. If the SNP can be better understood in the future, it may help seismologists better understand the rupture dynamics of large earthquakes. Potential applications of this work could attempt to predict the magnitude of an earthquake seconds before it begins by measuring the SNP, vastly

  10. The large aftershocks triggered by the 2011 Mw 9.0 Tohoku-Oki earthquake, Japan

    NASA Astrophysics Data System (ADS)

    He, Ping; Wen, Yangmao; Xu, Caijun; Liu, Yang

    2013-09-01

    The Mw 9.0 Tohoku-Oki earthquake that occurred off the Pacific coast of Japan on March 11, 2011, was followed by thousands of aftershocks, both near the plate interface and in the crust of inland eastern Japan. In this paper, we report on two large, shallow crustal earthquakes that occurred near the Ibaraki-Fukushima prefecture border, where the background seismicity was low prior to the 2011 Tohoku-Oki earthquake. Using densely spaced geodetic observations (GPS and InSAR datasets), we found that two large aftershocks in the Iwaki and Kita-Ibarake regions (hereafter referred to as the Iwaki earthquake and the Kita-Ibarake earthquake) produced 2.1 m and 0.44 m of motion in the line-of-sight (LOS), respectively. The azimuth-offset method was used to obtain the preliminary location of the fault traces. The InSAR-based maximum offset and trace of the faults that produced the Iwaki earthquake are consistent with field observations. The fault location and geometry of these two earthquakes are constrained by a rectangular dislocation model in a multilayered elastic half-space, which indicates that the maximum slips for the two earthquakes are 3.28 m and 0.98 m, respectively. The Coulomb stress changes were calculated for the faults following the 2011 Mw 9.0 Tohoku-Oki earthquake based on the modeled slip along the fault planes. The resulting Coulomb stress changes indicate that the stresses on the faults increased by up to 1.1 MPa and 0.7 MPa in the Iwaki and Kita-Ibarake regions, respectively, suggesting that the Tohoku-Oki earthquake triggered the two aftershocks, supporting the results of seismic tomography.
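
    A minimal sketch of the Coulomb failure stress change used in such triggering arguments, dCFS = d_tau + mu' * d_sigma_n, with the shear stress change resolved in the slip direction and unclamping taken positive; the numbers are illustrative, not the paper's values:

```python
# Coulomb failure stress change on a receiver fault (illustrative).
def coulomb_stress_change_mpa(d_shear_mpa: float, d_normal_mpa: float,
                              effective_friction: float = 0.4) -> float:
    """d_normal_mpa > 0 means unclamping; positive dCFS promotes failure."""
    return d_shear_mpa + effective_friction * d_normal_mpa

print(coulomb_stress_change_mpa(0.9, 0.5))  # 1.1 MPa, promotes failure
```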

  11. Exploring the uncertainty range of co-seismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2016-10-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average co-seismic stress drop ($\overline{\Delta\tau_E}$) of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a $\overline{\Delta\tau_E}$ matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and $\overline{\Delta\tau_E}$, which allows one to define the range of $\overline{\Delta\tau_E}$ that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of $\overline{\Delta\tau_E}$ (5-10 MPa) for this earthquake is successfully constrained. However, the same dataset exhibits no sensitivity to the upper bound of $\overline{\Delta\tau_E}$ because there is limited resolution to the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of $\overline{\Delta\tau_E}$ cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of $\overline{\Delta\tau_E}$ leads to the conclusions that 1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated; and 2) the upper bound of the average fracture energy E_G cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of $\overline{\Delta\tau_E}$ can be used as an energy-based smoothing scheme during finite fault inversions.

  12. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  13. Dynamic Triggering of Earthquakes in the Salton Sea Region of Southern California from Large Regional and Teleseismic Earthquakes

    NASA Astrophysics Data System (ADS)

    Doran, A.; Meng, X.; Peng, Z.; Wu, C.; Kilb, D. L.

    2010-12-01

    We perform a systematic survey of dynamically triggered earthquakes in the Salton Sea region of southern California using borehole seismic data recordings (2007 to present). We define triggered events as high-frequency seismic energy during large-amplitude seismic waves of distant earthquakes. Our mainshock database includes 26 teleseismic events (epicentral distances > 1000 km; Mw ≥ 7.5), and 8 regional events (epicentral distances 100 - 1000 km; Mw ≥ 5.5). Of these, 1 teleseismic and 7 regional events produce triggered seismic activity within our study region. The triggering mainshocks are not limited to specific azimuths. For example, triggering is observed following the 2008 Mw 6.0 Nevada earthquake to the north and the 2010 Mw 7.2 Northern Baja California earthquake to the south. The peak ground velocities in our study region generated by the triggering mainshocks exceed 0.03 cm/s, which corresponds to a dynamic stress of ~2 kPa. This apparent triggering threshold is consistent with thresholds found in the Long Valley Caldera (Brodsky and Prejean, 2005), the Parkfield section of San Andreas Fault (Peng et al., 2009), and near the San Jacinto Fault (Kane et al., 2007). The triggered events occur almost instantaneously with the arrival of large amplitude seismic waves and appear to be modulated by the passing surface waves, similar to recent observations of triggered deep “non-volcanic” tremor along major plate boundary faults in California, Cascadia, Japan, and Taiwan (Peng and Gomberg, 2010). However, unlike these deep ‘tremor’ events, the triggered signals we find in this study have very short P- to S-arrival times, suggesting that they likely originate from brittle failure in the shallow crust. Confirming this, spectra of the triggered signals mimic spectra of typical shallow events in the region. Extending our observation time window to ~1 month following the mainshock event we find that for the 2010 Mw 7.2 Northern Baja California mainshock
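
    A back-of-the-envelope check of the quoted threshold, using the plane-wave approximation that dynamic stress is roughly the shear modulus times particle velocity divided by phase velocity; the modulus and velocity below are assumed typical crustal values, not parameters from the study:

```python
# Dynamic stress implied by a peak ground velocity (plane-wave approximation).
SHEAR_MODULUS_PA = 3.0e10    # assumed typical crustal value
PHASE_VELOCITY_M_S = 3500.0  # assumed surface/shear wave speed

def dynamic_stress_kpa(pgv_cm_per_s: float) -> float:
    return SHEAR_MODULUS_PA * (pgv_cm_per_s / 100.0) / PHASE_VELOCITY_M_S / 1000.0

print(f"{dynamic_stress_kpa(0.03):.1f} kPa")  # ~2.6 kPa, close to the quoted ~2 kPa
```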

  14. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
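
    A sketch of the simplest regularity statistic involved in such tests: the coefficient of variation (COV) of inter-event times, which is 1 for a Poisson process, below 1 for quasi-periodic recurrence, and above 1 for clustering. The event dates are invented stand-ins, not the Wrightwood chronology:

```python
# Coefficient of variation of earthquake inter-event times (illustrative).
import numpy as np

def recurrence_cov(event_years) -> float:
    intervals = np.diff(np.sort(np.asarray(event_years, float)))
    return intervals.std(ddof=1) / intervals.mean()

events = [-950, -840, -760, -650, -540, -470, -350, -260, -150, -50, 60, 160]
print(f"COV = {recurrence_cov(events):.2f}")  # well below 1, i.e. quasi-periodic
```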

  15. Detection of large prehistoric earthquakes in the pacific northwest by microfossil analysis.

    PubMed

    Mathewes, R W; Clague, J J

    1994-04-29

    Geologic and palynological evidence for rapid sea level change approximately 3400 and approximately 2000 carbon-14 years ago (3600 and 1900 calendar years ago) has been found at sites up to 110 kilometers apart in southwestern British Columbia. Submergence on southern Vancouver Island and slight emergence on the mainland during the older event are consistent with a great (magnitude M ≥ 8) earthquake on the Cascadia subduction zone. The younger event is characterized by submergence throughout the region and may also record a plate-boundary earthquake or a very large crustal or intraplate earthquake. Microfossil analysis can detect small amounts of coseismic uplift and subsidence that leave little or no lithostratigraphic signature.

  16. Benefits of Earthquake Early Warning to Large Municipalities (Invited)

    NASA Astrophysics Data System (ADS)

    Featherstone, J.

    2013-12-01

    The City of Los Angeles has been involved in testing the Caltech ShakeAlert Earthquake Early Warning (EQEW) system since February 2012. This system accesses a network of seismic monitors installed throughout California. The system analyzes and processes seismic information, and transmits a warning (audible and visual) when an earthquake occurs. In late 2011, the City of Los Angeles Emergency Management Department (EMD) was approached by Caltech regarding EQEW, and immediately recognized the value of the system. Simultaneously, EMD was in the process of finalizing a report by a multi-discipline team that visited Japan in December 2011, which spoke to the effectiveness of EQEW during the March 11, 2011 earthquake that struck that country. Information collected by the team confirmed that the EQEW systems proved to be very effective in alerting the population of the impending earthquake. The EQEW in Japan is also tied to mechanical safeguards, such as the stopping of high-speed trains. For a city the size and complexity of Los Angeles, the implementation of a reliable EQEW system will save lives, reduce loss, ensure effective and rapid emergency response, and greatly enhance the ability of the region to recover from a damaging earthquake. The current ShakeAlert system is being tested at several governmental organizations and private businesses in the region. EMD, in cooperation with Caltech, identified several locations internal to the City where the system would have an immediate benefit. These include the staff offices within EMD, the Los Angeles Police Department's Real Time Analysis and Critical Response Division (24 hour crime center), and the Los Angeles Fire Department's Metropolitan Fire Communications (911 Dispatch). All three of these agencies routinely manage the collaboration and coordination of citywide emergency information and response during times of crisis. Having these three key public safety offices connected and included in the

  17. Large Chilean earthquakes and tsunamis of 1730 and 1751: new analysis of historical data

    NASA Astrophysics Data System (ADS)

    Udias, Agustin; Buforn, Elisa; Madariaga, Raul

    2013-04-01

    A large collection of contemporary documents from the Archivo de Indias (Seville, Spain) concerning the large Chilean earthquakes and tsunamis of 1730 and 1751 has been studied for the first time. The documents include official and private letters to the King of Spain, and proceedings, memorials and reports of the colonial administration. They provide detailed information about the characteristics and the damage produced by these two mega earthquakes. The 1730 event, the larger of the two, with an estimated magnitude close to Mw = 9, affected a region more than 900 km long, from Copiapó in the north to Concepción in the south, causing significant damage in the capital, Santiago. It was followed by a large tsunami which especially affected the two coastal cities of Valparaiso and Concepción. Twenty-one years later, in 1751, another earthquake caused damage to the region from Santiago to Valdivia. The tsunami again destroyed the city of Concepción and made necessary its relocation from the site at the town of Penco to its present site on the BioBio river. We suggest that this event was very similar in size and extent to the Maule earthquake of 27 February 2010. It is estimated that the two earthquakes together broke the entire plate boundary in central Chile, along almost 900 km, from 30°S to 38°S. A possible repeat of the 1730 earthquake in the future presents a major risk for Central Chile.

  18. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and what lessons were learned in handling this emergency are discussed. The problem of loose asbestos is addressed. (GR)

  20. Large-scale Digitoxin Intoxication

    PubMed Central

    Lely, A. H.; Van Enter, C. H. J.

    1970-01-01

    Because of an error in the manufacture of digoxin tablets a large number of patients took tablets that contained 0·20 mg. of digitoxin and 0·05 mg. of digoxin instead of the prescribed 0·25 mg. of digoxin. The symptoms are described of 179 patients who took these tablets and suffered from digitalis intoxication. Of these patients, 125 had taken the faultily composed tablets for more than three weeks. In 48 patients 105 separate disturbances in rhythm or in atrioventricular conduction were observed on the electrocardiogram. Extreme fatigue and serious eye conditions were observed in 95% of the patients. Twelve patients had a transient psychosis. Extensive ophthalmological observations indicated that the visual complaints were most probably caused by a transient retrobulbar neuritis. PMID:5273245

  1. Repeating and not so Repeating Large Earthquakes in the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Singh, S.; Iglesias, A.; Perez-Campos, X.

    2013-12-01

    The rupture area and recurrence interval of large earthquakes in the Mexican subduction zone are relatively small, and almost the entire length of the zone has experienced a large (Mw ≥ 7.0) earthquake in the last 100 years (Singh et al., 1981). Several segments have experienced multiple large earthquakes in this time period. However, as the rupture areas of events prior to 1973 are only approximately known, the recurrence periods are uncertain. Large earthquakes occurred in the Ometepec, Guerrero, segment in 1937, 1950, 1982 and 2012 (Singh et al., 1981). In 1982, two earthquakes (Ms 6.9 and Ms 7.0) occurred about 4 hours apart, one apparently downdip from the other (Astiz & Kanamori, 1984; Beroza et al., 1984). The 2012 earthquake, on the other hand, had a magnitude of Mw 7.5 (globalcmt.org), breaking approximately the same area as the 1982 doublet but with a total scalar moment about three times that of the 1982 doublet combined. It therefore seems that 'repeat earthquakes' in the Ometepec segment are not necessarily very similar to one another. The Central Oaxaca segment broke in large earthquakes in 1928 (Mw 7.7) and 1978 (Mw 7.7). Seismograms for the two events, recorded on the Wiechert seismograph in Uppsala, show remarkable similarity, suggesting that in this area large earthquakes can repeat. The extent to which the near-trench part of the fault plane participates in the ruptures is not well understood. In the Ometepec segment, the updip portion of the plate interface broke during the 25 Feb 1996 earthquake (Mw 7.1), which was a slow earthquake and produced anomalously low PGAs (Iglesias et al., 2003). Historical records indicate that a great tsunamigenic earthquake, M~8.6, occurred in the Oaxaca region in 1787, breaking the Central Oaxaca segment together with several adjacent segments (Suarez & Albini 2009). Whether the updip portion of the fault broke in this event remains speculative, although plausible based on the large tsunami. Evidence from the

  2. Volcanic activity before and after large tectonic earthquakes: Observations and statistical significance

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Walter, Thomas R.

    2009-06-01

    The study of volcanic triggering and interaction with the tectonic surroundings has received special attention in recent years, using both direct field observations and historical descriptions of eruptions and earthquake activity. Repeated reports of clustered eruptions and earthquakes may imply that interaction is important in some subregions. However, the subregions likely to suffer such clusters have not been systematically identified, and the processes responsible for the observed interaction remain unclear. We first review previous works about the clustered occurrence of eruptions and earthquakes, and describe selected events. We further elaborate available databases and confirm a statistically significant relationship between volcanic eruptions and earthquakes on the global scale. Moreover, our study implies that closed volcanic systems in particular tend to be activated in association with a tectonic earthquake trigger. We then perform a statistical study at the subregional level, showing that certain subregions are especially predisposed to concurrent eruption-earthquake sequences, whereas such clustering is statistically less significant in other subregions. Based on this study, we argue that individual and selected observations may bias the perceptible weight of coupling. The activity at volcanoes located in the predisposed subregions (e.g., Japan, Indonesia, Melanesia), however, often unexpectedly changes in association with either an imminent or a past earthquake.
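
    An illustrative Monte Carlo recipe for the kind of significance test described (not the authors' code): count eruptions falling within a time window of any large earthquake, then compare that count against random shuffles of the eruption times. Catalog contents and the window length are placeholders:

```python
# Shuffle test for eruption-earthquake clustering (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def pair_count(eruption_days, quake_days, window_days=365.0) -> int:
    e = np.asarray(eruption_days, float)[:, None]
    q = np.asarray(quake_days, float)[None, :]
    return int((np.abs(e - q) <= window_days).any(axis=1).sum())

def shuffle_p_value(eruption_days, quake_days, span_days, n_trials=5000):
    observed = pair_count(eruption_days, quake_days)
    null = [pair_count(rng.uniform(0.0, span_days, len(eruption_days)), quake_days)
            for _ in range(n_trials)]
    return observed, float(np.mean(np.asarray(null) >= observed))

obs, p = shuffle_p_value([100.0, 400.0, 800.0], [90.0, 420.0, 1500.0], 2000.0)
print(obs, p)  # small p would indicate clustering beyond chance
```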

  3. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    PubMed

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-09-27

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  4. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source
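
    A sketch of the empirical-orthogonal-function step described above, assuming a catalogue of source time functions sampled on a common time axis (here a random stand-in matrix, not the real >1000-STF catalogue): SVD of the centred matrix yields the basis, and each STF is then summarized by a few weights:

```python
# Represent source time functions by a few empirical orthogonal functions.
import numpy as np

rng = np.random.default_rng(1)
stfs = rng.random((1000, 256))         # rows = STFs, columns = time samples (stand-in)

mean_stf = stfs.mean(axis=0)
U, s, Vt = np.linalg.svd(stfs - mean_stf, full_matrices=False)
eofs = Vt[:5]                          # first 5 empirical orthogonal functions
weights = (stfs - mean_stf) @ eofs.T   # 5 numbers now parameterize each STF
reconstructed = mean_stf + weights @ eofs  # low-dimensional approximation
```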

  5. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  6. The typical seismic behavior in the vicinity of a large earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, M. V.; Tikhonov, I. N.

    2016-10-01

    The Global Centroid Moment Tensor catalog (GCMT) was used to construct the spatio-temporal generalized vicinity of a large earthquake (GVLE) and to investigate the behavior of seismicity in GVLE. The vicinity is made of earthquakes falling into the zone of influence of a large number (100, 300, or 1000) of largest earthquakes. The GVLE construction aims at enlarging the available statistics, diminishing a strong random component, and revealing typical features of pre- and post-shock seismic activity in more detail. As a result of the GVLE construction, the character of fore- and aftershock cascades was examined in more detail than was possible without of the use of the GVLE approach. As well, several anomalies in the behavior exhibited by a variety of earthquake parameters were identified. The amplitudes of all these anomalies increase with the approaching time of the generalized large earthquake (GLE) as the logarithm of the time interval from the GLE occurrence. Most of the discussed anomalies agree with common features well expected in the evolution of instability. In addition to these common type precursors, one earthquake-specific precursor was found. The decrease in mean earthquake depth presumably occurring in a smaller GVLE probably provides evidence of a deep fluid being involved in the process. The typical features in the evolution of shear instability as revealed in GVLE agree with results obtained in laboratory studies of acoustic emission (AE). The majority of the anomalies in earthquake parameters appear to have a secondary character, largely connected with an increase in mean magnitude and decreasing fraction of moderate size events (mw5.0-6.0) in the immediate GLE vicinity. This deficit of moderate size events could hardly be caused entirely by their incomplete reporting and can presumably reflect some features in the evolution of seismic instability.

  7. Dynamic Response and Ground-Motion Effects of Building Clusters During Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Isbiliroglu, Y. D.; Taborda, R.; Bielak, J.

    2012-12-01

    The objective of this study is to analyze the response of building clusters during earthquakes, the effect that they have on the ground motion, and how individual buildings interact with the surrounding soil and with each other. We conduct a series of large-scale, physics-based simulations that synthesize the earthquake source and the response of entire building inventories. The configuration of the clusters, defined by the total number of buildings, their number of stories, dynamic properties, and spatial distribution and separation, is varied for each simulation. In order to perform these simulations efficiently while recurrently modifying these characteristics without redoing the entire "source to building structure" simulation every time, we use the Domain Reduction Method (DRM). The DRM is a modular two-step finite-element methodology for modeling wave propagation problems in regions with localized features. It allows one to store and reuse the background motion excitation of subdomains without loss of information. Buildings are included in the second step of the DRM. Each building is represented by a block model composed of additional finite-elements in full contact with the ground. These models are adjusted to emulate the general geometric and dynamic properties of real buildings. We conduct our study in the greater Los Angeles basin, using the main shock of the 1994 Northridge earthquake for frequencies up to 5Hz. In the first step of the DRM we use a domain of 82 km x 82 km x 41 km. Then, for the second step, we use a smaller sub-domain of 5.12 km x 5.12 km x 1.28 km, with the buildings. The results suggest that site-city interaction effects are more prominent for building clusters in soft-soil areas. These effects consist in changes in the amplitude of the ground motion and dynamic response of the buildings. The simulations are done using Hercules, the parallel octree-based finite-element earthquake simulator developed by the Quake Group at Carnegie

  8. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre 1997 period concluding that while the pressure for reform was mounting that there were very few examples of deliberate or…

  9. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  10. Comparison between scaling law and nonparametric Bayesian estimate for the recurrence time of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2009-04-01

    According to the unified scaling theory, the probability distribution function of the recurrence time T is a scaled version of a base function, and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and for different scale levels, for Corral (2005) the (truncated) generalized gamma distribution is the best model, for German (2006) the Weibull distribution. The scaling approach should overcome the difficulty of estimating distribution functions over small areas, but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution, we have evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then gathered them into eight tectonically coherent regions, each dominated by a well-characterized geodynamic process. To address problems such as the paucity of data, the presence of outliers, and uncertainty in the choice of the functional expression for the distribution of T, we have followed a nonparametric approach (Rotondi (2009)) in which: (a) maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; (c) Bayesian methodology allows one to exploit different information sources, so that the model may fit well even with scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approaches. References Corral A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89
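
    For contrast with the nonparametric approach, a hedged sketch of the parametric side of the comparison: fit Weibull and exponential models to a sample of recurrence times and compare maximized log-likelihoods. The sample is synthetic, not the Italian catalogue:

```python
# Compare Weibull and exponential fits to recurrence times (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
times = stats.weibull_min.rvs(1.8, scale=120.0, size=40, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(times, floc=0)
ll_weibull = stats.weibull_min.logpdf(times, shape, loc, scale).sum()
ll_exponential = stats.expon.logpdf(times, scale=times.mean()).sum()
print(f"Weibull shape {shape:.2f}, delta logL = {ll_weibull - ll_exponential:.1f}")
```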

  11. Large Scale Metal Additive Techniques Review

    SciTech Connect

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W; Love, Lonnie J

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive has not yet reached parity with large-scale polymer. This paper is a review of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.

  12. Volcanic activity before and after large tectonic earthquakes: Observations and statistical significance

    NASA Astrophysics Data System (ADS)

    Eggert, S.; Walter, T. R.

    2009-04-01

    The study of volcanic triggering and coupling to the tectonic surroundings has received special attention in recent years, using both direct field observations and historical descriptions of eruptions and earthquake activity. Repeated reports of volcano-earthquake interactions in, e.g., Europe and Japan, may imply that clustered occurrence is important in some regions. However, the regions likely to suffer clustered eruption-earthquake activity have not been systematically identified, and the processes responsible for the observed interaction are debated. We first review previous works about the correlation of volcanic eruptions and earthquakes, and describe selected local clustered events. Following an overview of previous statistical studies, we further elaborate the databases of correlated eruptions and earthquakes from a global perspective. Since we can confirm a relationship between volcanic eruptions and earthquakes on the global scale, we then perform a statistical study at the regional level, showing that time and distance between events follow a linear relationship. In the time before an earthquake, a period of volcanic silence often occurs, whereas in the time after, an increase in volcanic activity is evident. Our statistical tests imply that certain regions are especially predisposed to concurrent eruption-earthquake pairs, e.g., Japan, whereas such pairing is statistically less significant in other regions, such as Europe. Based on this study, we argue that individual and selected observations may bias the perceptible weight of coupling. Volcanoes located in the predisposed regions (e.g., Japan, Indonesia, Melanesia), however, often have changed unexpectedly in association with either an imminent or a past earthquake.

  13. Rapid determination of P wave-based energy magnitude: Insights on source parameter scaling of the 2016 Central Italy earthquake sequence

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Bindi, Dino; Brondi, Piero; Di Giacomo, Domenico; Parolai, Stefano; Zollo, Aldo

    2017-05-01

    We propose a P wave based procedure for the rapid estimation of the radiated seismic energy, and a novel relationship for obtaining an energy-based local magnitude (MLe) measure of the earthquake size. We apply the new procedure to the seismic sequence that struck Central Italy in 2016. Scaling relationships involving seismic moment and radiated energy are discussed for the Mw 6.0 Amatrice, Mw 5.9 Ussita, and Mw 6.5 Norcia earthquakes, including 35 ML > 4 aftershocks. The Mw 6.0 Amatrice earthquake shows the highest apparent stress, and the observed differences among the three main events highlight the dynamic heterogeneity with which large earthquakes can occur in Central Italy. Differences between estimates of MLe and Mw allow identification of events characterized by a higher proportion of energy being transferred to seismic waves, providing important real-time indications of an earthquake's shaking potential.
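
    The paper's MLe calibration is its own and is not reproduced here; for orientation only, a widely used energy-magnitude convention (Choy and Boatwright, 1995) maps radiated energy Es in N·m to Me = (2/3) log10 Es - 2.9, as sketched below with a made-up energy value:

```python
# Energy magnitude from radiated seismic energy (Choy & Boatwright convention).
import math

def energy_magnitude(es_newton_meters: float) -> float:
    return (2.0 / 3.0) * math.log10(es_newton_meters) - 2.9

print(round(energy_magnitude(2.0e14), 1))  # Es = 2e14 N*m -> Me ~ 6.6
```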

  14. Seismicity trends and potential for large earthquakes in the Alaska-Aleutian region

    USGS Publications Warehouse

    Bufe, C.G.; Nishenko, S.P.; Varnes, D.J.

    1994-01-01

    The high likelihood of a gap-filling thrust earthquake in the Alaska subduction zone within this decade is indicated by two independent methods: analysis of historic earthquake recurrence data and time-to-failure analysis applied to recent decades of instrumental data. Recent (May 1993) earthquake activity in the Shumagin Islands gap is consistent with previous projections of increases in seismic release, indicating that this segment, along with the Alaska Peninsula segment, is approaching failure. Based on this pattern of accelerating seismic release, we project the occurrence of one or more M ≥ 7.3 earthquakes in the Shumagin-Alaska Peninsula region during 1994-1996. Different segments of the Alaska-Aleutian seismic zone behave differently in the decade or two preceding great earthquakes, some showing acceleration of seismic release (type "A" zones), while others show deceleration (type "D" zones). The largest Alaska-Aleutian earthquakes, in 1957, 1964, and 1965, originated in zones that exhibit type D behavior. Type A zones currently showing accelerating release are the Shumagin, Alaska Peninsula, Delarof, and Kommandorski segments. Time-to-failure analysis suggests that the large earthquakes could occur in these latter zones within the next few years. © 1994 Birkhäuser Verlag.
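
    Time-to-failure analysis of this kind (Bufe and Varnes, 1993) typically fits cumulative Benioff strain with a power law, Omega(t) = A + B(tf - t)^m, and reads the projected failure time tf from the fit. A minimal sketch on synthetic data, assuming NumPy and SciPy are available; all parameter values are illustrative:

      import numpy as np
      from scipy.optimize import curve_fit

      def benioff_power_law(t, a, b, tf, m):
          # Accelerating cumulative Benioff strain (Bufe & Varnes, 1993):
          # Omega(t) = A + B * (tf - t)**m, with B < 0 and 0 < m < 1.
          return a + b * (tf - t) ** m

      # Synthetic accelerating release with a true failure time of 1996.0.
      t = np.linspace(1970.0, 1993.5, 60)
      rng = np.random.default_rng(0)
      omega = benioff_power_law(t, 10.0, -2.0, 1996.0, 0.3) + 0.05 * rng.standard_normal(t.size)

      # Bounds keep the trial tf beyond the last observation so (tf - t) stays positive.
      popt, _ = curve_fit(benioff_power_law, t, omega,
                          p0=(10.0, -2.0, 1995.0, 0.5),
                          bounds=([0.0, -10.0, 1994.0, 0.05], [20.0, 0.0, 2010.0, 1.0]))
      print("projected failure time:", popt[2])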

  15. The effect of first waveform cut range to determine Mwp of large earthquake in Banda Sea

    NASA Astrophysics Data System (ADS)

    Ali, Y. H.; Wulandari, A.; Yatimantoro, T.

    2017-07-01

    The magnitude of an earthquake is one of its most important parameters. The moment magnitude (Mw) is the magnitude that best describes the strength of an earthquake, but determining Mw is slow compared to the other magnitude types. We use the Tsuboi et al. (1995) formulation to determine the moment magnitude from the P wave (Mwp). We analyze 11 earthquakes from 2011-2015 with minimum Mw around 6 and shallow to intermediate depths, using reference stations within 10-30 degrees of the epicenter. Our reference data are the Global Centroid Moment Tensor (Global CMT) data and Incorporated Research Institutions for Seismology (IRIS) data. We choose vertical-component broadband records of the seismic waves from 10 seconds before the P-wave arrival to 130 seconds after it, in intervals of 10 seconds. Our study area spans 125 E - 135 E and 9 S - 2.5 S. We found that the greater the magnitude of the earthquake, the longer the cut range required. The best cut range for Mwp determination is 20 - 40 seconds of waveform, depending on the size of the earthquake magnitude. Our root-mean-square residuals are 0.158 compared with the IRIS data and 0.156 compared with the Global CMT data.
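
    In the Tsuboi et al. (1995) approach, the seismic moment is estimated from the peak of the time-integrated vertical P-wave displacement, M0 = 4*pi*rho*alpha^3 * r * max|integral(u_z dt)| / Fp, then converted via the standard Mw(M0) relation. A minimal sketch; the density rho, P velocity alpha, and radiation-pattern factor Fp below are generic assumed values, not those of the study:

      import numpy as np

      def mwp(displacement_z, dt, distance_m, rho=3400.0, alpha=7900.0, fp=1.0):
          # Tsuboi et al. (1995): M0 = 4*pi*rho*alpha**3 * r * max|∫ u_z dt| / Fp
          integrated = np.cumsum(displacement_z) * dt
          m0 = 4.0 * np.pi * rho * alpha**3 * distance_m * np.max(np.abs(integrated)) / fp
          return (np.log10(m0) - 9.1) / 1.5   # standard Mw for M0 in N*m

      # Usage: u_z cut from the P arrival to P + 40 s (the study's preferred window),
      # sampled at dt = 0.05 s, for a station ~20 degrees (~2.2e6 m) away:
      # mwp(u_z, dt=0.05, distance_m=2.2e6)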

  16. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone, and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b values (≥ ~5). We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW < 1.5, MW ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
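
    For context on the closing point: self-similar earthquake scaling predicts moment proportional to the cube of duration, so log M0 versus log T should have a slope near 3; a much steeper slope indicates durations that barely change with moment. A sketch of that comparison on hypothetical numbers (not the paper's data):

      import numpy as np

      # Hypothetical LFE measurements: moments span two decades while
      # durations change by only ~25% (all values illustrative).
      log_T = np.array([-0.60, -0.56, -0.53, -0.51, -0.50])   # log10 duration (s)
      log_M0 = np.array([10.5, 11.0, 11.5, 12.0, 12.5])       # log10 moment (N*m)

      slope = np.polyfit(log_T, log_M0, 1)[0]
      print(f"d(log M0)/d(log T) = {slope:.0f}; self-similarity predicts 3")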

  17. The Large-scale Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Flin, Piotr

    A review of the large-scale structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special attention is given to the spatial distribution of galaxies, voids, and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.

  18. What is a large-scale dynamo?

    NASA Astrophysics Data System (ADS)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  19. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-08-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by realizing that not all earthquakes in the region need belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well-determined confidence bounds. This paper presents improvements on the above-mentioned sequence identification and forecasting method: the influence of earthquake size on the spectral analysis, and its importance in semi-periodic event identification, are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of non-randomness probability is used; a better estimation of appropriate upper-limit uncertainties to use in forecasts is introduced; and Bayesian analysis is applied to evaluate the posterior forecast performance. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0, having high non-randomness probability, was identified. We compare the results of this analysis with those of the unlabeled point process analysis.
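
    The Fourier identification step can be sketched as a Schuster-type spectrum of a labeled point process: each event contributes a weighted complex phasor, and peaks in |R(f)| flag candidate semi-periodic sequences. A minimal sketch; weighting events by magnitude is an assumed labeling for illustration, not the paper's exact scheme:

      import numpy as np

      def labeled_spectrum(times, weights, freqs):
          # Schuster-style spectrum of a labeled point process: each event is a
          # weighted delta function; peaks in |R(f)| suggest (semi-)periodicity.
          phases = np.exp(2j * np.pi * np.outer(freqs, times))
          return np.abs(phases @ weights)

      # Hypothetical sequence: events roughly every 35 yr, weights ~ magnitude.
      times = np.array([1900.2, 1936.0, 1970.5, 2004.9])
      weights = np.array([8.1, 8.0, 8.3, 8.2])
      freqs = np.linspace(1.0 / 100.0, 1.0 / 10.0, 500)
      best = freqs[np.argmax(labeled_spectrum(times, weights, freqs))]
      print(f"dominant recurrence ~ {1.0 / best:.1f} yr")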

  20. Significant lateral dip changes may have limited the scale of the 2015 Mw 7.8 Gorkha earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Walter, Thomas R.; Feng, Wanpeng; Chen, Yongshun; Huang, Qinghua

    2017-09-01

    The 2015 Mw 7.8 Gorkha earthquake has drawn interest due to its complex fault geometry. Both geodetic and geologic studies have focused on the dip variations. In this study we invert the coseismic geodetic data for the 2-D dip variations of the earthquake. The best fit model confirms that the dip varies with depth, and suggests that there is a significant lateral dip anomaly along strike. The depth-dependent dip variation suggests that the earthquake ruptured a ramp-flat fault. The shallow ramp may have prevented the rupture breaking through the surface. In addition, a lateral large-dip anomaly is found in the northeastern corner of the slip area, which supports the previous findings of inferred interseismic fault coupling, coseismic high-frequency radiations, and the aftershock mechanisms. This lateral dip anomaly is likely associated with local tearing within the Indian slab. It may have blocked the east-southeastward rupture propagations of the Gorkha earthquake, implying important controls on the earthquake scale and the spatial limits of ruptures.

  1. Observational constraints on earthquake source scaling: Understanding the limits in resolution

    USGS Publications Warehouse

    Hough, S.E.

    1996-01-01

    I examine the resolution of the type of stress drop estimates that have been used to place observational constraints on the scaling of earthquake source processes. I first show that apparent stress and Brune stress drop are equivalent to within a constant given any source spectral decay between ω^-1.5 and ω^-3 (i.e., any plausible value), and so consistent scaling is expected for the two estimates. I then discuss the resolution and scaling of Brune stress drop estimates, in the context of empirical Green's function results from recent earthquake sequences, including the 1992 Joshua Tree, California, mainshock and its aftershocks. I show that no definitive scaling of stress drop with moment is revealed over the moment range 10^19-10^25; within this sequence, however, there is a tendency for moderate-sized (M 4-5) events to be characterized by high stress drops. However, well-resolved results for recent M > 6 events are inconsistent with any extrapolated stress increase with moment for the aftershocks. Focusing on corner frequency estimates for smaller (M < 3.5) events, I show that resolution is extremely limited even after empirical Green's function deconvolutions. A fundamental limitation to resolution is the paucity of good signal-to-noise at frequencies above 60 Hz, a limitation that will affect nearly all surficial recordings of ground motion in California and many other regions. Thus, while the best available observational results support a constant stress drop for moderate- to large-sized events, very little robust observational evidence exists to constrain the quantities that bear most critically on our understanding of source processes: stress drop values and stress drop scaling for small events.
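
    The Brune stress drop at issue here follows from seismic moment and corner frequency through the standard relations r = 2.34*beta/(2*pi*fc) and stress drop = 7*M0/(16*r^3) (Brune, 1970). A minimal sketch, with an assumed shear-wave speed:

      import math

      def brune_stress_drop(m0_newton_meters, corner_frequency_hz, beta_m_per_s=3500.0):
          # Brune (1970): source radius r = 2.34 * beta / (2 * pi * fc),
          # stress drop = 7 * M0 / (16 * r**3), returned in MPa.
          r = 2.34 * beta_m_per_s / (2.0 * math.pi * corner_frequency_hz)
          return 7.0 * m0_newton_meters / (16.0 * r**3) / 1.0e6

      # Example: an M ~4 event (M0 ~ 1.3e15 N*m) with fc = 2 Hz gives ~2 MPa.
      print(brune_stress_drop(1.3e15, 2.0))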

  2. Appearance ratio of earthquake surface rupture - About scaling law for Japanese Intraplate Earthquakes -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Irikura, K.

    2013-12-01

    The appearance ratio of surface rupture has been studied using historical earthquakes (e.g., Takemura, 1998); Kagawa et al. (2004) also evaluated the probability based on numerical simulation of surface displacements. The estimated appearance ratio follows a sigmoid curve and rises sharply between Mj (Japan Meteorological Agency magnitude) = 6.5 and Mj = 7.2. However, historical earthquake records between Mj = 6.5 and 7.2 are very few, and some scientists therefore consider that the appearance ratio might jump discontinuously between Mj = 6.5 and 7.2. In this study, we used historical intraplate earthquakes that occurred around Japan from the 1891 Nobi earthquake to 2013. In particular, after the 1995 Hyogoken-Nanbu earthquake, many earthquakes of around Mj 6.5 to 7.2 occurred. The result of this study indicates that the appearance ratio increases between Mj = 6.5 and 7.2 not discontinuously but like a logistic curve. Youngs et al. (2003), Petersen et al. (2011), and Moss and Ross (2011) discuss the appearance ratio of surface rupture using historical earthquakes worldwide. Their discussions are based on Mw; we cannot compare results directly because we used Mj. Takemura (1990) proposed the conversion equation Mw = 0.78Mj + 1.08. However, the Central Disaster Prevention Council in Japan (2005) now derives the conversion equation Mw = 0.879Mj + 0.536, a regression line obtained by principal component analysis. With this conversion, the appearance ratio in this study increases sharply between Mw = 6.3 and 7.0.
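
    The two magnitude conversions quoted in the abstract, together with a logistic form for the appearance ratio, can be sketched directly. The logistic parameters below are hypothetical placeholders, chosen only so the curve rises sharply between Mw 6.3 and 7.0 as the study describes:

      import math

      def mj_to_mw_takemura_1990(mj):
          return 0.78 * mj + 1.08    # Takemura (1990), as quoted in the abstract

      def mj_to_mw_cdpc_2005(mj):
          return 0.879 * mj + 0.536  # Central Disaster Prevention Council (2005)

      def appearance_ratio(mw, m_mid=6.65, steepness=6.0):
          # Hypothetical logistic curve for the surface-rupture appearance ratio;
          # rises sharply between Mw ~6.3 and ~7.0, per the study's conclusion.
          return 1.0 / (1.0 + math.exp(-steepness * (mw - m_mid)))

      for mj in (6.5, 7.0, 7.2):
          mw = mj_to_mw_cdpc_2005(mj)
          print(mj, round(mw, 2), round(appearance_ratio(mw), 2))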

  3. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes, for example, 'characteristic'; do they 'know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case, what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in the Earth and environmental sciences. Using frequency data, however, introduces a number of problems in data analysis. The inevitably small number of data points for extreme events, and more generally the non-Gaussian statistical properties, strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best-fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a 'characteristic'-looking extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of 'eyeball' fits unconsciously (but wrongly in
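
    One standard alternative to the biased least-squares fit criticized here is the maximum-likelihood b-value estimator of Aki (1965), b = log10(e) / (mean(M) - Mc) for continuous magnitudes above a completeness threshold Mc. A minimal sketch on a synthetic Gutenberg-Richter catalog:

      import numpy as np

      def b_value_mle(mags, mc):
          # Aki (1965) maximum-likelihood estimator for continuous magnitudes;
          # for binned catalogs, Utsu's correction replaces mc by mc - bin/2.
          m = mags[mags >= mc]
          return np.log10(np.e) / (m.mean() - mc)

      # Synthetic G-R catalog with b = 1.0 above Mc = 2.0: magnitudes above the
      # threshold are exponentially distributed with scale log10(e)/b.
      rng = np.random.default_rng(1)
      mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
      print(b_value_mle(mags, mc=2.0))   # ~1.0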

  4. Constraining the rupture processes of the intermediate and large earthquakes using geophysical data

    NASA Astrophysics Data System (ADS)

    Ji, C.

    2009-12-01

    Detailed mapping of the spatiotemporal slip distributions of large earthquakes is one of the principal goals of seismology. Since the finite-fault inversion method was first introduced in studies of the 1979 Imperial Valley, California, earthquake, it has become one of the sharpest tools in earthquake seismology. Various developments in source representations, inverse methods, and objective functions have been made ever since to improve its resolution and robustness. Geophysical datasets other than seismic data, such as GPS, interferometric data, and geological surface offsets, have also been included to extend the data coverage spatially. With recent developments in global broadband seismic instrumentation, it has become possible to routinely study earthquake slip histories in real time, and to proceed to predict local damage. In this poster, I summarize our recent developments in the procedure of real-time finite-fault inversion, in terms of data coverage, fault geometry, earth structures, and error analysis.

  5. Irregular Recurrence of Large Earthquakes along the San Andreas Fault: Evidence from Trees

    NASA Astrophysics Data System (ADS)

    Jacoby, Gordon C.; Sheppard, Paul R.; Sieh, Kerry E.

    1988-07-01

    Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the "San Juan Capistrano" earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics.

  6. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller-magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg^2), with significant probability gains with respect to standard models.

  7. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller-magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg^2), with significant probability gains with respect to standard models. PMID:23152938

  8. Constraining depth range of S wave velocity decrease after large earthquakes near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Wu, Chunquan; Delorey, Andrew; Brenguier, Florent; Hadziioannou, Celine; Daub, Eric G.; Johnson, Paul

    2016-06-01

    We use noise correlation and surface wave inversion to measure S-wave velocity changes at different depths near Parkfield, California, after the 2003 San Simeon and 2004 Parkfield earthquakes. We process continuous seismic recordings from 13 stations to obtain the noise cross-correlation functions and measure the Rayleigh-wave phase velocity changes over six frequency bands. We then invert the Rayleigh-wave phase velocity changes using a series of sensitivity kernels to obtain the S-wave velocity changes at different depths. Our results indicate that the S-wave velocity decreases caused by the San Simeon earthquake are relatively small (~0.02%) and reach depths of at least 2.3 km. The S-wave velocity decreases caused by the Parkfield earthquake are larger (~0.2%) and reach depths of at least 1.2 km. Our observations can be best explained by material damage and healing resulting mainly from the dynamic stress perturbations of the two large earthquakes.
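
    Velocity changes of this kind are commonly measured with the stretching method: a current noise correlation is compared against a long-term reference, and the trial stretch that maximizes their correlation gives dv/v. A minimal sketch of that step only (the Rayleigh-wave depth inversion requires sensitivity kernels and is beyond a few lines); the search-grid limits are assumptions:

      import numpy as np

      def stretching_dvv(reference, current, times):
          # Grid search over trial dv/v = eps: for a homogeneous velocity change,
          # travel times scale as 1/v, so current(t) ~ reference(t * (1 + eps)).
          eps_grid = np.linspace(-0.01, 0.01, 401)
          cc = [np.corrcoef(np.interp(times * (1.0 + e), times, reference),
                            current)[0, 1] for e in eps_grid]
          return eps_grid[int(np.argmax(cc))]

      # Usage: dvv = stretching_dvv(ref_ccf, daily_ccf, lag_times); a velocity
      # decrease after a large earthquake shows up as dvv < 0.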

  9. Triggering of tsunamigenic aftershocks from large strike-slip earthquakes: Analysis of the November 2000 New Ireland earthquake sequence

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2005-10-01

    The November 2000 New Ireland earthquake sequence started with a Mw = 8.0 left-lateral main shock on 16 November and was followed by a series of aftershocks with primarily thrust mechanisms. The earthquake sequence was associated with a locally damaging tsunami on the islands of New Ireland and nearby New Britain, Bougainville, and Buka. Results from numerical tsunami-propagation models of the main shock and two of the largest thrust aftershocks (Mw > 7.0) indicate that the largest tsunami was caused by an aftershock located near the southeastern termination of the main shock, off the southern tip of New Ireland (Aftershock 1). Numerical modeling and tide gauge records at regional and far-field distances indicate that the main shock also generated tsunami waves. Large horizontal displacements associated with the main shock in regions of steep bathymetry accentuated tsunami generation for this event. Most of the damage on Bougainville and Buka Islands was caused by focusing and amplification of tsunami energy from a ridge wave between the source region and these islands. Modeling of changes in the Coulomb failure stress field caused by the main shock indicates that Aftershock 1 was likely triggered by static stress changes, provided the fault was on or synthetic to the New Britain interplate thrust as specified by the Harvard CMT mechanism. For other possible focal mechanisms of Aftershock 1, and for the regional occurrence of thrust aftershocks in general, the evidence for static stress change triggering is not as clear. Other triggering mechanisms, such as changes in dynamic stress, may also have been important. The 2000 New Ireland earthquake sequence provides evidence that tsunamis caused by thrust aftershocks can be triggered by large strike-slip earthquakes. Similar tectonic regimes that include offshore accommodation structures near large strike-slip faults are found in southern California, the Sea of Marmara, Turkey, along the Queen Charlotte fault in British Columbia
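
    The static-triggering criterion used in analyses like this is the Coulomb failure stress change, dCFS = d(tau) + mu' * d(sigma_n), with the normal-stress term positive for unclamping; positive dCFS brings a receiver fault closer to failure. A minimal sketch; the effective friction value and the triggering threshold are commonly assumed numbers, not values from this paper:

      def coulomb_failure_stress_change(delta_shear_mpa, delta_normal_mpa, mu_eff=0.4):
          # dCFS = d(tau) + mu' * d(sigma_n); d(sigma_n) > 0 here means unclamping.
          return delta_shear_mpa + mu_eff * delta_normal_mpa

      # A change of ~0.1 MPa (1 bar) is a commonly cited triggering threshold.
      dcfs = coulomb_failure_stress_change(0.08, 0.1)
      print(dcfs, "MPa ->", "promotes failure" if dcfs > 0.01 else "negligible")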

  10. Analysis of Luminescence Away from the Epicenter During a Large Earthquake: The Pisco, Peru Mw8 Earthquake

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Lira, J. A.

    2011-12-01

    The Mw 8.0 earthquake in Pisco, Peru of August 15, 2007, caused heavy damage, with a toll of 513 people dead, 2,291 wounded, 76,000 houses and buildings seriously damaged, and 431,000 people affected overall. Co-seismic luminescence was reported by thousands of people along the central coast of Peru, and especially in Lima, 150 km from the epicenter, this being the first large nighttime earthquake in about 100 years in a highly populated area. Pictures and videos of the lights are available; however, those obtained so far had little information on the timing and direction of the reported lights. Two important videos are analyzed. The first, from a fixed security camera, is used to determine the differential time correlation between the timing of the recorded lights and the ground acceleration registered by a three-axis accelerometer 500 m away, and very good agreement has been observed. This evidence contains important color, shape, and timing information which is shown to be highly time-correlated with the arrival of the seismic waves. Furthermore, the origin of the lights is on the top of a hilly island about 6 km off the coast of Lima where, according to a written chronicle, lights were seen exactly 21 days before the mega earthquake of October 28, 1746. This was the largest earthquake ever to happen in Peru, and it produced a tsunami that washed over the port of Callao and reached up to 5 km inland. The second video, from another security camera in a different location, has been further analyzed to determine more exactly the direction of the lights, and this new evidence will be presented. The fact that a notoriously large and well-documented co-seismic luminous phenomenon was video-recorded more than 150 km from the epicenter during a very large earthquake is emphasized, together with documented historical evidence of pre-seismic luminous activity on the same island during a mega earthquake of enormous proportions in Lima. Both previously mentioned videos

  11. Problems of seismic hazard estimation in regions with few large earthquakes: Examples from eastern Canada

    NASA Astrophysics Data System (ADS)

    Basham, P. W.; Adams, John

    1989-10-01

    Seismic hazard estimates and seismic zoning maps are based on an assessment of historical and recent seismicity and any correlations with geologic and tectonic features that might define the earthquake potential. Evidence is accumulating that the large earthquakes in eastern Canada (M ~ 7) may be associated with the rift systems that surround or break the integrity of the North American craton. The problem for seismic hazard estimation is that the larger historical earthquakes are not uniformly distributed along the Paleozoic St. Lawrence-Ottawa rift system and are too rare on the Mesozoic eastern margin rift to assess the overall seismogenic potential. Multiple source zone models for hazard estimation could include hypotheses of future M = 7 earthquakes at any location along these rift systems, but at a moderate probability (such as that used in the Canadian zoning maps) the resultant hazard will be so diluted that it will not result in adequate design against the near-source effects of such earthquakes. The near-source effects of large, rare earthquakes can, however, be accommodated in conservative codes and standards for critical facilities, if society is willing to pay the price.

  12. 3-D Numerical Modeling of Rupture Sequences of Large Shallow Subduction Earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Rice, J. R.

    2003-12-01

    We study the rupture behavior of large earthquakes on a 3-D shallow subduction fault governed by a rate and state friction law, and loaded by imposed slip at rate Vpl far downdip along the thrust interface. Friction properties are temperature, and hence depth, dependent, so that sliding is stable (a - b > 0) at depths below about 30 km. To perturb the system into a nonuniform slip mode, if such a solution exists, we introduce small along-strike variations in either the constitutive parameters a and (a - b), or the effective normal stress, or the initial conditions. Our results do show complex, nonuniform slip behavior over the thousands of simulation years. Large events of multiple magnitudes occur at various along-strike locations, with different recurrence intervals, like those of natural interplate earthquakes. In the model, a large event usually nucleates in a less well locked gap region (slipping at 0.1 to 1 times the plate convergence rate Vpl) between more firmly locked regions (slipping at 10^-4 to 10^-2 Vpl) which coincide with the rupture zones of previous large events. It then propagates in both the dip and strike directions. Along-strike propagation slows down as the rupture front encounters neighboring locked zones, whose sizes and locking extents affect further propagation. Different propagation speeds at the two fronts result in an asymmetric coseismic slip distribution, as is consistent with the slip inversion results of some large subduction earthquakes [e.g., Chlieh et al., 2003]. Current grid resolution is dictated by limitations of available computers and algorithms, and forces us to use constitutive length scales that are much larger than realistic lab values; that causes nucleation sizes to be in the several-kilometer (rather than several-meter) range. Thus there is a tentativeness to present conclusions. But with current resolution, we observe that the heterogeneous slip at seismogenic depths (i.e., where a - b < 0) is sometimes
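
    The rate- and state-dependent friction law described here is commonly written (Dieterich aging form) as mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc), with d(theta)/dt = 1 - V*theta/Dc; a - b > 0 gives the stable, velocity-strengthening sliding attributed to depths below ~30 km. A minimal sketch; the parameter values are generic lab-scale numbers, not those of the study:

      import math

      def friction(v, theta, mu0=0.6, a=0.015, b=0.019, v0=1.0e-6, dc=1.0e-4):
          # Rate-and-state friction coefficient with aging-law state variable theta.
          return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

      def theta_rate(v, theta, dc=1.0e-4):
          # Dieterich aging law: state grows at rest, decays with slip.
          return 1.0 - v * theta / dc

      # At steady state d(theta)/dt = 0, so theta = dc / v, and the slope of
      # friction vs. ln(V) is (a - b): negative here, i.e. velocity weakening.
      v = 1.0e-8
      print(friction(v, 1.0e-4 / v))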

  13. Nonlinear ionospheric responses to large-amplitude infrasonic-acoustic waves generated by undersea earthquakes

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-02-01

    Numerical models of ionospheric coupling with the neutral atmosphere are used to investigate perturbations of plasma density, vertically integrated total electron content (TEC), neutral velocity, and neutral temperature associated with large-amplitude acoustic waves generated by the initial ocean surface displacements from strong undersea earthquakes. A simplified source model for the 2011 Tohoku earthquake is constructed from estimates of initial ocean surface responses to approximate the vertical motions over realistic spatial and temporal scales. Resulting TEC perturbations from modeling case studies appear consistent with observational data, reproducing pronounced TEC depletions which are shown to be a consequence of the impacts of nonlinear, dissipating acoustic waves. Thermospheric acoustic compressional velocities are ˜±250-300 m/s, superposed with downward flows of similar amplitudes, and temperature perturbations are ˜300 K, while the dominant wave periodicity in the thermosphere is ˜3-4 min. Results capture acoustic wave processes including reflection, onset of resonance, and nonlinear steepening and dissipation—ultimately leading to the formation of ionospheric TEC depletions "holes"—that are consistent with reported observations. Three additional simulations illustrate the dependence of atmospheric acoustic wave and subsequent ionospheric responses on the surface displacement amplitude, which is varied from the Tohoku case study by factors of 1/100, 1/10, and 2. Collectively, results suggest that TEC depletions may only accompany very-large amplitude thermospheric acoustic waves necessary to induce a nonlinear response, here with saturated compressional velocities ˜200-250 m/s generated by sea surface displacements exceeding ˜1 m occurring over a 3 min time period.

  14. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 CAL yr. BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely pre-historic, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (earthquake not associated with a known fault) for normal faults in the Intermountain West.

  15. Radar Interferometric Applications for a Better Understanding of the Distribution, Controlling Factors, and Precursors of Large Earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Emil, M.; Sultan, M.; Fawzy, D. E.; Ahmed, M. E.; Chouinard, K.

    2012-12-01

    We are analyzing ERS-1, ERS-2, and ENVISAT data to measure the spatial and temporal variations in three tectonically active areas near the Izmit, Duzce, and Van provinces in Turkey. We are using ERS-1 and ERS-2 data sets, which provide a longer time period of coverage (1992 to 2001). In addition, we will extend this forward to the present with ENVISAT radar data. The proposed activities can potentially provide predictive tools that can identify precursors to earthquakes and hence develop procedures to identify areas at risk. We are using radar interferometric techniques, which have the ability to detect deformation on the order of millimeters over relatively large areas. We are applying the persistent scatterer and the small baseline subset (SBAS) techniques. A five-fold exercise is being conducted: (1) extraction of land deformation rates and patterns from radar interferometry, (2) comparison and calibration of extracted rates against those from existing geodetic ground stations, (3) identification of the natural factors (e.g., displacement along one or more faults) that are largely responsible for the observed deformation patterns, (4) utilizing the extracted deformation rates and/or patterns to identify areas prone to earthquake development in the near future, and (5) utilizing the extracted deformation rates or patterns to identify the areal extent of the domains affected by earthquakes and the magnitude of the deformation following earthquakes. The conditions in Turkey are typical of many of the world's areas that are witnessing continent-continent collisions. To date, applications similar to those advocated here for the assessment of ongoing land deformation in such areas and for identifying and characterizing land deformation as potential precursors to earthquakes have not been fully explored. Thus, the broader impact of this work lies in a successful demonstration of the advocated procedures in the study area, which will invite similar

  16. Seismic sequences, swarms, and large earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Amato, Alessandro; Piana Agostinetti, Nicola; Selvaggi, Giulio; Mele, Franco

    2016-04-01

    In recent years, particularly after the 2009 L'Aquila earthquake and the 2012 Emilia sequence, the issue of earthquake predictability has been at the center of discussion in Italy, not only within the scientific community but also in the courtrooms and in the media. Among the noxious effects of the L'Aquila trial was an increase in scaremongering and false alerts during earthquake sequences and swarms, culminating in a groundless one-night evacuation in northern Tuscany in 2013. We have analyzed Italian seismicity of the last decades in order to determine the rate of seismic sequences and investigate some of their characteristics, including frequencies, minimum/maximum durations, maximum magnitudes, main shock timing, etc. Selecting only sequences with an equivalent magnitude of 3.5 or above, we find an average of 30 sequences per year. Although there is extreme variability in the examined parameters, we could set some boundaries, useful for obtaining quantitative estimates of the ongoing activity. In addition, the historical catalogue is rich in complex sequences in which one main shock is followed, seconds, days, or months later, by another event of similar or higher magnitude. We also analysed the Italian CPT11 catalogue (Rovida et al., 2011) between 1950 and 2006 to highlight the foreshock-mainshock event couples that were suggested in previous studies to exist (e.g., six couples, Marzocchi and Zhuang, 2011). Moreover, to investigate the probability of having random foreshock-mainshock couples over the investigated period, we produced 1000 synthetic catalogues, randomly distributing in time the events that occurred in that period. Preliminary results indicate that: (1) all but one of the so-called foreshock-mainshock pairs found in Marzocchi and Zhuang (2011) fall inside previously well-known and studied seismic sequences (Belice, Friuli and Umbria-Marche), meaning that the suggested foreshocks are also aftershocks; and (2) due to the high rate of the Italian
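
    The randomized-catalogue test described can be sketched by shuffling event times over the catalogue window and counting how often candidate foreshock-mainshock pairs arise by chance. The synthetic catalogue, pairing windows, and magnitude threshold below are assumptions for illustration, standing in for the CPT11 data:

      import numpy as np

      def count_pairs(times, mags, dt_days=30.0, dm=0.5):
          # Count events followed within dt_days by a larger event (candidate
          # foreshock-mainshock couple); spatial windowing is omitted here.
          order = np.argsort(times)
          t, m = times[order], mags[order]
          n = 0
          for i in range(len(t)):
              later = (t > t[i]) & (t <= t[i] + dt_days) & (m >= m[i] + dm)
              n += int(later.any())
          return n

      rng = np.random.default_rng(0)
      times = rng.uniform(0.0, 365.25 * 56, size=200)     # stand-in for 1950-2006
      mags = 5.0 + rng.exponential(0.4343, size=200)      # G-R-like magnitudes, b ~ 1
      observed = count_pairs(times, mags)
      shuffled = [count_pairs(rng.permutation(times), mags) for _ in range(500)]
      p = float(np.mean(np.array(shuffled) >= observed))
      print(observed, p)   # p ~ the chance of this many couples arising at random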

  17. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  18. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. Using the benefit of hindsight of earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km [1]. Here we present the results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer

  19. Typical Scenario of Preparation, Implementation, and Aftershock Sequence of a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, Mikhail

    2016-04-01

    We have tried here to construct and examine a typical scenario of large earthquake occurrence. The Harvard GCMT seismic moment catalog was used to construct the large earthquake generalized space-time vicinity (LEGV) and to investigate seismicity behavior within it. The LEGV was composed of earthquakes falling into the zone of influence of any of a considerable number (100, 300, or 1,000) of the largest earthquakes. The LEGV construction aims to enlarge the available statistics, diminish the strong random component, and thereby reveal the typical features of pre- and post-shock seismic activity in more detail. As a result of the LEGV construction, the character of fore- and aftershock cascades was examined in more detail than would be possible without the LEGV approach. It was also shown that the mean earthquake magnitude tends to increase, while the b-values, mean mb/mw ratios, apparent stress values, and mean depth tend to decrease. The amplitudes of all these anomalies grow, as the logarithm of the time interval to the event, on approach to the moment of the generalized large earthquake (GLE). Most of the discussed anomalies agree well with a common scenario of developing instability. Besides such precursors of a common character, one earthquake-specific precursor was found: the revealed decrease of mean earthquake depth during large earthquake preparation probably testifies to deep fluid involvement in the process. The typical features of shear instability development revealed in the LEGV agree well with results obtained in laboratory acoustic emission (AE) studies. The majority of the revealed anomalies appear to have a secondary character and are connected mainly with an increase in mean earthquake magnitude in the LEGV. The mean magnitude increase was shown to be connected mainly with a decrease in the proportion of moderate-size events (Mw 5.0 - 5.5) in the closer GLE vicinity. We believe that this deficit of moderate-size events hardly can be

  20. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other surveys. Special attention is brought to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  1. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large-scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q^2 in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.

  2. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  3. W phase source inversion using high-rate regional GPS data for large earthquakes

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F.; Melgar, D.; Benavente, R.; Geng, J.; Barrientos, S.; Campos, J.

    2016-04-01

    W phase moment tensor inversion has proven to be a reliable method for rapid characterization of large earthquakes. For global purposes it is used at the United States Geological Survey, Pacific Tsunami Warning Center, and Institut de Physique du Globe de Strasbourg. These implementations provide moment tensors within 30-60 min after the origin time of moderate and large worldwide earthquakes. Currently, the method relies on broadband seismometers, which clip in the near field. To ameliorate this, we extend the algorithm to regional records from high-rate GPS data and retrospectively apply it to six large earthquakes that occurred in the past 5 years in areas with relatively dense station coverage. These events show that the solutions could potentially be available 4-5 min from origin time. Continuously improving GPS station availability and real-time positioning solutions will provide significant enhancements to the algorithm.

  4. Introduction and Overview: Counseling Psychologists' Roles, Training, and Research Contributions to Large-Scale Disasters

    ERIC Educational Resources Information Center

    Jacobs, Sue C.; Leach, Mark M.; Gerstein, Lawrence H.

    2011-01-01

    Counseling psychologists have responded to many disasters, including the Haiti earthquake, the 2001 terrorist attacks in the United States, and Hurricane Katrina. However, as a profession, their responses have been localized and nonsystematic. In this first of four articles in this contribution, "Counseling Psychology and Large-Scale Disasters,…

  6. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  7. Large-scale dynamics of magnetic helicity

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; Dallas, Vassilios

    2016-11-01

    In this paper we investigate the dynamics of magnetic helicity in magnetohydrodynamic (MHD) turbulent flows focusing at scales larger than the forcing scale. Our results show a nonlocal inverse cascade of magnetic helicity, which occurs directly from the forcing scale into the largest scales of the magnetic field. We also observe that no magnetic helicity and no energy is transferred to an intermediate range of scales sufficiently smaller than the container size and larger than the forcing scale. Thus, the statistical properties of this range of scales, which increases with scale separation, is shown to be described to a large extent by the zero flux solutions of the absolute statistical equilibrium theory exhibited by the truncated ideal MHD equations.

  8. Earthquakes

    EPA Pesticide Factsheets

    Information on this page will help you understand environmental dangers related to earthquakes, what you can do to prepare, and how to recover. It will also help you recognize possible environmental hazards and learn what you can do to protect yourself and your family.

  9. Building Inventory Database on the Urban Scale Using GIS for Earthquake Risk Assessment

    NASA Astrophysics Data System (ADS)

    Kaplan, O.; Avdan, U.; Guney, Y.; Helvaci, C.

    2016-12-01

    The majority of existing buildings in most developing countries are not safe against earthquakes. Before a devastating earthquake occurs, existing buildings need to be assessed and the vulnerable ones identified. Determining the seismic performance of existing buildings, which usually involves collecting the attributes of the buildings, performing the analyses and the necessary queries, and producing the result maps, is a hard and complicated procedure that can be simplified with a Geographic Information System (GIS). The aim of this study is to produce a building inventory database using GIS for assessing the earthquake risk of existing buildings. In this paper, a building inventory database for 310 buildings located in Eskisehir, Turkey, was produced in order to assess the earthquake risk of the buildings. The results from this study show that 26% of the buildings have high earthquake risk, 33% have medium earthquake risk, and 41% have low earthquake risk. The produced building inventory database can be very useful, especially for governments, in dealing with the problem of determining seismically vulnerable buildings in large existing building stocks. With the help of such methods, determining the buildings which may collapse and cause life and property loss during a possible future earthquake will be quick, cheap, and reliable.

  10. Scaling A Moment-Rate Function For Small To Large Magnitude Events

    NASA Astrophysics Data System (ADS)

    Archuleta, Ralph; Ji, Chen

    2017-04-01

    Since the 1980s seismologists have recognized that peak ground acceleration (PGA) and peak ground velocity (PGV) scale differently with magnitude for large and moderate earthquakes. In a recent paper (Archuleta and Ji, GRL 2016) we introduced an apparent moment-rate function (aMRF) that accurately predicts the scaling with magnitude of PGA, PGV, PWA (Wood-Anderson displacement) and the ratio PGA/(2πPGV) (dominant frequency) for earthquakes 3.3 ≤ M ≤ 5.3. This apparent moment-rate function is controlled by two temporal parameters, tp and td, which are related to the time for the moment-rate function to reach its peak amplitude and the total duration of the earthquake, respectively. These two temporal parameters lead to a Fourier amplitude spectrum (FAS) of displacement that has two corners, between which the spectral amplitudes decay as 1/f, where f denotes frequency. At higher or lower frequencies, the FAS of the aMRF looks like a single-corner Aki-Brune omega-squared spectrum. However, in the presence of attenuation the higher corner is almost certainly masked. Attempting to correct the spectrum to an Aki-Brune omega-squared spectrum will produce an "apparent" corner frequency that falls between the two corner frequencies of the aMRF. We reason that the two corners of the aMRF are the reason that seismologists deduce a stress drop (e.g., Allmann and Shearer, JGR 2009) that is generally much smaller than the stress parameter used to produce ground motions from stochastic simulations (e.g., Boore, 2003 Pageoph.). The presence of two corners for the smaller magnitude earthquakes leads to several questions. Can deconvolution be successfully used to determine scaling from small to large earthquakes? Equivalently, will large earthquakes have a double corner? If large earthquakes are the sum of many smaller magnitude earthquakes, what should the displacement FAS look like for a large magnitude earthquake? Can a combination of such a double-corner spectrum and random
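
    One common double-corner parameterization with the behavior the abstract describes (flat below f1, decaying as 1/f between f1 and f2, and as 1/f^2 above f2) is sketched below; it is a generic stand-in, not necessarily the exact spectral form of the Archuleta and Ji aMRF, and the corner values are illustrative:

      import numpy as np

      def double_corner_fas(f, omega0=1.0, f1=0.5, f2=5.0):
          # Flat below f1, ~1/f between f1 and f2, ~1/f^2 above f2; fitting a
          # single-corner Brune model to this shape yields an "apparent" corner
          # between f1 and f2, mimicking the bias discussed above.
          return omega0 / (np.sqrt(1.0 + (f / f1) ** 2) * np.sqrt(1.0 + (f / f2) ** 2))

      f = np.logspace(-2, 2, 400)
      fas = double_corner_fas(f)
      # Log-log slopes in the three bands (expect ~0, ~-1, ~-2):
      for lo, hi in ((0.01, 0.1), (1.0, 3.0), (20.0, 80.0)):
          band = (f >= lo) & (f <= hi)
          print(np.polyfit(np.log10(f[band]), np.log10(fas[band]), 1)[0])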

  11. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  12. Large Subduction Earthquakes along the fossil MOHO in Alpine Corsica: what was the role of fluids?

    NASA Astrophysics Data System (ADS)

    Andersen, Torgeir B.; Deseta, Natalie; Silkoset, Petter; Austrheim, Håkon; Ashwal, Lewis D.

    2014-05-01

    Intermediate-depth subduction earthquakes abruptly release vast amounts of energy to the crust and mantle lithosphere. The products of such drastic deformation events can only rarely be observed in the field because they are mostly permanently lost to subduction. We present new observations of deformation products formed by large fossil subduction earthquakes in Alpine Corsica. These were formed by a few very large and numerous small intermediate-depth earthquakes along the exhumed palaeo-Moho in the Alpine Liguro-Piemontese basin, which together with the 'schistes lustrés complex' experienced blueschist- to lawsonite-eclogite-facies metamorphism during the Alpine subduction. The abrupt release of energy resulted in localized shear heating that completely melted both gabbro and peridotite along the Moho. The large volumes of melt generated by at most a few very large earthquakes along the Moho can be studied in the fault- and injection-vein breccia complex preserved in a segment along the Moho fault. The energy required for wholesale melting of a large volume of peridotite per m² of fault plane, combined with estimates of stress drops, shows that a few large earthquakes took place along the Moho of the subducting plate. Since these fault rocks represent intra-plate seismicity, we suggest they formed along the lower seismogenic zone, by analogy with present-day subduction. As demonstrated in previous work (detailed petrography and EBSD) by our research team, there is no evidence for prograde dehydration reactions leading up to the co-seismic slip events. Instead we show that local crystal-plastic deformation in olivine and shear heating were more significant for the runaway co-seismic failure than solid-state dehydration-reaction weakening. We therefore disregard dehydration embrittlement as a weakening mechanism for these events, and suggest that shear heating may be the most important weakening mechanism for intermediate-depth earthquakes.

  13. Recurrent large earthquakes in a fault region: What can be inferred from small and intermediate events?

    NASA Astrophysics Data System (ADS)

    Zoeller, G.; Hainzl, S.; Holschneider, M.

    2008-12-01

    We present a renewal model for the recurrence of large earthquakes in a fault zone consisting of a major fault and surrounding smaller faults, with Gutenberg-Richter-type seismicity represented by seismic moment release drawn from a truncated power-law distribution. The recurrence times of characteristic earthquakes on the major fault are explored. The major fault is continuously loaded (plate motion) and undergoes positive and negative fluctuations due to adjacent smaller faults, with a large number Neq of such changes between two major earthquakes. Since the distribution has a finite variance, in the limit Neq→∞ the central limit theorem implies that the recurrence times follow a Brownian passage-time (BPT) distribution. This makes it possible to calculate individual recurrence time distributions for specific fault zones without tuning free parameters: the mean recurrence time can be estimated from geological or paleoseismic data, and the standard deviation is determined from the frequency-size distribution, namely the Richter b value, of an earthquake catalog. The approach is demonstrated for the Parkfield segment of the San Andreas fault in California as well as for a long simulation of a numerical fault model. Assuming power-law distributed earthquake magnitudes up to the size of the recurrent Parkfield event (M=6), we find a coefficient of variation that is higher than the value obtained by a direct fit of the BPT distribution to seven large earthquakes. Finally, we show that uncertainties in the earthquake magnitudes, e.g. from magnitude grouping, can cause a significant bias in the results. A method to correct for the bias, as well as a Bayesian technique to account for evolving data, are provided.
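
    A minimal sketch of the BPT (inverse Gaussian) recurrence density that the abstract invokes; the mean recurrence mu and aperiodicity alpha below are illustrative placeholders, not the paper's fitted values.

        import numpy as np

        def bpt_pdf(t, mu, alpha):
            # Brownian passage-time density with mean recurrence mu and
            # coefficient of variation (aperiodicity) alpha.
            return (np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3))
                    * np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t)))

        t = np.linspace(1.0, 100.0, 400)       # years
        pdf = bpt_pdf(t, mu=25.0, alpha=0.4)   # hypothetical fault-zone values

    In the approach described above, mu would come from paleoseismic data and alpha from the catalog's b value, rather than from free-parameter tuning.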

  14. Recurrent large earthquakes in a fault region: What can be inferred from small and intermediate events?

    NASA Astrophysics Data System (ADS)

    Zöller, G.; Hainzl, S.; Holschneider, M.

    2009-04-01

    We present a renewal model for the recurrence of large earthquakes in a fault zone consisting of a major fault and surrounding smaller faults, with Gutenberg-Richter-type seismicity represented by seismic moment release drawn from a truncated power-law distribution. The recurrence times of characteristic earthquakes on the major fault are explored. The major fault is continuously loaded (plate motion) and undergoes positive and negative fluctuations due to adjacent smaller faults, with a large number Neq of such changes between two major earthquakes. Since the distribution has a finite variance, in the limit Neq → ∞ the central limit theorem implies that the recurrence times follow a Brownian passage-time (BPT) distribution. This makes it possible to calculate individual recurrence time distributions for specific fault zones without tuning free parameters: the mean recurrence time can be estimated from geological or paleoseismic data, and the standard deviation is determined from the frequency-size distribution, namely the Richter b value, of an earthquake catalog. The approach is demonstrated for the Parkfield segment of the San Andreas fault in California as well as for a long simulation of a numerical fault model. Assuming power-law distributed earthquake magnitudes up to the size of the recurrent Parkfield event (M = 6), we find a coefficient of variation that is higher than the value obtained by a direct fit of the BPT distribution to seven large earthquakes. Finally, we show that uncertainties in the earthquake magnitudes, e.g. from magnitude grouping, can cause a significant bias in the results. A method to correct for the bias, as well as a Bayesian technique to account for evolving data, are provided.

  15. Unexpected geological impacts associated with large earthquakes and tsunamis in northern Honshu, Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Goff, J. R.

    2013-12-01

    Palaeoseismic research in areas adjacent to subduction zones has traditionally been concerned with identifying geological or geomorphological features associated with the immediate effects of past earthquakes, such as tsunamis, uplift or subsidence, with the aim of estimating earthquake magnitude and/or frequency. However, there are also other features in the landscape that can offer insights into the past earthquake and tsunami history of a region. The study of coastal dune systems as palaeoseismic indicators is still in its infancy, but it can provide useful evidence of past large earthquakes and, by association, the tsunamis they generated. On a catchment-wide basis, past research has linked a sequence of environmental changes such as forest disturbance, landslides, river aggradation and rapid coastal dune building as geomorphological after-effects (in addition to tsunamis) of a large earthquake. In this model, large pulses of sediment created by co-seismic landsliding in the upper catchment are moved rapidly to the coast, where they leave a clear signature in the landscape. Coarser sediments form aggradation surfaces and finer sediments form a new coastal dune or beach ridge. Coastal dune ridge systems are not exclusively associated with seismically active areas, but where they do occur in such places their potential use as palaeoseismic indicators is often ignored. Data are presented first for the beach ridges of the Sendai Plain, where investigations have been carried out following the 2011 Tohoku-oki earthquake and tsunami. A wider regional picture of palaeoseismicity, palaeotsunamis and beach ridge formation is then discussed. Existing data indicate a strong correlation between past earthquakes and the timing of beach ridge formation over the past 5000 years; however, it seems likely that a far more detailed record is still preserved in Japan's beach ridges, and suggestions are offered on directions for future research in this area.

  16. Observations of large earthquakes in the Mexican subduction zone over 110 years

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, Vala; Krishna Singh, Shri; Martínez-Peláez, Liliana; Garza-Girón, Ricardo; Lund, Björn; Ji, Chen

    2016-04-01

    Fault slip during an earthquake is observed to be highly heterogeneous, with areas of large slip interspersed with areas of smaller or even no slip. The cause of the heterogeneity is debated. One hypothesis is that the frictional properties on the fault are heterogeneous. The parts of the rupture surface that have large slip during earthquakes are coupled more strongly, whereas the areas in between and around them creep continuously or episodically. The continuously or episodically creeping areas can partly release strain energy through aseismic slip during the interseismic period, resulting in relatively lower prestress than on the coupled areas. This would lead to subsequent earthquakes having large slip in the same place, or persistent asperities. A second hypothesis is that, in the absence of creeping sections, the prestress is governed mainly by the cumulative stress change associated with previous earthquakes. Assuming homogeneous frictional properties on the fault, a larger prestress results in larger slip, i.e. the next earthquake may have large slip where there was little or no slip in the previous earthquake, which translates to non-persistent asperities. The study of earthquake cycles is hampered by the short time period for which the high-quality broadband seismological and accelerographic records needed for detailed studies of slip distributions are available. The earthquake cycle in the Mexican subduction zone is relatively short, with about 30 years between large events in many places. We are therefore entering a period for which we have good records for two subsequent events occurring in the same segment of the subduction zone. In this study we compare seismograms, recorded either on the Wiechert seismograph or on a modern broadband seismometer located in Uppsala, Sweden, for subsequent earthquakes in the Mexican subduction zone rupturing the same patch. The Wiechert seismograph is unique in the sense that it recorded continuously for more than 80 years

  17. Slip Distribution of Two Recent Large Earthquakes in the Guerrero Segment of the Mexican Subduction Zone, and Their Relation to Previous Earthquakes, Silent Slip Events and Seismic Gaps

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Ji, C.; Iglesias, A.; Cruz-Atienza, V. M.; Singh, S. K.

    2016-12-01

    In 2012 and 2014 mega-thrust earthquakes occurred approximately 300 km apart in the state of Guerrero, Mexico. The westernmost half of the segment between them has not had a large earthquake in at least 100 years, and most of the easternmost half last broke in 1957. However, silent slip events have been reported down dip of both earthquakes, as well as in the gap between them (Kostoglodov et al 2003, Graham 2014). There are indications that the westernmost half has different frictional properties than the areas surrounding it. However, the two events at the edges of the zone also seem to behave in different manners, indicating a broad range of frictional properties in this area, with changes occurring over short distances. The 2012/03/20 M7.5 earthquake occurred near the Guerrero-Oaxaca border, between the towns of Ometepec (Gro.) and Pinotepa Nacional (Oax.). This earthquake is noteworthy for breaking the same asperities as two previously recorded earthquakes, the M7.2 1937 and M6.9 1982(a) earthquakes, in very large "repeating earthquakes". Furthermore, the density of repeating smaller events is larger in this zone than in other parts of the subduction zone (Dominguez et al, submitted), and this earthquake has had unusually many aftershocks for its size (UNAM Seis. group, 2013). The 2012 event may have broken two asperities (UNAM Seis. group, 2013). How the two asperities relate to the previous relatively smaller "large events", to the repeating earthquakes, to the high number of aftershocks and to the slow slip event is not clear. The 2014/04/18 M7.2 earthquake broke a patch on the edge of the Guerrero gap that previously broke in the 1979 M7.4 earthquake as well as in the 1943 M7.4 earthquake. This earthquake, despite being smaller, had a much longer duration, few aftershocks, and clearly ruptured two separate patches (UNAM Seis. group 2015). In this work we estimate the slip distributions for the 2012 and 2014 earthquakes, by combining the data used separately in

  18. Reactivity of seismicity rate to static Coulomb stress changes of two consecutive large earthquakes in the central Philippines

    NASA Astrophysics Data System (ADS)

    Dianala, J. D. B.; Aurelio, M.; Rimando, J. M.; Taguibao, K.

    2015-12-01

    In a region where little is understood about active faults and seismicity, two large-magnitude reverse-fault-related earthquakes occurred within 100 km of each other in separate islands of the Central Philippines—the Mw 6.7 February 2012 Negros earthquake and the Mw 7.2 October 2013 Bohol earthquake. Based on source faults that were defined using onshore, offshore seismic reflection, and seismicity data, stress transfer models for both earthquakes were calculated using the software Coulomb. Coulomb stress triggering between the two main shocks is unlikely, as the stress change caused by the Negros earthquake on the Bohol fault was -0.03 bars. Correlating the stress changes on optimally oriented reverse faults with seismicity rate changes shows that areas with decreased static stress and decreased seismicity rate after the first earthquake became areas with increased static stress and increased seismicity rate after the second earthquake. These areas of increased stress, especially those whose seismicity reacted to the static stress changes caused by the two earthquakes, indicate the presence of active structures on the island of Cebu. Comparing the history of instrumentally recorded seismicity with the recent large earthquakes of Negros and Bohol, these structures in Cebu have the potential to generate large earthquakes. Given that the Philippines' second largest metropolitan area (Metro Cebu) is in close proximity, detailed analysis of the earthquake potential and seismic hazards in these areas should be undertaken.
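
    For readers unfamiliar with the quantity being mapped, the static Coulomb failure stress change reduces to one line; the effective friction coefficient and the stress components below are illustrative, not the study's values.

        def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
            # d_shear: shear stress change in the slip direction (bars),
            #          positive toward failure.
            # d_normal: normal stress change (bars), positive = unclamping.
            return d_shear + mu_eff * d_normal

        # e.g., a -0.03 bar change could arise from (numbers made up):
        print(coulomb_stress_change(-0.05, 0.05))   # -> -0.03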

  19. Spatiotemporal seismic velocity change in the Earth's subsurface associated with large earthquake: contribution of strong ground motion and crustal deformation

    NASA Astrophysics Data System (ADS)

    Sawazaki, K.

    2016-12-01

    It is well known that the seismic velocity of the subsurface medium changes after a large earthquake. The cause of the velocity change is roughly attributed to strong ground motion (dynamic strain change), crustal deformation (static strain change), and fracturing around the fault zone. Several studies have revealed that velocity reductions of up to several percent are concentrated at depths shallower than several hundred meters. The amount of velocity reduction correlates well with the intensity of strong ground motion, which indicates that the strong motion is the primary cause of the velocity reduction. Although some studies have proposed contributions of coseismic static strain change and fracturing around the fault zone to the velocity change, separating their contributions from the site-related velocity change is usually difficult. Velocity recovery after a large earthquake is also widely observed. The recovery process is generally proportional to the logarithm of the lapse time, which is similar to the behavior of "slow dynamics" recognized in laboratory experiments. The time scale of the recovery is usually months to years in field observations, while it is several hours in laboratory experiments. Although the factors that control the recovery speed are not well understood, cumulative strain change due to post-seismic deformation, migration of underground water, and mechanical and chemical reactions on crack surfaces are candidates. In this study, I summarize several observations that reveal the spatiotemporal distribution of seismic velocity change due to large earthquakes; in particular, I focus on the case of the M9.0 2011 Tohoku earthquake. Combining seismograms from Hi-net (high-sensitivity) and KiK-net (strong motion), geodetic records from GEONET, and seafloor GPS/Acoustic ranging, I investigate the contributions of strong ground motion and crustal deformation to the velocity change associated with the Tohoku earthquake, and propose a gross view of
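
    The log(t) recovery mentioned above is straightforward to fit; a minimal sketch with synthetic dv/v data (all numbers invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.1, 3.0, 60)              # years after the mainshock
        dvv = -2.0 + 0.8 * np.log(t / 0.1)         # synthetic recovery, percent
        dvv += rng.normal(0.0, 0.1, t.size)        # observational noise

        # least-squares fit of dv/v = a + b*log(t)
        A = np.column_stack([np.ones_like(t), np.log(t)])
        a, b = np.linalg.lstsq(A, dvv, rcond=None)[0]

    The fitted slope b then characterizes the recovery speed that the abstract seeks to relate to post-seismic deformation and healing processes.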

  20. The dynamic response of geomagnetic sudden commencement to tectonic zone and large earthquakes in China

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Zheng, J.; Wang, Z.; Lin, Y.

    2009-12-01

    Based on the skin effect of EM waves, geomagnetic storm variations can be used to probe subsurface tectonic structure and faulting, providing a penetrating view of the earth's interior. We collected geomagnetic storm report data for the 8 years 2000-2007 from 35 geomagnetic stations in China and calculated the amplitude of the vertical component of the geomagnetic storm sudden commencement (△Zssc) to study the correlations between △Zssc and the occurrence of Ms 6.0-8.1 earthquakes in China and its vicinity (70-140°E, 15-55°N) from 2000 to 2008. We found significant correlations between the spatial distribution of △Zssc amplitude changes, fault activity, and the occurrence of large earthquakes. Our study provides a method for monitoring the possibility of medium-to-short-term (0.5-12 months) earthquakes. Five △Zssc zero isoporic lines delineate five areas in China, which we call Zero Isoporic Zones (ZIZ); they apparently coincide with several tectonic zones and with seismic gaps of the past 60 years: (1) the zero isoporic line in eastern China, extending along the Heilongjiang and Yalu rivers, the eastern coastal area, the Taiwan Strait, and the South China Sea; (2) the ZIZ in the Luliang Mountain area, east of the Great Bend of the Yellow River; (3) the ZIZ in the Wuling Mountain area, south of the Yangtze River; (4) the ZIZ in the Longmenshan Fault Zone in central China, an area to note in particular; and (5) the ZIZ in the middle and southern Yunnan region. The Longmenshan fault zone, located at the geographic center of China, is a seismically sensitive band corresponding to great earthquakes of around Ms 8.0. On May 12, 2008, the Ms 8.0 Wenchuan Earthquake occurred in this area. We noticed that, over the period January 2000 to May 2008, for the 5 great earthquakes of Ms 7.8-8.1 in China and its vicinity, 2 to 5 months before the earthquakes the ZIZ in the Longmenshan Fault Zone showed translation and deformation

  1. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  3. Large fault slip peaking at trench in the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Tianhaozhe; Wang, Kelin; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2017-01-01

    During the 2011 magnitude 9 Tohoku-oki earthquake, very large slip occurred on the shallowest part of the subduction megathrust. Quantitative information on the shallow slip is of critical importance to distinguishing between different rupture mechanics and understanding the generation of the ensuing devastating tsunami. However, the magnitude and distribution of the shallow slip are essentially unknown due primarily to the lack of near-trench constraints, as demonstrated by a compilation of 45 rupture models derived from a large range of data sets. To quantify the shallow slip, here we model high-resolution bathymetry differences before and after the earthquake across the trench axis. The slip is determined to be about 62 m over the most near-trench 40 km of the fault with a gentle increase towards the trench. This slip distribution indicates that dramatic net weakening or strengthening of the shallow fault did not occur during the Tohoku-oki earthquake.

  4. Large fault slip peaking at trench in the 2011 Tohoku-oki earthquake.

    PubMed

    Sun, Tianhaozhe; Wang, Kelin; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2017-01-11

    During the 2011 magnitude 9 Tohoku-oki earthquake, very large slip occurred on the shallowest part of the subduction megathrust. Quantitative information on the shallow slip is of critical importance to distinguishing between different rupture mechanics and understanding the generation of the ensuing devastating tsunami. However, the magnitude and distribution of the shallow slip are essentially unknown due primarily to the lack of near-trench constraints, as demonstrated by a compilation of 45 rupture models derived from a large range of data sets. To quantify the shallow slip, here we model high-resolution bathymetry differences before and after the earthquake across the trench axis. The slip is determined to be about 62 m over the most near-trench 40 km of the fault with a gentle increase towards the trench. This slip distribution indicates that dramatic net weakening or strengthening of the shallow fault did not occur during the Tohoku-oki earthquake.

  5. Large fault slip peaking at trench in the 2011 Tohoku-oki earthquake

    PubMed Central

    Sun, Tianhaozhe; Wang, Kelin; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2017-01-01

    During the 2011 magnitude 9 Tohoku-oki earthquake, very large slip occurred on the shallowest part of the subduction megathrust. Quantitative information on the shallow slip is of critical importance to distinguishing between different rupture mechanics and understanding the generation of the ensuing devastating tsunami. However, the magnitude and distribution of the shallow slip are essentially unknown due primarily to the lack of near-trench constraints, as demonstrated by a compilation of 45 rupture models derived from a large range of data sets. To quantify the shallow slip, here we model high-resolution bathymetry differences before and after the earthquake across the trench axis. The slip is determined to be about 62 m over the most near-trench 40 km of the fault with a gentle increase towards the trench. This slip distribution indicates that dramatic net weakening or strengthening of the shallow fault did not occur during the Tohoku-oki earthquake. PMID:28074829

  6. Precursory measure of interoccurrence time associated with large earthquakes in the Burridge-Knopoff model

    SciTech Connect

    Hasumi, Tomohiro

    2008-11-13

    We studied the statistical properties of interoccurrence times, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and have found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the critical phase of a fault system in nature changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
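
    A sketch of the proposed precursory measure: fit a Zipf-Mandelbrot-type survivor function and track the misfit. The parameterization and the synthetic catalog below are assumptions for illustration; the paper's exact functional form may differ.

        import numpy as np
        from scipy.optimize import curve_fit

        def zm_survivor(tau, a, c, p):
            # One common Zipf-Mandelbrot form for P(T > tau).
            return a / (tau + c)**p

        rng = np.random.default_rng(1)
        times = np.sort(rng.pareto(1.5, 500))      # synthetic interoccurrence times
        s_emp = 1.0 - np.arange(1, times.size + 1) / (times.size + 1.0)

        popt, _ = curve_fit(zm_survivor, times, s_emp, p0=[1.0, 1.0, 1.5])
        rms = np.sqrt(np.mean((zm_survivor(times, *popt) - s_emp)**2))
        # a growing rms on a moving catalog window would flag the drift
        # from the subcritical toward the supercritical state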

  7. Large-scale cortical networks and cognition.

    PubMed

    Bressler, S L

    1995-03-01

    The well-known parcellation of the mammalian cerebral cortex into a large number of functionally distinct cytoarchitectonic areas presents a problem for understanding the complex cortical integrative functions that underlie cognition. How do cortical areas having unique individual functional properties cooperate to accomplish these complex operations? Do neurons distributed throughout the cerebral cortex act together in large-scale functional assemblages? This review examines the substantial body of evidence supporting the view that complex integrative functions are carried out by large-scale networks of cortical areas. Pathway tracing studies in non-human primates have revealed widely distributed networks of interconnected cortical areas, providing an anatomical substrate for large-scale parallel processing of information in the cerebral cortex. Functional coactivation of multiple cortical areas has been demonstrated by neurophysiological studies in non-human primates and several different cognitive functions have been shown to depend on multiple distributed areas by human neuropsychological studies. Electrophysiological studies on interareal synchronization have provided evidence that active neurons in different cortical areas may become not only coactive, but also functionally interdependent. The computational advantages of synchronization between cortical areas in large-scale networks have been elucidated by studies using artificial neural network models. Recent observations of time-varying multi-areal cortical synchronization suggest that the functional topology of a large-scale cortical network is dynamically reorganized during visuomotor behavior.

  8. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.

  9. Does hydrologic circulation mask frictional heat on faults after large earthquakes?

    NASA Astrophysics Data System (ADS)

    Fulton, Patrick M.; Harris, Robert N.; Saffer, Demian M.; Brodsky, Emily E.

    2010-09-01

    Knowledge of frictional resistance along faults is important for understanding the mechanics of earthquakes and faulting. The clearest in situ measure of fault friction potentially comes from temperature measurements in boreholes crossing fault zones within a few years of rupture. However, large temperature signals from frictional heating on faults have not been observed. Unambiguously interpreting the coseismic frictional resistance from small thermal perturbations observed in borehole temperature profiles requires assessing the impact of other potentially confounding thermal processes. We address several issues associated with quantifying the temperature signal of frictional heating including transient fluid flow associated with the earthquake, thermal disturbance caused by borehole drilling, and heterogeneous thermal physical rock properties. Transient fluid flow is investigated using a two-dimensional coupled fluid flow and heat transport model to evaluate the temperature field following an earthquake. Simulations for a range of realistic permeability, frictional heating, and pore pressure scenarios show that high permeabilities (>10⁻¹⁴ m²) are necessary for significant advection within the several years after an earthquake and suggest that transient fluid flow is unlikely to mask frictional heat anomalies. We illustrate how disturbances from circulating fluids during drilling diffuse quickly leaving a robust signature of frictional heating. Finally, we discuss the utility of repeated borehole temperature profiles for discriminating between different interpretations of thermal perturbations. Our results suggest that temperature anomalies from even low friction should be detectable at depths >1 km 1 to 2 years after a large earthquake and that interpretations of low friction from existing data are likely robust.
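
    The expected size of the frictional-heat signal can be bracketed with a one-dimensional conduction estimate; the stress, slip, and rock properties below are generic assumptions, not the study's model inputs.

        import numpy as np

        def delta_T(x, t, tau_f=10e6, slip=4.0, rho_c=3.0e6, kappa=1e-6):
            # Temperature anomaly (K) at distance x (m) from the fault,
            # time t (s) after slip, for an instantaneous plane heat source.
            # tau_f: frictional stress (Pa); slip (m); rho_c (J m^-3 K^-1);
            # kappa: thermal diffusivity (m^2/s).
            q = tau_f * slip                       # heat per unit area, J/m^2
            return ((q / rho_c) * np.exp(-x**2 / (4.0 * kappa * t))
                    / np.sqrt(4.0 * np.pi * kappa * t))

        year = 3.15e7                              # seconds
        print(delta_T(10.0, 2.0 * year))           # ~0.3 K, 10 m off-fault, 2 yr

    Anomalies of this order are within reach of precise borehole logging, consistent with the conclusion above that even low-friction signals should be detectable.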

  10. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    USGS Publications Warehouse

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes that have occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network, supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large-earthquake rates would be grossly underestimated by extrapolating small-earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes. Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
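
    The consequence of the kinked distribution is easy to quantify; a sketch with an assumed a-value (the b slopes are those quoted above):

        def annual_rate(m, a=4.0, b_small=1.0, b_large=0.6, m_kink=5.0):
            # Cumulative annual rate N(>= m) for a kinked
            # Gutenberg-Richter distribution, continuous at the kink.
            if m <= m_kink:
                return 10.0**(a - b_small * m)
            return 10.0**(a - b_small * m_kink - b_large * (m - m_kink))

        # at M 7.5: ~3e-3/yr with the kink vs ~3e-4/yr if the small-
        # magnitude slope were extrapolated -- an order of magnitude apart
        print(annual_rate(7.5), 10.0**(4.0 - 1.0 * 7.5))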

  11. 3-D structure of ionospheric anomalies immediately before large earthquakes: the 2015 Illapel (Mw8.3) and 2016 Kumamoto (Mw7.0) cases

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.; Muafiry, I. N.

    2016-12-01

    We developed a simple program to perform three-dimensional (3-D) tomography of ionospheric anomalies observed using Global Navigation Satellite Systems (GNSS), and applied it to ionospheric anomalies preceding two recent earthquakes: (1) positive and negative TEC anomalies starting 20 minutes before the September 2015 Illapel earthquake, Central Chile, and (2) a stagnant MSTID that appeared 20-30 minutes before the mainshock of the April 2016 Kumamoto earthquake, Kyushu, SW Japan, and stayed there until the earthquake occurred. Regarding (1), we analyzed GNSS data before and after three large earthquakes in Chile, and have reported that both positive and negative anomalies of ionospheric Total Electron Content (TEC) started 40 minutes (2010 Maule) and 20 minutes (2014 Iquique and 2015 Illapel) before the earthquakes (He and Heki, 2016 GRL). For the 2015 event, we further suggested that the positive and negative anomalies occurred at altitudes of 200 and 400 km, respectively. This makes the epicenter, the positive anomaly, and the negative anomaly line up along the local geomagnetic field, consistent with the structure expected to occur in response to surface positive charges (e.g. Kuo et al., 2014 JGR). As for (2), we looked for ionospheric anomalies before the foreshock (Mw6.2) and the mainshock (Mw7.0) of the 2016 Kumamoto earthquakes, shallow inland earthquakes, using TEC derived from the Japanese dense GNSS network. Although we did not find anomalies of the kind often seen before larger earthquakes (e.g. Heki and Enomoto, 2015 JGR), we found that a stationary linear positive TEC anomaly, with a shape similar to a night-time medium-scale traveling ionospheric disturbance (MSTID), emerged just above the epicenter 20 minutes before the mainshock. Unlike typical night-time MSTIDs, it did not propagate southwestward; instead, its positive crest stayed above the epicenter for 30 min. This unusual behavior might be linked to crust-origin electric fields.

  12. Understanding Local-Scale Fault Interaction Through Seismological Observation and Numerical Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Kroll, Kayla Ann

    A number of outstanding questions in earthquake physics revolve around understanding the relationships among local-scale stress changes, fault interactions (i.e. how stresses are transferred) and earthquake response to stress changes. Here, I employ seismological observations and numerical simulation tools to investigate how stress changes from a mainshock, or from fluid injection, can either aid or hinder further earthquake activity. Chapter 2.2 couples Coulomb stress change models with rate- and state-dependent friction to model the time-dependent evolution of complex aftershock activity following the 2010 El Mayor-Cucapah earthquake. Part III focuses on numerical simulations of earthquake sequences with the multi-cycle earthquake simulator, RSQSim. I use RSQSim in two applications: (1) multi-cycle simulation of the processes that control earthquake rupture along parallel, but discontinuous, offset faults (Chapter 3), and (2) investigation of relationships between injection of fluids into the subsurface and the characteristics of the resulting induced seismicity (Chapter 4). Results presented in Chapter 2.2 demonstrate that increases and decreases in seismicity rate are correlated with regions of positive and negative Coulomb stress change, respectively. We show that the stress shadow effect can be delayed in time when two faulting populations are active within the same region. In Chapter 3, we show that the pre-rupture stress distribution on faults governs the location of rupture re-nucleation on the receiver fault strand. Additionally, through analysis of long-term multi-cycle simulations, we find that ruptures can jump larger offsets more frequently when source and receiver fault ruptures are delayed in time. Results presented in Chapter 4 demonstrate that induced earthquake sequences are sensitive to the constitutive parameters, a and b, of the rate-state formulation. Finally, we find the rate of induced earthquakes decreases for increasing values of

  13. Seismic Safety Margins Research Program. Regional relationships among earthquake magnitude scales

    SciTech Connect

    Chung, D. H.; Bernreuter, D. L.

    1980-05-01

    The seismic body-wave magnitude mb of an earthquake is strongly affected by regional variations in the Q structure, composition, and physical state within the earth. Therefore, because of differences in attenuation of P-waves between the western and eastern United States, a problem arises when comparing mb's for the two regions. A regional mb magnitude bias exists which, depending on where the earthquake occurs and where the P-waves are recorded, can lead to magnitude errors as large as one-third unit. There is also a significant difference between mb and ML values for earthquakes in the western United States. An empirical link between the mb of an eastern US earthquake and the ML of an equivalent western earthquake is given by ML = 0.57 + 0.92 (mb)East. This result is important when comparing ground motion between the two regions and for choosing a set of real western US earthquake records to represent eastern earthquakes. 48 refs., 5 figs., 2 tabs.
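
    The quoted relation is trivial to apply; for example:

        def ml_equivalent_west(mb_east):
            # Empirical link quoted above: ML of an equivalent western-US
            # earthquake from the mb of an eastern-US earthquake.
            return 0.57 + 0.92 * mb_east

        print(ml_equivalent_west(5.5))   # -> 5.63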

  14. Complex Nucleation Process of Large North Chile Earthquakes, Implications for Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Meneses, G.; Sobiesiak, M.; Madariaga, R. I.

    2014-12-01

    We studied the nucleation process of Northern Chile events, including the large earthquakes of Tocopilla 2007 Mw 7.8 and Iquique 2014 Mw 8.1, as well as the background seismicity recorded from 2011 to 2013 by the ILN temporary network and the IPOC and CSN permanent networks. We built our catalogue of 393 events starting from the CSN catalogue, which is complete above Mw 3.0 in Northern Chile. We re-located and computed moment magnitudes for each event. We also computed Early Warning (EW) parameters - Pd, Pv, τc and IV2 - for each event, including 13 earthquakes of Mw>6.0 that occurred between 2007 and 2012, and part of the seismicity from the March-April 2014 period. We find that Pd, Pv and IV2 are good estimators of magnitude for interplate thrust and intraplate intermediate-depth events with Mw between 4.0 and 6.0. However, the larger-magnitude events show a saturation of the EW parameters. The Tocopilla 2007 and Iquique 2014 earthquake sequences were studied in detail. Almost all events with Mw>6.0 present precursory signals, so that the largest amplitudes occur several seconds after the first P wave arrival. The recent Mw 8.1 Iquique 2014 earthquake was preceded by low-amplitude P waves for 20 s before the main asperity broke. The magnitude estimation can improve if we consider longer P wave windows in the estimation of EW parameters. There was, however, a practical limit during the Iquique earthquake, because the first S waves arrived before the P waves from the main rupture. The 4 s P-wave Pd parameter estimated Mw 7.1 for the Mw 8.1 Iquique 2014 earthquake and Mw 7.5 for the Mw 7.8 Tocopilla 2007 earthquake.
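
    A sketch of a Pd-based magnitude estimate of the kind evaluated above; the regression coefficients are hypothetical placeholders (real networks calibrate them regionally), and the saturation noted in the abstract applies when the window is shorter than the rupture duration.

        import numpy as np

        def magnitude_from_pd(disp, dt, dist_km, window_s=4.0,
                              a=1.37, b=1.88, c=4.75):
            # disp: displacement record (m) starting at the P arrival.
            # a, b, c: assumed regression coefficients, for illustration only.
            pd_cm = 100.0 * np.max(np.abs(disp[: int(window_s / dt)]))
            return a * np.log10(pd_cm) + b * np.log10(dist_km) + c

        rng = np.random.default_rng(0)
        synth = 1e-4 * rng.normal(size=1000)       # stand-in P waveform, 100 Hz
        print(magnitude_from_pd(synth, dt=0.01, dist_km=60.0))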

  15. Geological observations on large earthquakes along the Himalayan frontal fault near Kathmandu, Nepal

    NASA Astrophysics Data System (ADS)

    Wesnousky, Steven G.; Kumahara, Yasuhiro; Chamlagain, Deepak; Pierce, Ian K.; Karki, Alina; Gautam, Dipendra

    2017-01-01

    The 2015 Gorkha earthquake produced displacement on the lower half of a shallow decollement that extends 100 km south, and upward from beneath the High Himalaya and Kathmandu to where it breaks the surface to form the trace of the Himalayan Frontal Thrust (HFT), leaving unruptured the shallowest ∼50 km of the decollement. To address the potential of future earthquakes along this section of the HFT, we examine structural, stratigraphic, and radiocarbon relationships in exposures created by emplacement of trenches across the HFT where it has produced scarps in young alluvium at the mouths of major rivers at Tribeni and Bagmati. The Bagmati site is located south of Kathmandu and directly up dip from the Gorkha rupture, whereas the Tribeni site is located ∼200 km to the west and outside the up dip projection of the Gorkha earthquake rupture plane. The most recent rupture at Tribeni occurred 1221-1262 AD to produce a scarp of ∼7 m vertical separation. Vertical separation across the scarp at Bagmati registers ∼10 m, possibly greater, and formed between 1031-1321 AD. The temporal constraints and large displacements allow the interpretation that the two sites separated by ∼200 km each ruptured simultaneously, possibly during 1255 AD, the year of a historically reported earthquake that produced damage in Kathmandu. In light of geodetic data that show ∼20 mm/yr of crustal shortening is occurring across the Himalayan front, the sum of observations is interpreted to suggest that the HFT extending from Tribeni to Bagmati may rupture simultaneously, that the next great earthquake near Kathmandu may rupture an area significantly greater than the section of HFT up dip from the Gorkha earthquake, and that it is prudent to consider that the HFT near Kathmandu is well along in a strain accumulation cycle prior to a great thrust earthquake, most likely much greater than occurred in 2015.

  16. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep postseismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.
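
    The modification described above amounts to one extra term in the loading function; a minimal sketch in normalized units (the amplitude and decay time are illustrative, not fitted values):

        import numpy as np

        def loading(t, amp=0.3, tau=0.1):
            # Uniform tectonic loading plus a decaying-exponential
            # transient (viscoelasticity or deep postseismic slip).
            # t and tau are in units of the mean recurrence interval.
            return t + amp * (1.0 - np.exp(-t / tau))

        # the transient front-loads the cycle relative to uniform loading:
        print(loading(0.1), loading(0.1, amp=0.0))   # ~0.29 vs 0.10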

  17. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  18. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    NASA Astrophysics Data System (ADS)

    Noda, Shunta; Ellsworth, William L.

    2016-09-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  19. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike slip faults, and is underlain by the currently locked west-dipping subduction interface between the down-going Pacific Plate and the over-riding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations of historical large world subduction events translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan, earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long-duration shaking, and peak ground

  20. Systematic detection of seismic activity before recent large earthquakes in China

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Wang, B.; Ruan, X.; Meng, X.; Hongwei, T.; Long, F.; Su, J.

    2014-12-01

    Sometimes large shallow earthquakes are preceded by increased local seismic activity, known as "foreshocks". However, the exact relationship between foreshocks and mainshock nucleation is still in debate. Several studies have found accelerating or migrating foreshock activity right before recent large earthquakes along major plate-boundary faults, indicating that foreshocks are likely driven by slow-slip events. However, it is still not clear whether similar features can be observed for earthquakes that occur away from plate-boundary regions. Here we conduct a systematic detection of possible foreshock activity around the times of 6 recent large earthquakes in China. The candidate events include the 2008 Ms7.3 Yutian and Ms8.0 Wenchuan, the 2010 Ms7.0 Yushu, the 2013 Ms7.0 Lushan, the 2014 Ms7.3 Yutian, and the 2014 Ms6.5 Zhaotong earthquakes. Among them, the 2010 Yushu and 2014 Yutian mainshocks had clear evidence of M4-5 immediate foreshocks listed in regional earthquake catalogs, while the rest did not. In each case, we use waveforms of local earthquakes listed in the catalog as templates and scan through continuous waveforms recorded by both permanent and temporary seismic stations around the epicentral region of each mainshock. Our waveform matching method can detect at least a few times more events than are listed in the catalog. Our initial results show a wide range of behaviors. For the 2010 Yushu and 2014 Yutian cases, the M4-5 foreshocks were followed by many smaller-size events that could be considered their aftershocks. For the Wenchuan case, we did not observe any obvious foreshock in the immediate vicinity of the epicenter. However, we found one swarm sequence that shows systematic migration a few months before the Wenchuan mainshock. Our next step is to relocate these newly detected events to search for spatio-temporal evolutions before each mainshock, as well as to perform Epidemic Type Aftershock Sequence (ETAS) modeling to examine
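
    A single-channel sketch of the matched-filter detection step described above (simplified: real implementations stack correlations over many stations and components):

        import numpy as np

        def match_template(cont, tmpl, nmad=8.0):
            # Sliding normalized cross-correlation of a template against
            # continuous data; detections where CC > nmad * MAD.
            n = tmpl.size
            t = (tmpl - tmpl.mean()) / tmpl.std()
            cc = np.empty(cont.size - n + 1)
            for i in range(cc.size):
                w = cont[i:i + n]
                s = w.std()
                cc[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s) / n
            mad = np.median(np.abs(cc - np.median(cc)))
            return np.flatnonzero(cc > nmad * mad), cc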

  1. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals: first, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters; and second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data became available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  2. Unusually large earthquakes inferred from tsunami deposits along the Kuril trench

    USGS Publications Warehouse

    Nanayama, F.; Satake, K.; Furukawa, R.; Shimokawa, K.; Atwater, B.F.; Shigeno, K.; Yamaki, S.

    2003-01-01

    The Pacific plate converges with northeastern Eurasia at a rate of 8-9 m per century along the Kamchatka, Kuril and Japan trenches. Along the southern Kuril trench, which faces the Japanese island of Hokkaido, this fast subduction has recurrently generated earthquakes with magnitudes of up to ∼8 over the past two centuries. These historical events, on rupture segments 100-200 km long, have been considered characteristic of Hokkaido's plate-boundary earthquakes. But here we use deposits of prehistoric tsunamis to infer the infrequent occurrence of larger earthquakes generated from longer ruptures. Many of these tsunami deposits form sheets of sand that extend kilometres inland from the deposits of historical tsunamis. Stratigraphic series of extensive sand sheets, intercalated with dated volcanic-ash layers, show that such unusually large tsunamis occurred about every 500 years on average over the past 2,000-7,000 years, most recently ∼350 years ago. Numerical simulations of these tsunamis are best explained by earthquakes that individually rupture multiple segments along the southern Kuril trench. We infer that such multi-segment earthquakes persistently recur among a larger number of single-segment events.

  3. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located along the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We performed modelling of wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.

  4. Ground motions characterized by a multi-scale heterogeneous earthquake model

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ide, Satoshi

    2014-12-01

    We have carried out numerical simulations of seismic ground motion radiating from a mega-earthquake whose rupture process is governed by a multi-scale heterogeneous distribution of fracture energy. The observed complexity of the Mw 9.0 2011 Tohoku-Oki earthquake can be explained by such heterogeneities with fractal patches (size and number), even without introducing any heterogeneity in the stress state. In our model, the scale dependency of fracture energy (i.e., the slip-weakening distance Dc) on patch size is essential. Our results indicate that wave radiation is generally governed by the largest patch at each moment and that the contribution from small patches is minor. We then conducted parametric studies on the frictional parameters of peak (τp) and residual (τr) friction to produce the case where the effect of the small patches is evident during the progress of the main rupture. We found that heterogeneity in τr has a greater influence on the ground motions than does heterogeneity in τp. As such, local heterogeneity in the static stress drop (Δτ) influences the rupture process more than that in the stress excess (Δτexcess). The effect of small patches is particularly evident when these are almost geometrically isolated and not simultaneously involved in the rupture of larger patches. In other cases, the wave radiation from small patches is probably hidden by the major contributions from large patches. Small patches may play a role in strong motion generation areas with low τr (high Δτ), particularly during slow average rupture propagation. This effect can be identified from the differences in the spatial distributions of peak ground velocities for different frequency ranges.
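
    A sketch of the kind of multi-scale patch population the model assumes: fractal (power-law) patch sizes with fracture energy, via the slip-weakening distance Dc, growing with patch size. The exponent and the Dc scaling below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        # patch radii with N(> r) ~ r**-D, D = 2 (fractal size-number law)
        r_min, n_patch, D = 1.0, 200, 2.0
        radii = r_min * (rng.pareto(D, n_patch) + 1.0)     # km

        # scale-dependent fracture energy: Dc grows with patch size
        dc = 0.1 * radii / radii.max()                     # m, assumed scaling

        # the largest patch dominates the radiated wavefield at any moment
        print(radii.max(), dc.max())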

  5. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  6. Large-scale multimedia modeling applications

    SciTech Connect

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  7. Large-scale synthesis of peptides.

    PubMed

    Andersson, L; Blomberg, L; Flegel, M; Lepsa, L; Nilsson, B; Verlander, M

    2000-01-01

    Recent advances in the areas of formulation and delivery have rekindled the interest of the pharmaceutical community in peptides as drug candidates, which, in turn, has provided a challenge to the peptide industry to develop efficient methods for the manufacture of relatively complex peptides on scales of up to metric tons per year. This article focuses on chemical synthesis approaches for peptides, and presents an overview of the methods available and in use currently, together with a discussion of scale-up strategies. Examples of the different methods are discussed, together with solutions to some specific problems encountered during scale-up development. Finally, an overview is presented of issues common to all manufacturing methods, i.e., methods used for the large-scale purification and isolation of final bulk products and regulatory considerations to be addressed during scale-up of processes to commercial levels. Biopolymers (Pept Sci) 55: 227-250, 2000

  8. Scale invariance of shallow seismicity and the prognostic signatures of earthquakes

    NASA Astrophysics Data System (ADS)

    Stakhovsky, I. R.

    2017-08-01

    The results of seismic investigations based on methods of the theory of nonequilibrium processes and self-similarity theory have shown that a shallow earthquake can be treated as a critical transition that occurs during the evolution of a non-equilibrium seismogenic system and is preceded by phenomena such as the scale invariance of spatiotemporal seismic structures. The implication is that seismicity can be interpreted as a purely multifractal process. Modeling the focal domain as a fractal cluster of microcracks allows formulating the prognostic signatures of earthquakes actually observed in seismic data. Seismic scaling permits monitoring the state of a seismogenic system as it approaches instability.

  9. Evidence for earthquake triggering of large landslides in coastal Oregon, USA

    USGS Publications Warehouse

    Schulz, W.H.; Galloway, S.L.; Higgins, J.D.

    2012-01-01

    Landslides are ubiquitous along the Oregon coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement rates, and total movement suggest that many are at least several hundreds of years old. The offshore Cascadia subduction zone produces great earthquakes every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or earthquake-induced ground motion using field observations and modeling of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement rates suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia earthquake was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support earthquake ground motion as their triggering mechanism. Limit-equilibrium slope-stability modeling suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. Modeling suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone earthquake would trigger formation of the rockslides. Displacement modeling following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff
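
    The Newmark approach mentioned above treats the landslide as a rigid block that slides whenever ground acceleration exceeds a yield acceleration, integrating the excess to obtain permanent displacement. A minimal sketch, with a synthetic accelerogram and an assumed yield acceleration rather than the study's site-specific values:

    ```python
    # Minimal Newmark rigid-block sketch: the block slides while the excess
    # ground acceleration keeps its relative velocity positive. Synthetic input;
    # the yield acceleration is an assumed value, not the study's site parameter.
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    a_ground = 3.0 * np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-0.1 * t)  # m/s^2
    a_yield = 1.0                                                      # m/s^2 (assumed)

    v = d = 0.0
    for a in a_ground:
        if v > 0.0 or a > a_yield:
            v = max(v + (a - a_yield) * dt, 0.0)   # excess acceleration drives sliding
            d += v * dt
    print(f"Newmark displacement: {d:.3f} m")
    ```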

  10. Relationships Within the Precursors Before Large Earthquakes: Theory, Observations and Data Chosen

    NASA Astrophysics Data System (ADS)

    Mingyu, J.

    2015-12-01

    The Non-Critical Precursory Accelerating Seismicity Theory has been partly tested (using the RTL method) over the same spatiotemporal window for both accelerating seismicity and quiescence, which may link two of the principal patterns that can precede large earthquakes. The Accelerating Moment/Strain Release (AMR/ASR) model can be seen as the loading process between the reduced seismicity that follows the last large earthquake around the epicenter and the next rupture; the AMR model therefore depends on the occurrence of quiescence. Here, we develop an approach based on the concept of stress accumulation to unify and categorize all claimed seismic precursors in the same physical framework, including quiescence, the AMR/ASR model and short-term activation. It shows that different precursory paths are possible before large earthquakes and can be described by a logic tree with combined criteria for a given stress state and for precursory silent slip on the fault and within the fault system. Theoretical results are then compared to time series observed in an earthquake catalog for Italy from 1980 to 2012. In initial results for the 2009 Mw = 6.3 L'Aquila earthquake, Italy, the observed precursory path is a coupling of quiescence and accelerating seismic release, followed by activation. Moreover, a comparison between the ETAS and Stress Accumulation models shows that precursors are statistically significant when microseismicity, which holds important information on the stress-loading state of the crust surrounding active faults, is considered. Based on the corrected Akaike Information Criterion (AICc), we found that quiescence and ASR signals are significant when events below a magnitude of 2.2 are included, and that short-term activation is significant when events below magnitude 3.3 are included. These results provide guidelines for future research on regional earthquake risk assessment and predictability.
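
    For reference, the corrected Akaike Information Criterion used above penalizes model complexity for finite sample sizes: AICc = 2k - 2 ln L + 2k(k+1)/(n - k - 1). A small helper; the likelihood values shown are purely illustrative:

    ```python
    # Corrected Akaike Information Criterion for the model comparisons above.
    # Valid for n > k + 1.
    def aicc(log_likelihood: float, k: int, n: int) -> float:
        return 2 * k - 2 * log_likelihood + (2 * k * (k + 1)) / (n - k - 1)

    # Hypothetical comparison of two candidate models on the same catalog:
    print(aicc(-1520.0, k=5, n=400))   # illustrative numbers only
    print(aicc(-1512.0, k=8, n=400))
    ```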

  11. Automated determination of magnitude and source length of large earthquakes using backprojection and P wave amplitudes

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Kawakatsu, Hitoshi; Zhuang, Jiancang; Mori, Jim; Maeda, Takuto; Tsuruoka, Hiroshi; Zhao, Xu

    2017-06-01

    Fast estimates of magnitude and source extent of large earthquakes are fundamental for disaster mitigation. However, resolving these estimates within 10-20 min after origin time remains challenging. Here we propose a robust algorithm to resolve magnitude and source length of large earthquakes using seismic data recorded by regional arrays and global stations. We estimate source length and source duration by backprojecting seismic array data. Then the source duration and the maximum amplitude of the teleseismic P wave displacement waveforms are used jointly to estimate magnitude. We apply this method to 74 shallow earthquakes that occurred within epicentral distances of 30-85° to Hi-net (2004-2014). The estimated magnitudes are similar to moment magnitudes estimated from W-phase inversions (U.S. Geological Survey), with standard deviations of 0.14-0.19 depending on the global station distributions. Application of this method to multiple regional seismic arrays could benefit tsunami warning systems and emergency response to large global earthquakes.
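
    The joint use of duration and P amplitude can be sketched as a regression of the form M = a·log10(A) + b·log10(T) + c. The coefficients below are placeholders, not the calibrated values from the paper:

    ```python
    # Illustrative form of the joint magnitude estimate: amplitude plus duration.
    # Coefficients a, b, c are placeholders, NOT the study's calibrated values.
    from math import log10

    def magnitude_proxy(p_disp_m, duration_s, a=1.0, b=1.0, c=5.0):
        return a * log10(p_disp_m * 1e6) + b * log10(duration_s) + c

    # e.g. a 100-micrometre teleseismic P displacement with a 60 s duration:
    print(round(magnitude_proxy(100e-6, 60.0), 2))
    ```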

  12. Rapid determination of P-wave-based Energy Magnitude: Insights on source parameter scaling of the 2016 Central Italy earthquake sequence

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Bindi, Dino; Brondi, Piero; Di Giacomo, Domenico; Parolai, Stefano; Zollo, Aldo

    2017-04-01

    In this study, we propose a novel methodology for rapidly estimating earthquake size from the radiated seismic energy. Two relationships have been calibrated using recordings from 29 earthquakes of the 2009 L'Aquila and 2012 Emilia seismic sequences in Italy. The first relation yields estimates of the radiated seismic energy ER using, as a proxy, the time integral of the squared P-wave velocity measured on vertical components, with regional attributes describing the attenuation with distance. The second relation is a regression between the local magnitude and the radiated energy, which defines an energy-based local magnitude (MLe) compatible with ML for small earthquakes. We applied the new procedure to the seismic sequence that struck central Italy in 2016. Scaling relationships involving seismic moment and radiated energy are discussed for the Mw 6.0 Amatrice, Mw 5.9 Ussita and Mw 6.5 Norcia earthquakes and their ML > 4 aftershocks, 38 events in total. The Mw 6.0 Amatrice earthquake presents the highest apparent stress, and the observed differences among the three larger shocks highlight the dynamic heterogeneity with which large earthquakes can occur in central Italy. Differences between the MLe and Mw measures allow the identification of events characterized by a larger amount of energy transferred to seismic waves, providing important constraints for the real-time evaluation of an earthquake's shaking potential.
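
    The proxy in the first relation is the time integral of the squared vertical P-wave velocity. A bare-bones version on a synthetic trace; the instrument response, attenuation and distance corrections that are essential in the calibrated relation are omitted here:

    ```python
    # Radiated-energy proxy: time integral of squared vertical P-wave velocity.
    # Synthetic trace; instrument, attenuation and distance corrections omitted.
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    v_z = 1e-4 * np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.5 * t)  # m/s, synthetic

    iv2 = np.sum(v_z**2) * dt        # rectangle-rule integral of v^2 dt, m^2/s
    print(f"integral of squared velocity: {iv2:.3e} m^2/s")
    ```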

  13. “PLAFKER RULE OF THUMB” RELOADED: EXPERIMENTAL INSIGHTS INTO THE SCALING AND VARIABILITY OF LOCAL TSUNAMIS TRIGGERED BY GREAT SUBDUCTION MEGATHRUST EARTHQUAKES

    NASA Astrophysics Data System (ADS)

    Rosenau, M.; Nerlich, R.; Brune, S.; Oncken, O.

    2009-12-01

    along accretionary margins. Three out of the top-five tsunami hotspots we identify had giant earthquakes in the last decades (Chile 1960, Alaska 1964, Sumatra-Andaman 2004), and one (Sumatra-Mentawai) started in 2005 to release strain in a possibly moderate mode of sequential large earthquakes. This leaves Cascadia as the major active tsunami hotspot in the focus of tsunami hazard assessment. [Figure: preliminary versions of the experimentally derived scaling laws for peak nearshore tsunami height (PNTH) as functions of forearc slope, peak earthquake slip (left panel) and moment magnitude (right panel); wave breaking is not yet considered, which renders the extreme peaks of >20 m unrealistic.]

  14. Anomalous pre-seismic transmission of VHF-band radio waves resulting from large earthquakes, and its statistical relationship to magnitude of impending earthquakes

    NASA Astrophysics Data System (ADS)

    Moriya, T.; Mogi, T.; Takada, M.

    2010-02-01

    To confirm the relationship between anomalous transmission of VHF-band radio waves and impending earthquakes, we designed a new data-collection system and have documented anomalous VHF-band radio-wave propagation beyond the line of sight prior to earthquakes since 2002 December in Hokkaido, northern Japan. Anomalous VHF-band radio waves were recorded before two large earthquakes, the Tokachi-oki earthquake (Mj = 8.0, Mj: magnitude defined by the Japan Meteorological Agency) on 2003 September 26 and the southern Rumoi sub-prefecture earthquake (Mj = 6.1) on 2004 December 14. Radio waves transmitted from a given FM radio station are considered to be scattered, such that they could be received by an observation station beyond the line of sight. A linear relationship was established between the logarithm of the total duration of anomalous transmissions (Te) and the magnitude (M) or maximum seismic intensity (I) of the impending earthquake, for M4-M5 class earthquakes that occurred at depths of 48-54 km beneath the Hidaka Mountains in Hokkaido in 2004 June and 2005 August. Similar linear relationships are also valid for earthquakes that occurred at different depths: the relationship shifted to longer Te for shallower earthquakes and to shorter Te for deeper ones. Numerous parameters seem to affect Te, including hypocenter depth and the surface conditions of the epicentral area (i.e. sea or land). This relationship is important because it means that pre-seismic anomalous transmission of VHF-band waves may be useful in predicting the size of an impending earthquake.
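
    The reported linear log Te-M relationship is recoverable by ordinary least squares. A sketch with hypothetical durations, not the observed Hokkaido data:

    ```python
    # Least-squares recovery of a linear log10(Te)-M relationship.
    # Durations below are hypothetical, for illustration only.
    import numpy as np

    M = np.array([4.1, 4.3, 4.6, 4.8, 5.0, 5.2])          # magnitudes (illustrative)
    Te = np.array([12.0, 20.0, 35.0, 55.0, 90.0, 140.0])  # durations, min (illustrative)

    slope, intercept = np.polyfit(M, np.log10(Te), 1)
    print(f"log10(Te) = {slope:.2f} * M + {intercept:.2f}")
    ```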

  15. Magnitudes and Moment-Duration Scaling of Low-Frequency Earthquakes Beneath Southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A.; Rubin, A. M.; Savard, G.; Chuang, L. Y.

    2015-12-01

    We employ 130 low-frequency-earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 300,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P- and S-waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatio-temporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatio-temporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 hours of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs, are most strongly expressed in the updip portions of the ETS transition zone, and are less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b-values ≥ 6. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges MW < 1.5 and MW ≥ 2.0. LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in dimension and that moment variation is dominated by slip. This behaviour implies
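
    The amplitude decomposition described above, log-amplitude = event magnitude term + station path correction, reduces to a linear least-squares problem. A toy dense version follows; the real system is large and sparse, where scipy.sparse.linalg.lsqr would replace lstsq, and all numbers here are synthetic:

    ```python
    # Toy version of the amplitude decomposition:
    # log10(A_ij) = m_i (event term) + c_j (station path correction).
    import numpy as np

    n_ev, n_st = 4, 3
    rng = np.random.default_rng(0)
    m_true = rng.normal(1.0, 0.3, n_ev)              # event size terms
    c_true = rng.normal(0.0, 0.2, n_st)              # station corrections
    logA = m_true[:, None] + c_true[None, :]         # noise-free observations

    rows = []
    for i in range(n_ev):
        for j in range(n_st):
            row = np.zeros(n_ev + n_st)
            row[i], row[n_ev + j] = 1.0, 1.0
            rows.append(row)
    G, d = np.array(rows), logA.ravel()

    # Pin sum(c) = 0 to remove the constant trade-off between m and c
    G = np.vstack([G, np.r_[np.zeros(n_ev), np.ones(n_st)]])
    d = np.r_[d, 0.0]
    sol, *_ = np.linalg.lstsq(G, d, rcond=None)
    print("recovered event terms:", np.round(sol[:n_ev], 3))
    ```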

  16. Long-period ocean-bottom motions in the source areas of large subduction earthquakes

    NASA Astrophysics Data System (ADS)

    Nakamura, Takeshi; Takenaka, Hiroshi; Okamoto, Taro; Ohori, Michihiro; Tsuboi, Seiji

    2015-11-01

    Long-period ground motions in plain and basin areas on land can cause large-scale, severe damage to structures and buildings and have been widely investigated for disaster prevention and mitigation. However, such motions in ocean-bottom areas are poorly studied because of their relative insignificance in uninhabited areas and the lack of ocean-bottom strong-motion data. Here, we report on evidence for the development of long-period (10-20 s) motions using deep ocean-bottom data. The waveforms and spectrograms demonstrate prolonged and amplified motions that are inconsistent with attenuation patterns of ground motions on land. Simulated waveforms reproducing observed ocean-bottom data demonstrate substantial contributions of thick low-velocity sediment layers to development of these motions. This development, which could affect magnitude estimates and finite fault slip modelling because of its critical period ranges on their estimations, may be common in the source areas of subduction earthquakes where thick, low-velocity sediment layers are present.

  17. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing, all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering, then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  18. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  1. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  2. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  3. Discrete Scaling in Earthquake Precursory Phenomena: Evidence in the Kobe Earthquake, Japan

    NASA Astrophysics Data System (ADS)

    Johansen, Anders; Sornette, Didier; Wakita, Hiroshi; Tsunogai, Urumu; Newman, William I.; Saleur, Hubert

    1996-10-01

    We analyze the ion concentration of groundwater issuing from deep wells located near the epicenter of the recent earthquake of magnitude 6.9 near Kobe, Japan, on January 17, 1995. These concentrations are well fitted by log-periodic modulations around a leading power law. The exponent (real and imaginary parts) is very close to those already found for the fits of precursory seismic activity for Loma Prieta and the Aleutian Islands. This brings further support for the general hypothesis that complex critical exponents are a general phenomenon in irreversible self-organizing systems and particularly in rupture and earthquake phenomena.

  4. Do Large Earthquakes Penetrate below the Seismogenic Zone? Potential Clues from Microseismicity

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Lapusta, N.

    2012-12-01

    It is typically assumed that slip in large earthquakes is confined within the seismogenic zone - often defined by the extent of the background seismicity - with regions below creeping. In terms of rate-and-state friction properties, the locked seismogenic zone and the deeper creeping fault extensions are velocity-weakening (VW) and velocity-strengthening (VS), respectively. Recently, it has been hypothesized that earthquake rupture could penetrate into the deeper creeping regions (Shaw and Wesnousky, BSSA, 2008), and yet it is difficult to detect the deep slip due to the limited resolution of source inversions with depth. We hypothesize that the absence of concentrated microseismicity at the bottom of the seismogenic zone may point to the existence of deep-penetrating earthquake ruptures. The creeping-locked boundary creates strain and stress concentrations. If it is at the bottom of the VW region, which supports earthquake nucleation, microseismicity should persistently occur at the bottom of the seismogenic zone. Such behavior has been observed on the Parkfield segment of the San Andreas Fault (SAF) and the Calaveras fault. However, such microseismicity would be inhibited if dynamic earthquake rupture penetrates substantially below the VW/VS transition, which would drop stress in the ruptured VS areas, making them effectively locked. Hence the creeping-locked boundary, with its stress concentration, would be located within the VS area, where earthquake nucleation is inhibited. Indeed, microseismicity concentration at the bottom of the seismogenic zone is not observed for several faults that hosted major earthquakes, such as the Carrizo segment of the SAF (the site of the 1857 Mw 7.9 Fort Tejon earthquake) and the Palu-Lake Hazar segment of the Eastern Anatolian Fault. We confirm this hypothesis by simulating earthquake sequences and aseismic slip in 3D fault models (Lapusta and Liu, 2009; Noda and Lapusta, 2010). The fault is governed by rate-and-state friction laws, with a VW
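
    For readers unfamiliar with the VW/VS terminology: in rate-and-state friction the steady-state friction coefficient is μ_ss(V) = μ_0 + (a - b) ln(V/V_0), velocity-weakening where a - b < 0 and velocity-strengthening where a - b > 0. A minimal sketch with typical laboratory-scale parameter values, assumed rather than taken from the study:

    ```python
    # Steady-state rate-and-state friction: mu_ss(V) = mu0 + (a - b) ln(V / V0).
    # a - b < 0 gives velocity-weakening (VW); a - b > 0 gives strengthening (VS).
    # Values are typical laboratory-scale numbers, not parameters from the study.
    import numpy as np

    mu0, V0 = 0.6, 1e-6          # reference friction and reference slip rate (m/s)
    a, b = 0.010, 0.015          # here a - b = -0.005: velocity-weakening (assumed)

    def mu_ss(V):
        return mu0 + (a - b) * np.log(V / V0)

    for V in (1e-9, 1e-6, 1e-3):
        print(f"V = {V:.0e} m/s  ->  steady-state friction = {mu_ss(V):.4f}")
    ```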

  5. High S-wave attenuation anomalies and ringlike seismogenic structures in the lithosphere beneath Altai: Possible precursors of large earthquakes

    NASA Astrophysics Data System (ADS)

    Kopnichev, Yu. F.; Sokolova, I. N.

    2016-12-01

    This paper addresses inhomogeneities in the short-period S-wave attenuation field in the lithosphere beneath Altai. A technique based on the analysis of the amplitude ratios of Sn and Pn waves is used. High S-wave attenuation areas are identified in the West Altai, related to the source zones of recent large earthquakes, viz., the 1990 Zaisan earthquake and the 2003 Chuya earthquake. A large ringlike seismogenic structure associated with the Chuya earthquake had been forming since 1976. It is also found that ringlike seismogenic structures are confined to high S-wave attenuation areas unrelated to large historical earthquakes. It is supposed that processes paving the way for strong earthquakes are taking place in these areas. The magnitudes of probable earthquakes are estimated using previously derived correlations between the sizes of ringlike seismogenic structures, threshold magnitude values, and the energy of the principal earthquakes, taking the prevailing focal mechanisms into consideration. The sources of some of these probable earthquakes lie near the planned gas pipeline route from Western Siberia to China, which should be taken into account. The relationship of anomalies in the S-wave attenuation field and the ringlike seismogenic structures to a high content of deep-seated fluids in the lithosphere is discussed.

  6. Correlation between Coulomb stress changes imparted by large historical strike-slip earthquakes and current seismicity in Japan

    NASA Astrophysics Data System (ADS)

    Ishibe, Takeo; Shimazaki, Kunihiko; Tsuruoka, Hiroshi; Yamanaka, Yoshiko; Satake, Kenji

    2011-03-01

    To determine whether current seismicity continues to be affected by large historical earthquakes, we investigated the correlation between current seismicity in Japan and the static stress changes in the Coulomb Failure Function (ΔCFF) due to eight large historical earthquakes (since 1923, magnitude ≥ 6.5) with a strike-slip mechanism. The ΔCFF was calculated for two types of receiver faults: the mainshock and the focal mechanisms of recent moderate earthquakes. We found that recent seismicity for the mainshock receiver faults is concentrated in the positive ΔCFF regions of four earthquakes (the 1927 Tango, 1943 Tottori, 1948 Fukui, and 2000 Tottori-Ken Seibu earthquakes), while no such correlations are recognizable for the other four earthquakes (the 1931 Nishi-Saitama, 1963 Wakasa Bay, 1969 Gifu-Ken Chubu, and 1984 Nagano-Ken Seibu earthquakes). The probability distribution of the ΔCFF calculated for the recent focal mechanisms clearly indicates that recent earthquakes concentrate in positive ΔCFF regions, suggesting that the current seismicity may be affected by a number of large historical earthquakes. The proposed correlation between the ΔCFF and recent seismicity may be affected by multiple factors controlling aftershock activity or decay time.
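
    The quantity in question is ΔCFF = Δτ_s + μ′Δσ_n, with Δτ_s the shear-stress change in the slip direction and Δσ_n the normal-stress change (positive for unclamping). A one-line helper; the effective friction coefficient of 0.4 is an assumption, not the study's value:

    ```python
    # Coulomb failure stress change resolved on a receiver fault:
    # dCFF = d_tau + mu_eff * d_sigma_n (shear in the slip direction;
    # normal-stress change positive for unclamping). mu_eff = 0.4 is assumed.
    def delta_cff(d_tau_pa: float, d_sigma_n_pa: float, mu_eff: float = 0.4) -> float:
        return d_tau_pa + mu_eff * d_sigma_n_pa

    # e.g. 0.05 MPa shear increase plus 0.02 MPa unclamping:
    print(delta_cff(0.05e6, 0.02e6) / 1e6, "MPa")   # 0.058 MPa, promoting failure
    ```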

  7. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide

    USGS Publications Warehouse

    Pollitz, Fred F.; Stein, Ross S.; Sevilgen, Volkan; Burgmann, Roland

    2012-01-01

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M≥5.5 have not been identified, with the lone exception of an M=6.9 quake remotely triggered by the surface waves from an M=6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M≥5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M≥7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M≥8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M≥5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.

  8. Evidence for a twelfth large earthquake on the southern Hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined, at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
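
    A probability of the quoted order can be reproduced with a renewal model built from the mean RI and coefficient of variation. The sketch below uses a lognormal renewal distribution and an illustrative evaluation date; the paper's exact probability model is not specified here:

    ```python
    # Conditional 30-yr rupture probability from a lognormal renewal model
    # (a common choice; the study's exact probability model is not specified here).
    from math import log, sqrt
    from scipy.stats import lognorm

    mean_ri, cov = 161.0, 0.40               # mean recurrence interval (yr) and COV
    sigma = sqrt(log(1.0 + cov**2))          # lognormal shape parameter from COV
    median = mean_ri / sqrt(1.0 + cov**2)    # lognormal median from mean and COV
    dist = lognorm(s=sigma, scale=median)

    elapsed = 140.0                          # yr since the 1868 rupture (illustrative)
    p = (dist.cdf(elapsed + 30.0) - dist.cdf(elapsed)) / dist.sf(elapsed)
    print(f"conditional 30-yr probability: {p:.2f}")   # same order as the quoted 29%
    ```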

  9. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide.

    PubMed

    Pollitz, Fred F; Stein, Ross S; Sevilgen, Volkan; Bürgmann, Roland

    2012-10-11

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified, with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≥ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.

  10. Relationship between the seismic scale of the 2011 northeast Japan earthquake and the incidence of acute myocardial infarction: A population-based study.

    PubMed

    Tanaka, Fumitaka; Makita, Shinji; Ito, Tomonori; Onoda, Toshiyuki; Sakata, Kiyomi; Nakamura, Motoyuki

    2015-06-01

    Previous studies have reported a relationship between large earthquakes and acute coronary events, but have yielded conflicting results. On March 11, 2011, a massive magnitude 9.0 earthquake hit the northeastern coast of Japan and generated repeated aftershocks. The aim of this study is to clarify the influence of this earthquake on the risk of acute myocardial infarction (AMI), including sudden cardiac death, based on data from a population-based analysis. The study subjects were residents of the northeast of Iwate prefecture, Japan. Cases corresponding to the definition of AMI according to the criteria of the World Health Organization MONICA project were registered from 4 weeks before to 8 weeks after the disaster and in the corresponding periods in 2009 and 2010. The relative risk of AMI was 2.03 (95% CI 1.55-2.66) for the 4-week period after the disaster compared with the corresponding periods in the preceding years. The number of events peaked within the first week after the earthquake, decreased to levels seen in the preceding years, and then increased again after high-magnitude aftershocks. The incidence of AMI was positively correlated with the seismic scale of the earthquake (r = 0.75, P < .01). This population-based study suggests that the increase in AMI events after a major earthquake varies depending on the seismic scale of the initial shock and each aftershock.

  11. Dynamics of large-scale isolated disturbances in the main peak of ionospheric electron concentration

    NASA Astrophysics Data System (ADS)

    Kalinin, U. K.; Romanchuk, A. A.; Sergeenko, N. P.; Shubin, V. N.

    2003-07-01

    The vertical sounding data at chains of ionosphere stations are used to obtain relative variations of electron concentration in the F2 region of the ionosphere. Specific isolated traveling large-scale irregularities are distinguished in the diurnal succession of the foF2 relative-variation records. The temporal shifts of the irregularities along the station chains yield their motion velocity (of the order of the speed of sound) and spatial scale (of the order of 3000-5000 km, with trajectory lengths of up to 10000 km). The motion trajectories of large-scale isolated irregularities that preceded earthquakes are reconstructed.

  12. Neutrino footprint in large scale structure

    NASA Astrophysics Data System (ADS)

    Garay, Carlos Peña; Verde, Licia; Jimenez, Raul

    2017-03-01

    Recent constraints on the sum of neutrino masses inferred from cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement would imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. However, a detected lack of small-scale power in cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independent of the hierarchy, neutrinos always leave a footprint on large, linear scales; its exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino-oscillation measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or by modifications of the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass-eigenstate ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.
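
    The mapping from a measured suppression to a mass scale rests on two standard linear-theory relations, quoted here only as a rule of thumb; the full calculation requires a Boltzmann code:

    ```latex
    % Rule-of-thumb relations (linear theory) behind the mass-scale inference:
    % small-scale power suppression and the neutrino density-to-mass conversion.
    \[
      \left.\frac{\Delta P}{P}\right|_{k \gg k_{\mathrm{fs}}} \approx -8\, f_\nu,
      \qquad
      f_\nu \equiv \frac{\Omega_\nu}{\Omega_m},
      \qquad
      \Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14~\mathrm{eV}}.
    \]
    ```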

  13. Scale up of large ALON windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Balasubramanian, Sreeram; Kashalikar, Uday; Foti, Robyn; Sastri, Suri

    2013-06-01

    Aluminum Oxynitride (ALON® Optical Ceramic) combines broadband transparency with excellent mechanical properties. ALON's cubic structure means that it is transparent in its polycrystalline form, allowing it to be manufactured by conventional powder-processing techniques. Surmet has established a robust manufacturing process, beginning with synthesis of ALON® powder, continuing through forming/heat treatment of blanks, and ending with optical fabrication of ALON® windows. Surmet has made significant progress in its production capability in recent years. Additional scale-up of Surmet's manufacturing capability, for larger sizes and higher quantities, is currently underway. ALON® transparent armor represents the state of the art in protection against armor-piercing threats, offering a factor-of-two savings in weight and thickness over conventional glass laminates. Tiled and monolithic windows have been successfully produced and tested against a range of threats. Large ALON® windows are also of interest for a range of visible to mid-wave infrared (MWIR) sensor applications. These applications often have stressing imaging requirements, which in turn require that these large windows have optical characteristics including excellent homogeneity of the index of refraction and very low stress birefringence. Surmet is currently scaling up its production facility to make and deliver ALON® monolithic windows as large as ~19 x 36 in. Additionally, Surmet plans to scale up to windows of ~3 ft x 3 ft in the coming years. Recent results from this scale-up and characterization of the resulting blanks will be presented.

  14. LARGE EARTHQUAKES AND TSUNAMIS AT THE SAMOA CORNER IN THE CONTEXT OF THE 2009 SAMOA EVENT

    NASA Astrophysics Data System (ADS)

    Okal, E.; Kirby, S. H.

    2009-12-01

    We examine the seismic properties of the 2009 Samoa earthquake in the context of its tsunami, the first in 45 years to cause significant damage on U.S. soil. The event has a normal-faulting geometry near the bend ending the 3000-km-long Tonga-Kermadec subduction zone. Other large normal-faulting tsunamigenic earthquakes include the 1933 Sanriku, 1977 Sumba and 2007 Kuril events. The 2009 Samoa earthquake shares with such intraplate earthquakes a slightly above average E/M0 (Θ = -4.82), but has a more complex geometry, a relatively long duration, and a large CLVD component (11%). Same-day seismicity appears detached to the SW of the fault plane, and 7 out of the 8 regional CMT solutions following the main shock are rotated at least 69 deg. away from the mainshock mechanism. This points to stress transfer rather than genuine aftershocks, in a pattern reminiscent of the 1933 Sanriku earthquake. Most of the seismic moment release around the Samoa corner involves normal faulting. To the South (16.5-18 deg. S; 1975, 1978, 1987, 2006), solutions consistently feature a typical intraplate lithospheric break. To the NW (15.5 deg. S), the 1981 event features a tear in the plate along Govers and Wortel's [2005] STEP model. The 2009 event is more complex, apparently involving rupture along a quasi-NS plane. An event presumably similar to 2009 took place on 26 June 1917, for which there is a report of a 12-m tsunami at Pago Pago. That event relocates 200 km to the NW, but its error ellipse includes the 2009 epicenter. The 1917 moment, tentatively 1.3 × 10^28 dyn·cm, is comparable to that of 2009. As suggested by Solov'ev and Go [1984], the report of a 12-m wave in Samoa during the 01 May 1917 Kermadec earthquake is most probably erroneous. We will present studies of the other large earthquakes of the past century in the area, notably the confirmed tsunamigenic events of 01 Sep. 1981 (damage on Savaii), 26 Dec 1975 (24 cm at PPG), 02 Apr 1977 (12 cm at PPG), 06 Oct 1987 and 07

  15. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  16. Scaling of Seismic Moment with Recurrence Interval for Small Repeating Earthquakes Simulated on Rate-and-State Faults

    NASA Astrophysics Data System (ADS)

    Chen, T.; Lapusta, N.

    2006-12-01

    Observations suggest that the recurrence time T and seismic moment M0 of small repeating earthquakes in Parkfield scale as T ∝ M_0^{0.17} (Nadeau and Johnson, 1998). However, a simple conceptual model of these earthquakes as circular ruptures, with stress drop independent of the seismic moment and slip proportional to the recurrence time T, results in T ∝ M_0^{1/3}. Several explanations for this discrepancy have been proposed. Nadeau and Johnson (1998) suggested that stress drop depends on the seismic moment and is much higher for small events than typical estimates based on seismic spectra. Sammis and Rice (2001) modeled repeating earthquakes at a border between large locked and creeping patches to get T ∝ M_0^{1/6} and reasonable stress drops. Beeler et al. (2001) considered a fixed-area patch governed by a conceptual law that incorporated strain-hardening and showed that aseismic slip on the patch can explain the observed scaling relation. In this study, we provide an alternative physical basis, grounded in laboratory-derived rate and state friction laws, for the idea of Beeler et al. (2001) that much of the overall slip at the sites of small repeating earthquakes may be accumulated aseismically. We simulate repeating events in a 3D model of a strike-slip fault embedded in an elastic space and governed by rate and state friction laws. The fault has a small circular patch (2-20 m in diameter) with steady-state rate-weakening properties, with the rest of the fault governed by steady-state rate strengthening. The simulated fault segment is 40 m by 40 m, with periodic boundary conditions. We use values of rate and state parameters typical of laboratory experiments, with characteristic slip of order several microns. The model incorporates tectonic-like loading equivalent to the plate rate of 23 mm/year and all dynamic effects during unstable sliding. Our simulations use the 3D methodology of Liu and Lapusta (AGU, 2005) and fully resolve all aspects of
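
    The discrepancy at issue is between exponents in T ∝ M_0^p. A trivial numerical comparison of the three exponents discussed above, over an illustrative moment range:

    ```python
    # Numerical comparison of candidate recurrence-time scalings T ~ M0^p.
    import numpy as np

    M0 = np.logspace(10, 14, 5)               # moments, N m (illustrative range)
    for p, label in [(1/3, "1/3"), (1/6, "1/6"), (0.17, "0.17 (observed)")]:
        T_rel = (M0 / M0[0]) ** p             # recurrence time relative to smallest event
        print(f"p = {label:>15}: T/T_min =", np.round(T_rel, 2))
    ```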

  17. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit decrease in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) the slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moments and the geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
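
    Under the exponential moment-frequency model N(>M_0) ∝ exp(-M_0/M_char), the characteristic moment is estimated by the sample mean. A synthetic sanity check; the catalog itself is not reproduced here:

    ```python
    # Under N(>M0) ~ exp(-M0 / M0_char), the characteristic moment is the sample mean.
    import numpy as np

    rng = np.random.default_rng(1)
    moments = rng.exponential(scale=2.0e11, size=5000)  # synthetic LFE moments, N m
    print(f"characteristic moment: {moments.mean():.2e} N m")  # ~2e11, cf. the value above
    ```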

  18. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    USGS Publications Warehouse

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.

  19. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  20. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw ≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw ≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the
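
    The W phase band, roughly 100-1000 s (0.001-0.01 Hz), can be isolated with an ordinary band-pass filter; the magnitude-dependent corner selection described above is not reproduced here. A sketch on a synthetic one-sample-per-second trace:

    ```python
    # Band-pass in the W phase band (roughly 100-1000 s, i.e. 0.001-0.01 Hz).
    # Magnitude-dependent corner selection, as described above, is not reproduced.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 1.0                                           # 1 sample/s (long-period channel)
    sos = butter(4, [0.001, 0.01], btype="bandpass", fs=fs, output="sos")

    t = np.arange(0.0, 4000.0, 1.0 / fs)
    trace = np.sin(2 * np.pi * 0.005 * t) + 0.5 * np.sin(2 * np.pi * 0.2 * t)
    w_band = sosfiltfilt(sos, trace)                   # long-period content retained
    print(np.round(w_band[:3], 3))
    ```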

  1. A large silent earthquake and the future rupture of the Guerrero seismic gap

    NASA Astrophysics Data System (ADS)

    Kostoglodov, V.; Lowry, A.; Singh, S.; Larson, K.; Santiago, J.; Franco, S.; Bilham, R.

    2003-04-01

    The largest global earthquakes typically occur at subduction zones, at the seismogenic boundary between two colliding tectonic plates. These earthquakes release elastic strains accumulated over many decades of plate motion. Forecasts of these events have large errors resulting from poor knowledge of the seismic cycle. The discovery of slow slip events or "silent earthquakes" in Japan, Alaska, Cascadia and Mexico provides a new glimmer of hope. In these subduction zones, the seismogenic part of the plate interface is loaded not steadily, as hitherto believed, but incrementally, partitioning the stress buildup with the slow slip events. If slow aseismic slip is limited to the region downdip of the future rupture zone, slip events may increase the stress at the base of the seismogenic region, incrementing it closer to failure. However, if some aseismic slip occurs on the future rupture zone, the partitioning may significantly reduce the stress buildup rate (SBR) and delay a future large earthquake. Here we report characteristics of the largest slow earthquake observed to date (Mw 7.5), and its implications for future failure of the Guerrero seismic gap, Mexico. The silent earthquake began in October 2001 and lasted for 6-7 months. Slow slip produced measurable displacements over an area of 550 × 250 km². Average slip on the interface was about 10 cm and the equivalent magnitude, Mw, was 7.5. The shallow, subhorizontal configuration of the plate interface in Guerrero is a controlling factor in the physical conditions favorable for such extensive slow slip. The total coupled zone in Guerrero is 120-170 km wide, while the seismogenic, shallowest portion is only 50 km. This future rupture zone may slip contemporaneously with the deeper aseismic slip, thereby reducing the SBR. The slip partitioning between the seismogenic and transition coupled zones may diminish the SBR by up to 50%. These two factors are probably responsible for the long period of quiescence (at least since 1911) on the Guerrero seismic gap
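
    The quoted Mw can be sanity-checked from the slip area and average slip via M_0 = μAu and Mw = (2/3)(log10 M_0 - 9.1). With an assumed shear modulus of 30 GPa the result lands near the reported value; a softer modulus, plausible for the shallow interface, brings it closer still:

    ```python
    # Sanity check of the quoted Mw from the abstract's area and slip figures.
    from math import log10

    mu = 3.0e10                      # shear modulus, Pa (assumed)
    area = 550e3 * 250e3             # slipped area, m^2 (from the abstract)
    slip = 0.10                      # average slip, m (from the abstract)

    m0 = mu * area * slip                            # seismic moment, N m
    mw = (2.0 / 3.0) * (log10(m0) - 9.1)             # moment magnitude
    print(f"M0 = {m0:.2e} N m  ->  Mw = {mw:.2f}")   # ~7.7; cf. the reported Mw 7.5
    ```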

  2. A new paradigm for large earthquakes in stable continental plate interiors

    NASA Astrophysics Data System (ADS)

    Calais, E.; Camelbeeck, T.; Stein, S.; Liu, M.; Craig, T. J.

    2016-10-01

    Large earthquakes within stable continental regions (SCR) show that significant amounts of elastic strain can be released on geological structures far from plate boundary faults, where the vast majority of the Earth's seismic activity takes place. SCR earthquakes show spatial and temporal patterns that differ from those at plate boundaries and occur in regions where tectonic loading rates are negligible. However, in the absence of a more appropriate model, they are traditionally viewed as analogous to their plate boundary counterparts, occurring when the accrual of tectonic stress localized at long-lived active faults reaches failure threshold. Here we argue that SCR earthquakes are better explained by transient perturbations of local stress or fault strength that release elastic energy from a prestressed lithosphere. As a result, SCR earthquakes can occur in regions with no previous seismicity and no surface evidence for strain accumulation. They need not repeat, since the tectonic loading rate is close to zero. Therefore, concepts of recurrence time or fault slip rate do not apply. As a consequence, seismic hazard in SCRs is likely more spatially distributed than indicated by paleoearthquakes, current seismicity, or geodetic strain rates.

  3. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area in the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies have been available so far. Furthermore, the seismological literature (from 1979 until 2000) elaborated only a partial summary of it, based mainly on Arabic sources. The reconstructed area of major effects was very incomplete, making the derived seismic parameters unreliable. This earthquake is in fact one of the most highly documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and the third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, Vulgar French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study has addressed the question of whether there was just one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.

  4. Limit of strain partitioning in the Himalaya marked by large earthquakes in western Nepal

    NASA Astrophysics Data System (ADS)

    Murphy, M. A.; Taylor, M. H.; Gosse, J.; Silver, C. R. P.; Whipp, D. M.; Beaumont, C.

    2014-01-01

    Great earthquakes and high seismic risk in the Himalaya are thought to be focused near the range front, where the Indian Plate slides beneath the mountain range. However, the Himalaya is curved and plate convergence becomes increasingly oblique westwards. Strain in the western Himalaya is hypothesized to be partitioned, such that western parts move northwestwards with respect to the central Himalaya. Here we use field data to identify a 63-km-long earthquake rupture on a previously unrecognized fault in the western Himalaya, far from the range front. We use radiocarbon dating to show that one or more earthquakes created 10 m of surface displacement on the fault between AD 1165 and 1400. During this time interval, large range-front earthquakes also occurred. We suggest that the active fault we identified is part of a larger fault system, the Western Nepal Fault System, which cuts obliquely across the Himalaya. We combine our observations with a geodynamical model to show that the Western Nepal Fault System marks the termination of the strain-partitioned region of the western Himalaya and comprises a first-order structure in the three-dimensional displacement field of the mountain range. Our findings also identify a potential seismic hazard within the interior of the Himalaya that may necessitate significant changes to seismic hazard assessments.

  5. A New Paradigm for Large Earthquakes in Stable Continental Plate Interiors

    NASA Astrophysics Data System (ADS)

    Calais, Eric; Camelbeeck, Thierry; Stein, Seth; Liu, Mian; Craig, Tim

    2017-04-01

    Large earthquakes within stable continental regions (SCR) show that significant amounts of elastic strain can be released on geological structures far from plate boundary faults, where the vast majority of the Earth's seismic activity takes place. SCR earthquakes show spatial and temporal patterns that differ from those at plate boundaries and occur in regions where tectonic loading rates are negligible. However, in the absence of a more appropriate model, they are traditionally viewed as analogous to their plate boundary counterparts, occurring when the accrual of tectonic stress localized at long-lived active faults reaches failure threshold. Here we argue that SCR earthquakes are better explained by transient perturbations of local stress or fault strength that release elastic energy from a pre-stressed lithosphere. As a result, SCR earthquakes can occur in regions with no previous seismicity and no surface evidence for strain accumulation. They need not repeat, since the tectonic loading rate is close to zero. Therefore, concepts of recurrence time or fault slip rate do not apply. As a consequence, seismic hazard in SCRs is likely more spatially distributed than indicated by paleoearthquakes, current seismicity, or geodetic strain rates.

  6. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (~10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  7. Modeling Human Behavior at a Large Scale

    DTIC Science & Technology

    2012-01-01

    Modeling Human Behavior at a Large Scale, by Adam Sadilek. Submitted in partial fulfillment of the requirements for the degree Doctor of Philosophy.

  8. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    NAAPS and COAMPS can forecast aerosol species up to six days in advance anywhere on the globe and are particularly useful for forecasts of dust storms. With increasing dust storms due to climate change and land use changes in desert regions, bacteria in large-scale dust storms are expected to significantly impact warm and ice cloud formation, human health, and ecosystems globally.

  9. Documenting large earthquakes similar to the 2011 Tohoku-oki earthquake from sediments deposited in the Japan Trench over the past 1500 years

    NASA Astrophysics Data System (ADS)

    Ikehara, Ken; Kanamatsu, Toshiya; Nagahashi, Yoshitaka; Strasser, Michael; Fink, Hiske; Usami, Kazuko; Irino, Tomohisa; Wefer, Gerold

    2016-07-01

    The 2011 Tohoku-oki earthquake and tsunami was the most destructive geohazard in Japanese history. However, little is known of the past recurrence of large earthquakes along the Japan Trench. Deep-sea turbidites are potential candidates for understanding the history of such earthquakes. Core samples were collected from three thick turbidite units on the Japan Trench floor near the epicenter of the 2011 event. The uppermost unit (Unit TT1) consists of amalgamated diatomaceous mud (30-60 cm thick) that was deposited from turbidity currents triggered by shallow subsurface instability on the lower trench slope associated with strong ground motion during the 2011 Tohoku-oki earthquake. Older thick turbidite units (Units TT2 and TT3) also consist of several amalgamated subunits that contain thick sand layers in their lower parts. Sedimentological characteristics and the tectonic and bathymetric settings of the Japan Trench floor indicate that these turbidites also originated from two older large earthquakes, potentially similar to the 2011 Tohoku-oki earthquake. A thin tephra layer between Units TT2 and TT3 constrains the age of these earthquakes. Geochemical analysis of volcanic glass shards within the tephra layer indicates that it is correlative with the Towada-a tephra (AD 915) from the Towada volcano in northeastern Japan. The stratigraphy of the Japan Trench turbidites resembles that of onshore tsunami deposits on the Sendai and Ishinomaki plains, indicating that the cored uppermost succession of the Japan Trench comprises a 1500-yr-old record that includes the sedimentary fingerprint of the historical Jogan earthquake of AD 869.

  10. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized for capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  11. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  12. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    NASA Astrophysics Data System (ADS)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    Rays from the hypocenter around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary of the low- and high-V zones. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated the strain and resulted in the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. References: Matsubara, M. and K. Obara (2011) The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667. Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004) Recent progress of seismic observation networks in Japan-Hi-net, F-net, K-NET and KiK-net, Research News Earth Planets Space, 56, xv-xxviii.

  13. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

    We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6 while for ML it is 1.0. The quantity (Mw - ML) increases linearly in the analysed magnitude range as ML decreases. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
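
    A minimal sketch of the plateau estimation, assuming a Brune-type source model is an acceptable stand-in for the authors' stacked-spectrum procedure; the function names and starting values are invented for illustration.

    ```python
    # Sketch: fit a Brune-type spectrum to a corrected, stacked spectrum
    # to recover the low-frequency asymptote Omega_0 and corner frequency.
    import numpy as np
    from scipy.optimize import curve_fit

    def brune(f, omega0, fc):
        # Omega(f) = Omega_0 / (1 + (f/fc)^2): flat at low f, f^-2 falloff.
        return omega0 / (1.0 + (f / fc) ** 2)

    def fit_plateau(freqs, spectrum):
        p0 = [spectrum[0], freqs[len(freqs) // 2]]   # crude starting guess
        (omega0, fc), _ = curve_fit(brune, freqs, spectrum, p0=p0, maxfev=10000)
        return omega0, fc

    def moment_to_mw(m0):
        # Hanks & Kanamori scaling, with m0 in N m.
        return (2.0 / 3.0) * (np.log10(m0) - 9.1)
    ```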

  14. The SCEC-USGS Dynamic Earthquake Rupture Code Comparison Exercise - Simulations of Large Earthquakes and Strong Ground Motions

    NASA Astrophysics Data System (ADS)

    Harris, R.

    2015-12-01

    I summarize the progress of the Southern California Earthquake Center (SCEC) and U.S. Geological Survey (USGS) Dynamic Rupture Code Comparison Group, which examines whether the results produced by multiple researchers' earthquake simulation codes agree with each other when computing benchmark scenarios of dynamically propagating earthquake ruptures. These types of computer simulations have no analytical solutions with which to compare, so we use qualitative and quantitative inter-code comparisons to check that they are operating satisfactorily. To date we have tested the codes against benchmark exercises that incorporate a range of features, including single and multiple planar faults, single rough faults, slip-weakening, rate-state, and thermal pressurization friction, elastic and visco-plastic off-fault behavior, complete stress drops that lead to extreme ground motion, heterogeneous initial stresses, and heterogeneous material (rock) structure. Our goal is reproducibility, and we focus on the types of earthquake-simulation assumptions that have been or will be used in basic studies of earthquake physics, or in direct applications to specific earthquake hazard problems. Our group's goal is to make sure that when our earthquake-simulation codes compute these types of earthquake scenarios, along with the resulting simulated strong ground shaking, the codes are operating as expected. For more introductory information about our group and our work, please see our group's overview papers, Harris et al., Seismological Research Letters, 2009, and Harris et al., Seismological Research Letters, 2011, along with our website, scecdata.usc.edu/cvws.

  15. Large scale preparation of pure phycobiliproteins.

    PubMed

    Padgett, M P; Krogmann, D W

    1987-01-01

    This paper describes simple procedures for the purification of large amounts of phycocyanin and allophycocyanin from the cyanobacterium Microcystis aeruginosa. A homogeneous natural bloom of this organism provided hundreds of kilograms of cells. Large samples of cells were broken by freezing and thawing. Repeated extraction of the broken cells with distilled water released phycocyanin first, then allophycocyanin, providing supporting evidence for the current models of phycobilisome structure. The very low ionic strength of the aqueous extracts allowed allophycocyanin release in a particulate form so that this protein could be easily concentrated by centrifugation. Other proteins in the extract were enriched and concentrated by large scale membrane filtration. The biliproteins were purified to homogeneity by chromatography on DEAE cellulose. Purity was established by HPLC and by N-terminal amino acid sequence analysis. The proteins were examined for stability at various pH values and exposures to visible light.

  16. Mitigating the effects of large subduction-zone earthquakes in Western Sumatra

    NASA Astrophysics Data System (ADS)

    Sieh, K.; Stebbins, C.; Natawidjaja, D. H.; Suwargadi, B. W.

    2004-12-01

    No giant earthquakes have struck the outer-arc islands of western Sumatra since the sequence of 1797, 1833 and 1861. Paleoseismic studies of coral microatolls reveal that failure of the subduction interface occurs in clusters of such earthquakes about every 230 years. Thus, the next such sequence may well be no more than a few decades away. In the meantime, GPS measurements and paleogeodetic observations show that the islands continue to submerge, dragged down by the downgoing oceanic slab, in preparation for the next failures of the subduction interface. Uplift of the islands and seafloor by one to two meters during large events leads to large tsunamis and substantial changes in the coastal environments of the islands, including the seaward retreat of fringing reef, beach and mangrove environments. Having spent a decade characterizing the seismic history of western coastal Sumatra, we are now beginning to work with the inhabitants of the islands and the mainland coast to mitigate the associated hazards. Thus far, we have begun to create and distribute posters and brochures aimed at educating the islanders about their natural tectonic environment and guiding them in preparing for future large earthquakes and tsunamis. We are also installing a continuous GPS network, in order to monitor ongoing strain accumulation and possible transients.

  17. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  18. Numerical modeling of the deformations associated with large subduction earthquakes through the seismic cycle

    NASA Astrophysics Data System (ADS)

    Fleitout, L.; Trubienko, O.; Garaud, J.; Vigny, C.; Cailletaud, G.; Simons, W. J.; Satirapod, C.; Shestakov, N.

    2012-12-01

    A 3D finite element code (Zebulon-Zset) is used to model deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes: Sumatra, Japan and Chile. The mesh, featuring a broad spherical shell portion with a viscoelastic asthenosphere, is refined close to the subduction zones. The model is constrained by 6 years of postseismic data in the Sumatra area and over a year of data for Japan and Chile, plus preseismic data in the three areas. The coseismic slip distribution on the subduction plane is inverted from the coseismic displacements using the finite element program and provides the initial stresses. The predicted horizontal postseismic displacements depend upon the thicknesses of the elastic plate and of the low viscosity asthenosphere. Non-dimensionalized by the coseismic displacements, they present an almost uniform value between 500 km and 1500 km from the trench for elastic plates 80 km thick. The time evolution of the velocities is a function of the creep law (Maxwell, Burgers or power-law creep). Moreover, the forward models predict a sizable far-field subsidence, also with a spatial distribution which varies with the geometry of the asthenosphere and lithosphere. Slip on the subduction interface does not induce such a subsidence. The observed horizontal velocities, divided by the coseismic displacement, present a similar pattern as a function of time and distance from the trench for the three areas, indicative of similar lithospheric and asthenospheric thicknesses and asthenospheric viscosity. This pattern cannot be fitted with power-law creep in the asthenosphere but indicates a lithosphere 60 to 90 km thick and an asthenosphere of thickness of the order of 100 km with a Burgers rheology represented by a Kelvin-Voigt element with a viscosity of 3×10^18 Pa s and μ_Kelvin = μ_elastic/3. A second Kelvin-Voigt element with very limited amplitude may explain some characteristics of the short time-scale signal. The postseismic subsidence is
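
    The quoted Kelvin-Voigt parameters imply a characteristic transient relaxation time tau = eta / mu_Kelvin. A worked order-of-magnitude check, assuming a shear modulus that the abstract does not give:

    ```python
    # Relaxation time of the quoted Kelvin-Voigt element.
    eta = 3e18            # Pa s, viscosity from the abstract
    mu_elastic = 70e9     # Pa, assumed shear modulus (not given above)
    mu_kelvin = mu_elastic / 3.0     # relation quoted in the abstract

    tau = eta / mu_kelvin            # Kelvin element relaxation time, s
    print(f"tau = {tau:.2e} s  (~{tau / 3.156e7:.1f} yr)")   # a few years
    ```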

  19. Detecting remotely triggered microseismicity around Changbaishan Volcano following nuclear explosions in North Korea and large distant earthquakes around the world

    NASA Astrophysics Data System (ADS)

    Liu, Guoming; Li, Chenyu; Peng, Zhigang; Li, Xuemei; Wu, Jing

    2017-05-01

    We conduct a systematic survey of locally triggered earthquakes following large distant earthquakes at Changbaishan Volcano, an active intraplate volcano on the border between China and North Korea. We examine waveforms of distant earthquakes recorded at broadband station Changbaishan (CBS) near the volcano with estimated dynamic stresses over 5 kPa between 2000 and 2016. Out of 26 selected distant earthquakes, three show positive evidence of triggering during their large-amplitude surface waves. The earthquakes with positive or possible evidence of triggering generated larger long-period surface waves, indicating that these are more efficient in triggering microseismicity. In addition, since 2006 North Korea has conducted five underground nuclear explosion (UNE) tests only 140 km away from Changbaishan Volcano. By systematically examining waveforms of these UNEs recorded at station CBS, we find that none of them triggered microearthquakes at Changbaishan Volcano.
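
    The 5 kPa screening threshold can be related to ground motion through the common plane-wave approximation, dynamic stress ~ G x (particle velocity) / (phase velocity). A sketch with assumed representative values for G and the surface-wave phase velocity:

    ```python
    # Peak dynamic stress from peak ground velocity (values assumed).
    G = 3.0e10    # Pa, crustal shear modulus (assumed)
    c = 3.5e3     # m/s, surface-wave phase velocity (assumed)

    def dynamic_stress(pgv):
        # Plane-wave approximation: stress ~ G * particle velocity / phase velocity.
        return G * pgv / c    # Pa

    # Peak ground velocity needed to reach the 5 kPa screening threshold:
    pgv_needed = 5e3 * c / G
    print(f"PGV ~ {pgv_needed * 1e3:.2f} mm/s")   # ~0.6 mm/s
    ```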

  20. Chronology of historical tsunamis in Mexico and its relation to large earthquakes along the subduction zone

    NASA Astrophysics Data System (ADS)

    Suarez, G.; Mortera, C.

    2013-05-01

    The chronology of historical earthquakes along the subduction zone in Mexico spans a time period of approximately 400 years. Although the population density along the coast of Mexico has always been low, relative to that of central Mexico, several reports of large subduction earthquakes include references to tsunamis invading the southern coast of Mexico. Here we present a chronology of historical tsunamis affecting the Pacific coast of Mexico and compare it with the historical record of subduction events and with the existing Mexican and worldwide catalogs of tsunamis in the Pacific basin. Due to the geographical orientation of the Pacific coast of Mexico, tsunamis generated on the other subduction zones of the Pacific have not had damaging effects in the country. Among the tsunamis generated by local earthquakes, the largest one by far is the one produced by the earthquake of 28 March 1787. The reported tsunami has an inundation area that reaches over 6 km inland. The length of the coast where the tsunami was reported extends for over 450 km. In the last 100 years two large tsunamis have been reported along the Pacific coast of Mexico. On 22 June 1932 a tsunami with reported wave heights of up to 11 m hit the coast of Jalisco and Colima. The town of Cuyutlan was heavily damaged and approximately 50 people lost their lives due to the impact of the tsunami. This unusual tsunami was generated by an aftershock (M 6.9) of the large 3 June 1932 event (M 8.1). The main shock of 3 June did not produce a perceptible tsunami. It has been proposed that the 22 June event was a tsunami earthquake generated on the shallow part of the subduction zone. On 16 November 1925 an unusual tsunami was reported in the town of Zihuatanejo in the state of Guerrero, Mexico. No earthquake on the Pacific rim occurred at the same time as this tsunami, and the historical record of hurricanes and tropical storms does not list the presence of a meteorological disturbance that

  1. Active fault slip and potential large magnitude earthquakes within the stable Kazakh Platform (Central Kazakhstan)

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J.; Walker, R. T.; Abdrakhmatov, K.; Campbell, G.; Mukambayev, A.; Rhodes, E.; Rood, D. H.

    2016-12-01

    The Tien Shan mountains of Central Asia are characterized at the present day by abundant range-bounding E-W thrust faults, and several major NW-SE striking right-lateral faults, which cut across the stable Kazakh Platform, terminating at (or within) the Tien Shan. The various E-W thrust faults are associated with significant seismicity over the last few hundred years. In sharp contrast, the NW-SE right-lateral faults are not associated with any major historical earthquakes, and thus it remains unclear whether these Paleozoic structures have been reactivated during the Late Cenozoic. The Dzhalair-Naiman fault (DNF) is one such fault, comprising several fault segments striking NW-SE across the Central Kazakh Platform over a distance of 600+ km. Unlike similar NW-SE right-lateral faults in the region (e.g. Talas-Fergana and Dzhungarian faults), the DNF is confined to the Kazakh Platform and does not penetrate into the Tien Shan. Regional GPS velocities indicate slow (<2 mm/yr) deformation rates north of the Tien Shan, and rare, deep earthquakes in the Platform suggest that Platform-interior faults, such as the DNF, may have the potential to generate infrequent very large magnitude earthquakes. We investigate the Chokpar segment of the DNF (60+ km long), which lies 60 km north of Bishkek. We use Quaternary dating techniques (IRSL and 10Be exposure dating) to date several abandoned and incised alluvial fans which are now right-laterally displaced across the fault. Stream channels are offset by 30+ m (measured from a stereo Pleiades DEM and GPS survey data), while the terraces through which they cut were abandoned in the Mid-to-Late Holocene, suggesting a relatively high slip rate over the Late Quaternary (higher than expected from regional GPS velocities). However, given the potential for the DNF to slip in very large infrequent earthquakes (with 10+ m coseismic displacements), our slip-rate calculations may also be subject to additional errors related to the low
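
    The quoted offsets and Holocene terrace ages imply a simple offset-over-age slip rate, as in the sketch below; the age used is an illustrative stand-in, since the abstract does not report the measured IRSL/10Be ages.

    ```python
    # Illustrative slip-rate arithmetic: channel offset / terrace age.
    offset_m = 30.0     # m, channel offset quoted above
    age_yr = 6000.0     # yr, stand-in Mid-Holocene abandonment age (assumed)

    rate_mm_per_yr = offset_m * 1e3 / age_yr
    print(f"slip rate ~ {rate_mm_per_yr:.0f} mm/yr")   # 5 mm/yr vs <2 mm/yr from GPS
    ```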

  2. Potential for Large Transpressional Earthquakes along the Santa Cruz-Catalina Ridge, California Continental Borderland

    NASA Astrophysics Data System (ADS)

    Legg, M.; Kohler, M. D.; Weeraratne, D. S.; Castillo, C. M.

    2015-12-01

    Transpressional fault systems comprise networks of high-angle strike-slip and more gently-dipping oblique-slip faults. Large oblique-slip earthquakes may involve complex ruptures of multiple faults with both strike-slip and dip-slip. Geophysical data including high-resolution multibeam bathymetry maps, multichannel seismic reflection (MCS) profiles, and relocated seismicity catalogs enable detailed mapping of the 3-D structure of seismogenic fault systems offshore in the California Continental Borderland. Seafloor morphology along the San Clemente fault system displays numerous features associated with active strike-slip faulting including scarps, linear ridges and valleys, and offset channels. Detailed maps of the seafloor faulting have been produced along more than 400 km of the fault zone. Interpretation of fault geometry has been extended to shallow crustal depths using 2-D MCS profiles and to seismogenic depths using catalogs of relocated southern California seismicity. We examine the 3-D fault character along the transpressional Santa Cruz-Catalina Ridge (SCCR) section of the fault system to investigate the potential for large earthquakes involving multi-fault ruptures. The 1981 Santa Barbara Island (M6.0) earthquake was a right-slip event on a vertical fault zone along the northeast flank of the SCCR. Aftershock hypocenters define at least three sub-parallel high-angle fault surfaces that lie beneath a hillside valley. Mainshock rupture for this moderate earthquake appears to have been bilateral, initiating at a small discontinuity in the fault geometry (~5-km pressure ridge) near Kidney Bank. The rupture terminated to the southeast at a significant releasing step-over or bend and to the northeast within a small (~10-km) restraining bend. An aftershock cluster occurred beyond the southeast asperity along the East San Clemente fault. Active transpression is manifest by reverse-slip earthquakes located in the region adjacent to the principal displacement zone

  3. Estimating High Frequency Energy Radiation of Large Earthquakes by Image Deconvolution Back-Projection

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2017-04-01

    With the recent establishment of regional dense seismic arrays (e.g., Hi-net in Japan, USArray in North America), advanced digital data processing has enabled improvement of back-projection methods that have become popular and are widely used to track the rupture process of moderate to large earthquakes. Back-projection methods can be classified into two groups, one using time domain analyses and the other frequency domain analyses, with minor technical differences within each group. Here we focus on back-projection performed in the time domain using seismic waveforms recorded at teleseismic distances (30-90 degrees). For the standard back-projection (Ishii et al., 2005), teleseismic P waves that are recorded on vertical components of a dense seismic array are analyzed. Since seismic arrays have limited resolution and several assumptions are made (e.g., that the observed waveforms contain only direct P waves and that every trace has a completely identical waveform), the final images from back-projections show stacked amplitudes (or correlation coefficients) that are often smeared in both the time and space domains. Although it might not be difficult to reveal the overall source process for a giant seismic source such as the 2004 Mw 9.0 Sumatra earthquake, where the source extent is about 1400 km (Ishii et al., 2005; Krüger and Ohrnberger, 2005), there are more problems in imaging the detailed processes of earthquakes with smaller source dimensions, such as an M 7.5 earthquake with a source extent of 100-150 km. For smaller earthquakes, it is more difficult to resolve the spatial distribution of the radiated energy. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP), to determine the sources of high frequency energy radiation by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a
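
    The stacking step that standard time-domain back-projection builds on can be sketched as a shift-and-stack over a grid of candidate source points, assuming precomputed travel times (e.g., from a 1-D Earth model). This toy version illustrates the conventional approach, not the IDBP deconvolution itself.

    ```python
    # Toy time-domain back-projection: shift-and-stack P waveforms.
    import numpy as np

    def back_project(traces, dt, travel_times, origin_time=0.0):
        """traces: (n_sta, n_samp) array of P waveforms;
        travel_times: (n_grid, n_sta) predicted times in seconds.
        Returns the stacked amplitude for each candidate grid point."""
        n_grid, n_sta = travel_times.shape
        image = np.zeros(n_grid)
        for g in range(n_grid):
            stack = 0.0
            for s in range(n_sta):
                idx = int(round((origin_time + travel_times[g, s]) / dt))
                if 0 <= idx < traces.shape[1]:
                    stack += traces[s, idx]
            image[g] = stack / n_sta
        return image
    ```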

  4. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: 1. Several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets. 2. Several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme. 3. Several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. 4. In some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  5. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; Marriage, Tobias; McMahon, Jeff; Miller, Nathan; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2017-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a ground based telescope array designed to measure the large-angular scale polarization signal of the Cosmic Microwave Background (CMB). The large-angular scale CMB polarization measurement is essential for a precise determination of the optical depth to reionization (from the E-mode polarization) and a characterization of inflation from the predicted polarization pattern imprinted on the CMB by gravitational waves in the early universe (from the B-mode polarization). CLASS will characterize the primordial tensor-to-scalar ratio, r, to 0.01 (95% CL). CLASS is uniquely designed to be sensitive to the primordial B-mode signal across the entire range of angular scales where it could possibly dominate over the lensing signal that converts E-modes to B-modes, while also making multi-frequency observations both above and below the frequency where the CMB-to-foreground signal ratio is at its maximum. The design enables CLASS to make a definitive cosmic-variance-limited measurement of the optical depth to scattering from reionization. CLASS is an array of 4 telescopes operating at approximately 40, 90, 150, and 220 GHz. CLASS is located high in the Andes mountains in the Atacama Desert of northern Chile. The location of the CLASS site at high altitude near the equator minimizes atmospheric emission while allowing for daily mapping of ~70% of the sky. A rapid front end Variable-delay Polarization Modulator (VPM) and low noise Transition Edge Sensor (TES) detectors allow for high sensitivity and low systematic error mapping of the CMB polarization at large angular scales. The VPM, detectors, and their coupling structures were all uniquely designed and built for CLASS. We present here an overview of the CLASS scientific strategy, instrument design, and current progress. Particular attention is given to the development and status of the Q-band receiver currently surveying the sky from the Atacama Desert and the development of

  6. More than 35 large earthquakes broke the Fucino faults (Central Italy) in the last 15 ka, as revealed from in situ 36Cl exposure dating

    NASA Astrophysics Data System (ADS)

    Benedetti, L. C.; Schlagenhauf, A.; Manighetti, I.; Gaudemer, Y.; Finkel, R. C.; Malavieille, J.; Bourles, D. L.

    2012-12-01

    Using 36Cl exposure dating, we recover the Holocene earthquake history of 7 of the large seismogenic normal faults that form the broad-scale Fucino fault system in Central Italy. It is the first time that the seismic history has been quantified at such a broad scale, encompassing many interacting faults. Some of the Fucino faults have recently broken in two devastating large earthquakes, Avezzano in 1915 (M ≈ 7), and L'Aquila in 2009 (Mw 6.3). We focus here on 7 major faults of the Fucino system, which form two distinct NNW-trending networks, Fucino North (FN analyzed faults: Velino-Magnola, VM; Campo-Felice, CF; Fiamigniano, FI) and Fucino South (FS analyzed faults: San Sebastiano, SB; Parasano, PR; Trasacco, TR), separated by the oblique Tre-Monti fault (TM). Each network includes a major fault (VM in FN; SB in FS) associated with secondary faults (CF, FI, PR, TR). We collected 800 samples from the well-preserved limestone scarps of the 7 faults and modeled their 36Cl concentrations to derive their exhumation history and hence quantify the slips and ages of the large earthquakes that produced the exhumations. We found that more than 35 large earthquakes broke the Fucino faults over the last ≈ 15 ka. Most of these earthquakes occurred during three 2-3 kyr long phases of paroxysmal seismic activity that struck the entire Fucino system, one at 12-9 ka, another at 6-4 ka, and a more recent one at 2.5-0.5 ka. These phases of paroxysmal activity were separated by periods of relative quiescence. During each of the two oldest phases, all faults broke in several large earthquakes that clustered over a short time. During the most recent phase, only the FN faults broke. The Fiamigniano fault is the one to have ruptured most recently in the cluster, about 0.5 ka ago. This last event might have been the 1349 AD historical earthquake that destroyed the nearby villages. The largest faults of both the FN and FS systems broke in much shorter clusters (≈ 1-1.5 kyr) than the secondary

  7. Far-field Gravity and Tilt Signals by Large Earthquakes: Real or Instrumental Effects?

    NASA Astrophysics Data System (ADS)

    Berrino, Giovanna; Riccardi, Umberto

    A wide set of dynamic phenomena (e.g., geodynamics, postglacial rebound, seismicity and volcanic activity) can produce temporal gravity changes, whose spectrum varies from short to long (more than 1 year) periods. The amplitude of the gravity variations is generally very small; consequently, their detection requires instruments with high sensitivity and stability, and thus high quality experimental data. Spring and superconducting gravimeters are intensively used for this purpose and are frequently joined with tiltmeter recording stations in order to measure the elasto-gravitational perturbation of the Earth. The far-field effects produced by large earthquakes on records collected by spring gravimeters and tiltmeters are investigated here. Gravity and tilt records were analyzed in time windows spanning the occurrence of large worldwide earthquakes; the gravity records were collected at two stations approximately 600 km apart. The background noise level at the stations was characterized in each season, in order to detect a possible seasonal dependence and the presence of spectral components which could hide or mask other geophysical signals, such as, for instance, the highest modes of the Seismic Free Oscillation (SFO) of the Earth. Some spectral components (6.5', 8', 9', 14', 20', 51') have been detected in gravity and tilt records on the occasion of large earthquakes, and the effect of the SFO has been hypothesized. A quite different spectral content of the EW and NS tiltmeter components has been detected and interpreted as a consequence of the radiation pattern of the disturbances due to the earthquakes. Through the analysis of the instrumental sensitivity, instrumental effects have been detected for gravity meters at very low frequency.
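
    Searching a long gravity or tilt record for the quoted spectral components (periods of 6.5 to 51 minutes) amounts to computing an amplitude spectrum and inspecting it near the corresponding frequencies. A minimal sketch, assuming an evenly sampled, gap-free record:

    ```python
    # Amplitude spectrum of a long gravity record (detrend, taper, FFT).
    import numpy as np

    def amplitude_spectrum(record, dt):
        x = record - np.mean(record)       # remove the mean
        x = x * np.hanning(len(x))         # taper to reduce leakage
        freqs = np.fft.rfftfreq(len(x), d=dt)
        return freqs, np.abs(np.fft.rfft(x))

    # Frequencies corresponding to the periods quoted above (minutes -> Hz):
    periods_s = np.array([6.5, 8.0, 9.0, 14.0, 20.0, 51.0]) * 60.0
    target_freqs = 1.0 / periods_s
    ```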

  8. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; McMahon, Jeff; Miller, Nathan T.; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián.; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2016-07-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  9. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Aamir, Ali; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  10. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Aamir, Ali; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  11. The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  12. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

    To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale of ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Omega = 1, b = 1.5, sigma_8 = 1) at the 99% confidence level because this model has insufficient power on scales lambda > 30 h^-1 Mpc. An unbiased open universe CDM model (Omega h = 0.2) and a biased CDM model with non-zero cosmological constant (Omega h = 0.24, lambda_0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast δρ/ρ = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2σ) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.
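
    The basic power spectrum measurement behind comparisons like these can be sketched as an FFT of a gridded overdensity field followed by spherical shell averaging. This simplified version omits the selection-function weighting and shot-noise correction a real survey analysis needs, and its normalization convention is one of several in use.

    ```python
    # Simplified P(k) estimate from a gridded overdensity field delta(x).
    import numpy as np

    def power_spectrum(delta, box_size, n_bins=16):
        """delta: (n, n, n) overdensity grid; box_size in h^-1 Mpc.
        Returns bin-centre wavenumbers and shell-averaged P(k)."""
        n = delta.shape[0]
        dk = np.fft.fftn(delta) * (box_size / n) ** 3
        pk3d = (np.abs(dk) ** 2 / box_size ** 3).ravel()
        k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
        edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
        which = np.digitize(kmag, edges)
        pk = np.array([pk3d[which == i].mean() if np.any(which == i) else np.nan
                       for i in range(1, n_bins + 1)])
        return 0.5 * (edges[1:] + edges[:-1]), pk
    ```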

  13. Principles for selecting earthquake motions in engineering design of large dams

    USGS Publications Warehouse

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

    This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable for other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions is dependent on which one of two types of engineering analyses are performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map; whereas, a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and its spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful. For example, peak motions at

  14. Financial earthquakes, aftershocks and scaling in emerging stock markets

    NASA Astrophysics Data System (ADS)

    Selçuk, Faruk

    2004-02-01

    This paper provides evidence for scaling laws in emerging stock markets. Estimated parameters using different definitions of volatility show that the empirical scaling law in every stock market is a power law. This power law holds from 2 to 240 business days (almost 1 year). The scaling parameter in these economies changes after a change in the definition of volatility. This finding indicates that the stock returns may have a multifractal nature. Another scaling property of stock returns is examined by relating the time after a main shock to the number of aftershocks per unit time. The empirical findings show that after a major fall in the stock returns, the stock market volatility above a certain threshold shows a power law decay, described by Omori's law.
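
    The power-law decay described above is the modified Omori law, n(t) = K / (t + c)^p. A minimal fitting sketch, assuming the aftershock rate has already been counted in time bins after the main shock:

    ```python
    # Fit a modified Omori law to a binned rate of threshold-exceeding
    # volatility events following a market "mainshock".
    import numpy as np
    from scipy.optimize import curve_fit

    def omori(t, K, c, p):
        # Modified Omori law: event rate n(t) = K / (t + c)^p.
        return K / (t + c) ** p

    def fit_omori(t_days, rates):
        p0 = [rates[0], 1.0, 1.0]   # crude starting guess
        (K, c, p), _ = curve_fit(omori, t_days, rates, p0=p0, maxfev=10000)
        return K, c, p
    ```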

  15. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  16. Potential for a large earthquake rupture of the San Ramón fault in Santiago, Chile

    NASA Astrophysics Data System (ADS)

    Vargas Easton, G.; Klinger, Y.; Rockwell, T. K.; Forman, S. L.; Rebolledo, S.; Lacassin, R.; Armijo, R.

    2013-12-01

    The San Ramón fault is an active west-vergent thrust fault system located along the eastern border of Santiago, capital of Chile, at the foot of the main Andes Cordillera. It is part of the continental-scale West Andean Thrust, on the western slope of the Andean orogen. The fault system consists of fault segments on the order of 10-15 km in length, evidenced by conspicuous fault scarps, 3 to over 100 m high, systematically located along the fault trace. These scarps evidence Quaternary faulting activity, which, together with the geometry, structure and geochronological data, supports slip rate estimates on the order of ~0.4 mm/year. To probe the seismic potential of the west flank of the Andes in front of Santiago, we excavated and analyzed a trench across a prominent young fault scarp. Together with geochronological data from Optically Stimulated Luminescence complemented by radiocarbon ages, our paleoseismic results demonstrate recurrent late Quaternary faulting along this structure, with nearly 5 m of displacement in each event. With the last large earthquake nearly 8,000-9,000 years ago and two ruptures within the past 17,000-19,000 years, the San Ramón fault appears ripe for another large earthquake of up to M7.5 in the near future, making Santiago another major world city at significant risk.
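
    The quoted per-event slip and long-term slip rate imply a simple recurrence estimate, slip per event divided by slip rate, which is broadly consistent with two ruptures in the past 17,000-19,000 years:

    ```python
    # Worked recurrence estimate from the abstract's numbers.
    slip_per_event_m = 5.0        # m, from the trench record above
    slip_rate_m_yr = 0.4e-3       # m/yr, quoted long-term slip rate

    recurrence_yr = slip_per_event_m / slip_rate_m_yr
    print(f"recurrence ~ {recurrence_yr:,.0f} yr")   # ~12,500 yr
    ```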

  17. Large-scale optimization of neuron arbors

    NASA Astrophysics Data System (ADS)

    Cherniak, Christopher; Changizi, Mark; Won Kang, Du

    1999-05-01

    At the global as well as local scales, some of the geometry of neuron arbors (both dendrites and axons) appears to be self-organizing: their morphogenesis behaves like flowing water, that is, fluid dynamically; waterflow in branching networks in turn acts like a tree composed of cords under tension, that is, vector mechanically. Branch diameters, branch angles, and junction sites conform significantly to this model. The result is that such neuron tree samples globally minimize their total volume, rather than, for example, surface area or branch length. In addition, the arbors perform well at generating the cheapest topology interconnecting their terminals: their large-scale layouts are among the best of all possible connecting patterns, coming within about 5% of optimum. This model also applies comparably to arterial and river networks.

  18. Voids in the Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    El-Ad, Hagai; Piran, Tsvi

    1997-12-01

    Voids are the most prominent feature of the large-scale structure of the universe. Still, their incorporation into quantitative analyses has been relatively recent, owing essentially to the lack of an objective tool to identify the voids and to quantify them. To overcome this, we present here the VOID FINDER algorithm, a novel tool for objectively quantifying voids in the galaxy distribution. The algorithm first classifies galaxies as either wall galaxies or field galaxies. Then, it identifies voids in the wall-galaxy distribution. Voids are defined as continuous volumes that do not contain any wall galaxies. The voids must be thicker than an adjustable limit, which is refined in successive iterations. In this way, we identify the same regions that would be recognized as voids by eye. Small breaches in the walls are ignored, avoiding artificial connections between neighboring voids. We test the algorithm using Voronoi tessellations. By appropriate scaling of the parameters with the selection function, we apply it to two redshift surveys, the dense SSRS2 and the full-sky IRAS 1.2 Jy. Both surveys show similar properties: ~50% of the volume is filled by voids. The voids have a scale of at least 40 h-1 Mpc and an average underdensity of -0.9. Faint galaxies do not fill the voids, but they do populate them more than bright ones. These results suggest that both optically and IRAS-selected galaxies delineate the same large-scale structure. Comparison with the recovered mass distribution further suggests that the observed voids in the galaxy distribution correspond well to underdense regions in the mass distribution. This confirms the gravitational origin of the voids.
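
    The first stage of the algorithm, separating wall from field galaxies, can be sketched with a nearest-neighbour criterion. The fragment below is a simplified, hypothetical rendering (the neighbour count, distance threshold, and toy data are ours), not the published VOID FINDER code:

      import numpy as np
      from scipy.spatial import cKDTree

      def classify_wall_galaxies(positions, n_neighbor=3, max_dist=5.0):
          """Label galaxies as 'wall' if their n-th nearest neighbour lies
          within max_dist (e.g. h^-1 Mpc); the rest are 'field' galaxies."""
          tree = cKDTree(positions)
          # query returns distances to the k nearest neighbours (index 0 is self).
          dist, _ = tree.query(positions, k=n_neighbor + 1)
          return dist[:, n_neighbor] <= max_dist

      rng = np.random.default_rng(1)
      gals = rng.uniform(0.0, 100.0, size=(2000, 3))   # toy galaxy positions
      is_wall = classify_wall_galaxies(gals)
      print(f"{is_wall.mean():.0%} classified as wall galaxies")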

  19. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Tsuji, Takeshi; Ishibashi, Jun’ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-02-01

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations.

  20. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake

    PubMed Central

    Tsuji, Takeshi; Ishibashi, Jun’ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-01-01

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations. PMID:28218298

  1. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake.

    PubMed

    Tsuji, Takeshi; Ishibashi, Jun'ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-02-20

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations.

  2. FINITE FAULT MODELING OF FUTURE LARGE EARTHQUAKE FROM NORTH TEHRAN FAULT IN KARAJ, IRAN

    NASA Astrophysics Data System (ADS)

    Samaei, Meghdad; Miyajima, Masakatsu; Saffari, Hamid; Tsurugi, Masato

    The main purpose of this study is to predict strong ground motions from a future large earthquake for Karaj, the capital of Alborz province of Iran. This industrialized city has a population of over one million and is located near several active faults. Finite fault modeling with a dynamic corner frequency is adopted here for the simulation of a future large earthquake. The target fault is the North Tehran fault, with a length of 110 km; rupture of the western part of the fault, which is closest to Karaj, is assumed for this simulation. For seven rupture starting points, acceleration time series at the site of the Karaj Caravansary, a historical building, are predicted. Peak ground accelerations vary from 423 cm/s2 to 584 cm/s2, in the range of the 1990 Rudbar earthquake (Mw=7.3). Results of acceleration simulations at different distances are also compared with attenuation relations for two types of soil. Our simulations show general agreement with one of the best-known global attenuation relations and also with one of the newest attenuation relations developed for the Iranian plateau.
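
    The simulation method builds on the point-source corner frequency that the dynamic variant generalizes. As a hedged illustration, the sketch below evaluates the static Brune/Boore corner frequency; the stress drop and shear-wave velocity values are our assumptions, not the paper's:

      import math

      def brune_corner_frequency(mw, stress_drop_bars=50.0, beta_km_s=3.5):
          """Static Brune corner frequency in Hz.
          Seismic moment in dyn*cm, stress drop in bars, shear velocity in km/s."""
          m0_dyn_cm = 10 ** (1.5 * mw + 16.05)
          return 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyn_cm) ** (1.0 / 3.0)

      # Rough scale for a large scenario event (illustrative magnitude).
      print(f"f0 ~ {brune_corner_frequency(7.3):.3f} Hz")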

  3. Variation of Earthquake Scaling (Dmax-L) with Long-Term Fault Evolution

    NASA Astrophysics Data System (ADS)

    Manighetti, I.; Cotton, F.; Campillo, M.

    2005-12-01

    A critical issue in earthquake mechanics is to be capable of determining the maximum displacement and magnitude that can be produced on a fault of known dimensions. Available measurements of length (L) and maximum displacement (Dmax) on faults broken during past earthquakes are dense enough for the Dmax-L scaling relationship to be re-examined (after Wells and Coppersmith, 1994). We do that here on the basis of the Dmax-L data compiled by Manighetti et al. (2005). Because most co-seismic slip distributions are triangular in shape on average (see previous reference), Dmax = 2*Dmean, so that any conclusion drawn from Dmax-L scaling applies to Dmean-L scaling. Following Miller (2002), we hypothesize that the relationship between Dmax and L depends on fault strength. We further assume that the strength of a fault depends on its long-term evolution, i.e., a fault that has been slipping for a long time ('mature') is weaker than a recently formed fault ('immature'). With these hypotheses in mind, we identify the faults that broke in the considered past earthquakes, document their long-term evolution, and examine how the Dmax-L scaling varies both along these faults and from one fault to the other. Taken together, the Dmax-L data define four major trends, with very few data in between. Along each trend, Dmax/L is roughly constant (at most equal, per trend, to 2, 4, 6-8, and 10-20 × 10^-5), although slightly decreasing with length along the steepest trends. Earthquakes falling on the two lowest trends pertain to mature, lithospheric-scale faults: plate boundaries on the lowest trend (North Anatolian and San Andreas, subduction zones) and intra-plate faults on the trend above (Kunlun, Xian Shui He [but SE tip], Bolnai, W tip of North Anatolian, Main Zagros Recent thrust). By contrast, earthquakes falling on the two steepest trends pertain to younger, smaller, and less mature faults (Tien Shan, normal Tibet faults). Most of these faults are found at the tip (SE tip of Xian
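
    Since Dmax/L is roughly constant along each trend, the maximum displacement for a given rupture length follows directly. A small illustrative calculation (the per-trend values follow the abstract, with mid-range values taken for the two steepest trends; the 100 km length is our example):

      # Dmax = alpha * L along each trend (alpha values from the abstract;
      # mid-range values assumed for the "6-8" and "10-20" trends).
      alphas = {"plate boundary": 2e-5, "mature intraplate": 4e-5,
                "immature (low)": 7e-5, "immature (high)": 15e-5}

      L_m = 100e3   # example rupture length: 100 km
      for trend, alpha in alphas.items():
          print(f"{trend:18s}: Dmax ~ {alpha * L_m:.1f} m")
      # Spans ~2 m (weak, mature faults) to ~15 m (strong, immature faults).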

  4. The Validity and Reliability Work of the Scale That Determines the Level of the Trauma after the Earthquake

    ERIC Educational Resources Information Center

    Tanhan, Fuat; Kayri, Murat

    2013-01-01

    This study aimed to develop a short, comprehensible, easily applicable scale, appropriate to cultural characteristics, for evaluating mental trauma related to earthquakes. The universe of the research consisted of all individuals living under the effects of the earthquakes which occurred in Tabanli Village on 23.10.2011 and…

  5. Fault Interactions and Large Complex Earthquakes in the Los Angeles Area

    USGS Publications Warehouse

    Anderson, G.; Aagaard, B.; Hudnut, K.

    2003-01-01

    Faults in complex tectonic environments interact in various ways, including triggered rupture of one fault by another, and these interactions may increase seismic hazard in the surrounding region. We model static and dynamic fault interactions between the strike-slip and thrust fault systems in southern California. We find that rupture of the Sierra Madre-Cucamonga thrust fault system is unlikely to trigger rupture of the San Andreas or San Jacinto strike-slip faults. However, a large northern San Jacinto fault earthquake could trigger a cascading rupture of the Sierra Madre-Cucamonga system, potentially causing a moment magnitude 7.5 to 7.8 earthquake on the edge of the Los Angeles metropolitan region.
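
    Static interaction calculations of this kind rest on the Coulomb failure stress change. A minimal sketch of the standard formulation (the effective friction coefficient and the numbers are illustrative assumptions, not values from this study):

      def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
          """Static Coulomb failure stress change on a receiver fault (bars).
          d_shear: shear stress change resolved in the slip direction;
          d_normal: normal stress change (positive = unclamping)."""
          return d_shear + mu_eff * d_normal

      # Positive values bring the receiver fault closer to failure.
      print(coulomb_stress_change(d_shear=1.5, d_normal=-0.8))   # 1.18 bars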

  6. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a clear link between the LSB and the underlying flare is evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  7. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations to compute the band structures of the proposed metamaterials.
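
    While the paper uses 3D FDTD, the existence of band gaps in a layered concrete/steel medium can be illustrated in 1D with a transfer-matrix calculation, a deliberately simpler technique than the paper's; the material values are rough textbook numbers and the geometry is ours:

      import numpy as np

      # Layer properties: (density kg/m^3, P-wave speed m/s, thickness m).
      concrete = (2400.0, 3500.0, 2.0)
      steel = (7850.0, 5900.0, 0.5)

      def layer_matrix(rho, c, d, omega):
          """Acoustic transfer matrix of one layer for the state vector (p, v)."""
          k, Z = omega / c, rho * c
          return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                           [1j * np.sin(k * d) / Z, np.cos(k * d)]])

      gaps = []
      for f in np.arange(50.0, 1501.0, 50.0):          # frequency sweep, Hz
          w = 2 * np.pi * f
          M = layer_matrix(*steel, w) @ layer_matrix(*concrete, w)
          if abs(np.real(np.trace(M)) / 2.0) > 1.0:    # Bloch: cos(qa) = Tr(M)/2
              gaps.append(f)
      print("frequencies inside a band gap (Hz):", gaps)
      # The first gap opens near the Bragg frequency (~760 Hz for these layers);
      # much larger unit cells would shift such gaps toward seismic frequencies.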

  8. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  9. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  10. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments measuring the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  11. Large-scale Heterogeneous Network Data Analysis

    DTIC Science & Technology

    2012-07-31

    "Data for Multi-Player Influence Maximization on Social Networks." KDD 2012 (Demo). Po-Tzu Chang, Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu-Jen Cheng. "Learning-Based Time-Sensitive Re-Ranking for Web Search." SIGIR 2012 (poster). Hung-Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin. "Exploiting and Evaluating MapReduce for Large-Scale Graph Mining." ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh, Cheng-Te Li, and Shou

  12. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
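
    The deficit argument can be made concrete with the Gutenberg-Richter relation. In the sketch below, the a- and b-values and magnitude cutoffs are assumed for illustration, not taken from the study:

      # Gutenberg-Richter: log10 N(>=M) = a - b*M (illustrative parameters).
      a, b = 5.5, 1.0
      def n_at_least(M):
          return 10 ** (a - b * M)

      m_detect, m_deficit = 0.5, 1.5   # hypothetical detection and roll-off magnitudes
      expected_small = n_at_least(m_detect) - n_at_least(m_deficit)
      print(f"Events expected between M{m_detect} and M{m_deficit}: "
            f"{expected_small:,.0f}")
      # A catalog complete to m_detect that lacks most of these events points
      # to a genuine scarcity of small earthquakes, not a detection artifact.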

  13. Primer design for large scale sequencing.

    PubMed Central

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-01-01

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double-strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects. PMID:9611248
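
    PRIDE's fuzzy-logic quality scoring is not reproduced here; as a stand-in, the sketch below computes two classic primer screens (Wallace-rule melting temperature and GC content) of the kind any primer-design pipeline evaluates. The primer sequence and thresholds are hypothetical:

      def wallace_tm(primer):
          """Wallace rule: Tm ~ 2(A+T) + 4(G+C) degrees C, valid for short oligos."""
          p = primer.upper()
          return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

      def gc_fraction(primer):
          p = primer.upper()
          return (p.count("G") + p.count("C")) / len(p)

      primer = "ATGCGTACGTTAGCCGTA"   # hypothetical 18-mer
      print(f"Tm ~ {wallace_tm(primer)} C, GC = {gc_fraction(primer):.0%}")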

  14. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  15. Primer design for large scale sequencing.

    PubMed

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-06-15

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double-strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects.

  16. Large-scale ATLAS production on EGEE

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Campana, S.; Walker, R.

    2008-07-01

    In preparation for first data at the LHC, a series of Data Challenges, of increasing scale and complexity, have been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of production on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor Glide-ins. The overall wall-time efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU time comes from data handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.

  17. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  18. Heterogeneous slip distribution on faults responsible for large earthquakes: characterization and implications for tsunami modelling

    NASA Astrophysics Data System (ADS)

    Baglione, Enrico; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano

    2017-04-01

    That ruptures on the generating faults of large earthquakes are strongly heterogeneous has been demonstrated over the last few decades by a large number of studies. The effort to retrieve reliable finite-fault models (FFMs) for large earthquakes worldwide, mainly by means of the inversion of different kinds of geophysical data, has been accompanied in recent years by the systematic collection and format homogenisation of the published FFMs into purpose-built databases such as SRCMOD. The main aim of this study is to explore characteristic patterns in the slip distribution of large earthquakes, using a subset of the FFMs contained in SRCMOD covering events with moment magnitude 6 or larger that occurred worldwide over the last 25 years. We focus on those FFMs that exhibit a single and clear region of high slip (i.e. a single asperity), which turn out to represent the majority of the events. For these FFMs, it is reasonable to fit the slip model with a 2D Gaussian distribution. Two different methods are used (least-squares and highest-similarity), and correspondingly two "best-fit" indexes are introduced. As a result, two distinct 2D Gaussian distributions are obtained for each FFM. To quantify how well these distributions mimic the original slip heterogeneity, we calculate and compare the vertical displacements at the Earth's surface in the near field induced by the original FFM slip, by an equivalent uniform-slip model, by a depth-dependent slip model, and by the two "best" Gaussian slip models. The coseismic vertical surface displacement is used as the metric for comparison. Results show that, on average, the best results are obtained with 2D Gaussian distributions based on similarity-index fitting. Finally, we restrict our attention to those single-asperity FFMs associated with earthquakes that generated tsunamis. We choose a few events for which tsunami
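
    A least-squares fit of a 2D Gaussian to a gridded slip model, one of the two methods mentioned, can be sketched as follows; the fault grid, parameter names, and synthetic "slip" are our illustrative assumptions:

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(xy, amp, x0, y0, sx, sy):
          # Axis-aligned 2D Gaussian, flattened for curve_fit.
          x, y = xy
          return amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                                + (y - y0) ** 2 / (2 * sy ** 2)))

      # Synthetic single-asperity slip model on a 40 km x 20 km fault grid.
      x, y = np.meshgrid(np.linspace(0, 40, 41), np.linspace(0, 20, 21))
      slip = gauss2d((x.ravel(), y.ravel()), 4.0, 22.0, 9.0, 6.0, 3.0)
      slip += np.random.default_rng(2).normal(0.0, 0.1, slip.size)

      p0 = (slip.max(), 20.0, 10.0, 5.0, 5.0)
      popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), slip, p0=p0)
      print("amp, x0, y0, sx, sy =", np.round(popt, 2))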

  19. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursory event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  20. Scaling of earthquake rupture growth in the Parkfield area: Self-similar growth and suppression by the finite seismogenic layer

    NASA Astrophysics Data System (ADS)

    Uchide, Takahiko; Ide, Satoshi

    2010-11-01

    We propose a new framework on the scaling of earthquake rupture growth time history, and we scale the moment rate and the cumulative moment functions of earthquakes over a wide magnitude range (Mw 1.7-6.0) in Parkfield, California. The moment rate and the cumulative moment functions of the small and medium earthquakes (Mw 1.7-4.6) are derived by slip inversion analyses with the empirical Green's function technique. The moment rate functions of the investigated earthquakes, except the Mw 6.0 event, are similar to each other, increasing rapidly in the first half (growth stage) and decelerating in the latter half (decline stage). In the growth stage, the cumulative moment functions are approximated by Mo(t) [Nm] = 2 × 10^17 (t [s])^3, independent of the final size of the earthquakes. The proportionality of the cumulative moment to the cube of time implies self-similarity during earthquake rupture growth. In the decline stage, the cumulative moment function veers off the common rupture curve. The Mw 6.0 event also grows along the same rupture curve until 1 s, after which the cumulative moment function is proportional to time from the onset itself. This is because the finite seismogenic layer limits the vertical extent of dynamic rupture. Our method and results contribute to our understanding of earthquake source physics, especially on earthquake rupture growth processes, which may help to improve earthquake early warning techniques.
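
    The common rupture curve implies a simple mapping from elapsed time to accrued magnitude. A short worked check, using the standard Hanks-Kanamori moment-magnitude relation (the sample times are our choice):

      import math

      def m0_growth(t_s):
          # Cumulative moment on the common rupture curve, N*m.
          return 2e17 * t_s ** 3

      def mw(m0):
          # Hanks-Kanamori moment magnitude.
          return (2.0 / 3.0) * (math.log10(m0) - 9.1)

      for t in (0.1, 0.5, 1.0):
          m0 = m0_growth(t)
          print(f"t = {t:.1f} s -> M0 = {m0:.2e} N*m, Mw = {mw(m0):.1f}")
      # At t = 1 s the curve reaches Mw ~ 5.5, beyond which the Mw 6.0
      # event departs from the common curve.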

  1. Earthquake Interactions at Different Scales: an Example from Eastern California and Western Nevada, USA.

    NASA Astrophysics Data System (ADS)

    Verdecchia, A.; Carena, S.

    2015-12-01

    Earthquakes in diffuse plate boundaries occur in spatially and temporally complex patterns. The region east of the Sierra Nevada that encompasses the northern Eastern California Shear Zone (ECSZ), Walker Lane (WL), and the westernmost part of the Basin and Range province (B&R) is one such boundary. In order to better understand the relationship between moderate to major earthquakes in this area, we modeled the evolution of coseismic, postseismic and interseismic Coulomb stress changes (∆CFS) in this region at two different spatio-temporal scales. In the first example we examined seven historical and instrumental Mw ≥ 6 earthquakes that struck the region around Owens Valley (northern ECSZ) in the last 150 years. In the second example we expanded our study area to all of the northern ECSZ, WL and western B&R, examining seventeen paleoseismological and historical major surface-rupturing earthquakes (Mw ≥ 6.5) that occurred in the last 1400 years. We show that in both cases the majority of the studied events (100% in the first case and 80% in the second) are located in areas of combined coseismic and postseismic positive ∆CFS. This relationship is robust, as shown by control tests with random earthquake sequences. We also show that the White Mountain fault has accumulated up to 30 bars of total ∆CFS (coseismic + postseismic + interseismic) in the last 150 years, and the Hunter Mountain, Fish Lake Valley, Black Mountain, and Pyramid Lake faults have accumulated 40, 45, 54 and 37 bars, respectively, in the last 1400 years. Such values are comparable to the average stress drop in a major earthquake, and all these faults may therefore be close to failure.

  2. Foreshock patterns preceding large earthquakes in the subduction zone of Chile

    NASA Astrophysics Data System (ADS)

    Minadakis, George; Papadopoulos, Gerassimos A.

    2016-04-01

    Some of the largest earthquakes on the globe occur in the subduction zone of Chile. It is therefore of particular interest to investigate foreshock patterns preceding such earthquakes. Foreshocks in Chile were recognized as early as 1960. In fact, the giant (Mw9.5) earthquake of 22 May 1960, the largest ever instrumentally recorded, was preceded by 45 foreshocks in a time period of 33 h before the mainshock, while 250 aftershocks were recorded in a 33 h time period after the mainshock. Four foreshocks were bigger than magnitude 7.0, including a magnitude 7.9 on May 21 that caused severe damage in the Concepcion area. More recently, Brodsky and Lay (2014) and Bedford et al. (2015) reported on foreshock activity before the 1 April 2014 large earthquake (Mw8.2). However, 3-D foreshock patterns in space, time and size have so far not been studied in depth. Since such studies require good seismic catalogues, we have investigated 3-D foreshock patterns only before the recent, very large mainshocks occurring on 27 February 2010 (Mw 8.8), 1 April 2014 (Mw8.2) and 16 September 2015 (Mw8.4). Although our analysis does not depend on an a priori definition of short-term foreshocks, our interest focuses on the short-term time frame, that is, the last 5-6 months before the mainshock. The analysis of the 2014 event showed an excellent foreshock sequence consisting of an early, weak foreshock stage lasting about 1.8 months and a main, strong precursory foreshock stage that evolved in the last 18 days before the mainshock. During the strong foreshock period the seismicity concentrated around the mainshock epicenter in a critical area of about 65 km, mainly along the trench domain to the south of the mainshock epicenter. At the same time, the activity rate increased dramatically, the b-value dropped and the mean magnitude increased significantly, while the level of seismic energy released also increased. In view of these highly significant seismicity
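
    The reported b-value drop can be monitored with the standard Aki maximum-likelihood estimator. A minimal sketch, with synthetic catalogs standing in for real data:

      import numpy as np

      def b_value(mags, m_c):
          """Aki (1965) maximum-likelihood b-value. (For magnitudes rounded to a
          bin width dm, replace m_c by m_c - dm/2: Utsu's correction.)"""
          m = np.asarray(mags)
          m = m[m >= m_c]
          return np.log10(np.e) / (m.mean() - m_c)

      rng = np.random.default_rng(3)
      m_c = 4.0
      # Above m_c, Gutenberg-Richter implies exponential magnitudes, rate b*ln(10).
      quiet = m_c + rng.exponential(1.0 / (1.0 * np.log(10)), 500)      # b ~ 1.0
      foreshocks = m_c + rng.exponential(1.0 / (0.7 * np.log(10)), 200) # b ~ 0.7
      print(f"background b ~ {b_value(quiet, m_c):.2f}, "
            f"foreshock b ~ {b_value(foreshocks, m_c):.2f}")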

  3. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.
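
    The long-range power-law temporal correlations referred to here are typically quantified with detrended fluctuation analysis (DFA). The sketch below is a generic textbook implementation on synthetic data, not the chapter's own code:

      import numpy as np

      def dfa(signal, scales):
          """Detrended fluctuation analysis: fluctuation F(s) per window scale s."""
          profile = np.cumsum(signal - np.mean(signal))   # integrated profile
          F = []
          for s in scales:
              n_seg = len(profile) // s
              t = np.arange(s)
              rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
                     for seg in (profile[i * s:(i + 1) * s] for i in range(n_seg))]
              F.append(np.sqrt(np.mean(np.square(rms))))
          return np.asarray(F)

      rng = np.random.default_rng(4)
      x = rng.normal(size=2 ** 12)                 # white noise: alpha ~ 0.5
      scales = np.array([16, 32, 64, 128, 256])
      alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
      print(f"DFA exponent alpha ~ {alpha:.2f} (0.5 for uncorrelated noise; "
            f"larger values indicate long-range correlations)")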

  4. Using Micro-Scale Observations to Understand Large-Scale Geophysical Phenomena: Examples from Seismology and Mineral Physics

    NASA Astrophysics Data System (ADS)

    Lockridge, Jeffrey

    Earthquake faulting and the dynamics of subducting lithosphere are among the frontiers of geophysics. Exploring the nature, cause, and implications of geophysical phenomena requires multidisciplinary investigations focused at a range of spatial scales. Within this dissertation, I present studies of micro-scale processes using observational seismology and experimental mineral physics to provide important constraints on models for a range of large-scale geophysical phenomena within the crust and mantle. The Great Basin (GB) in the western U.S. is part of the diffuse North American-Pacific plate boundary. The interior of the GB occasionally produces large earthquakes, yet the current distribution of regional seismic networks poorly samples it. The EarthScope USArray Transportable Array provides unprecedented station density and data quality for the central GB. I use this dataset to develop an earthquake catalog for the region that is complete to M 1.5. The catalog contains small-magnitude seismicity throughout the interior of the GB. The spatial distribution of earthquakes is consistent with recent regional geodetic studies, confirming that the interior of the GB is actively deforming everywhere and all the time. Additionally, improved event detection thresholds reveal that swarms of temporally-clustered repeating earthquakes occur throughout the GB. The swarms are not associated with active volcanism or other swarm triggering mechanisms, and therefore, may represent a common fault behavior. Enstatite (Mg,Fe)SiO3 is the second most abundant mineral within subducting lithosphere. Previous studies suggest that metastable enstatite within subducting slabs may persist to the base of the mantle transition zone (MTZ) before transforming to high-pressure polymorphs. The metastable persistence of enstatite has been proposed as a potential cause for both deep-focus earthquakes and the stagnation of slabs at the base of the MTZ. I show that natural Al- and Fe-bearing enstatite

  5. Experimental insights into the scaling and variability of local tsunamis triggered by giant subduction megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Rosenau, Matthias; Nerlich, Rainer; Brune, Sascha; Oncken, Onno

    2010-09-01

    Giant subduction megathrust earthquakes of magnitude 9 and larger pose a significant tsunami hazard in coastal regions. In order to test and improve empirical tsunami forecast models and to explore the susceptibility of different subduction settings we here analyze the scaling of subduction earthquake-triggered tsunamis in the near field and their variability related to source heterogeneities. We base our analysis on a sequence of 50 experimentally simulated great to giant (Mw = 8.3-9.4) subduction megathrust earthquakes generated using an elastoplastic analog model. Experimentally observed surface deformation is translated to local tsunami runup using linear wave theory. We find that the intrinsic scaling of local tsunami runup is characterized by a linear relationship to peak earthquake slip, an exponential relationship to moment magnitude, and an inverse power law relationship to fore-arc slope. Tsunami variability is controlled by coseismic slip heterogeneity and strain localization within the fore-arc wedge and is characterized by a coefficient of variation Cv ˜ 0.5. Wave breaking modifies the scaling behavior of tsunamis triggered by the largest (Mw > 8.5) events in subduction settings with shallow dipping (<1-2°) fore-arc slopes, limits tsunami runup to <30 m, and reduces its variability to Cv ˜ 0.2. The resulting effective scaling relationships are validated against historical events and numerical simulations and reproduce empirical scaling relationships. The latter appear as robust and liberal estimates of runup up to magnitude Mw = 9.5. A global assessment of tsunami susceptibility suggests that accretionary plate margins are more prone to tsunami hazard than erosive margins.

  6. Internationalization Measures in Large Scale Research Projects

    NASA Astrophysics Data System (ADS)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest minds has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There are a variety of measures suited to support universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to recruit, support and retain the brightest minds for a project.

  7. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  8. Large-scale Intelligent Transporation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented ``smart`` vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL`s IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomus computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  10. Large-scale Globally Propagating Coronal Waves.

    PubMed

    Warmuth, Alexander

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and, by inference, in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  11. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus is effectively a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^1/3, suggesting that every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, scaling as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^-γ, where γ = 2 - β is slightly above one. The latter result is consistent with Hi-C data on real human interphase chromosomes and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  12. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms, especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme, which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high-scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: one out of three registered users (36%) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76% (offer of useful support through teacher academies) to 87% (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  13. Reawakening of large earthquakes in south central Chile: The 2016 Mw 7.6 Chiloé event

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Moreno, M.; Melnick, D.; del Campo, F.; Poli, P.; Baez, J. C.; Leyton, F.; Madariaga, R.

    2017-07-01

    On 25 December 2016, the Mw 7.6 Chiloé earthquake broke a plate boundary asperity in south central Chile near the center of the rupture zone of the Mw 9.5 Valdivia earthquake of 1960. To gain insight into decadal-scale deformation trends and their relation to the Chiloé earthquake, we combine geodetic, teleseismic, and regional seismological data. GPS velocities increased at continental scale after the 2010 Maule earthquake, probably due to a readjustment in the mantle flow and an apparently abrupt end of the viscoelastic mantle relaxation following the 1960 Valdivia earthquake. This also produced an increase in the degree of plate locking. The Chiloé earthquake occurred within the region of increased locking, breaking a circular patch of 15 km radius at 30 km depth, located near the bottom of the seismogenic zone. We propose that the Chiloé earthquake is a first sign of the seismic reawakening of the Valdivia segment, in response to the interaction between postseismic viscoelastic relaxation and changes of interseismic locking between Nazca and South America.

  14. Calibration of the Landsliding Numerical Model SLIPOS and Prediction of the Seismically Induced Erosion for Several Large Earthquake Scenarios in New Zealand

    NASA Astrophysics Data System (ADS)

    Jeandet, L.; Lague, D.; Steer, P.; Davy, P.; Quigley, M.

    2016-12-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. The scaling between earthquake magnitude and the volume of eroded material has been known for decades. However, the resulting geomorphic consequences, such as divide migration or valley infilling, are still poorly understood. Indeed, predicting the location of co-seismic landslide sources and deposits is a challenging issue, and algorithms accounting for the triggering of landslides by ground shaking are needed. Peak Ground Acceleration (PGA) has been shown to control, to first order, the spatial density of landslides. PGA can trigger landslides by two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. To test the accuracy of modelling the effect of seismic waves as a cohesion drop, we use SLIPOS, an algorithm of bedrock landsliding based on a simple stability analysis (Culmann criterion) applied at the local scale. The model is able to reproduce the landslide area-volume scaling relationship and the area-frequency distribution of natural landslides. Co-seismic landslide triggering is accounted for in SLIPOS by imposing a cohesion decrease; we calibrate the relationship between PGA and model cohesion via the landslide density. The spatial distribution of PGA is modeled using empirical relationships between earthquake source and PGA. We run the model on earthquake scenarios (Mw 6.5 to 8) applied to the Alpine Fault of New Zealand to predict the volume of landslides associated with large-magnitude earthquakes. We demonstrate that simulating the effect of ground acceleration on landsliding via a cohesion drop leads to a realistic scaling between the volume of sediments and the earthquake magnitude.
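
    The Culmann stability analysis at the heart of SLIPOS can be illustrated with its classical critical-height expression; the cohesion-drop coupling below is schematic and all parameter values are our assumptions, not SLIPOS's calibration:

      import math

      def culmann_critical_height(cohesion_pa, unit_weight, slope_deg, friction_deg):
          """Culmann critical slope height:
          Hc = (4c/gamma) * sin(beta) * cos(phi) / (1 - cos(beta - phi))."""
          b, phi = math.radians(slope_deg), math.radians(friction_deg)
          return (4.0 * cohesion_pa / unit_weight) * math.sin(b) * math.cos(phi) \
                 / (1.0 - math.cos(b - phi))

      gamma = 2500.0 * 9.81        # rock unit weight, N/m^3
      static = culmann_critical_height(20e3, gamma, slope_deg=50.0, friction_deg=30.0)
      # Schematic co-seismic weakening: strong shaking halves the cohesion,
      # so hillslopes taller than the reduced Hc fail during the earthquake.
      shaken = culmann_critical_height(10e3, gamma, slope_deg=50.0, friction_deg=30.0)
      print(f"Hc static ~ {static:.0f} m, with PGA-driven cohesion drop ~ {shaken:.0f} m")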

  15. Improving Recent Large-Scale Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Cardoso, Rogerio Fernando; Ransom, S.

    2011-01-01

    Pulsars are unique in that they act as celestial laboratories for precise tests of gravity and other extreme physics (Kramer 2004). There are approximately 2000 known pulsars today, which is less than ten percent of the pulsars in the Milky Way according to theoretical models (Lorimer 2004). Of these 2000 known pulsars, approximately ten percent are millisecond pulsars, objects used for their period stability for detailed physics tests and searches for gravitational radiation (Lorimer 2008). As the field and instrumentation progress, pulsar astronomers attempt to overcome observational biases and detect new pulsars, consequently discovering new millisecond pulsars. We attempt to improve large-scale pulsar surveys by examining three recent pulsar surveys. The first, the Green Bank Telescope 350 MHz Drift Scan, a low-frequency isotropic survey of the northern sky, has yielded a large number of candidates that were visually inspected and identified, resulting in over 34,000 candidates viewed, dozens of detections of known pulsars, and the discovery of a new low-flux pulsar, PSR J1911+22. The second, the PALFA survey, is a high-frequency survey of the galactic plane with the Arecibo telescope. We created a processing pipeline for the PALFA survey at the National Radio Astronomy Observatory in Charlottesville, VA, in addition to making needed modifications on advice from the PALFA consortium. The third survey examined is a new GBT 820 MHz survey devoted to finding new millisecond pulsars by observing the target-rich environment of unidentified sources in the FERMI LAT catalogue. By approaching these three pulsar surveys at different stages, we seek to improve the success rates of large-scale surveys, and hence the possibility of ground-breaking work in both basic physics and astrophysics.

  16. Suppression of large earthquakes by stress shadows: A comparison of Coulomb and rate-and-state failure

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.; Simpson, Robert W.

    1998-10-01

    Stress shadows generated by California's two most recent great earthquakes (1857 Fort Tejon and 1906 San Francisco) substantially modified 19th and 20th century earthquake history in the Los Angeles basin and in the San Francisco Bay area. Simple Coulomb failure calculations, which assume that earthquakes can be modeled as static dislocations in an elastic half-space, have done quite well at approximating how long the stress shadows, or relaxing effects, should last and at predicting where subsequent large earthquakes will not occur. There has, however, been at least one apparent exception to the predictions of such simple models. The 1911 M>6.0 earthquake near Morgan Hill, California, occurred at a relaxed site on the Calaveras fault. We examine how the more complex rate-and-state friction formalism based on laboratory experiments might have allowed the 1911 earthquake. Rate-and-state time-to-failure calculations are consistent with the occurrence of the 1911 event just 5 years after 1906 if the Calaveras fault was already close to failure before the effects of 1906. We also examine the likelihood that the entire 78 years of relative quiet (only four M≥6 earthquakes) in the bay area after 1906 is consistent with rate-and-state assumptions, given that the previous 7 decades produced 18 M≥6 earthquakes. Combinations of rate-and-state variables can be found that are consistent with this pattern of large bay area earthquakes, assuming that the rate of earthquakes in the 7 decades before 1906 would have continued had 1906 not occurred. These results demonstrate that rate-and-state offers a consistent explanation for the 78-year quiescence and the 1911 anomaly, although they do not rule out several alternate explanations.
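
    In the simple Coulomb framework discussed here, a static stress change maps onto an earthquake clock change through the long-term tectonic stressing rate. A one-line hedged illustration (the values are invented, not taken from this study):

      # Clock change implied by a static stress step (illustrative values).
      d_cfs_bar = -2.0        # stress shadow cast by a nearby great earthquake
      stressing_rate = 0.1    # long-term tectonic loading, bar/yr

      delay_yr = d_cfs_bar / stressing_rate
      print(f"Failure delayed by {abs(delay_yr):.0f} yr")   # ~20 yr of quiescence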

  17. Study of the Seismic Cycle of large Earthquakes in central Peru: Lima Region

    NASA Astrophysics Data System (ADS)

    Norabuena, E. O.; Quiroz, W.; Dixon, T. H.

    2009-12-01

    Since historical times, the Peruvian subduction zone has been the source of large and destructive earthquakes. The most damaging occurred on May 30, 1970, offshore of Peru's northern city of Chimbote, with a death toll of 70,000 people and several hundred million US dollars in property damage. More recently, three contiguous plate interface segments in southern Peru completed their seismic cycle, generating the 1996 Nazca (Mw 7.1), the 2001 Atico-Arequipa (Mw 8.4), and the 2007 Pisco (Mw 7.9) earthquakes. GPS measurements obtained between 1994 and 2001 by IGP-CIW and University of Miami-RSMAS in the central Andes of Peru and Bolivia were used to estimate their coseismic displacements and the late stage of interseismic strain accumulation. Here we focus on the central Peru-Lima region, which, with about 9,000,000 inhabitants, is located over a locked plate interface that has not broken in Mw 8 earthquakes since May 1940, September 1966, and October 1974. We use a network of 11 GPS monuments to estimate the interseismic velocity field, infer spatial variations of interplate coupling, and examine their relation to the background seismicity of the region.
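
    As a minimal sketch of the first step in such an analysis, the code below estimates a secular interseismic velocity from one GPS monument's daily position component by least squares, including annual and semiannual seasonal terms. The synthetic data, noise level, and rates are hypothetical, not values from this network.

```python
import numpy as np

def secular_velocity(t_yr, pos_mm):
    # Least-squares fit of offset, secular velocity, and annual/semiannual
    # seasonal terms to one position component (e.g. east) in mm.
    w = 2.0 * np.pi
    A = np.column_stack([np.ones_like(t_yr), t_yr,
                         np.cos(w * t_yr), np.sin(w * t_yr),
                         np.cos(2 * w * t_yr), np.sin(2 * w * t_yr)])
    coef, *_ = np.linalg.lstsq(A, pos_mm, rcond=None)
    return coef[1]  # velocity in mm/yr

# Synthetic monument moving at 15 mm/yr with seasonal wiggle and 2 mm noise.
rng = np.random.default_rng(1)
t = np.arange(1994.0, 2001.0, 1.0 / 365.25)
pos = 15.0 * (t - t[0]) + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)
print(secular_velocity(t, pos))  # ~15 mm/yr
```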

  18. Tsunamigenic Aftershocks From Large Strike-Slip Earthquakes: An Example From the November 16, 2000 Mw=8.0 New Ireland, Papua New Guinea, Earthquake

    NASA Astrophysics Data System (ADS)

    Geist, E.; Parsons, T.; Hirata, K.

    2001-12-01

    Two reverse mechanism earthquakes (M > 7) were triggered by the November 16, 2000 Mw=8.0 New Ireland (Papua New Guinea) left-lateral, strike-slip earthquake. The mainshock rupture initiated in the Bismarck Sea and propagated unilaterally to the southeast through the island of New Ireland and into the Solomon Sea. Although the mainshock caused a local seiche in the bay near Rabaul (New Britain) with a maximum runup of 0.9 m, the main tsunami observed on the south coast of New Britain, New Ireland, and Bougainville (maximum runup approximately 2.5-3 m) appears to have been caused by the Mw=7.4 aftershock 2.8 hours after the mainshock. It is unclear whether the second Mw=7.6 aftershock on November 17, 2000 (40 hours after the mainshock) also generated a tsunami. Analysis and modeling of the available tsunami information can constrain the source parameters of the tsunamigenic aftershock(s) and further elucidate the triggering mechanism. Preliminary stress modeling indicates that, because the first Mw=7.4 aftershock occurred near the rupture termination of the mainshock, stress calculations are especially sensitive to the location of both ruptures and the assumed coefficient of friction. A similar example of a triggered tsunamigenic earthquake occurred following the 1812 Wrightwood (M ~7.5) earthquake in southern California, as discussed by Deng and Sykes (1996, GRL, p. 1155-1158). In that case, they show that strike-slip rupture on the San Andreas fault produced coseismic stress changes that triggered the Santa Barbara Channel earthquake (M ~7.1) 13 days later. The mechanism for the Santa Barbara Channel event appears to have been oblique thrust. The November 2000 New Ireland earthquake sequence provides an important analog for studying the potential for tsunamigenic aftershocks following large San Andreas earthquakes in southern California.
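
    To illustrate the kind of stress calculation involved, the sketch below resolves a stress-change tensor onto a receiver fault using the Aki & Richards strike/dip/rake convention and forms the Coulomb stress change. The stress tensor and receiver geometry are placeholders, and the friction coefficient, to which the abstract notes the results are especially sensitive, is an assumed value.

```python
import numpy as np

def fault_vectors(strike, dip, rake):
    # Unit fault-normal and slip vectors (Aki & Richards convention),
    # coordinates x = north, y = east, z = down; angles in degrees.
    phi, dlt, lam = np.radians([strike, dip, rake])
    n = np.array([-np.sin(dlt) * np.sin(phi),
                   np.sin(dlt) * np.cos(phi),
                  -np.cos(dlt)])
    s = np.array([np.cos(lam) * np.cos(phi) + np.cos(dlt) * np.sin(lam) * np.sin(phi),
                  np.cos(lam) * np.sin(phi) - np.cos(dlt) * np.sin(lam) * np.cos(phi),
                  -np.sin(lam) * np.sin(dlt)])
    return n, s

def delta_cfs(dstress, strike, dip, rake, mu=0.4):
    # Coulomb stress change on a receiver fault from a stress-change tensor
    # (MPa, tension positive). mu is an assumed friction coefficient.
    n, s = fault_vectors(strike, dip, rake)
    t = dstress @ n          # traction change on the fault plane
    d_tau = t @ s            # shear stress change in the slip direction
    d_sig = t @ n            # normal stress change (+ = unclamping)
    return d_tau + mu * d_sig

# Hypothetical stress-change tensor (MPa) resolved on a reverse-mechanism
# receiver fault; strike/dip/rake are placeholder values.
ds = np.array([[0.05, 0.02, 0.0], [0.02, -0.03, 0.01], [0.0, 0.01, -0.02]])
print(delta_cfs(ds, strike=140.0, dip=40.0, rake=90.0))
```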

  19. Disaster Metrics: Evaluation of de Boer's Disaster Severity Scale (DSS) Applied to Earthquakes.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; McCord, Caitlin M; Sherak, Raphael A G; Hsu, Edberdt B; Kelen, Gabor D

    2015-02-01

    Quantitative measurement of the medical severity following multiple-casualty events (MCEs) is an important goal in disaster medicine. In 1990, de Boer proposed a 13-point, 7-parameter scale called the Disaster Severity Scale (DSS). Parameters include cause, duration, radius, number of casualties, nature of injuries, rescue time, and effect on the surrounding community. Hypothesis: This study aimed to examine the reliability and dimensionality (number of salient themes) of de Boer's DSS through its application to 144 discrete earthquake events. A search for earthquake events was conducted via National Oceanic and Atmospheric Administration (NOAA) and US Geological Survey (USGS) databases. Two experts in the field of disaster medicine independently reviewed and assigned scores for parameters that had no data readily available (nature of injuries, rescue time, and effect on surrounding community), and differences were reconciled via consensus. Principal Component Analysis was performed using SPSS Statistics for Windows Version 22.0 (IBM Corp; Armonk, New York, USA) to evaluate the reliability and dimensionality of the DSS. A total of 144 individual earthquakes from 2003 through 2013 were identified and scored. Of 13 points possible, the mean score was 6.04, the mode = 5, minimum = 4, maximum = 11, and standard deviation = 2.23. Three parameters in the DSS had zero variance (i.e., the parameter received the same score in all 144 earthquakes). Because of their zero contribution to variance, these three parameters (cause, duration, and radius) were removed before running the statistical analysis. Cronbach's alpha, a coefficient of internal consistency, for the remaining four parameters was robust at 0.89. Principal Component Analysis showed uni-dimensional characteristics, with only one component having an eigenvalue greater than one, at 3.17. The 4-parameter DSS, however, suffered from restriction of scoring range on both parameter and scale levels. Jan de Boer
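
    For concreteness, a minimal numpy sketch of the two statistics reported above: Cronbach's alpha for internal consistency and the eigenvalues of the item correlation matrix (one eigenvalue above one suggesting a single dimension, per the Kaiser criterion). The demonstration scores are synthetic, not the study's earthquake data.

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_events, k_items) matrix of parameter scores.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def pca_eigenvalues(scores):
    # Eigenvalues of the item correlation matrix, sorted descending.
    corr = np.corrcoef(scores, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

# Hypothetical correlated 4-parameter scores for 144 events.
rng = np.random.default_rng(0)
base = rng.integers(1, 4, size=(144, 1))
demo = np.clip(base + rng.integers(0, 2, size=(144, 4)), 1, 3)
print(cronbach_alpha(demo), pca_eigenvalues(demo))
```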

  20. Low frequency (<1Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    We present the analysis of low frequency (<1 Hz) simulations of historical and hypothetical earthquakes in Central Mexico, using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding of the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters Mexico has faced in recent decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates of the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006) and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a
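
    As a rough illustration of waveform validation of this kind, the sketch below scores the agreement between observed and synthetic records on simple metrics (peak velocity, energy) mapped to a 0-100 scale. The exponential mapping is a simplified stand-in, not the exact Mayhew and Olsen (2010) formulation, and the records are synthetic placeholders.

```python
import numpy as np

def metric_gof(obs, syn):
    # Map the normalized misfit of one waveform-derived metric to a
    # 0-100 score (100 = perfect agreement). Simplified stand-in for a
    # published goodness-of-fit measure.
    return 100.0 * np.exp(-abs(obs - syn) / min(abs(obs), abs(syn)))

def record_metrics(v, dt):
    # Two example metrics from a ground-velocity record valid below 1 Hz.
    return np.max(np.abs(v)), np.sum(v ** 2) * dt

# Placeholder "observed" record and a synthetic that underpredicts it by 20%.
rng = np.random.default_rng(2)
obs = rng.normal(0, 0.1, 4000)
syn = 0.8 * obs
print([metric_gof(o, s) for o, s in zip(record_metrics(obs, 0.01),
                                        record_metrics(syn, 0.01))])
```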

  1. Neotectonic architecture of Taiwan and its implications for future large earthquakes

    NASA Astrophysics Data System (ADS)

    Shyu, J. Bruce H.; Sieh, Kerry; Chen, Yue-Gau; Liu, Char-Shine

    2005-08-01

    The disastrous effects of the 1999 Chi-Chi earthquake in Taiwan demonstrated an urgent need for better knowledge of the island's potential earthquake sources. Toward this end, we have prepared a neotectonic map of Taiwan. The map and related cross sections are based upon structural and geomorphic expression of active faults and folds both in the field and on shaded relief maps prepared from a 40-m resolution digital elevation model, augmented by geodetic and seismologic data. The active tandem suturing and tandem disengagement of a volcanic arc and a continental sliver to and from the Eurasian continental margin have created two neotectonic belts in Taiwan. In the southern part of the orogen both belts are in the final stage of consuming oceanic crust. Collision and suturing occur in the middle part of both belts, and postcollisional collapse and extension dominate the island's northern and northeastern flanks. Both belts consist of several distinct neotectonic domains. Seven domains (Kaoping, Chiayi, Taichung, Miaoli, Hsinchu, Ilan, and Taipei) constitute the western belt, and four domains (Lutao-Lanyu, Taitung, Hualien, and Ryukyu) make up the eastern belt. Each domain is defined by a distinct suite of active structures. For example, the Chelungpu fault (source of the 1999 earthquake) and its western neighbor, the Changhua fault, are the principal components of the Taichung Domain, whereas both its neighboring domains, the Chiayi and Miaoli Domains, are dominated by major blind faults. In most of the domains the size of the principal active fault is large enough to produce future earthquakes with magnitudes in the mid-7 range.
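
    As one generic example of how mapped fault dimensions translate into expected magnitudes, the sketch below applies the Wells and Coppersmith (1994) all-slip-type regression between surface rupture length and moment magnitude. This regression is a common first-order tool, not the specific method of the mapping study itself.

```python
import numpy as np

def magnitude_from_rupture_length(length_km):
    # Wells & Coppersmith (1994) all-slip-type regression:
    # M = 5.08 + 1.16 * log10(surface rupture length in km).
    return 5.08 + 1.16 * np.log10(length_km)

# A ~100 km principal fault implies roughly a "mid-7" event.
print(magnitude_from_rupture_length(100.0))  # ~7.4
```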

  2. Spontaneous, large stick-slip events in rotary-shear experiments as analogous to earthquake rupture

    NASA Astrophysics Data System (ADS)

    Zu, Ximeng; Reches, Zeev

    2015-04-01

    Experimental stick-slips are commonly envisioned as laboratory analogues of the spontaneous fault slip during natural earthquakes (Brace & Byerlee, 1966). However, typical experimental stick-slips are tiny events with slip distances of up to a few tens of microns. To close the gap between such events and natural earthquakes, we developed a new method that produces spontaneous stick-slips with large displacements on our rotary shear apparatus (Reches & Lockner, 2010). In this method, the controlling program continuously calculates the real-time power-density (PD = slip-velocity times shear stress) of the experimental fault. A feedback loop then modifies the slip-velocity to match the real-time PD to the requested PD. The stick-slips therefore occur spontaneously, while slip velocity and duration are not controlled by the operator. We present a series of tens of stick-slip events along granite and diorite experimental faults with 0.0001-1.3 m of total slip and slip-velocities up to 0.45 m/s. Depending on the magnitude of the requested PD, we recognize three types of events: (1) stick-slips with a nucleation slip that initiates ~0.1 s before the main slip and is characterized by a temporal increase of shear stress, normal stress, and fault dilation; (2) events resembling slip-pulse behavior, with abrupt acceleration, intense dynamic weakening, and subsequent strength recovery; and (3) small creep events during quasi-continuous, low-velocity slip with tiny changes of stress and dilation. The energy-displacement catalog of type (1) and (2) events shows good agreement with previous slip-pulse experiments and natural earthquakes (Chang et al., 2012). The present experiments indicate that power-density control is a promising experimental approach for earthquake simulations.
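
    A minimal sketch of the power-density feedback described above, assuming a simple proportional controller: the commanded slip velocity is nudged so the measured PD = velocity x shear stress tracks the requested PD. The gain, velocity bounds, and stress values are hypothetical placeholders; the actual control program is not given in the abstract.

```python
def pd_feedback_step(requested_pd, v_cmd, shear_stress, gain=0.1,
                     v_min=1e-6, v_max=1.0):
    # One proportional-feedback iteration. Stick-slip timing remains
    # spontaneous because only PD, not velocity or duration, is prescribed.
    measured_pd = v_cmd * shear_stress          # W/m^2
    error = requested_pd - measured_pd
    v_cmd += gain * error / max(shear_stress, 1e-9)
    return min(max(v_cmd, v_min), v_max)        # clamp to actuator limits

# Example: the loop adjusts the commanded velocity as shear stress evolves.
v = 0.01
for tau in (2.0e6, 4.0e6, 8.0e6):   # shear stress in Pa (placeholders)
    v = pd_feedback_step(requested_pd=5.0e4, v_cmd=v, shear_stress=tau)
    print(v)
```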

  3. Short- and long-term earthquake triggering along the strike-slip Kunlun fault, China: Insights gained from the Ms 8.1 Kunlun earthquake and other modern large earthquakes

    NASA Astrophysics Data System (ADS)

    Xie, Chaodi; Lei, Xinglin; Wu, Xiaoping; Hu, Xionglin

    2014-03-01

    Following the 2001 Ms 8.1 Kunlun earthquake, more than 10 years of earthquake records, in addition to more than a century of records of large earthquakes, provide us with a chance to examine short-term (days to a few years) and long-term (years to decades) seismic triggering following a magnitude ~ 8 continental earthquake along a very long strike-slip fault, the Kunlun fault system, located in northern Tibet, China. Based on calculations of coseismic Coulomb stress changes (ΔCFS) from the large earthquakes and post-seismic stress changes due to viscoelastic stress relaxation in the lower crust and upper mantle, we examined the temporal evolution of seismic triggering. The ETAS (epidemic-type aftershock sequence) model shows that the seismic rate in the aftershock area over ~ 10 years was higher than the background seismicity before the mainshock. Moreover, we discuss long-term (years to decades) triggering and the evolution of stress changes for the sequence of five large earthquakes of M ≥ 7.0 that have ruptured the Kunlun fault system since 1937. All subsequent events of M ≥ 7.0 occurred in regions of positive accumulated ΔCFS. These results show that short-term (up to 200 days in our case) triggering along the strike-slip Kunlun fault is governed by coseismic stress changes, while long-term triggering is due in part to post-seismic Coulomb stress changes resulting from viscoelastic relaxation.
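
    For reference, the sketch below evaluates the conditional intensity of the temporal ETAS model used in such rate comparisons. The parameter values are illustrative defaults, not fits to the Kunlun sequence.

```python
import numpy as np

def etas_rate(t, event_times, event_mags, mu=0.2, k=0.05, alpha=1.5,
              c=0.01, p=1.1, m_ref=4.0):
    # Temporal ETAS conditional intensity:
    #   lambda(t) = mu + sum_i k * exp(alpha*(M_i - m_ref)) / (t - t_i + c)^p
    # mu is the background rate; the sum is over events before time t.
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(k * np.exp(alpha * (event_mags[past] - m_ref))
                       / (dt + c) ** p)

# Rate 30 days after an M 8.1 mainshock at t = 0 (times in days).
print(etas_rate(30.0, np.array([0.0]), np.array([8.1])))
```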

  4. Assessment of Ionospheric Anomaly Prior to the Large Earthquake: 2D and 3D Analysis in Space and Time for the 2011 Tohoku Earthquake (Mw9.0)

    NASA Astrophysics Data System (ADS)

    Hattori, Katsumi; Hirooka, Shinji; Han, Peng

    2016-04-01

    Ionospheric anomalies possibly associated with large earthquakes have been reported by many researchers. In this paper, Total Electron Content (TEC) and tomography analyses are applied to investigate the spatial and temporal distributions of ionospheric electron density prior to the 2011 Off the Pacific Coast of Tohoku earthquake (Mw 9.0). Results show significant TEC enhancements and an interesting three-dimensional structure prior to the main shock. As for temporal TEC changes, the TEC value increased remarkably 3-4 days before the earthquake, when the geomagnetic condition was relatively quiet. In addition, the abnormal TEC enhancement area remained stalled above Japan during this period. Tomographic results show that the three-dimensional electron density decreases around 250 km altitude above the epicenter (with the peak located just east of the epicenter) and increases over nearly the entire region between 300 and 400 km.
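
    As an illustrative sketch of how such TEC enhancements are typically flagged, the code below applies a sliding-window mean-and-sigma test of the kind common in ionospheric precursor studies; the window length, threshold, and synthetic series are assumed values, not the paper's exact procedure.

```python
import numpy as np

def tec_anomalies(tec, window=15, k=2.0):
    # Flag days on which TEC departs from the mean of the preceding
    # `window` days by more than k standard deviations.
    tec = np.asarray(tec, dtype=float)
    flags = np.zeros(tec.size, dtype=bool)
    for i in range(window, tec.size):
        ref = tec[i - window:i]
        flags[i] = abs(tec[i] - ref.mean()) > k * ref.std(ddof=1)
    return flags

# Synthetic daily TEC series with an enhancement 3 days before its end.
rng = np.random.default_rng(3)
series = 20.0 + rng.normal(0, 1.0, 60)
series[-3] += 6.0
print(np.nonzero(tec_anomalies(series))[0])
```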

  5. Efficient, large scale separation of coal macerals

    SciTech Connect

    Dyrkacz, G.R.; Bloomquist, C.A.A.

    1988-01-01

    The authors believe that the separation of macerals by continuous flow centrifugation offers a simple technique for the large scale separation of macerals. With relatively little cost (approximately $10K), it provides an opportunity for obtaining quite pure macerals