Science.gov

Sample records for nonconservative earthquake model

  1. Simulation models for conservative and nonconservative solute transport in streams

    USGS Publications Warehouse

    Runkel, R.L.

    1995-01-01

    Solute transport in streams is governed by a suite of hydrologic and chemical processes. Interactions between hydrologic processes and chemical reactions may be quantified through a combination of field-scale experimentation and simulation modeling. Two mathematical models that simulate conservative and nonconservative solute transport in streams are presented. A model for conservative solutes that considers One Dimensional Transport with Inflow and Storage (OTIS) may be used in conjunction with tracer-dilution methods to quantify hydrologic transport processes (advection, dispersion, lateral inflow and transient storage). For nonconservative solutes, a model known as OTEQ may be used to quantify chemical processes within the context of hydrologic transport. OTEQ combines the transport mechanisms in OTIS with a chemical equilibrium sub-model that considers complexation, precipitation/dissolution and sorption. OTEQ has been used to quantify processes affecting trace metals in two streams in the Rocky Mountains of Colorado, USA.
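
    For reference, the governing equations solved by OTIS take the following advection-dispersion-storage form (reproduced from the OTIS model documentation as I recall it, so treat the notation as an approximation; C and C_S are main-channel and storage-zone concentrations, Q discharge, A and A_S channel and storage cross-sections, D dispersion, q_L lateral inflow, and α the storage exchange coefficient):

    ```latex
    \[
    \frac{\partial C}{\partial t}
      = -\frac{Q}{A}\frac{\partial C}{\partial x}
      + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
      + \frac{q_L}{A}\left(C_L - C\right) + \alpha\left(C_S - C\right),
    \qquad
    \frac{d C_S}{d t} = \alpha \frac{A}{A_S}\left(C - C_S\right)
    \]
    ```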

  2. Improvements in Nonconservative Force Modelling for TOPEX/POSEIDON

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Rowlands, David D.; Chinn, Douglas S.; Kubitschek, Daniel G.; Luthcke, Scott B.; Zelensky, Nikita B.; Born, George H.

    1999-01-01

    It was recognized prior to the launch of TOPEX/POSEIDON that the most important source of orbit error, other than the gravity field, was nonconservative force modelling. Accordingly, an intensive effort was undertaken to study the nonconservative forces acting on the spacecraft using detailed finite element modelling (Antreasian, 1992; Antreasian and Rosborough, 1992). However, this detailed modelling was not suitable for orbit determination, and a simplified eight-plate "box-wing" model was developed that took into account the aggregate effect of the various materials and associated thermal properties of each spacecraft surface. The a priori model was later tuned post launch with actual tracking data [Nerem et al., 1994; Marshall and Luthcke, 1994; Marshall et al., 1995]. More recently, Kubitschek [1997] developed a newer box-wing model for TOPEX/POSEIDON, which included updated material properties, accounted for solar array deflection, and modelled solar array warping due to thermal effects. We have used this updated model as a basis to retune the macromodel for TOPEX/POSEIDON, and report on preliminary results using at least 36 cycles (one year) of SLR and DORIS data in 1993.

  3. Phase Diagram and Density Large Deviations of a Nonconserving ABC Model

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Mukamel, D.

    2012-02-01

    The effect of particle-nonconserving processes on the steady state of driven diffusive systems is studied within the context of a generalized ABC model. It is shown that in the limit of slow nonconserving processes, the large deviation function of the overall particle density can be computed by making use of the steady-state density profile of the conserving model. In this limit one can define a chemical potential and identify first order transitions via Maxwell’s construction, similarly to what is done in equilibrium systems. This method may be applied to other driven models subjected to slow nonconserving dynamics.

  4. Further seismic properties of a spring-block earthquake model

    NASA Astrophysics Data System (ADS)

    Angulo-Brown, F.; Muñoz-Diosdado, A.

    1999-11-01

    Within the context of self-organized critical systems, Olami, Feder, and Christensen (OFC, 1992) proposed a spring-block earthquake model. This model is non-conservative and reproduces some seismic properties, such as the Gutenberg-Richter law for the size distribution of earthquakes. In this paper we study further seismic properties of the OFC model and find stair-shaped curves of cumulative seismicity. We also find that in the long term these curves have a characteristic straight-line envelope of constant slope that acts as an attractor of the cumulative seismicity, and that these slopes depend on the system size and cannot be arbitrarily large. Finally, we report that in the OFC model the recurrence time distribution for large events follows a log-normal behaviour for some non-conservation levels.
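
    To make the nonconservation concrete, here is a minimal sketch of the OFC automaton (my own illustrative implementation, not the authors' code; the lattice size, drive scheme and α = 0.2 level are placeholder choices). A failing site drops to zero and passes a fraction α of its stress to each of its four neighbours, so a fraction 1 − 4α is dissipated whenever α < 0.25:

    ```python
    import numpy as np

    def ofc_avalanche_sizes(L=64, alpha=0.2, n_events=5000, seed=0):
        rng = np.random.default_rng(seed)
        stress = rng.uniform(0.0, 1.0, size=(L, L))
        sizes = []
        for _ in range(n_events):
            stress += 1.0 - stress.max()        # drive the most loaded site to failure
            active = list(zip(*np.where(stress >= 1.0)))
            size = 0
            while active:
                i, j = active.pop()
                if stress[i, j] < 1.0:          # site may have been queued twice
                    continue
                s, stress[i, j] = stress[i, j], 0.0
                size += 1
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:   # open boundaries lose stress
                        stress[ni, nj] += alpha * s
                        if stress[ni, nj] >= 1.0:
                            active.append((ni, nj))
            sizes.append(size)
        return np.array(sizes)

    sizes = ofc_avalanche_sizes()
    # Log-binned counts of `sizes` approximate a power law, mimicking the
    # Gutenberg-Richter relation discussed in the abstract.
    ```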

  5. An uncertainty inclusive un-mixing model to identify tracer non-conservativeness

    NASA Astrophysics Data System (ADS)

    Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; hUallacháin, Daire Ó.

    2015-04-01

    Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in the sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain, including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers, relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data were collected in an 11 km2 river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time-integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (<40°C), sieved (125 µm) and analysed for mineral magnetic susceptibility, anhysteretic remanence and iso-thermal remanence, and geochemical elements Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn (following microwave-assisted acid digestion). Discriminant analysis was used to reduce the number of tracers before un-mixing. Tracer non-conservativeness

  6. Nonconservative force model parameter estimation strategy for TOPEX/Poseidon precision orbit determination

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Marshall, J. A.

    1992-01-01

    The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model evaluated.
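
    A hedged sketch of the kind of flat-plate force evaluation a 'box-wing' model performs is given below. It uses the standard specular-plus-diffuse flat-plate radiation-pressure formula from the orbit-determination literature; all areas, optical coefficients and the mass are invented placeholders, not the tuned TOPEX/Poseidon macromodel values:

    ```python
    import numpy as np

    P_SUN = 4.56e-6  # N/m^2, solar radiation pressure at 1 AU

    def plate_accel(sun_hat, normal, area, rho_spec, rho_diff, mass):
        """Acceleration on one flat plate (specular + diffuse reflection)."""
        cos_t = float(np.dot(normal, sun_hat))
        if cos_t <= 0.0:                 # plate faces away from the Sun
            return np.zeros(3)
        return -(P_SUN * area * cos_t / mass) * (
            (1.0 - rho_spec) * sun_hat
            + 2.0 * (rho_spec * cos_t + rho_diff / 3.0) * normal)

    plates = [  # (unit normal, area m^2, specular refl., diffuse refl.)
        (np.array([ 1.0, 0.0, 0.0]), 3.0, 0.2, 0.3),
        (np.array([-1.0, 0.0, 0.0]), 3.0, 0.2, 0.3),
        (np.array([ 0.0, 1.0, 0.0]), 5.0, 0.3, 0.2),
        (np.array([ 0.0,-1.0, 0.0]), 5.0, 0.3, 0.2),
        (np.array([ 0.0, 0.0, 1.0]), 6.0, 0.1, 0.4),
        (np.array([ 0.0, 0.0,-1.0]), 6.0, 0.1, 0.4),
        (np.array([ 0.0, 0.0, 1.0]), 25.0, 0.05, 0.1),  # solar array front
        (np.array([ 0.0, 0.0,-1.0]), 25.0, 0.05, 0.1),  # solar array back
    ]

    sun_hat = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
    total_accel = sum(plate_accel(sun_hat, n, A, rs, rd, mass=2400.0)
                      for n, A, rs, rd in plates)
    ```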

  7. Modeling earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Charpentier, Arthur; Durand, Marilou

    2015-07-01

    In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, whose parameters are functions of the magnitude of the previous earthquake. We use these two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
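
    The alternating simulation scheme described above can be sketched as follows. The Pareto and Gamma samplers mirror the abstract, but the link functions tying the tail index to the previous waiting time, and the Gamma scale to the previous magnitude, are invented placeholders rather than the paper's fitted regressions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def draw_magnitude(prev_wait_days, m_min=4.0):
        alpha = 1.0 + 0.5 * np.exp(-prev_wait_days / 365.0)  # hypothetical link
        return m_min * rng.uniform() ** (-1.0 / alpha)        # Pareto(m_min, alpha)

    def draw_waiting_time(prev_mag):
        shape, scale = 1.5, 30.0 * (prev_mag / 4.0)           # hypothetical link
        return rng.gamma(shape, scale)                        # days

    mag, waits, mags = 5.0, [], []
    for _ in range(10000):
        w = draw_waiting_time(mag)
        mag = draw_magnitude(w)
        waits.append(w)
        mags.append(mag)

    # Probability of at least one M>=6 event in a year, from the simulation:
    times = np.cumsum(waits)
    years = (times / 365.0).astype(int)
    hit_years = np.unique(years[np.array(mags) >= 6.0])
    p_m6 = len(hit_years) / (years.max() + 1)
    ```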

  8. Observational and energetics constraints on the non-conservation of potential/Conservative Temperature and implications for ocean modelling

    NASA Astrophysics Data System (ADS)

    Tailleux, Rémi

    2015-04-01

    This paper seeks to elucidate the fundamental differences between the nonconservation of potential temperature and that of Conservative Temperature, in order to better understand the relative merits of each quantity for use as the heat variable in numerical ocean models. The main result is that potential temperature is found to behave similarly to entropy, in the sense that its nonconservation primarily reflects production/destruction by surface heat and freshwater fluxes; in contrast, the nonconservation of Conservative Temperature is found to reflect primarily the overall compressible work of expansion/contraction. This paper then shows how this can be exploited to constrain the nonconservation of potential temperature and entropy from observed surface heat fluxes, and the nonconservation of Conservative Temperature from published estimates of the mechanical energy budgets of ocean numerical models. Finally, the paper shows how to modify the evolution equation for potential temperature so that it is exactly equivalent to using an exactly conservative evolution equation for Conservative Temperature, as was recently recommended by IOC et al. (2010). This result should in principle allow ocean modellers to test the equivalence between the two formulations, and to indirectly investigate to what extent the budget of derived nonconservative quantities such as buoyancy and entropy can be expected to be accurately represented in ocean models.

  9. Noether's theorem for non-conservative Hamilton system based on El-Nabulsi dynamical model extended by periodic laws

    NASA Astrophysics Data System (ADS)

    Long, Zi-Xuan; Zhang, Yi

    2014-11-01

    This paper focuses on the Noether symmetries and the conserved quantities for both holonomic and nonholonomic systems based on a new non-conservative dynamical model introduced by El-Nabulsi. First, the El-Nabulsi dynamical model, which is based on a fractional integral extended by periodic laws, is introduced, and El-Nabulsi-Hamilton's canonical equations for the non-conservative Hamilton system with holonomic or nonholonomic constraints are established. Second, the definitions and criteria of El-Nabulsi-Noether symmetrical transformations and quasi-symmetrical transformations are presented in terms of the invariance of the El-Nabulsi-Hamilton action under the infinitesimal transformations of the group. Finally, Noether's theorems for the non-conservative Hamilton system under the El-Nabulsi dynamical system are established, which reveal the relationship between the Noether symmetry and the conserved quantity of the system.
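
    For background, the unextended fractional action-like integral underlying the El-Nabulsi model is usually written as below (quoted from the fractional-action literature as I recall it, so treat it as an assumption; the periodic-law extension discussed in the abstract replaces the power-law kernel):

    ```latex
    \[
    S^{\alpha}[q] \;=\; \frac{1}{\Gamma(\alpha)}
      \int_{a}^{t} L\big(\tau,\, q(\tau),\, \dot{q}(\tau)\big)\,
      (t-\tau)^{\alpha-1}\, d\tau , \qquad 0<\alpha\le 1,
    \]
    % whose Euler-Lagrange equations acquire nonconservative, friction-like
    % terms; alpha = 1 recovers the classical action.
    ```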

  10. Non-conservative GNSS satellite modeling: long-term orbit behavior

    NASA Astrophysics Data System (ADS)

    Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.; Sosnica, K.; Fritsche, M.

    2012-04-01

    Modeling of non-conservative forces is a key issue for precise orbit determination of GNSS satellites. Furthermore, mismodeling of these forces has the potential to explain orbit-related frequencies found in GPS-derived station coordinates and geocenter, as well as the observed bias in the SLR-GPS residuals. Due to the complexity of the non-conservative forces, they have usually been compensated by empirical models based on the real in-orbit behavior of the satellites. Recent studies have focused on the physical/analytical modeling of solar radiation pressure, Earth radiation pressure, thermal effects and antenna thrust, among other effects. However, it has been demonstrated that pure physical models fail to predict the real orbit behavior with sufficient accuracy. In this study we use a recently developed solar radiation pressure model based on the physical interaction between solar radiation and the satellite, but also capable of fitting the GNSS tracking data, called the adjustable box-wing model. Furthermore, Earth radiation pressure and antenna thrust are included as a priori accelerations. The adjustable parameters of the box-wing model are surface optical properties, the so-called Y-bias and a parameter capable of compensating for non-nominal orientation of the solar panels. Using the adjustable box-wing model a multi-year GPS/GLONASS solution has been computed, using a processing scheme derived from CODE (Center for Orbit Determination in Europe). This multi-year solution allows studying the long-term behavior of satellite orbits, box-wing parameters and geodetic parameters like station coordinates and geocenter. Moreover, the accuracy of GNSS orbits is assessed by using SLR data. This evaluation also allows testing whether the current SLR-GPS bias could be further reduced.

  11. Nonextensive models for earthquakes

    NASA Astrophysics Data System (ADS)

    Silva, R.; França, G. S.; Vilar, C. S.; Alcaniz, J. S.

    2006-02-01

    We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragment, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.
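
    For reference, the cumulative magnitude distribution derived in this class of nonextensive fragment-asperity models is usually quoted in the q-exponential form below (reproduced from the nonextensive-seismology literature; the notation, with a the energy-size proportionality constant, may differ slightly from the paper's):

    ```latex
    \[
    \log\!\left(\frac{N_{>m}}{N}\right)
      \;=\; \left(\frac{2-q}{1-q}\right)
            \log\!\left[\,1-\left(\frac{1-q}{2-q}\right)\frac{10^{2m}}{a^{2/3}}\,\right],
    \]
    % which approaches Gutenberg-Richter behaviour asymptotically and fits
    % catalogue data with the single nonextensive parameter q.
    ```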

  12. Nonextensive models for earthquakes.

    PubMed

    Silva, R; França, G S; Vilar, C S; Alcaniz, J S

    2006-02-01

    We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragment, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude. PMID:16605393

  13. Two models for earthquake forerunners

    USGS Publications Warehouse

    Mjachkin, V.I.; Brace, W.F.; Sobolev, G.A.; Dieterich, J.H.

    1975-01-01

    Similar precursory phenomena have been observed before earthquakes in the United States, the Soviet Union, Japan, and China. Two quite different physical models are used to explain these phenomena. According to a model developed by US seismologists, the so-called dilatancy diffusion model, the earthquake occurs near maximum stress, following a period of dilatant crack expansion. Diffusion of water in and out of the dilatant volume is required to explain the recovery of seismic velocity before the earthquake. According to a model developed by Soviet scientists growth of cracks is also involved but diffusion of water in and out of the focal region is not required. With this model, the earthquake is assumed to occur during a period of falling stress and recovery of velocity here is due to crack closure as stress relaxes. In general, the dilatancy diffusion model gives a peaked precursor form, whereas the dry model gives a bay form, in which recovery is well under way before the earthquake. A number of field observations should help to distinguish between the two models: study of post-earthquake recovery, time variation of stress and pore pressure in the focal region, the occurrence of pre-existing faults, and any changes in direction of precursory phenomena during the anomalous period. © 1975 Birkhäuser Verlag.

  14. The Modeling of Time-Varying Stream Water Age Distributions: Preliminary Investigations with Non-Conservative Solutes

    NASA Astrophysics Data System (ADS)

    Wilusz, D. C.; Harman, C. J.; Ball, W. P.

    2014-12-01

    Modeling the dynamics of chemical transport from the landscape to streams is necessary for water quality management. Previous work has shown that estimates of the distribution of water age in streams, the transit time distribution (TTD), can improve prediction of the concentration of conservative tracers (i.e., ones that "follow the water") based on upstream watershed inputs. A major challenge, however, has been accounting for climate and transport variability when estimating TTDs at the catchment scale. In this regard, Harman (2014, in review) proposed the Omega modeling framework capable of using watershed hydraulic fluxes to approximate the time-varying TTD. The approach was previously applied to the Plynlimon research watershed in Wales to simulate stream concentration dynamics of a conservative tracer (chloride) including 1/f attenuation of the power spectra density. In this study we explore the extent to which TTDs estimated by the Omega model vary with the concentration of non-conservative tracers (i.e., ones whose concentrations are also affected by transformations and interactions with other phases). First we test the hypothesis that the TTD calibrated in Plynlimon can explain a large part of the variation in non-conservative stream water constituents associated with storm flow (acidity, Al, DOC, Fe) and base flow (Ca, Si). While controlling for discharge, we show a correlation between the percentage of water of different ages and constituent concentration. Second, we test the hypothesis that TTDs help explain variation in stream nitrate concentration, which is of particular interest for pollution control but can be highly non-conservative. We compare simulation runs from Plynlimon and the agricultural Choptank watershed in Maryland, USA. Following a top-down approach, we estimate nitrate concentration as if it were a conservative tracer and examine the structure of residuals at different temporal resolutions. Finally, we consider model modifications to
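
    The role a TTD plays is easy to sketch: stream concentration is the age-weighted average of past inputs. The toy below uses a stationary gamma-distributed TTD purely for illustration; the Omega framework named in the abstract instead derives time-varying TTDs from the catchment's water fluxes:

    ```python
    import numpy as np
    from scipy.stats import gamma

    dt = 1.0                                      # days
    ages = np.arange(dt, 3650.0, dt)              # water ages up to ~10 years
    ttd = gamma(a=1.2, scale=250.0).pdf(ages)     # illustrative TTD shape
    ttd /= ttd.sum() * dt                         # normalise to unit mass

    rng = np.random.default_rng(0)
    c_in = 5.0 + rng.normal(0.0, 1.0, size=5000)  # noisy input concentration

    # Convolving the input with the TTD damps the high frequencies, giving
    # the 1/f-like attenuation of the stream signal noted in the abstract.
    c_stream = np.convolve(c_in, ttd * dt)[:len(c_in)]
    ```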

  15. Bayesian kinematic earthquake source models

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
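
    A minimal sketch of the tempered MCMC idea on a toy two-parameter problem is shown below (my own illustration, not the authors' algorithm: a production TMCMC chooses the tempering increments adaptively and runs far more parallel chains than this):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([1.0, -2.0])

    def log_like(theta):
        # Toy likelihood: the data favour theta = (1.0, -2.0).
        return -0.5 * np.sum((theta - target) ** 2 / 0.1 ** 2, axis=-1)

    n = 2000
    samples = rng.uniform(-5.0, 5.0, size=(n, 2))   # draws from a uniform prior
    beta = 0.0
    while beta < 1.0:
        d_beta = min(0.2, 1.0 - beta)               # fixed schedule for simplicity
        beta += d_beta
        ll = log_like(samples)
        w = np.exp(d_beta * (ll - ll.max()))        # tempering-increment weights
        samples = samples[rng.choice(n, size=n, p=w / w.sum())]  # resample
        # One Metropolis move per chain at the current temperature; chains are
        # independent, so this step parallelises naturally.
        prop = samples + rng.normal(0.0, 0.2, size=samples.shape)
        accept = np.log(rng.uniform(size=n)) < beta * (log_like(prop) - log_like(samples))
        samples[accept] = prop[accept]

    # `samples` now approximates the posterior PDF of the two parameters.
    ```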

  16. An improved GRACE monthly gravity field solution by modeling the non-conservative acceleration and attitude observation errors

    NASA Astrophysics Data System (ADS)

    Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze

    2016-02-01

    The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
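
    The estimation principle named in the abstract, weighted least-squares adjustment, is sketched below on synthetic placeholders (the design matrix stands in for the GRACE observation equations, and sigma for the per-observation error levels):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 200, 5
    A = rng.normal(size=(m, n))                # design matrix (partials)
    x_true = rng.normal(size=n)                # "gravity field" parameters
    sigma = rng.uniform(0.5, 2.0, size=m)      # heteroscedastic noise levels
    y = A @ x_true + rng.normal(0.0, sigma)    # observations

    W = np.diag(1.0 / sigma ** 2)              # weights from observation errors
    N = A.T @ W @ A                            # normal matrix
    x_hat = np.linalg.solve(N, A.T @ W @ y)    # adjusted parameters
    cov_x = np.linalg.inv(N)                   # formal error covariance
    ```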

  17. An improved GRACE monthly gravity field solution by modeling the non-conservative acceleration and attitude observation errors

    NASA Astrophysics Data System (ADS)

    Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze

    2016-06-01

    The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.

  18. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  19. GEM - The Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only to experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public at large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  20. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  1. New geological perspectives on earthquake recurrence models

    SciTech Connect

    Schwartz, D.P.

    1997-02-01

    In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured once, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine--the cycle of stress build-up and release.

  2. Asperity Model of an Earthquake - Dynamic Problem

    SciTech Connect

    Johnson, Lane R.; Nadeau, Robert M.

    2003-05-02

    We develop an earthquake asperity model that explains previously determined empirical scaling relationships for repeating earthquakes along the San Andreas fault in central California. The model assumes that motion on the fault is resisted primarily by a patch of small strong asperities that interact with each other to increase the amount of displacement needed to cause failure. This asperity patch is surrounded by a much weaker fault that continually creeps in response to tectonic stress. Extending outward from the asperity patch into the creeping part of the fault is a shadow region where a displacement deficit exists. Starting with these basic concepts, together with the analytical solution for the exterior crack problem, the consideration of incremental changes in the size of the asperity patch leads to differential equations that can be solved to yield a complete static model of an earthquake. Equations for scalar seismic moment, the radius of the asperity patch, and the radius of the displacement shadow are all specified as functions of the displacement deficit that has accumulated on the asperity patch. The model predicts that the repeat time for earthquakes should be proportional to the scalar moment to the 1/6 power, which is in agreement with empirical results for repeating earthquakes. The model has two free parameters, a critical slip distance dc and a scaled radius of a single asperity. Numerical values of 0.20 and 0.17 cm, respectively, for these two parameters will reproduce the empirical results, but this choice is not unique. Assuming that the asperity patches are distributed on the fault surface in a random fractal manner leads to a frequency size distribution of earthquakes that agrees with the Gutenberg-Richter formula and a simple relationship between the b-value and the fractal dimension. We also show that the basic features of the theoretical model can be simulated with numerical calculations employing the boundary integral method.
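
    The stated repeat-time scaling has a simple worked consequence. Combining it with the standard moment-magnitude relation log₁₀ M₀ = 1.5 M_w + 9.1 (M₀ in N·m; this relation is my added assumption, not from the abstract) gives:

    ```latex
    \[
    \frac{T_{r,2}}{T_{r,1}}
      = \left(\frac{M_{0,2}}{M_{0,1}}\right)^{1/6}
      = 10^{\,1.5\,\Delta M_w/6}
      = 10^{\,0.25\,\Delta M_w},
    \]
    % so one magnitude unit of difference changes the predicted repeat time
    % by only a factor of 10^{0.25} (about 1.8) -- a notably weak dependence.
    ```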

  3. The Global Earthquake Model - Past, Present, Future

    NASA Astrophysics Data System (ADS)

    Smolka, Anselm; Schneider, John; Stein, Ross

    2014-05-01

    The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models:
    • ISC-GEM Instrumental Earthquake Catalogue (released January 2013)
    • Global Earthquake History Catalogue [1000-1903]
    • Global Geodetic Strain Rate Database and Model
    • Global Active Fault Database
    • Tectonic Regionalisation Model
    • Global Exposure Database
    • Buildings and Population Database
    • Earthquake Consequences Database
    • Physical Vulnerabilities Database
    • Socio-Economic Vulnerability and Resilience Indicators
    • Seismic

  4. Role of Bioindicators In Earthquake Modelling

    NASA Astrophysics Data System (ADS)

    Zelinsky, I. P.; Melkonyan, D. V.; Astrova, N. G.

    On the basis of experimental research on the influence of sound waves on indicator bacteria, a model of earthquakes is constructed. It is revealed that the growth of the bacterial population depends on the frequency of the sound wave acting on the bacteria (the lower the frequency, the faster the growth). It is shown that absorption of the energy of a sound wave by a bacterium increases the concentration of isopotential lines of the biodynamic field in the bacterium. This process leads to braking and heating of the bacterium. From the structure of deformation of the lines of a biodynamic field it is possible to predict various geodynamic processes, including earthquakes.

  5. On the earthquake predictability of fault interaction models

    PubMed Central

    Marzocchi, W; Melini, D

    2014-01-01

    Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex-post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex-ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems that are rooted in the physics of fault interaction via Coulomb stress changes, this kind of modeling often does not significantly increase earthquake predictability. Earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643

  6. Aftershocks in a frictional earthquake model.

    PubMed

    Braun, O M; Tosatti, Erio

    2014-09-01

    Inspired by spring-block models, we elaborate a "minimal" physical model of earthquakes which reproduces two main empirical seismological laws, the Gutenberg-Richter law and the Omori aftershock law. Our point is to demonstrate that the simultaneous incorporation of aging of contacts in the sliding interface and of elasticity of the sliding plates constitutes the minimal ingredients to account for both laws within the same frictional model. PMID:25314453
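
    For context, contact aging of the kind the authors invoke is conventionally written with the "aging" rate-and-state friction law shown below (the canonical form from the friction literature; the paper's own law may differ in detail):

    ```latex
    \[
    \mu(V,\theta) = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
    \qquad
    \dot{\theta} = 1 - \frac{V\theta}{D_c},
    \]
    % where the state variable \theta grows linearly with time on a stationary
    % contact (aging) and is renewed over a slip distance D_c once sliding resumes.
    ```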

  7. A simplified spring-block model of earthquakes

    SciTech Connect

    Brown, S.R.; Rundle, J.B.; Scholz, C.H.

    1991-02-01

    The time interval between earthquakes is much larger than the actual time involved during slip in an individual event. The authors have used this fact to construct a cellular automaton model of earthquakes. This model describes the time evolution of a 2-D system of coupled masses and springs sliding on a frictional surface. The model exhibits power law frequency-size relations and can exhibit large earthquakes with the same scatter in the recurrence time observed for actual earthquakes.

  8. Extreme Earthquake Risk Estimation by Hybrid Modeling

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2012-12-01

    The estimation of the hazard and the economic consequences, i.e. the risk, associated with the occurrence of extreme-magnitude earthquakes in the neighborhood of urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge, as it involves the propagation of seismic waves in large volumes of the earth crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and huge economic losses observed for those earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, calls for the development of new paradigms and methodologies in order to generate better estimates, both of the seismic hazard and of its consequences, and if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), in order to implement technological and economic policies to mitigate and reduce, as much as possible, the mentioned consequences. Herewith, we propose a hybrid modeling approach which uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling in order to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green function (EGF) techniques and NN algorithms. In particular the 3DWP is used to generate broadband samples of the 3D wave propagation of extreme (plausible) earthquake scenarios corresponding to synthetic seismic sources and to enlarge those samples by using feed-forward NN. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application for the estimation of the hazard and the economic consequences, for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican

  9. Modeling coupled avulsion and earthquake timescale dynamics

    NASA Astrophysics Data System (ADS)

    Reitz, M. D.; Steckler, M. S.; Paola, C.; Seeber, L.

    2014-12-01

    River avulsions and earthquakes can be hazardous events, and many researchers work to better understand and predict their timescales. Improvements in the understanding of the intrinsic processes of deposition and strain accumulation that lead to these events have resulted in better constraints on the timescales of each process individually. There are however several mechanisms by which these two systems may plausibly become linked. River deposition and avulsion can affect the stress on underlying faults through differential loading by sediment or water. Conversely, earthquakes can affect river avulsion patterns through altering the topography. These interactions may alter the event recurrence timescales, but this dynamic has not yet been explored. We present results of a simple numerical model, in which two systems have intrinsic rates of approach to failure thresholds, but the state of one system contributes to the other's approach to failure through coupling functions. The model is first explored for the simplest case of two linear approaches to failure, and linearly proportional coupling terms. Intriguing coupling dynamics emerge: the system settles into cycles of repeating earthquake and avulsion timescales, which are approached at an exponential decay rate that depends on the coupling terms. The ratio of the number of events of each type and the timescale values also depend on the coupling coefficients and the threshold values. We then adapt the model to a more complex and realistic scenario, in which a river avulses between either side of a fault, with parameters corresponding to the Brahmaputra River / Dauki fault system in Bangladesh. Here the tectonic activity alters the topography by gradually subsiding during the interseismic time, and abruptly increasing during an earthquake. The river strengthens the fault by sediment loading when in one path, and weakens it when in the other. We show this coupling can significantly affect earthquake and avulsion
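
    The abstract's "two thresholds plus coupling" model is simple enough to sketch directly; all rates, thresholds and coupling constants below are illustrative placeholders rather than the Brahmaputra/Dauki values:

    ```python
    import numpy as np

    r_eq, r_av = 1.0, 0.35        # intrinsic loading/deposition rates
    T_eq, T_av = 100.0, 60.0      # failure thresholds
    c_av_on_eq = 8.0              # an avulsion advances fault loading by this much
    c_eq_on_av = -5.0             # an earthquake sets back the avulsion "clock"

    s_eq, s_av, t, dt = 0.0, 0.0, 0.0, 0.01
    events = []
    while t < 2000.0:
        t += dt
        s_eq += r_eq * dt
        s_av += r_av * dt
        if s_eq >= T_eq:                   # earthquake
            events.append((t, "EQ"))
            s_eq = 0.0
            s_av = max(0.0, s_av + c_eq_on_av)
        if s_av >= T_av:                   # avulsion
            events.append((t, "AV"))
            s_av = 0.0
            s_eq += c_av_on_eq

    # Interevent times settle into repeating cycles whose pattern depends on
    # the coupling constants, as described in the abstract.
    eq_times = [t for t, kind in events if kind == "EQ"]
    recurrence = np.diff(eq_times)
    ```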

  10. Strong Earthquake Modelling in Cuban Territory

    NASA Astrophysics Data System (ADS)

    Moreno Toiran, B.; Alvarez Gomez, J.; Vaccari, F.

    2013-05-01

    A seismic hazard map for the Cuban territory was obtained by using waveform modelling methods. The input data set consists of seismogenic zones, focal mechanisms, seismic wave velocity models and an earthquake catalogue. Several maps were generated with the predominant periods corresponding to the maximum displacement (Dmax) and maximum velocity (Vmax), as well as the design ground acceleration (DGA). In order to obtain this result, thousands of synthetic seismograms were computed with knowledge of the physical processes of earthquake generation, the levels of seismicity and wave propagation in anelastic media. The synthetic seismograms were generated at a frequency of 1 Hz on a regular grid of 0.2 x 0.2 degrees with the modal summation technique. Considering the strongest earthquake in the catalogue (Richter magnitude 7.3), the DGA maximum amplitudes are between 0.30g and 0.45g in the Santiago de Cuba region. If the maximum possible earthquake (Richter magnitude 8.0) is considered, the DGA can range between 0.6g and 0.9g in the same zone. For the first case (magnitude 7.3) the maximum velocities are between 60 and 84 cm/sec at periods between 1 and 3 seconds, and the maximum displacements are between 15 and 28 cm at periods between 4 and 5 seconds.

  11. Earthquake!

    ERIC Educational Resources Information Center

    Markle, Sandra

    1987-01-01

    A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)

  12. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a ...

  13. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  14. CP nonconservation in dynamically broken gauge theories

    SciTech Connect

    Lane, K.

    1981-01-01

    The recent proposal of Eichten, Lane, and Preskill for CP nonconservation in electroweak gauge theories with dynamical symmetry breaking is reviewed. Through the alignment of the vacuum with the explicit chiral symmetry breaking Hamiltonian, these theories provide a natural way to understand the dynamical origin of CP nonconservation. Special attention is paid to the problem of strong CP violation. Even though all vacuum angles are zero, this problem is not automatically avoided. In the absence of strong CP violation, the neutron electric dipole moment is expected to be 10⁻²⁴ to 10⁻²⁶ e-cm. A new class of models is proposed in which both strong CP violation and large |ΔS| = 2 effects may be avoided. In these models, |ΔC| = 2 processes such as D⁰-D̄⁰ mixing may be large enough to observe.

  15. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355
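
    RELM-style gridded forecasts are commonly compared with a cell-wise Poisson joint log-likelihood; the sketch below works under that assumption, with synthetic rates and counts sized to the abstract's 7,682 cells, 22 active cells and 31 events:

    ```python
    import numpy as np
    from scipy.special import gammaln

    def poisson_log_likelihood(rates, counts):
        """log L = sum_i [ n_i log(lambda_i) - lambda_i - log(n_i!) ]."""
        rates = np.asarray(rates, dtype=float)
        counts = np.asarray(counts, dtype=float)
        return np.sum(counts * np.log(rates) - rates - gammaln(counts + 1.0))

    rng = np.random.default_rng(0)
    n_cells = 7682
    forecast = rng.uniform(1e-4, 1e-2, size=n_cells)  # expected event counts
    observed = np.zeros(n_cells)
    observed[rng.choice(n_cells, size=22, replace=False)] = 1  # 22 active cells
    extra = rng.choice(np.flatnonzero(observed), size=9)       # repeats within them
    np.add.at(observed, extra, 1)                              # 31 events in total

    print(poisson_log_likelihood(forecast, observed))
    # A higher joint log-likelihood means the forecast concentrates its rate
    # where the events actually occurred.
    ```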

  16. Human casualties in earthquakes: modelling and mitigation

    USGS Publications Warehouse

    Spence, R.J.S.; So, E.K.M.

    2011-01-01

    Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.

  17. Probabilistic earthquake location and 3-D velocity models in routine earthquake location

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Husen, S.

    2003-12-01

    Earthquake monitoring agencies, such as local networks or CTBTO, are faced with the dilemma of providing routine earthquake locations in near real-time with high precision and meaningful uncertainty information. Traditionally, routine earthquake locations are obtained from linearized inversion using layered seismic velocity models. This approach is fast and simple. However, uncertainties derived from a linear approximation to a set of non-linear equations can be imprecise, unreliable, or even misleading. In addition, 1-D velocity models are a poor approximation to real Earth structure in tectonically complex regions. In this paper, we discuss the routine location of earthquakes in near real-time with high precision using non-linear, probabilistic location methods and 3-D velocity models. The combination of non-linear, global search algorithms with probabilistic earthquake location provides a fast and reliable tool for earthquake location that can be used with any kind of velocity model. The probabilistic solution to the earthquake location includes a complete description of location uncertainties, which may be irregular and multimodal. We present applications of this approach to determine seismicity in Switzerland and in Yellowstone National Park, WY. Comparing our earthquake locations to earthquake locations obtained using linearized inversion and 1-D velocity models clearly demonstrates the advantages of probabilistic earthquake location and 3-D velocity models. For example, the more complete and reliable uncertainty information of non-linear, probabilistic earthquake location greatly facilitates the identification of poorly constrained hypocenters. Such events are often not identified in linearized earthquake location, since the location uncertainties are determined with a simplified, localized and approximate Gaussian statistic.
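
    A minimal sketch of non-linear, probabilistic location is a grid search that turns arrival-time misfit into an un-normalised posterior (NonLinLoc-style in spirit; the uniform-velocity 2-D toy below stands in for a real 3-D velocity model):

    ```python
    import numpy as np

    V = 5.0                                   # km/s, uniform velocity (toy model)
    stations = np.array([[0, 0], [40, 5], [10, 35], [35, 30]], dtype=float)
    src_true = np.array([22.0, 18.0])

    rng = np.random.default_rng(0)
    sigma = 0.05                              # s, assumed picking error
    obs = np.linalg.norm(stations - src_true, axis=1) / V
    obs = obs + rng.normal(0.0, sigma, size=len(obs))

    xs = ys = np.linspace(0.0, 50.0, 201)
    X, Y = np.meshgrid(xs, ys)
    post = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            tt = np.linalg.norm(stations - np.array([X[i, j], Y[i, j]]), axis=1) / V
            res = obs - tt
            res = res - res.mean()            # removes the unknown origin time
            post[i, j] = np.exp(-0.5 * np.sum(res ** 2) / sigma ** 2)

    post /= post.sum()
    # `post` is a complete (possibly multimodal) location PDF; its spread is
    # the location uncertainty, with no Gaussian approximation required.
    ```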

  18. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism. PMID:25768785
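
    In an OFC-type automaton, the paper's fixed percentage of stronger sites amounts to a heterogeneous failure-threshold field; the fraction and strength ratio below are placeholders, not the paper's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, asperity_fraction, strength_ratio = 128, 0.01, 3.0

    thresholds = np.ones((L, L))
    mask = rng.random((L, L)) < asperity_fraction
    thresholds[mask] = strength_ratio   # asperity cells fail at 3x the stress

    # In the toppling loop, a site (i, j) now fails when
    #   stress[i, j] >= thresholds[i, j]
    # instead of a uniform threshold of 1.0; with long-range interactions the
    # released stress is shared over many sites, not just nearest neighbours.
    ```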

  19. Foreshock and Aftershocks in Simple Earthquake Models

    NASA Astrophysics Data System (ADS)

    Kazemian, J.; Tiampo, K. F.; Klein, W.; Dominguez, R.

    2015-02-01

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.

  20. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  1. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  2. ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1986-01-01

    A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
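
    The envelope-modulated process the abstract discusses can be sketched in a few lines (corner frequencies, duration and envelope shape are illustrative placeholders):

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    fs, dur = 100.0, 20.0                    # sample rate (Hz), record length (s)
    t = np.arange(0.0, dur, 1.0 / fs)

    rng = np.random.default_rng(0)
    white = rng.normal(size=t.size)

    # Band-pass filter standing in for source/path/site spectral shaping.
    b, a = butter(4, [0.5, 10.0], btype="bandpass", fs=fs)
    filtered = lfilter(b, a, white)

    # Deterministic envelope concentrating energy over the faulting duration.
    t_rise, t_decay = 2.0, 6.0
    envelope = (t / t_rise) ** 2 * np.exp(-t / t_decay)
    envelope /= envelope.max()

    accel = envelope * filtered              # simulated ground acceleration
    # For long-period structures the abstract prefers a filtered shot-noise
    # process, which avoids the response errors of this multiplicative form.
    ```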

  3. Slip complexity in earthquake fault models.

    PubMed Central

    Rice, J R; Ben-Zion, Y

    1996-01-01

    We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669

  4. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  5. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
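
    As a worked illustration of how the quoted rebuilding cost factors enter a loss estimate, the following sketch converts a hypothetical distribution of EMS-98 damage grades into a mean damage ratio and an expected loss; the grade probabilities and replacement value are invented for illustration.

        # Rebuild-cost factors per EMS-98 damage grade, as quoted in the abstract
        cost_factor = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

        # Hypothetical probabilities of each damage grade for one building class
        # at a given intensity (illustrative numbers, not from the model)
        p_grade = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}   # remainder: no damage

        mean_damage_ratio = sum(p_grade[g] * cost_factor[g] for g in cost_factor)
        loss = mean_damage_ratio * 250_000   # expected loss for a 250k replacement value
        print(f"mean damage ratio = {mean_damage_ratio:.3f}, expected loss = {loss:,.0f}")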

  6. Quasiperiodic Events in an Earthquake Model

    SciTech Connect

    Ramos, O.; Maaloey, K.J.; Altshuler, E.

    2006-03-10

    We introduce a modification of the Olami-Feder-Christensen earthquake model [Phys. Rev. Lett. 68, 1244 (1992)] in order to improve the resemblance with the Burridge-Knopoff mechanical model and with possible laboratory experiments. A constant and finite force continually drives the system, resulting in instantaneous relaxations. Dynamical disorder is added to the thresholds following a narrow distribution. We find quasiperiodic behavior in the avalanche time series with a period proportional to the degree of dissipation of the system. Periodicity is not as robust as criticality when the threshold force distribution widens, or when an increasing noise is introduced in the values of the dissipation.

  7. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
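
    The testing logic described here can be sketched in a few lines: score the prediction method on the real catalog, then on many simulated catalogs, and report the fraction of simulations that score as well. The event times and alarm windows below are made up, and only the unclustered (Poisson) null is simulated; the paper's point is that replacing it with a properly clustered simulation weakens the apparent significance.

        import numpy as np

        rng = np.random.default_rng(2)

        def hits(event_times, alarm_windows):
            """Number of events falling inside any (start, end) alarm window."""
            return sum(any(s <= t < e for s, e in alarm_windows) for t in event_times)

        # Hypothetical inputs: observed event times (days) and alarm windows
        observed = np.array([12.0, 13.5, 80.0, 81.2, 200.0])
        alarms = [(10.0, 20.0), (195.0, 205.0)]
        T, n = 365.0, len(observed)

        real_score = hits(observed, alarms)

        # Poisson null: same number of events, uniform in time (no clustering)
        sim_scores = [hits(rng.uniform(0.0, T, n), alarms) for _ in range(10000)]
        p_value = np.mean([s >= real_score for s in sim_scores])
        print(f"observed hits = {real_score}, Poisson-model significance = {p_value:.4f}")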

  8. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  9. CP nonconservation without elementary scalar fields

    SciTech Connect

    Eichten, E.; Lane, K.; Preskill, J.

    1980-07-28

    Dynamically broken gauge theories of electroweak interactions provide a natural mechanism for generating CP nonconservation. Even if all vacuum angles are unobservable, strong CP nonconservation is not automatically avoided. In the absence of strong CP nonconservation, the neutron electric dipole moment is expected to be of the order of 10⁻²⁴ e cm.

  10. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the

  11. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  12. Mechanical model of an earthquake fault

    NASA Technical Reports Server (NTRS)

    Carlson, J. M.; Langer, J. S.

    1989-01-01

    The dynamic behavior of a simple mechanical model of an earthquake fault is studied. This model, introduced originally by Burridge and Knopoff (1967), consists of an elastically coupled chain of masses in contact with a moving rough surface. The present version of the model retains the full Newtonian dynamics with inertial effects and contains no externally imposed stochasticity or spatial inhomogeneity. The only nonlinear feature is a velocity-weakening stick-slip friction force between the masses and the moving surface. This system is being driven persistently toward a slipping instability and therefore exhibits noisy sequences of earthquakelike events. These events are observed in numerical simulations, and many of their features can be predicted analytically.
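
    A heavily simplified sketch of a Burridge-Knopoff-type chain follows; it uses a regularized velocity-weakening friction law, an explicit Euler integrator and a crude re-sticking rule, with all parameter values invented for illustration, so it reproduces the model class qualitatively rather than the specific 1989 study.

        import numpy as np

        rng = np.random.default_rng(0)

        # N unit-mass blocks, coupling springs k_c, drive springs k_p to a plate
        # moving at speed v, and a regularized velocity-weakening friction law.
        N, k_c, k_p, v = 64, 1.0, 0.1, 1e-2
        f0, vc = 1.0, 0.1                   # static friction level, weakening velocity
        dt, steps = 1e-2, 200_000

        x = rng.uniform(0.0, 0.5, N)        # small random initial displacements
        u = np.zeros(N)
        plate = 9.0                         # start the plate near the failure level
        events, slipping = [], False

        for _ in range(steps):
            plate += v * dt
            xl = np.concatenate(([x[0]], x[:-1]))       # free-end boundary conditions
            xr = np.concatenate((x[1:], [x[-1]]))
            force = k_c * (xl + xr - 2 * x) + k_p * (plate - x)
            moving = np.abs(u) > 1e-9
            breaking = ~moving & (np.abs(force) > f0)   # static threshold exceeded
            dirn = np.where(moving, np.sign(u), np.sign(force))
            fric = f0 * dirn / (1.0 + np.abs(u) / vc)   # velocity-weakening friction
            acc = np.where(moving | breaking, force - fric, 0.0)
            u_new = np.where(moving | breaking, u + acc * dt, 0.0)
            u = np.where(moving & (u * u_new < 0.0), 0.0, u_new)  # crude re-sticking
            x += u * dt
            if (np.abs(u) > 1e-9).any():    # accumulate slip into one "event"
                if not slipping:
                    events.append(0.0)
                    slipping = True
                events[-1] += np.abs(u).sum() * dt
            else:
                slipping = False

        if events:
            print(f"{len(events)} events; largest total slip = {max(events):.3g}")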

  13. Conservative perturbation theory for nonconservative systems

    NASA Astrophysics Data System (ADS)

    Shah, Tirth; Chattopadhyay, Rohitashwa; Vaidya, Kedar; Chakraborty, Sagar

    2015-12-01

    In this paper, we show how to use canonical perturbation theory for dissipative dynamical systems capable of showing limit-cycle oscillations. Thus, our work surmounts the hitherto perceived barrier for canonical perturbation theory that it can be applied only to a class of conservative systems, viz., Hamiltonian systems. In the process, we also find a Hamiltonian structure for an important subset of the Liénard system, a paradigmatic system for modeling isolated and asymptotic oscillatory states. We discuss the possibility of extending our method to encompass an even wider range of nonconservative systems.

  14. Strain-softening instability model for the san fernando earthquake

    USGS Publications Warehouse

    Stuart, W.D.

    1979-01-01

    Changes in the ground elevation observed before and immediately after the 1971 San Fernando, California, earthquake are consistent with a theoretical model in which fault zone rocks are strain-softening after peak stress. The model implies that the slip rate of the fault increased to about 0.1 meter per year near the focus before the earthquake.

  15. Classical mechanics of nonconservative systems.

    PubMed

    Galley, Chad R

    2013-04-26

    Hamilton's principle of stationary action lies at the foundation of theoretical physics and is applied in many other disciplines from pure mathematics to economics. Despite its utility, Hamilton's principle has a subtle pitfall that often goes unnoticed in physics: it is formulated as a boundary value problem in time but is used to derive equations of motion that are solved with initial data. This subtlety can have undesirable effects. I present a formulation of Hamilton's principle that is compatible with initial value problems. Remarkably, this leads to a natural formulation for the Lagrangian and Hamiltonian dynamics of generic nonconservative systems, thereby filling a long-standing gap in classical mechanics. Thus, dissipative effects, for example, can be studied with new tools that may have applications in a variety of disciplines. The new formalism is demonstrated by two examples of nonconservative systems: an object moving in a fluid with viscous drag forces and a harmonic oscillator coupled to a dissipative environment. PMID:23679733
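
    Schematically, and in our own notation (signs and the precise variational conditions follow Galley's paper), the construction doubles the degrees of freedom and couples the two copies through a functional K that encodes the nonconservative interactions:

        S[q_1, q_2] = \int_{t_i}^{t_f} dt \, \left[ L(q_1, \dot q_1) - L(q_2, \dot q_2) + K(q_1, q_2, \dot q_1, \dot q_2) \right]

    Varying with respect to q_1 and taking the "physical limit" q_1 = q_2 = q only after the variation yields the usual Euler-Lagrange equations with a generalized nonconservative force on the right-hand side:

        \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = \left[ \frac{\partial K}{\partial q_1} - \frac{d}{dt}\frac{\partial K}{\partial \dot q_1} \right]_{q_1 = q_2 = q}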

  16. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may
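
    The BPT distribution is the inverse Gaussian with mean μ and shape parameter λ = μ/α², so its density and hazard rate are readily computed. A short sketch follows, with an illustrative mean recurrence time; scipy's invgauss is parametrized so that invgauss(μ/λ, scale=λ) has mean μ.

        import numpy as np
        from scipy.stats import invgauss

        # BPT recurrence model: mean recurrence mu, aperiodicity alpha
        mu, alpha = 100.0, 0.5            # years; alpha = 0.5 is the provisional generic value
        lam = mu / alpha**2               # inverse-Gaussian shape parameter
        bpt = invgauss(mu / lam, scale=lam)

        t = np.linspace(1.0, 400.0, 400)
        hazard = bpt.pdf(t) / bpt.sf(t)   # instantaneous failure rate of survivors

        print(f"mean from scipy: {bpt.mean():.1f} yr")
        print(f"hazard at t = mu/2: {hazard[np.searchsorted(t, mu / 2)]:.4f} /yr")
        print(f"hazard at t = 3*mu: {hazard[np.searchsorted(t, 3 * mu)]:.4f} /yr (quasi-stationary)")

        # Conditional probability of rupture in the next 30 yr, given survival to t0
        t0 = 80.0
        p30 = (bpt.cdf(t0 + 30.0) - bpt.cdf(t0)) / bpt.sf(t0)
        print(f"30-yr conditional probability at t = {t0:.0f} yr: {p30:.2f}")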

  17. Physically-based modelling of the competition between surface uplift and erosion caused by earthquakes and earthquake sequences.

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2016-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that above a critical magnitude earthquakes would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, were not previously considered. A new seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters, compared to the fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes are always constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes that are insufficiently steep or earthquake sources that are sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We explored the long-term topographic contribution of earthquake sequences, with a Gutenberg-Richter distribution or with a repeating, characteristic earthquake magnitude. In these models, the seismogenic layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently dipping earthquakes). Our results indicate that earthquakes probably play a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a

  18. Water resources planning for rivers draining into Mobile Bay. Part 2: Non-conservative species transport models

    NASA Technical Reports Server (NTRS)

    April, G. C.; Liu, H. A.

    1975-01-01

    Total coliform group bacteria were selected to expand the mathematical modeling capabilities of the hydrodynamic and salinity models to understand their relationship to commercial fishing ventures within bay waters and to gain a clear insight into the effect that rivers draining into the bay have on water quality conditions. Parametric observations revealed that temperature factors and river flow rate have a pronounced effect on the concentration profiles, while wind conditions showed only slight effects. An examination of coliform group loading concentrations at constant river flow rates and temperature shows these loading changes have an appreciable influence on total coliform distribution within Mobile Bay.

  19. Global Well-Posedness and Decay Rates of Strong Solutions to a Non-Conservative Compressible Two-Fluid Model

    NASA Astrophysics Data System (ADS)

    Evje, Steinar; Wang, Wenjun; Wen, Huanyao

    2016-09-01

    In this paper, we consider a compressible two-fluid model with constant viscosity coefficients and unequal pressure functions P⁺ ≠ P⁻. As mentioned in the seminal work by Bresch, Desjardins, et al. (Arch Rational Mech Anal 196:599-629, 2010) for the compressible two-fluid model, where P⁺ = P⁻ (common pressure) is used and capillarity effects are accounted for in terms of a third-order derivative of density, the case of constant viscosity coefficients cannot be handled in their settings. Besides, their analysis relies on a special choice for the density-dependent viscosity [refer also to another reference (Commun Math Phys 309:737-755, 2012) by Bresch, Huang and Li for a study of the same model in one dimension but without capillarity effects]. In this work, we obtain the global solution and its optimal decay rate (in time) with constant viscosity coefficients and some smallness assumptions. In particular, capillary pressure is taken into account in the sense that ΔP = P⁺ − P⁻ = f ≠ 0, where the difference function f is assumed to be a strictly decreasing function near the equilibrium relative to the fluid corresponding to P⁻. This assumption plays a key role in the analysis and appears to have an essential stabilization effect on the model in question.

  20. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment-a truly prospective earthquake prediction effort-is underway within the U. S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary-the forecasts were meant for an application of 5 years-we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  1. Radiation reaction as a non-conservative force

    NASA Astrophysics Data System (ADS)

    Aashish, Sandeep; Haque, Asrarul

    2016-09-01

    We study a system of a finite size charged particle interacting with a radiation field by exploiting Hamilton’s principle for a non-conservative system recently introduced by Galley [1]. This formulation leads to the equation of motion of the charged particle that turns out to be the same as that obtained by Jackson [2]. We show that the radiation reaction stems from the non-conservative part of the effective action for a charged particle. We notice that a charge interacting with a radiation field modeled as a heat bath affords a way to justify that the radiation reaction is a non-conservative force. The topic is suitable for graduate courses on advanced electrodynamics and classical theory of fields.

  2. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed Central

    Tullis, T E

    1996-01-01

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607669
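
    The laboratory constitutive equations referred to here are rate- and state-dependent friction laws. One common member of that family, the Dieterich "aging" law, is sketched below in a velocity-step experiment; the parameter values are generic laboratory-scale numbers of ours, not necessarily those used in the Parkfield model.

        import numpy as np

        # Dieterich ("aging-law") rate-and-state friction:
        #   mu(V, theta) = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
        #   d(theta)/dt  = 1 - V*theta/Dc
        mu0, a, b = 0.6, 0.010, 0.015   # b > a: velocity weakening, capable of stick-slip
        V0, Dc = 1e-6, 1e-5             # reference velocity (m/s), slip-weakening distance (m)

        def friction(V, theta):
            return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

        # Velocity-step experiment: 1 -> 10 um/s at t = 10 s, integrate the state variable
        dt, n = 1e-3, 40000
        V = np.where(np.arange(n) * dt < 10.0, 1e-6, 1e-5)
        theta = np.empty(n)
        theta[0] = Dc / V[0]            # steady state at the initial velocity
        for i in range(1, n):
            theta[i] = theta[i - 1] + dt * (1.0 - V[i - 1] * theta[i - 1] / Dc)

        mu = friction(V, theta)
        print(f"steady-state change in friction: {mu[-1] - mu[0]:+.4f} "
              f"(theory: (a-b)*ln(10) = {(a - b) * np.log(10):+.4f})")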

  3. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed

    Tullis, T E

    1996-04-30

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668

  4. Parity nonconservation in hydrogen.

    SciTech Connect

    Dunford, R. W.; Holt, R. J.

    2011-01-01

    We discuss the prospects for parity violation experiments in atomic hydrogen and deuterium to contribute to testing the Standard Model (SM). We find that, if parity experiments in hydrogen can be done, they remain highly desirable because there is negligible atomic-physics uncertainty and low energy tests of weak neutral current interactions are needed to probe for new physics beyond the SM. Analysis of a generic APV experiment in deuterium indicates that a 0.3% measurement of C_1D requires development of a slow (77 K) metastable beam of ≈ 5 × 10¹⁴ D(2S) s⁻¹ per hyperfine component. The advent of UV radiation from free electron laser (FEL) technology could allow production of such a beam.

  5. Modeling earthquake ground motion with an earthquake simulation program (EMPSYN) that utilizes empirical Green's functions

    SciTech Connect

    Hutchings, L.

    1992-01-01

    This report outlines a method of using empirical Green's functions in an earthquake simulation program EMPSYN that provides realistic seismograms from potential earthquakes. The theory for using empirical Green's functions is developed, implementation of the theory in EMPSYN is outlined, and an example is presented where EMPSYN is used to synthesize observed records from the 1971 San Fernando earthquake. To provide useful synthetic ground motion data from potential earthquakes, synthetic seismograms should model frequencies from 0.5 to 15.0 Hz, the full wave-train energy distribution, and absolute amplitudes. However, high-frequency arrivals are stochastically dependent upon the inhomogeneous geologic structure and irregular fault rupture. The fault rupture can be modeled, but the stochastic nature of faulting is largely an unknown factor in the earthquake process. The effect of inhomogeneous geology can readily be incorporated into synthetic seismograms by using small earthquakes to obtain empirical Green's functions. Small earthquakes with source corner frequencies higher than the site recording limit f_max, or much higher than the frequency of interest, effectively have impulsive point-fault dislocation sources, and their recordings are used as empirical Green's functions. Since empirical Green's functions are actual recordings at a site, they include the effects on seismic waves from all geologic inhomogeneities and include all recordable frequencies, absolute amplitudes, and all phases. They scale only in amplitude with differences in seismic moment. They can provide nearly the exact integrand to the representation relation. Furthermore, since their source events have spatial extent, they can be summed to simulate fault rupture without loss of information, thereby potentially computing the exact representation relation for an extended source earthquake.
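
    The summation idea can be sketched compactly: lag a small event's recording by each subfault's rupture time and scale so that the moments add up to the target moment. The stand-in "recording", subfault geometry and moments below are invented for illustration and bear no relation to the actual EMPSYN implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-in "small-event recording": decaying noise burst (illustrative only)
        dt = 0.01
        egf = rng.standard_normal(500) * np.exp(-np.arange(500) * dt / 1.0)

        n_sub, m_small = 100, 1e15            # subfaults and small-event moment (N*m)
        m_target = 1e17                       # target event moment
        scale = m_target / (n_sub * m_small)  # amplitude scaling so moments add up

        rupture_v, dx = 2500.0, 250.0         # rupture speed (m/s), subfault spacing (m)
        delays = np.arange(n_sub) * dx / rupture_v   # rupture propagation delays (s)

        n_out = int(delays.max() / dt) + egf.size
        synth = np.zeros(n_out)
        for d in delays:
            k = int(round(d / dt))
            synth[k:k + egf.size] += scale * egf     # lagged, scaled Green's function

        print(f"synthetic length: {n_out * dt:.1f} s, peak amplitude ratio vs EGF: "
              f"{np.abs(synth).max() / np.abs(egf).max():.1f}")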

  6. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and what lessons were learned in handling this emergency are discussed. The problem of loose asbestos is addressed. (GR)

  7. FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.

    USGS Publications Warehouse

    Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.

    1985-01-01

    The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.

  8. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
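
    The competing models are simple enough to compare in a few lines. In the sketch below, the time-predictable model forecasts the next interval from the previous slip divided by the loading rate, the slip-predictable model forecasts the next slip from the preceding interval, and the fixed models simply forecast the sequence means; because the synthetic slips and intervals are drawn independently, the fixed models win, mirroring the paper's conclusion. All numbers are invented.

        import numpy as np

        rng = np.random.default_rng(4)

        v = 0.02                                  # loading rate, m/yr (illustrative)
        slips = rng.normal(0.5, 0.05, 20)         # event slips (m), nearly characteristic
        intervals = rng.normal(25.0, 2.5, 20)     # inter-event times (yr), nearly regular

        pred_time_pred = slips[:-1] / v           # time-predictable interval forecast
        pred_fixed_T = np.full(19, intervals[1:].mean())
        pred_slip_pred = v * intervals[:-1]       # slip-predictable slip forecast
        pred_fixed_u = np.full(19, slips[1:].mean())

        def rmse(pred, obs):
            return np.sqrt(np.mean((pred - obs) ** 2))

        print("interval RMSE, time-predictable:", rmse(pred_time_pred, intervals[1:]))
        print("interval RMSE, fixed recurrence:", rmse(pred_fixed_T, intervals[1:]))
        print("slip RMSE, slip-predictable:    ", rmse(pred_slip_pred, slips[1:]))
        print("slip RMSE, fixed slip:          ", rmse(pred_fixed_u, slips[1:]))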

  9. An earthquake model with interacting asperities

    NASA Astrophysics Data System (ADS)

    Johnson, Lane R.

    2010-09-01

    A model is presented that treats an earthquake as the failure of asperities in a manner consistent with modern concepts of sliding friction. The mathematical description of the model includes results for elliptical and circular asperities, oblique tectonic slip, static and dynamic solutions for slip on the fault, stress intensity factors, strain energy and second-order moment tensor. The equations that control interaction of asperities are derived and solved both in a quasi-static tectonic mode when none of the asperities are in the process of failing and a dynamic failure mode when asperities are failing and sending out slip pulses that can trigger failure of additional asperities. The model produces moment rate functions for each asperity failure so that, given an appropriate Green function, the radiation of elastic waves is a straightforward calculation. The model explains an observed scaling relationship between repeat time and seismic moment for repeating seismic events and is consistent with the properties of pseudo-tachylites treated as fossil asperities. Properties of the model are explored with simulations of seismic activity that results when a section of the fault containing a spatial distribution of asperities is subjected to tectonic slip. The simulations show that the failure of a group of strongly interacting asperities satisfies the same scaling relationship as the failure of individual asperities, and that realistic distributions of asperities on a fault plane lead to seismic activity consistent with probability estimates for the interaction of asperities and predicted values of the Gutenberg-Richter a and b values. General features of the model are the exterior crack solution as a theoretical foundation, a heterogeneous state of stress and strength on the fault, dynamic effects controlled by propagating slip pulses and radiated elastic waves with a broad frequency band.

  10. Strain waves, earthquakes, slow earthquakes, and afterslip in the framework of the Frenkel-Kontorova model.

    PubMed

    Gershenzon, N I; Bykov, V G; Bambakidis, G

    2009-05-01

    The one-dimensional Frenkel-Kontorova (FK) model, well known from the theory of dislocations in crystal materials, is applied to the simulation of the process of nonelastic stress propagation along transform faults. Dynamic parameters of plate boundary earthquakes as well as slow earthquakes and afterslip are quantitatively described, including propagation velocity along the strike, plate boundary velocity during and after the strike, stress drop, displacement, extent of the rupture zone, and spatiotemporal distribution of stress and strain. The three fundamental speeds of plate movement, earthquake migration, and seismic waves are shown to be connected in the framework of the continuum FK model. The magnitude of the strain wave velocity is a strong (almost exponential) function of accumulated stress or strain. It changes from a few km/s during earthquakes to a few dozen km per day, month, or year during afterslip and interearthquake periods. Results of the earthquake parameter calculation based on real data are in reasonable agreement with measured values. The distributions of aftershocks in this model are consistent with the Omori law for the temporal distribution and a 1/r law for the spatial distribution. PMID:19518576
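
    A minimal damped, driven FK chain can be written down directly; the sketch below uses unit masses, a sinusoidal substrate force and a uniform driving stress, with all parameters chosen for illustration rather than taken from the paper.

        import numpy as np

        # Damped, driven Frenkel-Kontorova chain: unit masses coupled by springs k
        # in a sinusoidal substrate potential of period a, driven by stress sigma.
        N, k, F, sigma, gamma = 256, 1.0, 0.5, 0.05, 0.5
        a = 2 * np.pi                        # substrate period
        dt, steps = 0.05, 20000

        x = a * np.arange(N, dtype=float)    # start in registry with the substrate
        x[N // 2] += 0.5 * a                 # seed a localized strain perturbation
        u = np.zeros(N)

        for _ in range(steps):
            spring = k * (np.roll(x, -1) + np.roll(x, 1) - 2 * x)
            spring[0] = k * (x[1] - x[0] - a)        # free ends (account for spacing)
            spring[-1] = k * (x[-2] - x[-1] + a)
            force = spring - F * np.sin(2 * np.pi * x / a) + sigma - gamma * u
            u += force * dt
            x += u * dt

        strain = np.diff(x) / a - 1.0        # local strain relative to registry
        print(f"max |strain| = {np.abs(strain).max():.3f}")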

  11. Earthquake research: Premonitory models and the physics of crustal distortion

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. H.

    1981-01-01

    Seismic, gravity, and electrical resistivity data, believed to be most relevent to development of earthquake premonitory models of the crust, are presented. Magnetotellurics (MT) are discussed. Radon investigations are reviewed.

  12. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) over the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong plateau earthquake (Mw = 8.7), the 1934 Bihar-Nepal earthquake (Mw = 8.3) and the 1950 Upper Assam earthquake (Mw = 8.7), signifies the possibility of great earthquakes in this region in the future. The regional seismicity map for the study region is prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian Plate and the Eurasian Plate. It shows dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis, and was affected by great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is cut by major faults like the Dauki fault and exhibits its own style of prominent tectonic features. The seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, the forecasting of major earthquakes for each source zone is estimated. As per the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supply energy and the cumulative energy released over a span of time is a good indicator of energy blocked that can be utilized for the forecasting of major earthquakes
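
    The bookkeeping behind the energy blocked approach can be illustrated with the standard Gutenberg-Richter energy-magnitude relation, log10 E = 1.5 M + 4.8 (E in joules): cumulative release is compared against a uniform supply rate, and the running deficit is the "blocked" energy. The catalog below is invented, and the zero point of the supply line is arbitrary in this sketch.

        import numpy as np

        def energy_joules(mw):
            """Gutenberg-Richter energy-magnitude relation: log10 E = 1.5*M + 4.8."""
            return 10.0 ** (1.5 * np.asarray(mw) + 4.8)

        # Hypothetical catalog for one source zone (years and magnitudes are made up)
        years = np.array([1897.0, 1918.0, 1934.0, 1950.0, 1988.0, 2007.0])
        mags = np.array([8.7, 7.6, 8.3, 8.7, 7.2, 6.4])

        released = np.cumsum(energy_joules(mags))      # cumulative energy released
        rate = released[-1] / (years[-1] - years[0])   # uniform supply rate fitted to catalog
        supply = rate * (years - years[0])             # energy supplied since first event
        blocked = supply - released                    # stored ("blocked") energy;
                                                       # the zero point is arbitrary here
        for y, m, bk in zip(years, mags, blocked):
            print(f"{y:.0f}: M{m:.1f}  blocked energy = {bk:+.2e} J")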

  13. The Nonconservation of Potential Vorticity by a Dynamical Core

    NASA Astrophysics Data System (ADS)

    Saffin, Leo; Methven, John; Gray, Sue

    2016-04-01

    Numerical models of the atmosphere combine a dynamical core, which approximates solutions to the adiabatic, frictionless governing equations for fluid dynamics, with tendencies arising from the parametrization of other physical processes. Since potential vorticity (PV) is conserved following fluid flow in adiabatic, frictionless circumstances, it is possible to isolate the effects of non-conservative processes by accumulating PV changes in an air-mass relative framework. This "PV tracer technique" is used to accumulate separately the effects on PV of each of the different non-conservative processes represented in a numerical model of the atmosphere. Dynamical cores are not exactly conservative because they introduce, explicitly or implicitly, some level of dissipation and adjustment of prognostic model variables which acts to modify PV. Here, the PV tracer technique is extended to diagnose the cumulative effect of the non-conservation of PV by a dynamical core and its characteristics relative to the PV modification by parametrized physical processes. Quantification using the Met Office Unified Model reveals that the magnitude of the non-conservation of PV by the dynamical core is comparable to those from physical processes. Moreover, the residual of the PV budget, when tracing the effects of the dynamical core and physical processes, is at least an order of magnitude smaller than the PV tracers associated with the most active physical processes. The implication of this work is that the non-conservation of PV by a dynamical core can be assessed in case studies with a full suite of physics parametrizations and directly compared with the PV modification by parametrized physical processes.
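
    For reference, Ertel potential vorticity and its standard nonconservation equation (in our notation) are

        q = \frac{1}{\rho}\, \boldsymbol{\zeta}_a \cdot \nabla\theta, \qquad \frac{Dq}{Dt} = \frac{1}{\rho}\left( \boldsymbol{\zeta}_a \cdot \nabla\dot\theta + (\nabla\times\mathbf{F})\cdot\nabla\theta \right)

    where ζ_a is the absolute vorticity, θ the potential temperature, θ̇ the diabatic heating rate and F the frictional force per unit mass; the tracer technique accumulates the right-hand-side contributions (and, in the extension described here, the dynamical-core increments) along air-mass trajectories.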

  14. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.

  15. Scaling and Nucleation in Models of Earthquake Faults

    SciTech Connect

    Klein, W.; Ferguson, C.; Rundle, J.

    1997-05-01

    We present an analysis of a slider block model of an earthquake fault which indicates the presence of metastable states ending in spinodals. We identify four parameters whose values determine the size and statistical distribution of the "earthquake" events. For values of these parameters consistent with real faults we obtain scaling of events associated not with critical point fluctuations but with the presence of nucleation events. © 1997 The American Physical Society

  16. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Razafindrakoto, Hoby N. T.; Mai, P. Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran K. S.

    2015-07-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
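
    The MDS step itself is standard once a pairwise distance has been chosen. The sketch below builds a small set of stand-in "slip models", computes one simple normalized squared distance (our stand-in for the paper's metrics, not their exact definition) and embeds the models in two dimensions with scikit-learn.

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(5)

        # Stand-in "slip models": four random 2-D fields plus a smoothed variant,
        # to mimic inversions with different spatial character (illustrative data)
        models = [rng.random((20, 40)) for _ in range(4)]
        smooth = models[0].copy()
        for _ in range(10):   # crude smoothing by local averaging
            smooth = 0.25 * (np.roll(smooth, 1, 0) + np.roll(smooth, -1, 0)
                             + np.roll(smooth, 1, 1) + np.roll(smooth, -1, 1))
        models.append(smooth)

        def norm_sq_dist(A, B):
            """A simple normalized squared metric (stand-in for the paper's metrics)."""
            return np.sum((A - B) ** 2) / np.sqrt(np.sum(A**2) * np.sum(B**2))

        n = len(models)
        D = np.array([[norm_sq_dist(models[i], models[j]) for j in range(n)]
                      for i in range(n)])

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(D)
        for i, (cx, cy) in enumerate(coords):
            print(f"model {i}: MDS coordinates ({cx:+.3f}, {cy:+.3f})")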

  17. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median

  18. Analysing earthquake slip models with the spatial prediction comparison test

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Mai, P. Martin; Thingbaijam, Kiran K. S.; Razafindrakoto, Hoby N. T.; Genton, Marc G.

    2015-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (`model') and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlations lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  19. Retrospective tests of hybrid operational earthquake forecasting models for Canterbury

    NASA Astrophysics Data System (ADS)

    Rhoades, D. A.; Liukis, M.; Christophersen, A.; Gerstenberger, M. C.

    2016-01-01

    The Canterbury, New Zealand, earthquake sequence, which began in September 2010, occurred in a region of low crustal deformation and previously low seismicity. Because the ensuing seismicity in the region is likely to remain above previous levels for many years, a hybrid operational earthquake forecasting model for Canterbury was developed to inform decisions on building standards and urban planning for the rebuilding of Christchurch. The model estimates occurrence probabilities for magnitudes M ≥ 5.0 in the Canterbury region for each of the next 50 yr. It combines two short-term, two medium-term and four long-term forecasting models. The weight accorded to each individual model in the operational hybrid was determined by an expert elicitation process. A retrospective test of the operational hybrid model and of an earlier informally developed hybrid model in the whole New Zealand region has been carried out. The individual and hybrid models were installed in the New Zealand Earthquake Forecast Testing Centre and used to make retrospective annual forecasts of earthquakes with magnitude M > 4.95 from 1986 on, for time-lags up to 25 yr. All models underpredict the number of earthquakes due to an abnormally large number of earthquakes in the testing period since 2008 compared to those in the learning period. However, the operational hybrid model is more informative than any of the individual time-varying models for nearly all time-lags. Its information gain relative to a reference model of least information decreases as the time-lag increases to become zero at a time-lag of about 20 yr. An optimal hybrid model with the same mathematical form as the operational hybrid model was computed for each time-lag from the 26-yr test period. The time-varying component of the optimal hybrid is dominated by the medium-term models for time-lags up to 12 yr and has hardly any impact on the optimal hybrid model for greater time-lags. The optimal hybrid model is considerably more

  20. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U. S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as total killed divided by total population exposed at specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rates for that level and then summing them at all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or a region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. [DOI: 10.1193/1.3480331]
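
    The structure of the empirical model is compact: a two-parameter lognormal CDF gives the fatality rate at each intensity, and exposure-weighted rates are summed. In the sketch below, theta and beta are illustrative placeholders, not calibrated values for any real country, and the exposure table is invented.

        import numpy as np
        from scipy.stats import norm

        # Fatality rate modeled as a lognormal CDF of shaking intensity
        theta, beta = 12.5, 0.2   # placeholder parameters, not calibrated values

        def fatality_rate(intensity):
            return norm.cdf(np.log(np.asarray(intensity) / theta) / beta)

        # Hypothetical exposure: population at each Modified Mercalli intensity level
        mmi = np.array([5, 6, 7, 8, 9])
        exposed = np.array([2_000_000, 800_000, 150_000, 30_000, 5_000])

        deaths = np.sum(exposed * fatality_rate(mmi))
        print(f"estimated fatalities: {deaths:,.0f}")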

  1. Modeling the behavior of an earthquake base-isolated building.

    SciTech Connect

    Coveney, V. A.; Jamil, S.; Johnson, D. E.; Kulak, R. F.; Uras, R. A.

    1997-11-26

    Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.

  2. Parity nonconservation in atomic Zeeman transitions

    SciTech Connect

    Angstmann, E. J.; Dinh, T. H.; Flambaum, V. V.

    2005-11-15

    We discuss the possibility of measuring nuclear anapole moments in atomic Zeeman transitions and perform the necessary calculations. Advantages of using Zeeman transitions include variable transition frequencies and the possibility of enhancement of parity nonconservation effects.

  3. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  4. Assessing a 3D smoothed seismicity model of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan

    2016-04-01

    As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
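
    A minimal space-time kernel version of such a smoothed seismicity forecast can be sketched as follows: Gaussian smoothing in 3-D space with exponential down-weighting of older events. The kernel width and weighting timescale are assumed values, not the calibrated seismogenic-index variants tested in the study.

        import numpy as np

        def smoothed_rate(eval_xyz, eq_xyz, eq_t, t_now, sigma=100.0, tau=1.0):
            # Gaussian kernels in space (sigma in meters), exponential
            # down-weighting in time (tau in days): recent events count more
            d2 = np.sum((eval_xyz[None, :] - eq_xyz) ** 2, axis=1)
            w_t = np.exp(-(t_now - eq_t) / tau)
            k_s = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** 1.5
            return np.sum(w_t * k_s)

        # hypothetical induced events (x, y, z in meters; times in days)
        quakes = np.array([[0.0, 0.0, 4500.0], [30.0, -20.0, 4600.0]])
        times = np.array([2.0, 9.5])
        print(smoothed_rate(np.array([0.0, 0.0, 4550.0]), quakes, times, t_now=10.0))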

  5. Transient Response of Seismicity and Earthquake Probabilities to Stress Transfer in a Brownian Earthquake Model

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.; Matthews, M. V.; Simpson, R. W.

    2001-12-01

    A statistical mechanical description of elastic rebound is used to study earthquake interaction and stress transfer effects in a point process model of earthquakes. The model is a Brownian Relaxation Oscillator (BRO) in which a random walk (standard Brownian motion) is added to a steady tectonic loading to produce a stochastic load state process. Rupture occurs in this model when the load state reaches a critical value. The load state is a random variable and may be described at any point in time by its probability density. Load state evolves toward the failure threshold due to tectonic loading (drift), and diffuses due to Brownian motion (noise) according to a diffusion equation. The Brownian perturbation process formally represents the sum total of all factors, aside from tectonic loading, that govern rupture. Physically, these factors may include effects of earthquakes external to the source, aseismic loading, interaction effects within the source itself, healing, pore pressure evolution, etc. After a sufficiently long time, load state always evolves to a steady state probability density that is independent of the initial condition and completely described by the drift rate and noise scale. Earthquake interaction and stress transfer effects are modeled by an instantaneous change in the load state. A negative step reduces the probability of failure, while a positive step may either immediately trigger rupture or increase the failure probability (hazard). When the load state is far from failure, the effects are well-approximated by "clock advances" that shift the unperturbed hazard down or up, as appropriate for the sign of the step. However, when the load state is advanced in the earthquake cycle, the response is a sharp, temporally localized decrease or increase in hazard. Recovery of the hazard is characteristically "Omori like" (~1/t), which can be understood in terms of equilibrium thermodynamical considerations since state evolution is diffusion with drift.
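
    The BRO is straightforward to simulate directly. The sketch below draws first-passage (recurrence) times for a load state evolving by drift plus Brownian noise toward an absorbing failure threshold; the drift rate, noise scale, and threshold are arbitrary illustrative values.

        import numpy as np

        rng = np.random.default_rng(0)
        drift, noise, x_fail = 1.0, 0.5, 1.0     # loading rate, noise scale, threshold
        dt, n_paths = 1e-3, 10000
        t = np.zeros(n_paths)
        x = np.zeros(n_paths)
        alive = np.ones(n_paths, dtype=bool)
        while alive.any():
            step = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(alive.sum())
            x[alive] += step
            t[alive] += dt
            alive &= x < x_fail                   # absorb paths that reached failure
        # t holds recurrence times; their mean approaches x_fail / drift
        print(t.mean(), t.std())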

  6. A Godunov scheme for solving hyperbolic systems in a nonconservative form

    NASA Astrophysics Data System (ADS)

    Zalzali, I.; Abbas, H.

    2005-05-01

    In this paper, we develop a Godunov scheme for solving nonconservative systems. The main idea of this method is a new type of projection, which illustrates the essential role of numerical viscosity in determining the solution with shocks for systems in nonconservative form. We apply our study to a system modeling elasticity and observe complete agreement between the theory and the numerical results.
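
    For orientation, the sketch below shows a first-order Godunov (upwind) update for the simplest conservative scalar problem, linear advection; the paper's new projection step for nonconservative systems is not reproduced here, so this is only the conservative baseline on which such schemes build.

        import numpy as np

        # u_t + a u_x = 0 with a > 0: the Godunov flux reduces to upwinding
        a, nx, cfl = 1.0, 200, 0.9
        dx = 1.0 / nx
        dt = cfl * dx / a
        x = (np.arange(nx) + 0.5) * dx
        u = np.where(x < 0.5, 1.0, 0.0)           # Riemann-type initial data
        for _ in range(100):
            flux = a * u                          # interface fluxes (upwind)
            u[1:] -= dt / dx * (flux[1:] - flux[:-1])   # inflow cell u[0] held fixed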

  7. Earthquake nucleation mechanisms and periodic loading: Models, Experiments, and Observations

    NASA Astrophysics Data System (ADS)

    Dahmen, K.; Brinkman, B.; Tsekenis, G.; Ben-Zion, Y.; Uhl, J.

    2010-12-01

    The project has two main goals: (a) improve the understanding of how earthquakes are nucleated, with specific focus on seismic response to periodic stresses (such as tidal or seasonal variations); (b) use the results of (a) to infer the possible existence of precursory activity before large earthquakes. A number of mechanisms have been proposed for the nucleation of earthquakes, including frictional nucleation (Dieterich 1987) and fracture (Lockner 1999, Beeler 2003). We study the relation between the observed rates of triggered seismicity, the period and amplitude of cyclic loadings, and whether the observed seismic activity in response to periodic stresses can be used to identify the correct nucleation mechanism (or combination of mechanisms). A generalized version of the Ben-Zion and Rice model for disordered fault zones and results from related recent studies on dislocation dynamics and magnetization avalanches in slowly magnetized materials are used in the analysis (Ben-Zion et al. 2010; Dahmen et al. 2009). The analysis makes predictions for the statistics of macroscopic failure events of sheared materials in the presence of added cyclic loading, as a function of the period, amplitude, and noise in the system. The employed tools include analytical methods from statistical physics, the theory of phase transitions, and numerical simulations. The results will be compared to laboratory experiments and observations. References: Beeler, N.M., D.A. Lockner (2003). Why earthquakes correlate weakly with the solid Earth tides: effects of periodic stress on the rate and probability of earthquake occurrence. J. Geophys. Res.-Solid Earth 108, 2391-2407. Ben-Zion, Y. (2008). Collective Behavior of Earthquakes and Faults: Continuum-Discrete Transitions, Evolutionary Changes and Corresponding Dynamic Regimes, Rev. Geophysics, 46, RG4006, doi:10.1029/2008RG000260. Ben-Zion, Y., Dahmen, K. A. and J. T. Uhl (2010). A unifying phase diagram for the dynamics of sheared solids

  8. Numerical modeling of shallow fault creep triggered by nearby earthquakes

    NASA Astrophysics Data System (ADS)

    Wei, M.; Liu, Y.; McGuire, J. J.

    2011-12-01

    The 2010 El Mayor-Cucapah Mw 7.2 earthquake is the largest earthquake to strike southern California in the last 18 years. It triggered shallow fault creep on many faults in the Salton Trough, Southern California, making it at least the 8th time in the last 42 years that a local or regional earthquake has done so. However, the triggering mechanism of fault creep and its implications for seismic hazard and fault mechanics are still poorly understood. For example, what determines the relative importance of static versus dynamic triggering of fault creep? What can we learn about the local frictional properties and normal stress from the triggering of fault creep? To understand the triggering mechanism and constrain fault frictional properties, we simulate the triggered fault creep on the Superstition Hills Fault (SHF), Salton Trough, Southern California. We use realistic static and dynamic shaking due to nearby earthquakes as stress perturbations to a 2D (in a 3D medium) planar fault model with rate-and-state frictional property variations both in depth and along strike. Unlike many previous studies, we focus on the simulation of triggered shallow fault creep instead of earthquakes. Our fault model can reproduce the triggering process under static, dynamic, and combined stress perturbations. Preliminary results show that the magnitude of the perturbation relative to the original stress level is an important parameter. In the static case, a perturbation of 1% of normal stress triggers delayed fault creep, whereas 10% of normal stress generates instantaneous creep. In the dynamic case, a factor-of-two change in the magnitude of the perturbation can change the triggered creep by several orders of magnitude. We explore combined triggering with different ratios of static and dynamic perturbation. The timing of the triggering within an earthquake cycle is also important. With measurements of triggered creep on the SHF, we constrain the local stress level and frictional parameters.
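
    The delayed-versus-instantaneous creep response to a static stress step can be illustrated with a single quasi-static rate-and-state spring-slider, a drastically reduced sketch of the 2D fault model described above; all parameter values below are illustrative assumptions, not the SHF constraints.

        import numpy as np

        # velocity-strengthening (a > b) patch, aging law, static stress step
        a, b, mu0 = 0.015, 0.010, 0.6
        sigma, Dc = 50e6, 1e-4            # normal stress (Pa), state distance (m)
        k, v_pl = 1e7, 1e-9               # stiffness (Pa/m), loading velocity (m/s)
        dtau = 0.01 * sigma               # stress step: 1% of normal stress

        theta, delta, t, T = Dc / v_pl, 0.0, 0.0, 3.0e7
        v_peak = 0.0
        while t < T:
            tau = k * (v_pl * t - delta) + mu0 * sigma + (dtau if t > 0.1 * T else 0.0)
            # invert the friction law tau = sigma*(mu0 + a ln(v/v_pl) + b ln(v_pl theta/Dc))
            v = v_pl * np.exp((tau / sigma - mu0 - b * np.log(v_pl * theta / Dc)) / a)
            dt = min(0.05 * Dc / v, 1e-3 * T)     # simple adaptive Euler step
            delta += v * dt
            theta += (1.0 - v * theta / Dc) * dt
            t += dt
            v_peak = max(v_peak, v)
        print(v_peak / v_pl)              # transient creep acceleration factor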

  9. Dynamic models of an earthquake and tsunami offshore Ventura, California

    USGS Publications Warehouse

    Kenny J. Ryan; Geist, Eric L.; Barall, Michael; David D. Oglesby

    2015-01-01

    The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses the final seafloor displacement from the rupture model as the initial condition to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by the State of California's current reference inundation line.

  10. Slimplectic Integrators: Variational Integrators for Nonconservative systems

    NASA Astrophysics Data System (ADS)

    Tsang, David

    2016-05-01

    Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. Here we present the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to a newly developed principle of stationary nonconservative action (Galley, 2013, Galley et al 2014). As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.

  11. Stochastic-Dynamic Earthquake Models and Tsunami Generation

    NASA Astrophysics Data System (ADS)

    Oglesby, D. D.; Geist, E. L.

    2013-12-01

    Dynamic models are now understood to provide physically plausible faulting scenarios for ground motion prediction, but their use in tsunami hazard analysis is in its infancy. Typical tsunami model generation methods rely on kinematic or dislocation models of the earthquake source, in which the seismic moment, rupture path, and slip distribution are assumed a priori, typically based on models of prior earthquakes, aftershock distributions, and/or some sort of stochastic slip model. However, such models are not guaranteed to be consistent with any physically plausible faulting scenario and may span a range of parameter space far outside of what is physically realistic. In contrast, in dynamic models the earthquake rupture and slip process (including the final size of the earthquake, the spatiotemporal evolution of slip, and the rupture path on complex fault geometry) are calculated results of the models. Utilizing the finite element method, a self-affine stochastic stress field, and a shallow-water hydrodynamic code, we calculate a suite of dynamic slip models and near-source tsunamis from a megathrust/splay fault system motivated by the geometry in the Nankai region of Japan. Different stress realizations produce different spatial patterns of slip, including different partitioning between the megathrust and splay segments. Because the final moment from different stress realizations can differ, and because partitioning of slip between fault segments has a first-order effect on the surface deformation and tsunami generation, the modeled near-source tsunamis are also highly variable. Models whose stress amplitudes have been scaled to produce equivalent seismic moments (but with the same spatial variability and relative fault strength as the previous unscaled models) have less variability in tsunami amplitude in regions far from the fault, but greater variability in amplitude in the near-fault region.
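
    The "self-affine stochastic stress field" ingredient can be sketched with standard spectral synthesis: color white noise so that its amplitude spectrum falls off as a power of wavenumber. The exponent and grid size below are arbitrary choices, not the study's values.

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 256, 1.5
        kx = np.fft.fftfreq(n)[:, None]
        ky = np.fft.fftfreq(n)[None, :]
        k = np.sqrt(kx**2 + ky**2)
        k[0, 0] = np.inf                            # suppress the k = 0 (mean) mode
        spec = k**(-beta) * np.exp(2j * np.pi * rng.random((n, n)))
        stress = np.real(np.fft.ifft2(spec))        # real part: fine for a sketch
        stress /= stress.std()                      # rescale to a target stress drop later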

  12. Hybrid Modelling of the Economical Consequences of Extreme Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2013-05-01

    A hybrid modelling methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economical consequences (PEDEC). The hybrid modeling uses 3D seismic wave propagation (3DWP) combined with empirical Green function (EGF) and Neural Network (NN) techniques in order to estimate the seismic hazard (PEIs) of extreme (plausible) earthquake scenarios corresponding to synthetic seismic sources. The 3DWP modeling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK. The PEDEC are computed by using appropriate vulnerability functions combined with the scenario intensity samples and Monte Carlo simulation. The methodology is validated for Mw 8 subduction events, and we show examples of its application to the estimation of the hazard and the economical consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources on the Mexican Pacific Coast. The results obtained with the proposed methodology, such as the PEDECs for the joint event "damage cost (C) - maximum ground intensity" and the conditional return period of C given that the maximum intensity exceeds a certain value, could be used by decision makers to allocate funds or implement policies to mitigate the impact associated with the plausible occurrence of future extreme magnitude earthquakes.
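
    The Monte Carlo step that converts scenario intensities into a cost exceedance probability can be sketched as follows; the lognormal intensity sampler stands in for the 3DWP/EGF/NN scenario intensities, and the vulnerability curve and exposed value are invented placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        def vulnerability(pga):
            # mean damage ratio versus peak ground acceleration (illustrative)
            return 1.0 / (1.0 + np.exp(-(pga - 0.6) / 0.15))

        exposed_value = 5e9                        # total exposed value, arbitrary units
        pga = rng.lognormal(np.log(0.4), 0.5, size=100_000)   # scenario intensities (g)
        cost = exposed_value * vulnerability(pga)  # damage cost C per sample
        c0 = 1e9
        print(np.mean(cost > c0))                  # Monte Carlo estimate of P(C > c0)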

  13. Numerical Modeling and Forecasting of Strong Sumatra Earthquakes

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Yin, C.

    2007-12-01

    ESyS-Crustal, a finite element based computational model and software, has been developed and applied to simulate complex nonlinear interacting fault systems with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results using the developed software have clearly indicated that the shallow part of the subduction zone in the Sumatra region between latitudes 6S and 2N has been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event resulting in the infamous Boxing Day tsunami. Two strong earthquakes that occurred in the distant past in this region (between 6S and 1S), in 1797 (M8.2) and 1833 (M9.0) respectively, are indicative of the high potential for very large destructive earthquakes to occur in this region with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the 2007 Sumatra earthquakes occurred, which fell exactly into the predicted zone (see the ACES2006 web site and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far suggest that there should be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented at the meeting.

  14. Prediction model of earthquake with the identification of earthquake source polarity mechanism through the focal classification using ANFIS and PCA technique

    NASA Astrophysics Data System (ADS)

    Setyonegoro, W.

    2016-05-01

    The incidence of earthquake disasters has caused considerable human and material losses. This research aims to predict the return period of earthquakes, together with identification of the earthquake source mechanism, for a case study area in Sumatra. Earthquakes are predicted using the ANFIS technique trained on historical earthquake data. In this technique, the historical data set is compiled into intervals of daily average earthquake occurrence in a year. The output to be obtained is a model of the return period of earthquake events as a daily average in a year. After the return period model has been learned by ANFIS, polarity recognition is performed through image recognition techniques on the focal sphere using the principal component analysis (PCA) method. As a result, the model predicted the return period of earthquake events with a correlation coefficient of 0.014562 for the average monthly return period.

  15. Non-conservative mass transfers in Algols

    NASA Astrophysics Data System (ADS)

    Erdem, A.; Öztürk, O.

    2014-06-01

    We applied a revised model for non-conservative mass transfer in semi-detached binaries to 18 Algol-type binaries showing orbital period increase or decrease in their parabolic O-C diagrams. The combined effect of mass transfer and magnetic braking due to stellar wind was considered when interpreting the orbital period changes of these 18 Algols. Mass transfer was found to be the dominant mechanism for the increase in orbital period of 10 Algols (AM Aur, RX Cas, DK Peg, RV Per, WX Sgr, RZ Sct, BS Sct, W Ser, BD Vir, XZ Vul) while magnetic braking appears to be the responsible mechanism for the decrease in that of 8 Algols (FK Aql, S Cnc, RU Cnc, TU Cnc, SX Cas, TW Cas, V548 Cyg, RY Gem). The peculiar behaviour of orbital period changes in three W Ser-type binary systems (W Ser, itself a prototype, RX Cas and SX Cas) is discussed. The empirical linear relation between orbital period (P) and its rate of change (dP/dt) was also revised.
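
    As a point of reference for such period-change analyses, the fully conservative mass-transfer relation dP/dt = 3 P (dM_d/dt) (M_d - M_a) / (M_d M_a), with dM_d/dt < 0 for the donor, can be evaluated directly; non-conservative mass loss and magnetic braking, as in the revised model above, contribute additional terms. The numbers below are illustrative.

        # conservative mass-transfer baseline for the orbital period change
        YEAR = 3.156e7                   # seconds per year
        P = 5.0 * 86400.0                # orbital period: 5 days, in seconds
        M_d, M_a = 1.0, 2.5              # donor and accretor masses (solar masses)
        Mdot_d = -1e-7 / YEAR            # donor loses 1e-7 Msun/yr
        dPdt = 3.0 * P * Mdot_d * (M_d - M_a) / (M_d * M_a)
        print(dPdt * YEAR, "s/yr")       # positive: period increases (~ +0.08 s/yr)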

  16. Combined GPS and InSAR models of postseismic deformation from the Northridge Earthquake

    NASA Technical Reports Server (NTRS)

    Donnellan, A.; Parker, J. W.; Peltzer, G.

    2002-01-01

    Models of combined Global Positioning System and Interferometric Synthetic Aperture Radar data collected in the region of the Northridge earthquake indicate that significant afterslip on the main fault occurred following the earthquake.

  17. Understanding earthquake source processes with spatial random field models

    NASA Astrophysics Data System (ADS)

    Song, S.

    2011-12-01

    Earthquake rupture is a complex mechanical process that can be formulated as a dynamically running shear crack on a frictional interface embedded in an elastic continuum. This type of dynamic description of earthquake rupture is often preferred among researchers because they believe the kinematic description is likely to miss physical constraints introduced by dynamic approaches and to lead to arbitrary and nonphysical kinematic fault motions. However, dynamic rupture models, although physically consistent, often use arbitrary input parameters, e.g., stress and fracture energy, partially because these are more difficult to constrain with data than kinematic ones. I propose to describe earthquake rupture as a stochastic model with a set of random variables (e.g., a random field) that represent the spatial distribution of kinematic source parameters such as slip, rupture velocity, slip duration, and velocity. This is a kinematic description of earthquake rupture in the sense that the model is formulated with kinematic parameters, but since the model can be constrained by both rupture dynamics and data, it may have both physical and observational constraints inside. The stochastic model is formulated by quantifying the 1-point and 2-point statistics of the kinematic parameters. 1-point statistics define a marginal probability density function for a certain source parameter at a given point on a fault. For example, a probability distribution for earthquake slip at a given point can control the possible range of values taken by earthquake slip and their likelihood. In the same way, we can control the existence of supershear rupture with the 1-point variability of the rupture velocity. 2-point statistics, i.e., auto- and cross-coherence between source parameters, control the heterogeneity of each source parameter and their coupling, respectively. Several interesting features of earthquake rupture have been found by investigating cross-coherence between source parameters.
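
    A compact way to realize such a description numerically is to generate a Gaussian random field with a prescribed 2-point structure and then map its marginal onto the desired 1-point distribution. The 1-D von Karman-like spectrum and lognormal slip marginal below are illustrative choices, not the author's calibrated statistics.

        import numpy as np
        from scipy.stats import norm, lognorm

        rng = np.random.default_rng(3)
        n, corr_len = 512, 20.0
        k = np.fft.fftfreq(n)
        psd = 1.0 / (1.0 + (k * corr_len) ** 2) ** 2       # 2-point statistics
        noise = np.fft.fft(rng.standard_normal(n))
        field = np.real(np.fft.ifft(np.sqrt(psd) * noise))
        field /= field.std()                               # standard Gaussian marginal
        u = norm.cdf(field)                                # probability integral transform
        slip = lognorm(s=0.7, scale=1.0).ppf(u)            # impose 1-point statistics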

  18. Meeting the Challenge of Earthquake Risk Globalisation: Towards the Global Earthquake Model GEM (Sergey Soloviev Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Zschau, J.

    2009-04-01

    Earthquake risk, like natural risk in general, has become a highly dynamic and globally interdependent phenomenon. Due to the "urban explosion" in the Third World, an increasingly complex cross-linking of critical infrastructure and lifelines in the industrial nations, and a growing globalisation of the world's economies, we are presently facing a dramatic increase in our society's vulnerability to earthquakes in practically all seismic regions on our globe. Such fast and global changes cannot be captured with conventional earthquake risk models anymore. The sciences in this field are, therefore, asked to come up with new solutions that no longer aim exclusively at the best possible quantification of present risks, but also keep an eye on their changes with time and allow these to be projected into the future. This applies not only to the vulnerability component of earthquake risk, but also to its hazard component, which has been realized to be time-dependent, too. The challenges of earthquake risk dynamics and globalisation have recently been accepted by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD-GSF), which initiated the "Global Earthquake Model (GEM)", a public-private partnership for establishing an independent standard to calculate, monitor and communicate earthquake risk globally, raise awareness and promote mitigation.

  19. Physical model for earthquakes, 1. Fluctuations and interactions

    SciTech Connect

    Rundle, J.B.

    1988-06-10

    This is the first of a series of papers whose purpose is to develop the apparatus needed to understand the problem of earthquake occurrence in a more physical context than has often been the case. To begin, it is necessary to introduce the idea that earthquakes represent a fluctuation about the long-term motion of the plates. This idea is made mathematically explicit by the introduction of a concept called the fluctuation hypothesis. Under this hypothesis, all physical quantities which pertain to the occurrence of earthquakes are required to depend on a physically meaningful quantity called the offset phase, the difference between the present state of slip on the fault and its long-term average. For the mathematical treatment of the fluctuation problem it is most convenient to introduce a spatial averaging, or "coarse-graining" operation, dividing the fault plane into a lattice of N patches. In this way, integrals are replaced by sums, and differential equations are replaced by algebraic equations. As a result of these operations the physics of earthquake occurrence can be stated in terms of a physically meaningful energy functional: an "external potential" W_E. W_E is a functional potential for the stress on the fault plane acting from the external medium and characterizes the energy contained within the medium external to the fault plane which is available to produce earthquakes. A simple example is discussed which involves the dynamics of a one-dimensional fault model. To gain some understanding, a simple friction law and a failure algorithm are assumed. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state.
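
    The "one-dimensional fault model with a simple friction law and failure algorithm" invites a toy implementation: a lattice of N patches, uniform slow loading, a stress threshold, and nearest-neighbor stress redistribution on failure. This is a generic slider-block/cellular-automaton sketch, not Rundle's specific W_E formulation.

        import numpy as np

        rng = np.random.default_rng(4)
        N, thresh, drop = 100, 1.0, 1.0
        stress = rng.random(N)
        events = []
        for step in range(5000):
            stress += 0.001                              # slow tectonic loading
            failing = stress >= thresh
            size = 0
            while failing.any():                         # cascade until quiescent
                idx = np.where(failing)[0]
                size += idx.size
                stress[idx] -= drop
                stress[(idx - 1) % N] += 0.4 * drop      # redistribute part of the
                stress[(idx + 1) % N] += 0.4 * drop      # stress drop to neighbors
                failing = stress >= thresh
            if size:
                events.append((step, size))              # synthetic event catalog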

  1. Physical model for earthquakes, 2. Application to southern California

    SciTech Connect

    Rundle, J.B.

    1988-06-10

    The purpose of this paper is to apply ideas developed in a previous paper to the construction of a detailed model for earthquake dynamics in southern California. The basis upon which the approach is formulated is that earthquakes are perturbations on, or more specifically fluctuations about, the long-term motions of the plates. This concept is made mathematically precise by means of a "fluctuation hypothesis," which states that all physical quantities associated with earthquakes can be expressed as integral expansions in a fluctuating quantity called the "offset phase." While in general, the frictional stick-slip properties of the complex, interacting faults should properly come out of the underlying physics, a simplification is made here, and a simple, spatially varying friction law is assumed. Together with the complex geometry of the major active faults, an assumed, spatially varying Earth rheology, the average rates of long-term offsets on all the major faults, and the friction coefficients, one can generate synthetic earthquake histories for comparison to the real data.

  2. Ionosphere TEC disturbances before strong earthquakes: observations, physics, modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Namgaladze, A. A.

    2013-12-01

    The phenomenon of pre-earthquake ionospheric disturbances is discussed. A number of typical TEC (Total Electron Content) relative disturbances are presented for several recent strong earthquakes that occurred under different ionospheric conditions. Stable typical TEC deviations from the quiet background state are observed a few days before strong seismic events in the vicinity of the earthquake epicenter and are treated as ionospheric earthquake precursors. They do not move away from the source, in contrast to disturbances related to geomagnetic activity. Sunlit conditions reduce the disturbances up to their full disappearance, and the effects regenerate at night. The TEC disturbances are often observed in the magnetically conjugate areas as well. At low latitudes they are accompanied by equatorial anomaly modifications. The hypothesis of an electromagnetic channel for the creation of the pre-earthquake ionospheric disturbances is discussed. The lithosphere and ionosphere are coupled by vertical external electric currents resulting from ionization of the near-Earth air layer and vertical transport of the charged particles through the atmosphere over the fault. External electric current densities exceeding the regular fair-weather electric currents by several orders of magnitude are required to produce stable long-living seismogenic electric fields such as those observed by onboard measurements of the 'Intercosmos-Bulgaria 1300' satellite over seismically active zones. Numerical calculations using the Upper Atmosphere Model demonstrate the ability of external electric currents with densities of 10^-8 to 10^-9 A/m^2 to produce such electric fields. The simulations reproduce the basic features of typical pre-earthquake TEC relative disturbances. It is shown that the plasma ExB drift under the action of the seismogenic electric field leads to changes of the F2 region electron number density and TEC. The upward drift velocity component enhances NmF2 and TEC.

  3. Desk-top model buildings for dynamic earthquake response demonstrations

    USGS Publications Warehouse

    Brady, A. Gerald

    1992-01-01

    Models of buildings that illustrate dynamic resonance behavior when excited by hand are designed and built. Two types of buildings are considered, one with columns stronger than floors, the other with columns weaker than floors. Combinations and variations of these two types are possible. Floor masses and column stiffnesses are chosen in order that the frequency of the second mode is approximately five cycles per second, so that first and second modes can be excited manually. The models are expected to be resonated by hand by schoolchildren or persons unfamiliar with the dynamic resonant response of tall buildings, to gain an understanding of structural behavior during earthquakes. Among other things, this experience will develop a level of confidence in the builder and experimenter should they be in a high-rise building during an earthquake, sensing both these resonances and other violent shaking.

  4. Punctuated-equilibrium model of biological evolution is also a self-organized-criticality model of earthquakes

    NASA Astrophysics Data System (ADS)

    Ito, Keisuke

    1995-09-01

    Bak and Sneppen proposed a self-organized-criticality model to explain the punctuated equilibrium of biological evolution. The model, as it is, is a good self-organized-criticality model of earthquakes. Real earthquakes satisfy the required conditions of criticality; that is, power laws in (1) the size distribution of earthquakes, and (2) both the spatial and the temporal correlation functions.
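
    The Bak-Sneppen dynamics referred to here take only a few lines: repeatedly replace the smallest "fitness" value and those of its two neighbors with fresh random numbers. A minimal sketch:

        import numpy as np

        rng = np.random.default_rng(5)
        N = 200
        f = rng.random(N)                    # fitness (or patch strength) values
        minima = []
        for _ in range(100_000):
            i = np.argmin(f)
            minima.append(f[i])
            for j in (i - 1, i, i + 1):      # replace the minimum and its neighbors
                f[j % N] = rng.random()
        # after a transient, the minima hover below a critical threshold (~0.667)
        # and the avalanches between successive minima follow power-law statistics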

  5. The Common Forces: Conservative or Nonconservative?

    ERIC Educational Resources Information Center

    Keeports, David

    2006-01-01

    Of the forces commonly encountered when solving problems in Newtonian mechanics, introductory texts usually limit illustrations of the definitions of conservative and nonconservative forces to gravity, spring forces, kinetic friction and fluid resistance. However, at the expense of very little class time, the question of whether each of the common…

  6. Parity nonconservation in the hydrogen atom

    SciTech Connect

    Chupp, T.E.

    1983-01-01

    The development of experiments to detect parity nonconserving (PNC) mixing of the 2s_1/2 and 2p_1/2 levels of the hydrogen atom in a 570 Gauss magnetic field is described. The technique involves observation of an asymmetry in the rate of microwave induced transitions at 1608 MHz due to the interference of two amplitudes, one produced by applied microwave and static electric fields and the other produced by an applied microwave field and the 2s_1/2 - 2p_1/2 mixing induced by a PNC Hamiltonian. These investigations, underway since 1977, have led to an experiment in which the two amplitudes are produced in two independently phased microwave cavities. The apparatus has the great advantage that all applied fields are cylindrically symmetric, thus false PNC effects can be generated only by departures from cylindrical symmetry which enter as the product of two small misalignment angles. The apparatus also has great diagnostic power since the sectioned microwave cavities can be used to produce static electric fields over short, well localized regions of space. This permits alignment of the apparatus and provides a sensitive probe of cylindrical symmetry. A phase regulation loop greatly reduces phase noise due to instabilities of the magnetic field, microwave generators, and resonant cavities. A preliminary measurement following alignment of the apparatus sets an upper limit of 575 on the parameter C_2p, which gives the strength of the PNC-induced mixing of the beta_0 (2s_1/2) and e_0 (2p_1/2) states. The prediction of the standard model, including radiative corrections, is C_2p = 0.08 +/- 0.037.

  7. Formulation and Application of a Physically-Based Rupture Probability Model for Large Earthquakes on Subduction Zones: A Case Study of Earthquakes on Nazca Plate

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.

    2014-12-01

    Most time-dependent rupture probability (TDRP) models are basically designed for a single-mode rupture, i.e., a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historic earthquake data do not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been successfully used to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model that is formulated based on the subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment balance concept that is calibrated based on historic earthquake data. This information, in conjunction with estimates of coseismic slip from past earthquakes, is used to formulate time-dependent rupture probability models for cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and the regional and local earthquake history, as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model developed in this study allows estimating rupture probabilities for large earthquakes other than just a single characteristic-magnitude earthquake. This provides a general framework for formulating physically based rupture probability models.

  8. Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps

    NASA Astrophysics Data System (ADS)

    Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.

    2014-12-01

    Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real-time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and, most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and the funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.

  9. Theory and application of experimental model analysis in earthquake engineering

    NASA Astrophysics Data System (ADS)

    Moncarz, P. D.

    The feasibility and limitations of small-scale model studies in earthquake engineering research and practice is considered with emphasis on dynamic modeling theory, a study of the mechanical properties of model materials, the development of suitable model construction techniques and an evaluation of the accuracy of prototype response prediction through model case studies on components and simple steel and reinforced concrete structures. It is demonstrated that model analysis can be used in many cases to obtain quantitative information on the seismic behavior of complex structures which cannot be analyzed confidently by conventional techniques. Methodologies for model testing and response evaluation are developed in the project and applications of model analysis in seismic response studies on various types of civil engineering structures (buildings, bridges, dams, etc.) are evaluated.

  10. Nonconservative dynamics of optically trapped high-aspect-ratio nanowires

    NASA Astrophysics Data System (ADS)

    Toe, Wen Jun; Ortega-Piwonka, Ignacio; Angstmann, Christopher N.; Gao, Qiang; Tan, Hark Hoe; Jagadish, Chennupati; Henry, Bruce I.; Reece, Peter J.

    2016-02-01

    We investigate the dynamics of high-aspect-ratio nanowires trapped axially in a single gradient force optical tweezers. A power spectrum analysis of the dynamics reveals a broad spectral resonance of the order of kHz with peak properties that are strongly dependent on the input trapping power. A dynamical model incorporating linear restoring optical forces, a nonconservative asymmetric coupling between translational and rotational degrees of freedom, viscous drag, and white noise provides an excellent fit to experimental observations. A persistent low-frequency cyclical motion around the equilibrium trapping position, with a frequency distinct from the spectral resonance, is observed from the time series data.
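
    The model ingredients listed above (linear restoring forces, an asymmetric nonconservative coupling, viscous drag, white noise) can be sketched as an overdamped Langevin system integrated by Euler-Maruyama. The coupling below carries a nonzero curl, which is what produces the persistent circulation; all parameters are illustrative, not fitted to the nanowire experiments.

        import numpy as np

        rng = np.random.default_rng(6)
        kx, ky, eps, D, dt = 1.0, 2.0, 0.5, 0.1, 1e-3
        x = np.zeros(2)
        traj = np.empty((50_000, 2))
        for i in range(traj.shape[0]):
            force = np.array([-kx * x[0] + eps * x[1],
                              -ky * x[1] - eps * x[0]])   # curl != 0: nonconservative
            x = x + force * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)
            traj[i] = x
        # the cross-correlation <x(t) y(t+s)> is asymmetric in s, the signature
        # of cyclical motion around the equilibrium trapping position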

  11. Renormalized dissipation in the nonconservatively forced Burgers equation

    SciTech Connect

    Krommes, J.A.

    2000-01-19

    A previous calculation of the renormalized dissipation in the nonconservatively forced one-dimensional Burgers equation, which encountered a catastrophic long-wavelength divergence ~ k_min^-3, is reconsidered. In the absence of velocity shear, analysis of the eddy-damped quasi-normal Markovian closure predicts only a benign logarithmic dependence on k_min. The original divergence is traced to an inconsistent resonance-broadening type of diffusive approximation, which fails in the present problem. Ballistic scaling of renormalized pulses is retained, but such scaling does not, by itself, imply a paradigm of self-organized criticality. An improved scaling formula for a model with velocity shear is also given.

  12. Modeling warning times for the Israel's earthquake early warning system

    NASA Astrophysics Data System (ADS)

    Pinsky, Vladimir

    2015-01-01

    In June 2012, the Israeli government approved the creation of an earthquake early warning system (EEWS) that would provide timely alarms for schools and colleges in Israel. A network configuration was chosen, consisting of a staggered line of ˜100 stations along the main regional faults, the Dead Sea fault and the Carmel fault, and an additional ˜40 stations spread more or less evenly over the country. A hybrid approach to the EEWS alarm was suggested, in which a P-wave-based system is combined with the S-threshold method. The former utilizes first arrivals at several stations closest to the event for prompt location and determination of the earthquake's magnitude from the first 3 s of the waveform data. The latter issues alarms when the acceleration of the surface movement exceeds a threshold for at least two neighboring stations. The threshold will be chosen as the peak acceleration level corresponding to a magnitude 5 earthquake at a short distance range (5-10 km). The warning times, or lead times, i.e., the times between the arrival of the alarm signal and the arrival of the damaging S-waves, are considered for the P, S, and hybrid EEWS methods. For each of the approaches, the P- and S-wave travel times and the alarm times were calculated using a standard 1D velocity model and some assumptions regarding the EEWS data latencies. Then, a definition of alarm effectiveness was introduced as a measure of the trade-off between the warning time and the shaking intensity. A number of strong earthquake scenarios, together with anticipated shaking intensities at important targets, namely cities with high populations, are considered. The scenarios demonstrate in probabilistic terms how the alarm effectiveness varies depending on the target distance from the epicenter and the event magnitude.
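
    The lead-time bookkeeping is simple to sketch: the alert time is the P travel time to the detecting station plus data latency, and the warning time is whatever remains before the S-wave reaches the target. The velocities, latency, and distances below are assumptions, not the study's calibrated values.

        import numpy as np

        vp, vs = 6.0, 3.5                    # km/s, generic 1-D crustal values
        latency = 3.0 + 1.0                  # 3 s of waveform + 1 s telemetry/processing

        def lead_time(sta_km, target_km, depth_km=10.0):
            r_sta = np.hypot(sta_km, depth_km)       # hypocenter-station distance
            r_tgt = np.hypot(target_km, depth_km)    # hypocenter-target distance
            alert = r_sta / vp + latency
            return r_tgt / vs - alert        # negative value: no useful warning

        print(lead_time(sta_km=20.0, target_km=80.0))   # ~15 s of warning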

  13. Modeling of regional earthquakes, aseismic deformation and fault patterns

    NASA Astrophysics Data System (ADS)

    Lyakhovsky, V.; Ben-Zion, Y.

    2005-12-01

    We study the coupled evolution of earthquakes and faults in a 3-D lithospheric model consisting of a weak sedimentary layer over a crystalline crust and upper mantle. The total strain tensor in each layer is the sum of (1) elastic strain, (2) damage-related inelastic strain, and (3) ductile strain. We use a visco-elastic damage rheology model (Lyakhovsky et al., 1997; Hamiel et al., 2004) to calculate elastic strain coupled with evolving material damage and damage-related inelastic strain accumulation. A thermodynamically based equation for damage evolution accounts for degradation and healing as a function of the elastic strain tensor and material properties (rate coefficients and the ratio of strain invariants separating states of degradation and healing). Analyses of stress-strain, acoustic emission and frictional data provide constraints on the damage model parameters. The ductile strain in the sedimentary layer is governed by Newtonian viscosity, while power-law rheology is used for the ductile strain in the lower crust and upper mantle. Each mechanism of strain and damage evolution is associated with its own timescale. In our previous study of earthquakes and faults in a 2-D model with averaged stress distribution over the seismogenic zone (thin sheet approximation) we demonstrated effects associated with the ratio between the time scales for damage healing and for tectonic loading. The results indicated that a low ratio leads to the development of geometrically regular fault systems and characteristic frequency-size earthquake statistics, while a high ratio leads to the development of a network of disordered fault systems and Gutenberg-Richter statistics. Stress relaxation through ductile creep and damage-related strain mechanisms is associated with two additional time scales. In contrast to the previous 2-D model, the thickness of the seismogenic zone is not prescribed by the model set-up, but is a function of the ratio between the timescale of damage accumulation

  14. Comparison of Short-term and Long-term Earthquake Forecast Models for Southern California

    NASA Astrophysics Data System (ADS)

    Helmstetter, A.; Kagan, Y. Y.; Jackson, D. D.

    2004-12-01

    Many earthquakes are triggered in part by preceding events. Aftershocks are the most obvious examples, but many large earthquakes are preceded by smaller ones. The large fluctuations of seismicity rate due to earthquake interactions thus provide a way to improve earthquake forecasting significantly. We have developed a model to estimate daily earthquake probabilities in Southern California, using the Epidemic Type Earthquake Sequence model [Kagan and Knopoff, 1987; Ogata, 1988]. The forecasted seismicity rate is the sum of a constant external loading and the aftershocks of all past earthquakes. The background rate is estimated by smoothing past seismicity. Each earthquake triggers aftershocks with a rate that increases exponentially with its magnitude and decreases with time following Omori's law. We use an isotropic kernel to model the spatial distribution of aftershocks for small (M≤5.5) mainshocks, and a smoothing of the locations of early aftershocks for larger mainshocks. The model also assumes that all earthquake magnitudes follow the Gutenberg-Richter law with a uniform b-value. We use a maximum likelihood method to estimate the model parameters and test the short-term and long-term forecasts. A retrospective test using a daily update of the forecasts between 1985/1/1 and 2004/3/10 shows that the short-term model decreases the uncertainty of an earthquake occurrence by a factor of about 10.
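
    The forecast rate of this type of model is easy to write down: a constant background plus Omori-law aftershock terms whose productivity grows exponentially with mainshock magnitude. The parameter values below are illustrative, not the fitted Southern California ones.

        import numpy as np

        mu = 0.2                                  # background rate, events/day
        K, alpha, c, p, m0 = 0.02, 0.8, 0.01, 1.1, 3.0

        def forecast_rate(t_now, eq_times, eq_mags):
            dt = t_now - eq_times
            past = dt > 0
            omori = (dt[past] + c) ** -p          # Omori decay of each sequence
            prod = K * 10 ** (alpha * (eq_mags[past] - m0))
            return mu + np.sum(prod * omori)

        times = np.array([0.0, 2.5, 9.9])         # days
        mags = np.array([4.1, 5.3, 3.2])
        print(forecast_rate(10.0, times, mags))   # rate just after the M3.2 event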

  15. Optimized volume models of earthquake-triggered landslides

    PubMed Central

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e., log-transformed-based linear and original-data-based nonlinear least squares, were applied to the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The entire landslide volume obtained from the relationship between quake magnitude and landslide volume for individual earthquakes is much less than that from this study, which highlights the necessity of updating the power-law relationship. PMID:27404212
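
    The original-data-based nonlinear least-squares fit of the basic "volume-area" power law V = a A^b can be sketched as follows; the synthetic areas and volumes stand in for the 1,415 DEM-derived deposits, and only the conventional two-parameter model is shown, not the proposed multivariate ones.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(7)
        area = 10 ** rng.uniform(2, 5, size=500)                # m^2
        volume = 0.05 * area ** 1.3 * rng.lognormal(0.0, 0.3, size=500)

        power_law = lambda A, a, b: a * A ** b
        (a_fit, b_fit), _ = curve_fit(power_law, area, volume, p0=(0.1, 1.2),
                                      maxfev=10000)             # fit on original data
        total_volume = np.sum(power_law(area, a_fit, b_fit))    # summed model volumes
        print(a_fit, b_fit, total_volume)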

  16. Recurrence time distributions of large earthquakes in conceptual model studies

    NASA Astrophysics Data System (ADS)

    Zoeller, G.; Hainzl, S.

    2007-12-01

    The recurrence time distribution of large earthquakes in seismically active regions is a crucial ingredient for seismic hazard assessment. However, due to sparse observational data and a lack of knowledge of the precise mechanisms controlling seismicity, this distribution is unknown. In many practical applications of seismic hazard assessment, the Brownian passage time (BPT) distribution (or a different distribution) is fitted to a small number of observed recurrence times. Here, we study various aspects of recurrence time distributions in conceptual models of individual faults and fault networks: First, the dependence of the recurrence time distribution on the fault interaction is investigated by means of a network of Brownian relaxation oscillators. Second, the Brownian relaxation oscillator is modified towards a model for large earthquakes, also taking into account the statistics of intermediate events in a more appropriate way. This model simulates seismicity in a fault zone consisting of a major fault and some surrounding smaller faults with Gutenberg-Richter type seismicity. It can be used for more realistic and robust estimations of the real recurrence time distribution in seismic hazard assessment.
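
    For reference, the BPT recurrence density with mean mu and aperiodicity alpha, and the conditional probability of an event within the next dT years given the elapsed open interval, can be evaluated as below; the parameter values are illustrative.

        import numpy as np
        from scipy.integrate import quad

        mu, alpha = 150.0, 0.5                     # mean recurrence (yr), aperiodicity

        def bpt_pdf(t):
            # inverse-Gaussian (BPT) density with mean mu, shape mu/alpha^2
            return (np.sqrt(mu / (2 * np.pi * alpha**2 * t**3))
                    * np.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t)))

        def cond_prob(elapsed, dT):
            num, _ = quad(bpt_pdf, elapsed, elapsed + dT)
            den, _ = quad(bpt_pdf, elapsed, np.inf)
            return num / den                       # P(event in next dT | still open)

        print(cond_prob(elapsed=120.0, dT=30.0))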

  17. Mathematical models for estimating earthquake casualties and damage cost through regression analysis using matrices

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Bautista, L. A.; Baccay, E. B.

    2014-04-01

    The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, the number of injured persons, affected families, and the total cost of damage. To quantify the direct damage from earthquakes to human beings and property, given the magnitude, intensity, depth of focus, location of epicentre, and time duration, regression models were constructed. The researchers formulated the models through regression analysis using matrices and used α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines during the inclusive years 1968 to 2012. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology. Data on damages and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. This study will be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
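
    The "regression analysis using matrices" reduces to the normal equations beta = (X^T X)^(-1) X^T y. The sketch below fits one such casualty model on synthetic placeholders for the 30-earthquake dataset; the predictors and coefficients are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 30
        magnitude = rng.uniform(5.0, 8.0, n)
        depth = rng.uniform(1.0, 100.0, n)        # focal depth, km
        duration = rng.uniform(10.0, 120.0, n)    # shaking duration, s
        X = np.column_stack([np.ones(n), magnitude, depth, duration])
        y = 500 * magnitude - 2 * depth + 3 * duration + rng.normal(0, 100, n)

        beta = np.linalg.solve(X.T @ X, X.T @ y)  # least-squares coefficients
        y_hat = X @ beta                          # fitted casualties
        print(beta)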

  1. Salient Features of the 2015 Gorkha, Nepal Earthquake in Relation to Earthquake Cycle and Dynamic Rupture Models

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.

    2015-12-01

    Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined in a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such a close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity.

  2. Newmark displacement model for landslides induced by the 2013 Ms 7.0 Lushan earthquake, China

    NASA Astrophysics Data System (ADS)

    Yuan, Renmao; Deng, Qinghai; Cunningham, Dickson; Han, Zhujun; Zhang, Dongli; Zhang, Bingliang

    2016-01-01

    Predicting approximate earthquake-induced landslide displacements is helpful for assessing earthquake hazards and designing slopes to withstand future earthquake shaking. In this work, the basic methodology outlined by Jibson (1993) is applied to derive the Newmark displacement of landslides based on strong ground-motion recordings during the 2013 Lushan Ms 7.0 earthquake. By analyzing the relationships between Arias intensity, Newmark displacement, and critical acceleration for the Lushan earthquake, the Jibson93 formula and its modified models are shown to be applicable to the Lushan earthquake dataset. Different empirical equations with new fitting coefficients for estimating Newmark displacement are then developed for comparative analysis. The results indicate that a modified model achieves a better goodness of fit and a smaller estimation error than the Jibson93 formula, suggesting that the modified model is more suitable for the Lushan earthquake dataset. The analysis also suggests that a global equation is not ideally suited to directly estimate the Newmark displacements of landslides induced by one specific earthquake. Rather, it is empirically better to perform a new multivariate regression analysis to derive new coefficients for the global equation using the dataset of the specific earthquake. The results presented in this paper can be applied to a future co-seismic landslide hazard assessment to inform reconstruction efforts in the area affected by the 2013 Lushan Ms 7.0 earthquake, and for future disaster prevention and mitigation.
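
    A sketch of the multivariate regression step recommended above, using the Jibson93-type functional form log10(Dn) = c0 + c1·log10(Ia) + c2·log10(ac); the records and resulting coefficients here are hypothetical, not the Lushan or Jibson fits.

```python
import numpy as np

# Hypothetical strong-motion records: Arias intensity Ia (m/s), critical
# acceleration ac (g) and observed Newmark displacement Dn (cm).
Ia = np.array([0.8, 1.5, 2.2, 3.0, 4.1, 5.3])
ac = np.array([0.05, 0.10, 0.15, 0.08, 0.20, 0.12])
Dn = np.array([12.0, 9.5, 6.1, 21.0, 3.2, 11.4])

# Least-squares fit of the log-linear model to the (Ia, ac, Dn) triples
G = np.column_stack([np.ones(Ia.size), np.log10(Ia), np.log10(ac)])
coef, *_ = np.linalg.lstsq(G, np.log10(Dn), rcond=None)
print("c0, c1, c2 =", coef.round(3))

# Predicted displacement for a new (Ia, ac) pair
print("Dn =", 10 ** (coef @ [1.0, np.log10(2.0), np.log10(0.1)]), "cm")
```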

  3. Relativistic Coupled-Cluster Theory of Atomic Parity Nonconservation: Application to {sup 137}Ba{sup +}

    SciTech Connect

    Sahoo, Bijaya K.; Chaudhuri, Rajat; Das, B. P.; Mukherjee, Debashis

    2006-04-28

    We report the result of our ab initio calculation of the 6s{sup 2}S{sub 1/2}{yields}5d{sup 2}D{sub 3/2} parity nonconserving electric dipole transition amplitude in {sup 137}Ba{sup +} based on relativistic coupled-cluster theory. Considering single, double, and partial triple excitations, we have achieved an accuracy of less than 1%. If the accuracy of our calculation can be matched by the proposed parity nonconservation experiment in Ba{sup +} for the above transition, then the combination of the two results would provide an independent nonaccelerator test of the standard model of particle physics.

  4. VLF subionospheric disturbances associated with earthquakes: Observations and numerical modeling

    NASA Astrophysics Data System (ADS)

    Hobara, Y.; Iwamoto, M.; Ohta, K.; Hayakawa, M.

    2011-12-01

    Recently many experimental results have been reported concerning ionospheric perturbations associated with major earthquakes. VLF/LF transmitter signals received by network observations are used to detect seismo-ionospheric signatures such as amplitude and phase anomalies. These signatures are due to ionospheric perturbations located around the transmitter and receivers. However, the physical properties of the perturbations, such as electron density, spatial scale, and location, have not been well understood. In this paper we performed numerical modeling of the subionospheric VLF/LF signals for various conditions of seismo-ionospheric perturbations by using a two-dimensional finite-difference time-domain (FDTD) method to determine the perturbation properties. The amplitude and phase for various cases of an ionospheric perturbation are calculated relative to the normal condition (without perturbation) as functions of distance from the transmitter and distance between the transmitter and perturbation. These numerical results are compared with our observations. As a result, we found that the received transmitter amplitude depends greatly on the distance between the transmitter and the ionospheric perturbation, and on the spatial scale and height of the perturbations. Moreover, results of modeled ionospheric perturbations for the recent 2011 off the Pacific coast of Tohoku earthquake are compared with those from our VLF network experiment.
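
    The paper's solver is a 2-D FDTD model of the earth-ionosphere waveguide; the minimal 1-D Yee update below only illustrates the method's core loop, with grid size, time step and source being arbitrary choices.

```python
import numpy as np

c0 = 3.0e8                       # speed of light (m/s)
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
nx, nt = 2000, 3000
dx = 1000.0                      # 1 km cells
dt = dx / (2.0 * c0)             # Courant number 0.5, stable for 1-D

Ez = np.zeros(nx)                # vertical electric field
Hy = np.zeros(nx - 1)            # transverse magnetic field
for n in range(nt):
    Hy += dt / (mu0 * dx) * np.diff(Ez)        # staggered H update
    Ez[1:-1] += dt / (eps0 * dx) * np.diff(Hy) # staggered E update
    Ez[nx // 4] += np.exp(-(((n - 100) / 30.0) ** 2))  # soft Gaussian source

print("peak |Ez| on the grid:", np.abs(Ez).max())
```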

  5. Numerical earthquake model of the 31 October 2013 Ruisui, Taiwan, earthquake: Source rupture process and seismic wave propagation

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Huang, Hsin-Hua; Shyu, J. Bruce H.; Yeh, Te-Yang; Lin, Tzu-Chi

    2014-12-01

    We build a numerical earthquake model, including numerical source and wave propagation models, to understand the rupture process and the ground motion time history of the 2013 ML 6.4 Ruisui earthquake in Taiwan. This moderately large event was located in the Longitudinal Valley, a suture zone of the Philippine Sea Plate and the Eurasia Plate. A joint source inversion analysis using teleseismic body wave, GPS coseismic displacement and near-field ground motion data was performed first. The inversion results derived from a western-dipping fault plane indicate that the slip occurred at depths between 10 and 20 km. The rupture propagated from south to north and two asperities were resolved. The largest one was located approximately 15 km north of the epicenter with a maximum slip of about 1 m. A 3D seismic wave propagation simulation based on the spectral-element method was then carried out using the inverted source model. A strong rupture directivity effect in the northern area of the Longitudinal Valley was found, which was due to the northward rupture process. Forward synthetic waveforms could explain most of the near-field ground motion data for frequencies between 0.05 and 0.2 Hz. This numerical earthquake model not only helps us confirm the detailed rupture processes on the Central Range Fault but also contributes to regional seismic hazard mitigation for future large earthquakes.

  6. Numerical Earthquake Model of the 31 October 2013 Ruisui, Taiwan, Earthquake: Source Rupture Process and Seismic Wave Propagation

    NASA Astrophysics Data System (ADS)

    Lee, S. J.; Huang, H. H.; Shyu, J. B. H.; Lin, T. C.; Yeh, T. Y.

    2014-12-01

    We build a numerical earthquake model, including numerical source and wave propagation models, to understand the rupture process and the ground motion time history of the 2013 ML 6.4 Ruisui earthquake in Taiwan. This moderately large event was located in the Longitudinal Valley, a suture zone of the Philippine Sea Plate and the Eurasia Plate. A joint source inversion analysis using teleseismic body wave, GPS coseismic displacement and near-field ground motion data was performed first. The inversion results derived from a western-dipping fault plane indicate that the slip occurred at depths between 10 and 20 km. The rupture propagated from south to north and produced two asperities. The largest one was located approximately 15 km north of the epicenter with a maximum slip of about 1 m. A 3D seismic wave propagation simulation based on the spectral-element method was then carried out using the inverted source model. A strong rupture directivity effect in the northern area of the Longitudinal Valley was found, which was due to the northward rupture process. Forward synthetic waveforms could explain most of the near-field ground motion data for frequencies between 0.05 and 0.2 Hz. This numerical earthquake model not only helps us confirm the detailed rupture processes on the Central Range Fault but also contributes to regional seismic hazard mitigation for future large earthquakes.

  7. New model on the relations between surface uplift and erosion caused by large, compressional earthquakes

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2015-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that, above a critical magnitude, an earthquake would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, had not yet been considered. A new, seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. In order to assess the earthquake mass balance for various scenarios, we have compared the expected eroded volume with co-seismic surface uplift computed with Okada's deformation theory. We have found earthquake depth and landscape steepness to be the dominant parameters compared to the fault geometry (dip and rake). In contrast with previous studies, we have found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. We have explored the long-term evolution of topography under seismic forcing, with a Gutenberg-Richter distribution or a characteristic earthquake model, on fault systems with different geometries and tectonic styles, such as transpressive or flat-and-ramp geometry, with a thinned or thickened seismogenic layer.

  8. The Global Earthquake Model and Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Smolka, A. J.

    2015-12-01

    Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples for how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua-New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake-engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito, Ecuador. In agreement with GEM's collaborative approach, all

  9. A Hidden Markov Approach to Modeling Interevent Earthquake Times

    NASA Astrophysics Data System (ADS)

    Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.

    2003-12-01

    A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k by k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k=2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event had the first exponential distribution and three fourths of the time it had the second. Three and four state models were also fit to the data; the data were inconsistent with a three state model but were well fit by a four state model.
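
    A compact, illustrative Baum-Welch implementation for a k-state hidden Markov model with exponential interevent-time emissions, in the spirit of this record; synthetic times with means near 5 and 95 days stand in for the 110 New England observations.

```python
import numpy as np

def fit_exp_hmm(times, k=2, n_iter=100):
    """Baum-Welch for a k-state HMM whose emissions are exponential waiting times."""
    n = len(times)
    A = np.full((k, k), 1.0 / k)                    # transition matrix
    pi = np.full(k, 1.0 / k)                        # initial state distribution
    rates = 1.0 / np.quantile(times, np.linspace(0.25, 0.75, k))
    for _ in range(n_iter):
        B = rates * np.exp(-np.outer(times, rates))      # emission pdfs, shape (n, k)
        alpha = np.zeros((n, k)); c = np.zeros(n)        # scaled forward pass
        alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, n):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.ones((n, k))                           # scaled backward pass
        for t in range(n - 2, -1, -1):
            beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                             # state posteriors
        xi = sum(np.outer(alpha[t], B[t + 1] * beta[t + 1]) * A / c[t + 1]
                 for t in range(n - 1))                  # transition posteriors
        pi = gamma[0]
        A = xi / xi.sum(axis=1, keepdims=True)
        rates = gamma.sum(0) / (gamma * times[:, None]).sum(0)  # ML rate update
    return pi, A, 1.0 / rates                            # return means, not rates

# Synthetic interevent times (days) mimicking the short/long mixture found above
rng = np.random.default_rng(0)
times = np.concatenate([rng.exponential(5.0, 30), rng.exponential(95.0, 80)])
rng.shuffle(times)
pi, A, means = fit_exp_hmm(times)
print("transition matrix:\n", A.round(2), "\nstate means (days):", means.round(1))
```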

  10. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  11. Short-term forecasting of Taiwanese earthquakes using a universal model of fusion-fission processes.

    PubMed

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M A; Johnson, Neil F

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  12. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, and consequently earthquake parameters such as the peak ground acceleration occurring in the area of interest can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss associated with an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes that occurred within a 100 km radius centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.

  13. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, and consequently earthquake parameters such as the peak ground acceleration occurring in the area of interest can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss associated with an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes that occurred within a 100 km radius centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
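
    Once such a Poisson-HMM has been fitted, forecasts follow from propagating the state distribution through the transition matrix. The transition matrix P, state means lam and current state probabilities p0 below are hypothetical placeholders, not the published Bilecik fit.

```python
import numpy as np
from scipy.stats import poisson

# Assumed (illustrative) fitted quantities for a 2-state Poisson-HMM
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # state transition matrix
lam = np.array([3.0, 11.0])           # mean annual M >= 4 counts per state
p0 = np.array([0.7, 0.3])             # current state probabilities

for h in range(1, 6):                 # forecast the next five years
    ph = p0 @ np.linalg.matrix_power(P, h)       # state distribution at horizon h
    mean = ph @ lam                              # expected annual count
    p_ge1 = 1.0 - ph @ poisson.pmf(0, lam)       # P(at least one M >= 4 event)
    print(f"year +{h}: E[count] = {mean:.1f}, P(count >= 1) = {p_ge1:.3f}")
```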

  14. Phase response curves for models of earthquake fault dynamics

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-06-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  15. Phase response curves for models of earthquake fault dynamics.

    PubMed

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-06-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period. PMID:27368770
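
    A minimal sketch of the first-order phase-response-curve procedure described above, with a van der Pol relaxation oscillator standing in for the spring-block equations; mu, the pulse size eps and the phase sampling are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

# van der Pol relaxation oscillator as a generic stand-in for one block
mu, eps = 4.0, 0.3
f = lambda t, y: [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

# Natural period T0 from peak-to-peak times of an unperturbed run
free = solve_ivp(f, (0, 100), [2.0, 0.0], max_step=0.01)
pks = find_peaks(free.y[0], prominence=1.0)[0]
T0 = np.mean(np.diff(free.t[pks][-8:]))
t0 = free.t[pks][-2]                       # a late peak defines phase zero

prc = []
for phi in np.linspace(0.0, 0.95, 20):
    run = solve_ivp(f, (0, t0 + phi * T0), [2.0, 0.0], max_step=0.01)
    kicked = run.y[:, -1] + np.array([0.0, eps])      # brief pulse on the velocity
    after = solve_ivp(f, (0, 3 * T0), kicked, max_step=0.01)
    nxt = after.t[find_peaks(after.y[0], prominence=1.0)[0][0]]
    prc.append(((1.0 - phi) * T0 - nxt) / T0)         # phase advance of next peak

print(np.round(prc, 3))
```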

  16. Parity nonconservation in the hydrogen atom

    SciTech Connect

    Chupp, T.E.

    1983-01-01

    The development of experiments to detect parity nonconserving (PNC) mixing of the 2s/sub 1/2/ and 2p/sub 1/2/ levels of the hydrogen atom in a 570 Gauss magnetic field is described. The technique involves observation of an asymmetry in the rate of microwave-induced transitions at 1608 MHz due to the interference of two amplitudes, one produced by applied microwave and static electric fields and the other produced by an applied microwave field and the 2s/sub 1/2/-2p/sub 1/2/ mixing induced by a PNC Hamiltonian.

  17. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who rush to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and, therefore, cannot objectively quantify a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models on empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor provides a means to judge ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for "tomorrow" (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency

  18. Quasi-dynamic modeling of earthquake failure, a comparison between two earthquake simulators, and application to the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2010-12-01

    We show results of an ongoing project that aims at better understanding earthquake interactions within fault networks in order to improve seismic hazard estimates, with application to the test region of the Lower Rhine Embayment (Germany). Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, physics-based fault models would also allow for physical interpretations of the described seismicity. First we present a comparison of two different quasi-dynamic earthquake simulators for a setup of two parallel strike-slip faults. The first fault model is based on a simple quasi-static approach with a Coulomb failure criterion (Ben-Zion and Rice, 1993), but with the extension of a finite communication speed of stress transfer as introduced by Zöller (2004). The finite communication speed introduces a time scale and therefore makes the model quasi-dynamic. The second fault model is based on rate- and state-dependent friction as introduced by Dieterich (1995). We compare the spatio-temporal behavior of both models. In a second step we show results for a setup of a graben structure (two parallel normal faults). Finally, we show characteristics of a fault system located in the Lower Rhine Embayment using the rate-and-state model. We also test the ability of statistical recurrence time models (Brownian relaxation oscillators and stress release models) to capture the first-order characteristics of the earthquake simulations.

  19. A graph theoretic approach to global earthquake sequencing: A Markov chain model

    NASA Astrophysics Data System (ADS)

    Vasudevan, K.; Cavers, M. S.

    2012-12-01

    We construct a directed graph to represent a Markov chain of global earthquake sequences and analyze the statistics of transition probabilities linked to earthquake zones. For earthquake zonation, we consider the simplified plate boundary template of Kagan, Bird, and Jackson (KBJ template, 2010). We demonstrate the applicability of the directed graph approach to hazard-related forecasting using some of the properties of graphs that represent the finite Markov chain. We extend the present study to consider Bird's 52-plate zonation (2003) describing the global earthquakes at and within plate boundaries to gain further insight into the usefulness of digraphs corresponding to a Markov chain model.
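
    A small sketch of the basic construction: estimate the maximum-likelihood transition matrix of the zone-to-zone Markov chain from the sequence of successive event zones, then read off its stationary distribution. The zone sequence is a hypothetical stand-in for the KBJ template assignment.

```python
import numpy as np

# Zone IDs of successive global events (hypothetical stand-in data)
seq = [0, 2, 1, 1, 2, 0, 2, 2, 1, 0, 2, 1, 2]
k = max(seq) + 1

C = np.zeros((k, k))
for a, b in zip(seq[:-1], seq[1:]):
    C[a, b] += 1                           # directed-edge transition counts
P = C / C.sum(axis=1, keepdims=True)       # row-normalize to probabilities

# Stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("P =\n", P.round(2), "\nstationary distribution:", pi.round(3))
```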

  20. A nonconservative scheme for isentropic gas dynamics

    SciTech Connect

    Chen, Gui-Qiang |; Liu, Jian-Guo

    1994-05-01

    In this paper, we construct a second-order nonconservative scheme for the system of isentropic gas dynamics that captures the physical invariant regions for preventing negative density, treats the vacuum singularity, and controls the local entropy from dramatically increasing near shock waves. The main difference in the construction of the scheme discussed here is that we use piecewise linear functions to approximate the Riemann invariants w and z instead of the physical variables {rho} and m. Our scheme is a natural extension of the schemes for scalar conservation laws, and it can be implemented numerically with ease because the system is diagonalized in this coordinate system. Another advantage of using Riemann invariants is that the Hessian matrix of any weak entropy has no singularity in the Riemann invariant plane w-z, whereas the Hessian matrices of the weak entropies have singularities at the vacuum points in the physical plane {rho}-m. We prove that this scheme converges to an entropy solution for the Cauchy problem with L{sup {infinity}} initial data. By convergence here we mean that there is a subsequence converging to a generalized solution satisfying the entropy condition. As long as the entropy solution is unique, the whole sequence converges to a physical solution. This shows that this kind of scheme is quite reliable from a theoretical point of view. In addition to being interested in the scheme itself, we wish to provide an approach to rigorously analyze nonconservative finite difference schemes.
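
    For reference, with the usual normalization p(ρ) = κρ^γ (γ > 1) and velocity u = m/ρ, the Riemann invariants w and z mentioned above take the standard textbook form (to be checked against the paper's own conventions):

```latex
% Riemann invariants for isentropic gas dynamics, p(\rho)=\kappa\rho^\gamma
\[
  w \;=\; \frac{m}{\rho} + \frac{2c}{\gamma - 1}, \qquad
  z \;=\; \frac{m}{\rho} - \frac{2c}{\gamma - 1}, \qquad
  c \;=\; \sqrt{p'(\rho)} \;=\; \sqrt{\kappa\gamma}\,\rho^{(\gamma-1)/2}.
\]
```

    In particular, nonnegative density corresponds to w ≥ z, with the vacuum ρ = 0 sitting on the diagonal w = z, which is why the (w, z) coordinates are convenient for enforcing the invariant region.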

  1. The OPAL Project: Open source Procedure for Assessment of Loss using Global Earthquake Modelling software

    NASA Astrophysics Data System (ADS)

    Daniell, James

    2010-05-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure has been developed to provide a framework for optimisation of a Global Earthquake Modelling process through: 1) Overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost and technology); 2) Preliminary research, acquisition and familiarisation with all available ELE software packages; 3) Assessment of these 30+ software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4) Loss analysis for a deterministic earthquake (Mw7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment), a capacity spectrum based method HAZUS (HAZards United States) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach) software which was adapted for use in order to compare the different processes needed for the production of damage, economic and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Systems planning and ELE software produced through the OPAL procedure can similarly be applied worldwide, given exposure data. Keywords: OPAL, displacement-based, DBELA, earthquake loss estimation, earthquake loss assessment, open source, HAZUS

  2. Cooling magma model for deep volcanic long-period earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, Naofumi; Tsai, Victor C.

    2014-11-01

    Deep long-period events (DLP events) or deep low-frequency earthquakes (deep LFEs) are deep earthquakes that radiate low-frequency seismic waves. While tectonic deep LFEs on plate boundaries are thought to be slip events, there have only been a limited number of studies on the physical mechanism of volcanic DLP events around the Moho (crust-mantle boundary) beneath volcanoes. One reasonable mechanism capable of producing their initial fractures is the effect of thermal stresses. Since ascending magma diapirs tend to stagnate near the Moho, where the vertical gradient of density is high, we suggest that cooling magma may play an important role in volcanic DLP event occurrence. Assuming an initial thermal perturbation of 400°C within a tabular magma of half width 41 m or a cylindrical magma of 74 m radius, thermal strain rates within the intruded magma are higher than tectonic strain rates of ~10^-14 s^-1 and produce a total strain of 2 × 10^-4. Shear brittle fractures generated by the thermal strains can produce a compensated linear vector dipole mechanism as observed and potentially also explain the harmonic seismic waveforms from an excited resonance. In our model, we predict correlation between the particular shape of the cluster and the orientation of focal mechanisms, which is partly supported by observations of Aso and Ide (2014). To assess the generality of our cooling magma model as a cause for volcanic DLP events, additional work on relocations and focal mechanisms is essential and would be important to understanding the physical processes causing volcanic DLP events.

  3. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and a time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, we found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased. The observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. The Receiver Operating Characteristics (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
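
    For context, the RJ model evaluates the expected rate of aftershocks with magnitude ≥ M at time t (days) after a mainshock of magnitude Mm as λ(t, M) = 10^(a + b(Mm − M))/(t + c)^p. The sketch below integrates this rate over a time window; the parameter values are generic illustrations, not the fitted Chi-Chi values.

```python
import numpy as np
from scipy.integrate import quad

# Generic (illustrative) Reasenberg-Jones parameters and a Chi-Chi-sized mainshock
a, b, c, p = -1.67, 0.91, 0.05, 1.08
Mm = 7.7

def rate(t, M):
    """Expected rate of M-and-larger aftershocks at t days after the mainshock."""
    return 10.0 ** (a + b * (Mm - M)) / (t + c) ** p

# Expected number of M >= 5 aftershocks between day 1 and day 30
n, _ = quad(rate, 1.0, 30.0, args=(5.0,))
print(f"expected M>=5 aftershocks, days 1-30: {n:.1f}")
```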

  4. Modeling subduction megathrust earthquakes: Insights from a visco-elasto-plastic analog model

    NASA Astrophysics Data System (ADS)

    Dominguez, Stéphane; Malavieille, Jacques; Mazzotti, Stéphane; Martin, Nicolas; Caniven, Yannick; Cattin, Rodolphe; Soliva, Roger; Peyret, Michel; Lallemand, Serge

    2015-04-01

    As illustrated recently by the 2004 Sumatra-Andaman or the 2011 Tohoku earthquakes, subduction megathrust earthquakes generate heavy economic and human losses. Better constraining how such destructive seismic events nucleate and generate crustal deformation is a major societal issue, but it also poses a difficult scientific challenge. Indeed, several limiting factors, related to the difficulty of analyzing deformation undersea, accessing the deep sources of earthquakes, and integrating the characteristic time scales of seismic processes, must first be overcome. With this aim, we have developed an experimental approach to complement the numerical modeling techniques that are classically used to analyze available geological and geophysical observations on subduction earthquakes. The objective was to design a kinematically and mechanically first-order scaled analogue model of a subduction zone capable of reproducing megathrust earthquakes as well as realistic seismic-cycle deformation phases. The model rheology is based on multi-layered visco-elasto-plastic materials to take into account the mechanical behavior of the overriding lithospheric plate. The elastic deformation of the subducting oceanic plate is also simulated. The seismogenic zone is characterized by a frictional plate interface whose mechanical properties can be adjusted to modify seismic coupling. Preliminary results show that this subduction model succeeds in reproducing the main deformation phases associated with the seismic cycle (interseismic elastic loading, coseismic rupture and post-seismic relaxation). By studying model kinematics and mechanical behavior, we expect to improve our understanding of seismic deformation processes and better constrain the role of physical parameters (fault friction, rheology, ...) as well as boundary conditions (loading rate, ...) on seismic cycle and megathrust earthquake dynamics. We expect that results of this project will lead to significant improvement on interpretation of

  5. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
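
    A small helper expressing the equal weighting described above; the coefficients are the commonly quoted forms of the two relations (Ellsworth-B: M = 4.2 + log10 A; Hanks-Bakun bilinear with its break at A = 537 km2) and should be treated as assumptions to verify against the report.

```python
import numpy as np

def mag_from_area(A_km2):
    """Moment magnitude from rupture area, averaging the two relations with
    the equal weights described in the record above; coefficients as commonly
    quoted, to be checked against the original report."""
    A_km2 = np.asarray(A_km2, dtype=float)
    logA = np.log10(A_km2)
    m_ellsworth_b = 4.2 + logA                       # single logarithmic relation
    m_hanks_bakun = np.where(A_km2 <= 537.0,          # bilinear relation
                             3.98 + logA,
                             3.07 + (4.0 / 3.0) * logA)
    return 0.5 * m_ellsworth_b + 0.5 * m_hanks_bakun

for A in (100.0, 537.0, 5000.0):
    print(f"A = {A:6.0f} km^2  ->  Mw = {float(mag_from_area(A)):.2f}")
```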

  6. Finite-Source Modeling for Parkfield and Anza Earthquakes

    NASA Astrophysics Data System (ADS)

    Wooddell, K. E.; Taira, T.; Dreger, D. S.

    2014-12-01

    Repeating earthquakes occur in the vicinity of creeping sections along the Parkfield section of the San Andreas fault (Nadeau et al., 1995) and the Anza section of the San Jacinto fault (Taira, 2013). Utilizing an empirical Green's function (eGF) approach for both the Parkfield and Anza events, we are able to conduct a comparative study of the resulting slip distributions and source parameters to examine differences in the scaling of fault dimension, average slip, and peak slip with magnitude. Following the approach of Dreger et al. (2007), moment rate functions (MRFs) are obtained at each station for both Parkfield and Anza earthquakes using a spectral-domain deconvolution approach where the complex spectrum of the eGF is divided out of the complex spectrum of the target event. Spatial distributions of fault slip are derived by inverting the MRFs, and the coseismic stress change is computed following the method of Ripperger and Mai (2004). Initial results are based on the analysis of several Parkfield target events ranging in magnitude from Mw1.8 to 6.0 (Dreger et al., 2011) and a Mw4.7 Anza event. Parkfield peak slips are consistent with the Nadeau and Johnson (1998) tectonic loading model, while average slips tend to scale self-similarly. Results for the Anza event show very high peak and average slips, exceeding 50 cm and 10 cm, respectively. Directivity for this event is in the northwest direction, and preliminary sensitivity analyses suggest that the rupture velocity is near the shear wave velocity and the rise time is short (~0.03 sec). Multiple eGFs for the Anza event have been evaluated and the results appear robust.
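
    The spectral-domain deconvolution step can be sketched as a water-level-stabilized spectral division, one common way to regularize it; the details of Dreger et al.'s implementation may differ, and the traces below are toys.

```python
import numpy as np

def moment_rate_function(target, egf, water_level=0.01):
    """Deconvolve an eGF record from a target-event record by spectral
    division, clipping the denominator at a fraction of its peak power."""
    n = 2 * max(len(target), len(egf))               # zero-pad against wrap-around
    S = np.fft.rfft(target, n)
    G = np.fft.rfft(egf, n)
    denom = np.maximum(np.abs(G) ** 2, water_level * np.max(np.abs(G) ** 2))
    return np.fft.irfft(S * np.conj(G) / denom, n)[: len(target)]

# Toy traces: the eGF is a narrow pulse, the target a longer rupture
t = np.arange(400) * 0.01
egf = np.exp(-((t - 0.5) / 0.05) ** 2)
target = np.convolve(egf, np.ones(80) / 80.0, mode="full")[:400]
mrf = moment_rate_function(target, egf)
print("MRF samples above 10% of peak:", int(np.sum(mrf > 0.1 * mrf.max())))
```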

  7. Earthquake Response Modeling for a Parked and Operating Megawatt-Scale Wind Turbine

    SciTech Connect

    Prowell, I.; Elgamal, A.; Romanowitz, H.; Duggan, J. E.; Jonkman, J.

    2010-10-01

    Demand parameters for turbines, such as tower moment demand, are primarily driven by wind excitation and dynamics associated with operation. For that purpose, computational simulation platforms have been developed, such as FAST, maintained by the National Renewable Energy Laboratory (NREL). For seismically active regions, building codes also require the consideration of earthquake loading. Historically, it has been common to use simple building code approaches to estimate the structural demand from earthquake shaking, as an independent loading scenario. Currently, International Electrotechnical Commission (IEC) design requirements include the consideration of earthquake shaking while the turbine is operating. Numerical and analytical tools used to consider earthquake loads for buildings and other static civil structures are not well suited for modeling simultaneous wind and earthquake excitation in conjunction with operational dynamics. Through the addition of seismic loading capabilities to FAST, it is possible to simulate earthquake shaking in the time domain, which allows consideration of non-linear effects such as structural nonlinearities, aerodynamic hysteresis, control system influence, and transients. This paper presents a FAST model of a modern 900-kW wind turbine, which is calibrated based on field vibration measurements. With this calibrated model, both coupled and uncoupled simulations are conducted looking at the structural demand for the turbine tower. Response is compared under the conditions of normal operation and potential emergency shutdown due to the earthquake-induced vibrations. The results highlight the availability of a numerical tool for conducting such studies, and provide insights into the combined wind-earthquake loading mechanism.

  8. Acceleration modeling of moderate to large earthquakes based on realistic fault models

    NASA Astrophysics Data System (ADS)

    Arvidsson, R.; Toral, J.

    2003-04-01

    Strong motion is affected by distance to the earthquake, local crustal structure, focal mechanism, and azimuth to the source. However, the faulting process is also important: the development of rupture (i.e., directivity), the slip distribution on the fault, the fault extent, and the rupture velocity. We have modelled these parameters for earthquakes that occurred in three tectonic zones close to the Panama Canal. We included in the modeling directivity, distributed slip, discrete faulting, fault depth and expected focal mechanism. The distributed slip is based on fault models that we previously produced for other earthquakes in the region. Such previous examples show that maximum intensities in some cases coincide with areas of high slip on the fault. Our acceleration modeling also gives values similar to the few observations that have been made for moderate to small earthquakes in the range M=5-6.2. The modeling indicates that events located in the Caribbean might cause strong motion in the lower-frequency spectrum, where high-frequency Rayleigh waves dominate.

  9. Information Theoretic Framework for Earthquake Recurrence Models: Methodica Firma Per Terra Non-Firma

    NASA Astrophysics Data System (ADS)

    Esmer, Özcan

    2006-11-01

    This paper first evaluates the earthquake prediction method (1999) used by the US Geological Survey as the lead example and also reviews recent models. Secondly, it points out the ongoing debate on the predictability of earthquake recurrences and lists the main claims of both sides. The traditional methods and the "frequentist" approach used in determining earthquake probabilities cannot end the complaints that earthquakes are unpredictable. It is argued that the prevailing "crisis" in seismic research corresponds to a pre-Maxent age. The period of Kuhnian "crisis" should give rise to a new paradigm based on the information-theoretic framework, including the inverse problem, Maxent and Bayesian methods. The paper aims to show that information-theoretic methods shall provide the required "Methodica Firma" for earthquake prediction models.

  10. Parity nonconservation in radioactive atoms: An experimental perspective

    SciTech Connect

    Vieira, D.

    1994-11-01

    The measurement of parity nonconservation (PNC) in atoms constitutes an important test of electroweak interactions in nuclei. Great progress has been made over the last 20 years in performing these measurements with ever increasing accuracies. To date the experimental accuracies have reached a level of 1 to 2%. In all cases, except for cesium, the theoretical atomic structure uncertainties now limit the comparison of these measurements to the predictions of the standard model. New measurements involving the ratio of Stark interference transition rates for a series of Cs or Fr radioisotopes are foreseen as a way of eliminating these atomic structure uncertainties. The use of magneto-optical traps to collect and concentrate the much smaller number of radioactive atoms that are produced is considered to be one of the key steps in realizing these measurements. Plans for how these measurements will be done and progress made to date are outlined.

  11. Dynamic earthquake rupture modelled with an unstructured 3-D spectral element method applied to the 2011 M9 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Galvez, P.; Ampuero, J.-P.; Dalguer, L. A.; Somala, S. N.; Nissen-Meyer, T.

    2014-08-01

    An important goal of computational seismology is to simulate dynamic earthquake rupture and strong ground motion in realistic models that include crustal heterogeneities and complex fault geometries. To accomplish this, we incorporate dynamic rupture modelling capabilities in a spectral element solver on unstructured meshes, the 3-D open source code SPECFEM3D, and employ state-of-the-art software for the generation of unstructured meshes of hexahedral elements. These tools provide high flexibility in representing fault systems with complex geometries, including faults with branches and non-planar faults. The domain size is extended with progressive mesh coarsening to maintain an accurate resolution of the static field. Our implementation of dynamic rupture does not affect the parallel scalability of the code. We verify our implementation by comparing our results to those of two finite element codes on benchmark problems including branched faults. Finally, we present a preliminary dynamic rupture model of the 2011 Mw 9.0 Tohoku earthquake including a non-planar plate interface with heterogeneous frictional properties and initial stresses. Our simulation reproduces qualitatively the depth-dependent frequency content of the source and the large slip close to the trench observed for this earthquake.

  12. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    NASA Astrophysics Data System (ADS)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of network telemetry in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute the corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  13. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    SciTech Connect

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  14. Application of linear statistical models of earthquake magnitude versus fault length in estimating maximum expectable earthquakes

    USGS Publications Warehouse

    Mark, Robert K.

    1977-01-01

    Correlation or linear regression estimates of earthquake magnitude from data on historical magnitude and length of surface rupture should be based upon the correct regression. For example, the regression of magnitude on the logarithm of the length of surface rupture L can be used to estimate magnitude, but the regression of log L on magnitude cannot. Regression estimates are most probable values, and estimates of maximum values require consideration of one-sided confidence limits.
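
    A short sketch of the paper's two points on hypothetical data: regress magnitude on log10(L) (not the reverse) to estimate magnitude, and attach a one-sided upper confidence limit when a maximum value is needed.

```python
import numpy as np
from scipy import stats

# Illustrative rupture lengths (km) and magnitudes, not the historical dataset
L = np.array([12.0, 30.0, 55.0, 80.0, 150.0, 300.0, 430.0])
M = np.array([6.1, 6.6, 6.9, 7.1, 7.5, 7.9, 8.0])

x = np.log10(L)
res = stats.linregress(x, M)             # regression of M on log L (correct direction)

x0 = np.log10(200.0)                     # a hypothetical 200-km rupture
m_hat = res.intercept + res.slope * x0   # most probable magnitude

# 95% one-sided upper confidence limit for the mean response at x0
n = len(x)
resid = M - (res.intercept + res.slope * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))
se = s * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
upper = m_hat + stats.t.ppf(0.95, n - 2) * se
print(f"M_hat = {m_hat:.2f}, 95% one-sided upper limit = {upper:.2f}")
```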

  15. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

    The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the

  16. Fault slip model of the historical 1797 earthquake on the Mentawai segment of the Sunda Megathrust

    NASA Astrophysics Data System (ADS)

    Lubis, A.; Hill, E. M.; Philibosian, B.; Meltzner, A. J.; Barbot, S.; Sieh, K. E.

    2012-12-01

    Paleogeodetic observations from coral reef studies have provided estimates of coseismic deformation associated with two great earthquakes on the Sunda megathrust, in 1797 and 1833. Since the corals die when they are uplifted, they do not record the full coseismic displacement. In previous work [Natawidjaja et al., 2006], co-seismic offsets were estimated by linear extrapolation of inter-seismic coral data. This meant that they ignored any post-seismic deformation, despite the fact that it could contribute significantly to displacement of the coral. Here, we use the earthquake cycle model (Sato et al., 2006) to estimate a slip distribution for the historical 1797 earthquake on the Mentawai segment. After calculating model parameters related to the earthquake cycle model, with an assumed earth structure, we use the ABIC inversion algorithm (Yabuki and Matsuura, 1992) to invert the coral datasets. We find that the slip distribution is concentrated in two main asperities, and it is consistent with the present coupling area (Chlieh et al. 2008). A significant slip patch of ~5 m is imaged at the northern end of the slip region of the September 2007 Mw 8.4 earthquake, and a second smaller asperity beneath Siberut island. Based on this model, the moment magnitude (Mw) for the 1797 earthquake is estimated to be 8.4. In order to validate our source model, we will use earthquake cycle 3D-FEM to reconstruct the displacement time series as measured by the corals.

  17. Numerical modelling of iceberg calving force responsible for glacial earthquakes

    NASA Astrophysics Data System (ADS)

    Sergeant, Amandine; Yastrebov, Vladislav; Castelnau, Olivier; Mangeney, Anne; Stutzmann, Eleonore; Montagner, Jean-Paul

    2016-04-01

    Glacial earthquakes are a class of seismic events of magnitude up to 5, occurring primarily in Greenland at the margins of large marine-terminating glaciers with near-grounded termini. They are caused by the calving of cubic-kilometre-scale unstable icebergs, which penetrate the full glacier thickness and, driven by buoyancy forces, capsize against the calving front. These phenomena produce seismic energy, including surface waves with dominant energy at periods of 10-150 s, whose seismogenic source is compatible with the contact force exerted on the terminus by the iceberg while it capsizes. A reverse motion and posterior rebound of the terminus have also been measured and associated with fluctuations of this contact force. Using a finite-element model of the iceberg and glacier terminus coupled with a simplified fluid-structure interaction model, we simulate the calving and capsize of icebergs. Contact and frictional forces are computed on the terminus and compared with laboratory experiments. We also study the influence of geometric factors on the force history, amplitude and duration at laboratory and field scales. We present first insights into the force and the generated seismic waves, exploring different scenarios for iceberg capsize.

  18. Precise measurement of parity nonconservation in atomic thallium

    SciTech Connect

    Hunter, L.R.

    1981-05-01

    Observation of parity non-conservation in the 6P1/2 - 7P1/2 transition in Tl-203/205 is reported. The transition is nominally forbidden M1 with amplitude M. Due to the violation of parity in the electron-nucleon interaction, the transition acquires an additional (parity-nonconserving) amplitude E_p. In the presence of an electric field, incident 293 nm circularly polarized light results in a polarization of the 7P1/2 state through interference of the Stark amplitude with M and E_p. This polarization is observed by selective excitation of the 7P1/2 - 8S1/2 transition with circularly polarized 2.18 μm light and observation of the subsequent fluorescence at 323 nm. By utilizing this technique and carefully determining possible systematic contributions through auxiliary measurements, the circular dichroism δ = 2 Im(E_p)/M is observed: δ_exp = (2.8 +1.0/−0.9) × 10⁻³. In addition, measurements of A(6D3/2 - 7P1/2) = (5.97 ± 0.78) × 10⁵ s⁻¹, A(7P1/2 - 7S1/2) = (1.71 ± 0.07) × 10⁷ s⁻¹ and A(7P3/2 - 7S1/2) = (2.37 ± 0.09) s⁻¹ are reported. These values are employed in a semiempirical determination of δ based on the Weinberg-Salam model. The result of this calculation for sin²θ_W = 0.23 is δ_theo = (1.7 ± 0.8) × 10⁻³.

  19. Simulational Studies of a Two-Dimensional Burridge-Knopoff Model for Earthquakes

    NASA Astrophysics Data System (ADS)

    Ross, John Bernard

    1993-01-01

    A two-dimensional cellular automaton version of the Burridge-Knopoff (BK) model for earthquakes is studied. The model consists of a lattice of blocks connected by springs, subject to static friction and driven at a rate v by an externally applied force. A block ruptures provided that its total stress matches or exceeds static friction. The distance it moves is proportional to the total stress, a fraction α of which it releases to each of its neighbors, while 1 − qα leaves the system, where q is the number of neighbors. The BK model with nearest-neighbor (q = 4) and long-range (q = 24) interactions is simulated for spatially uniform and random static friction on lattices with periodic, open, closed, and fixed boundary conditions. In the nearest-neighbor model, the system appears to have a spinodal critical point at v = v_c in all cases except for closed boundaries and uniform thresholds, where the system appears to be self-organized critical. The dynamics of the model is always periodic or quasiperiodic for non-closed boundaries and uniform thresholds. The stress is "quantized" in multiples of the loader force in this case. A mean-field theory is presented from which v_c and the dominant period of oscillation are derived, both of which agree well with the data. v_c varies inversely with the number of neighbors to which each block is attached and, as a result, goes to zero as the range of the springs goes to infinity. This is consistent with the behavior of a spinodal critical point as the range of interactions goes to infinity. The quasistatic limit of tectonic loading is thus recovered.
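
    The rupture rule described above translates directly into a cellular automaton. Below is a minimal sketch under stated assumptions (uniform thresholds, open boundaries, nearest-neighbor q = 4; all parameter values are illustrative, not taken from the thesis):

```python
import numpy as np

def bk_step(stress, threshold, alpha, v):
    """One loading step of a 2-D BK-type cellular automaton.
    Each block at or above threshold releases its stress, passing a
    fraction alpha to each of its q=4 neighbors; 1-4*alpha leaves
    the system. Returns updated stress field and avalanche size."""
    stress = stress + v                       # uniform external loading
    size = 0
    while True:
        failing = stress >= threshold
        if not failing.any():
            break
        size += int(failing.sum())
        release = np.where(failing, stress, 0.0)
        stress = np.where(failing, 0.0, stress)
        # pass alpha * released stress to each of the 4 nearest neighbors
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            moved = np.roll(release, shift, axis=axis)
            # open boundaries: stress wrapped around by roll is discarded
            if axis == 0:
                moved[0 if shift == 1 else -1, :] = 0.0
            else:
                moved[:, 0 if shift == 1 else -1] = 0.0
            stress += alpha * moved
    return stress, size

rng = np.random.default_rng(0)
s = rng.uniform(0.0, 1.0, (64, 64))
for _ in range(1000):
    s, n = bk_step(s, threshold=1.0, alpha=0.2, v=0.01)
```

    With alpha = 0.2 < 1/q the dynamics is dissipative, matching the abstract's 1 − qα loss term.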

  20. Use of GPS and InSAR Technology and its Further Development in Earthquake Modeling

    NASA Technical Reports Server (NTRS)

    Donnellan, A.; Lyzenga, G.; Argus, D.; Peltzer, G.; Parker, J.; Webb, F.; Heflin, M.; Zumberge, J.

    1999-01-01

    Global Positioning System (GPS) data are useful for understanding both interseismic and postseismic deformation. Models of GPS data suggest that the lower crust, lateral heterogeneity, and fault slip all play a role in the earthquake cycle.

  1. Integrated seismic source model of the 2015 Gorkha, Nepal, earthquake

    NASA Astrophysics Data System (ADS)

    Yagi, Yuji; Okuwaki, Ryo

    2015-08-01

    We compared spatiotemporal slip-rate and high-frequency (around 1 Hz) radiation distributions from teleseismic P wave data to infer the seismic rupture process of the 2015 Gorkha, Nepal, earthquake. For these estimates, we applied a novel waveform inversion formulation that mitigates the effect of uncertainty in the Green's functions, and a hybrid backprojection method that mitigates contamination by depth phases. Our model showed that the dynamic rupture front propagated eastward from the hypocenter at 3.0 km/s and triggered a large-slip event centered about 50 km to the east. It also showed that the large-slip event included a rapid rupture-acceleration event and an irregular deceleration of rupture propagation before the rupture terminated. Heterogeneity of the stress drop or fracture energy in the eastern part of the rupture area, where aftershock activity was high, inhibited rupture growth. High-frequency radiation sources tended to lie in the deeper part of the large-slip area, which suggests that heterogeneity of the stress drop or fracture energy there may have contributed to the damage in and around Kathmandu.

  2. Modeling of electromagnetic E-layer waves before earthquakes

    NASA Astrophysics Data System (ADS)

    Meister, Claudia-Veronika; Hoffmann, Dieter H. H.

    2013-04-01

    A dielectric model for electromagnetic (EM) waves in the Earth's E-layer is developed. It is assumed that these waves are driven by acoustic-type waves caused by earthquake precursors. The dynamics of the plasma system and the EM waves is described using multi-component magnetohydrodynamic (MHD) theory, with the acoustic waves introduced as a neutral-gas wind. Momentum transfer between the charged particles in the MHD system occurs mainly via collisions with the neutral gas. From the MHD system, relations for the velocity fluctuations of the particles are found, which consist of products of the electric-field fluctuations with coefficients α that depend only on the plasma background parameters. A fast FORTRAN program is developed to calculate these coefficients (the solution of 9×9 matrix equations). Models of the altitudinal scales of the background plasma parameters and of the fluctuations of the plasma parameters and the EM field are introduced. In the case of the electric wave field, a method is obtained to calculate the altitudinal scale of the amplitude (based on the Poisson equation and the coefficients α). Finally, a general dispersion relation is found in which α, the amplitude scale, and its altitudinal profile appear as parameters (all obtained from the numerical model). Thus, the dispersion relations of EM waves driven by acoustic-type waves during times of seismic activity may be studied numerically. In addition, an expression for the related temperature fluctuations is derived, which depends on the dispersion of the excited EM waves, α, the amplitude scale, and the background plasma parameters, so that heating processes in the atmosphere may be investigated.

  3. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls and numbers of casualties and people made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people with serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events relief efforts can benefit significantly from the availability of rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
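
    The semi-empirical chain described above (building stock → damage rates by class → fatality rates) reduces to a simple sum over building classes. A sketch of that chain; every class name and number below is an invented placeholder, not a CEQID value:

```python
# Hypothetical illustration of the damage-rate x fatality-rate chain;
# all figures are placeholders, not values from CEQID.
building_stock = {            # occupants per building class
    "adobe": 50_000,
    "unreinforced_masonry": 120_000,
    "reinforced_concrete": 200_000,
}
p_collapse = {                # P(collapse | given shaking intensity)
    "adobe": 0.30,
    "unreinforced_masonry": 0.15,
    "reinforced_concrete": 0.03,
}
lethality = {                 # P(death | occupant of collapsed building)
    "adobe": 0.10,
    "unreinforced_masonry": 0.20,
    "reinforced_concrete": 0.35,
}

fatalities = sum(
    building_stock[b] * p_collapse[b] * lethality[b]
    for b in building_stock
)
print(f"estimated fatalities: {fatalities:,.0f}")
```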

  4. Building a Framework Earthquake Cycle Deformational Model for Subduction Megathrust Zones: Integrating Observations with Numerical Models

    NASA Astrophysics Data System (ADS)

    Furlong, Kevin P.; Govers, Rob; Herman, Matthew

    2016-04-01

    Subduction zone megathrusts host the largest and deadliest earthquakes on the planet. Over the past decades (primarily since the 2004 Sumatra event) our ability to observe the build-up of slip deficit along these plate boundary zones has improved substantially with the development of relatively dense observing systems along major subduction zones. One, perhaps unexpected, result from these observations is a range of present-day behavior along the boundaries. Some regions show displacements (almost always observed on the upper plate along the boundary) that are consistent with elastic deformation driven by a fully locked plate interface, while other plate boundary segments (oftentimes along the same plate boundary system) show little or no plate-motion-directed displacement. This latter case is often interpreted as reflecting little to no coupling along the plate boundary interface. What is unclear is whether this spatial variation in apparent plate boundary interface behavior reflects true spatial differences in plate interface properties and mechanics, or rather reflects temporal behavior of the plate boundary during the earthquake cycle. In our integrated observational and modeling analyses, we have come to the conclusion that much of what is seen as diverse behavior along subduction margins represents different times in the earthquake cycle (relative to recurrence rate and material properties) rather than fundamental differences in subduction zone mechanics. Our model-constrained conceptual model accounts for the following generalized observations: 1. Coseismic displacements are enhanced in the "near-trench" region. 2. Post-seismic relaxation varies with time and position landward, i.e., there is a propagation of the transition point from "post-seismic" (trenchward) to "inter-seismic" (landward) displacement behavior. 3. Displacements immediately post-EQ (interpreted to be associated with "after slip" on the megathrust?). 4. The post-EQ transient response can

  5. Earthquake sequencing: chimera states with Kuramoto model dynamics on directed graphs

    NASA Astrophysics Data System (ADS)

    Vasudevan, K.; Cavers, M.; Ware, A.

    2015-09-01

    Earthquake sequencing studies allow us to investigate empirical relationships among the spatio-temporal parameters describing the complexity of earthquake properties. We have recently studied the relevance of Markov chain models for drawing information from global earthquake catalogues. In those studies, we considered directed graphs as graph-theoretic representations of the Markov chain model and analyzed their properties. Here, we look at earthquake sequencing itself as a directed graph. In general, earthquakes are occurrences resulting from significant stress interactions among faults; as a result, stress-field fluctuations evolve continuously. We propose that they are akin to the dynamics of the collective behavior of weakly coupled non-linear oscillators. Since mapping global stress-field fluctuations in real time at all scales is an impossible task, we consider an earthquake zone as a proxy for a collection of weakly coupled oscillators, the dynamics of which would be appropriate for the ubiquitous Kuramoto model. In the present work, we apply the Kuramoto model with phase lag to the non-linear dynamics on a directed graph of a sequence of earthquakes. For directed graphs with certain properties, the Kuramoto model yields synchronization, and the inclusion of non-local effects evokes the occurrence of chimera states, i.e. the co-existence of synchronous and asynchronous behavior of oscillators. In this paper, we show how we build the directed graphs derived from global seismicity data. Then, we present conditions under which chimera states could occur and, subsequently, point out the role of the Kuramoto model in understanding the evolution of synchronous and asynchronous regions. We surmise that the emergence of chimera states will motivate detailed investigation of this and other mathematical models, with the aim of generating global chimera-state maps, similar to global seismicity maps, for earthquake forecasting studies.
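
    For reference, the Kuramoto dynamics with phase lag α on a directed graph can be integrated with a simple Euler scheme. A minimal sketch; it uses a random directed graph rather than the seismicity-derived graphs of the study, and all parameter values are illustrative:

```python
import numpy as np

def kuramoto_step(theta, omega, A, K, alpha, dt):
    """Euler step of the Kuramoto model with phase lag on a directed
    graph: dtheta_i/dt = omega_i + (K/k_i) sum_j A_ij sin(theta_j - theta_i - alpha),
    where A_ij = 1 denotes an edge j -> i and k_i is the in-degree."""
    k_in = A.sum(axis=1)
    k_in[k_in == 0] = 1.0                     # avoid division by zero
    diff = theta[None, :] - theta[:, None]    # diff[i, j] = theta_j - theta_i
    coupling = (A * np.sin(diff - alpha)).sum(axis=1) / k_in
    return theta + dt * (omega + K * coupling)

rng = np.random.default_rng(1)
n = 100
A = (rng.random((n, n)) < 0.1).astype(float)  # random directed graph
np.fill_diagonal(A, 0.0)
theta = rng.uniform(0, 2 * np.pi, n)
omega = rng.normal(0.0, 0.1, n)
for _ in range(5000):
    theta = kuramoto_step(theta, omega, A, K=1.0, alpha=0.3, dt=0.05)
r = abs(np.exp(1j * theta).mean())            # global order parameter
print(f"synchronization r = {r:.2f}")
```

    Chimera states correspond to the order parameter being high on one subset of nodes and low on another, which is what the study maps onto seismic regions.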

  6. Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software (OPAL)

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.

    2011-07-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure was created to provide a framework for optimisation of a Global Earthquake Modelling process through: 1. an overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost, and technology); 2. preliminary research, acquisition, and familiarisation with available ELE software packages; 3. assessment of these software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4. loss analysis for a deterministic earthquake (Mw = 7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment, Crowley et al., 2006), a capacity-spectrum-based method, HAZUS (HAZards United States, FEMA, USA, 2003), and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach, Lindholm et al., 2007) software, which were adapted for use in order to compare the different processes needed for the production of damage, economic, and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data.

  7. Modelling the Epistemic Uncertainty in the Vulnerability Assessment Component of an Earthquake Loss Model

    NASA Astrophysics Data System (ADS)

    Crowley, H.; Modica, A.

    2009-04-01

    Loss estimates have been shown in various studies to be highly sensitive to the methodology employed, the seismicity and ground-motion models, the vulnerability functions, and assumed replacement costs (e.g. Crowley et al., 2005; Molina and Lindholm, 2005; Grossi, 2000). It is clear that future loss models should explicitly account for these epistemic uncertainties. Indeed, a cause of frequent concern in the insurance and reinsurance industries is precisely the fact that for certain regions and perils, available commercial catastrophe models often yield significantly different loss estimates. Of equal relevance to many users is the fact that updates of the models sometimes lead to very significant changes in the losses compared to the previous version of the software. In order to model the epistemic uncertainties that are inherent in loss models, a number of different approaches for the hazard, vulnerability, exposure and loss components should be clearly and transparently applied, with the shortcomings and benefits of each method clearly exposed by the developers, such that the end-users can begin to compare the results and the uncertainty in these results from different models. This paper looks at an application of a logic-tree type methodology to model the epistemic uncertainty in the vulnerability component of a loss model for Tunisia. Unlike in other countries which have been subjected to damaging earthquakes, there has not been a significant effort to undertake vulnerability studies for the building stock in Tunisia. Hence, when presented with the need to produce a loss model for a country like Tunisia, a number of different approaches can and should be applied to model the vulnerability. These include empirical procedures which utilise observed damage data, and mechanics-based methods where both the structural characteristics and response of the buildings are analytically modelled. Some preliminary applications of the methodology are presented and discussed.

  8. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J., II; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  9. Nonconservative current-induced forces: A physical interpretation

    PubMed Central

    Dundas, Daniel; Paxton, Anthony T; Horsfield, Andrew P

    2011-01-01

    We give a physical interpretation of the recently demonstrated nonconservative nature of interatomic forces in current-carrying nanostructures. We start from the analytical expression for the curl of these forces and evaluate it for a point defect in a current-carrying system. We first obtain a general definition of the capacity of electrical current flow to exert a nonconservative force, and thus do net work around closed paths, by a formal noninvasive test procedure. Second, we show that the gain in atomic kinetic energy over time, generated by nonconservative current-induced forces, is equivalent to the uncompensated stimulated emission of directional phonons. This connection with electron-phonon interactions quantifies explicitly the intuitive notion that nonconservative forces work by angular momentum transfer. PMID:22259754

  10. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  11. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained (to varying degrees) by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global

  12. SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records

    USGS Publications Warehouse

    Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.

    2013-01-01

    This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
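
    The rigid-block (Newmark) method that such sliding-block analyses implement amounts to double-integrating the portions of an accelerogram that exceed a critical (yield) acceleration. A minimal sketch under stated assumptions (synthetic accelerogram, one-directional sliding); this is not SLAMMER's own code:

```python
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """Rigid-block sliding displacement: the block accelerates
    relative to the slope when ground acceleration exceeds the
    critical acceleration a_crit, and decelerates at the critical
    level until the relative velocity returns to zero."""
    vel, disp = 0.0, 0.0
    for a in acc:
        rel_acc = a - a_crit if (a > a_crit or vel > 0.0) else 0.0
        vel = max(vel + rel_acc * dt, 0.0)   # sliding cannot reverse
        disp += vel * dt
    return disp

# synthetic accelerogram: decaying sinusoid in g, sampled at 100 Hz
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc_g = 0.3 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)
d = newmark_displacement(acc_g * 9.81, dt, a_crit=0.1 * 9.81)
print(f"permanent displacement: {d:.3f} m")
```

    The decoupled and fully coupled methods mentioned above additionally model the dynamic response of the sliding mass itself, which this rigid-block sketch ignores.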

  13. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake, the tsunami, and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including the damage to the nuclear plant.

  14. Modeling And Economics Of Extreme Subduction Earthquakes: Two Case Studies

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Emerson, D.; Perea, N.; Moulinec, C.

    2008-05-01

    The destructive effects of large-magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) have been demonstrated in recent centuries. For example, on 7 April 1845 and 19 September 1985, two TSS earthquakes of Ms 7+ and 8.1 occurred on the coasts of the states of Guerrero and Michoacan; the economic losses for the latter were about 7 billion US dollars. Also, the largest instrumentally observed TSS earthquake in Mexico (Ms 8.2) occurred in the Colima-Jalisco region on 3 June 1932, and on 9 October 1995 another similar event (Ms 7.4) occurred in the same region; the latter produced economic losses of hundreds of thousands of US dollars. The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it might vary from decades to centuries [1]. There is therefore a lack of strong ground motion records for extreme TSS earthquakes in Mexico, which, as mentioned above, recently had an important economic impact on MC and could potentially have one on G. In this work we obtained samples of broadband synthetics [2,3] expected in MC and G, associated with extreme (plausible) magnitude Mw 8.5 TSS scenario earthquakes with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economic impacts of the proposed extreme TSS earthquake scenarios for MC and G were assessed as follows: for MC, by using a risk-acceptability criterion, the probabilities of exceedance of the maximum seismic responses of its building stock under the assumed scenarios, and the economic losses observed for the 19 September 1985 earthquake; and for G, by estimating the expected economic losses, based on a seismic vulnerability assessment of its building stock under the extreme seismic scenario considered. ----------------------- [1] Nishenko S.P. and Singh S.K., BSSA 77, 6, 1987 [2] Cabrera E., Chavez M., Madariaga R., Mai M., Frisenda M., Perea N., AGU Fall Meeting, 2005 [3] Chavez M., Olsen K

  15. ARMA models for earthquake ground motions. Seismic safety margins research program

    SciTech Connect

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.; Oliver, R. M.; Pister, K. S.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
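
    The time-domain identification the report describes can be reproduced today with standard tools. A minimal sketch, assuming statsmodels is available and substituting a simulated series for digitized acceleration data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a stand-in "ground motion" as an AR(2) process; a real
# application would load digitized acceleration data instead.
rng = np.random.default_rng(42)
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.6 * x[t - 2] + rng.normal()

# Fit an ARMA(2, 1) model (ARIMA with d = 0) by maximum likelihood,
# then inspect parameter estimates and residuals for whiteness tests.
result = ARIMA(x, order=(2, 0, 1)).fit()
print(result.params)          # AR, MA and noise-variance estimates
print(result.resid.std())     # residuals used for model checking
```

    Identifying the model order (the step the report emphasizes) is typically done by fitting several (p, q) pairs and comparing information criteria such as result.aic.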

  16. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
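
    One elementary instance of such a Bayesian combination is precision-weighted fusion of independent Gaussian estimates of a common quantity (here, magnitude). The numbers below are invented, and the real CDM logic is considerably richer than this sketch:

```python
import numpy as np

def fuse_gaussian_estimates(means, sigmas):
    """Posterior mean and std for a quantity reported by several
    independent algorithms with Gaussian errors (flat prior):
    the precision-weighted average."""
    w = 1.0 / np.asarray(sigmas) ** 2
    mu = np.sum(w * np.asarray(means)) / np.sum(w)
    return mu, np.sqrt(1.0 / np.sum(w))

# Hypothetical magnitude reports from three algorithms:
mu, sigma = fuse_gaussian_estimates([6.1, 6.4, 6.3], [0.4, 0.3, 0.2])
print(f"combined magnitude: {mu:.2f} +/- {sigma:.2f}")
```

    The combined uncertainty is smaller than any single report's, which is the basic payoff of fusing the independent algorithms.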

  17. Dynamic Models of Earthquakes and Tsunamis in the Santa Barbara Channel, California

    NASA Astrophysics Data System (ADS)

    Oglesby, David; Ryan, Kenny; Geist, Eric

    2016-04-01

    The Santa Barbara Channel and the adjacent Ventura Basin in California are the location of a number of large faults that extend offshore and could potentially produce earthquakes of magnitude greater than 7. The area is also home to hundreds of thousands of coastal residents. Properly evaluating the earthquake and tsunami hazard in this region requires the characterization of possible earthquake sources as well as the analysis of tsunami generation, propagation, and inundation. Toward this end, we perform spontaneous dynamic earthquake rupture models of potential events on the Pitas Point/Lower Red Mountain faults, a linked offshore thrust fault system. Using the 3D finite element method, a realistic nonplanar fault geometry, and rate-state friction, we find that this fault system can produce an earthquake of up to magnitude 7.7, consistent with estimates from geological and paleoseismological studies. We use the final vertical ground deformation from our models as initial conditions for the generation and propagation of tsunamis to the shore, where we calculate inundation. We find that path and site effects lead to large tsunami amplitudes northward and eastward of the fault system, and in particular we find significant tsunami inundation in the low-lying cities of Ventura and Oxnard. The results illustrate the utility of dynamic earthquake modeling to produce physically plausible slip patterns and associated seafloor deformation that can be used for tsunami generation.

  18. An improved geodetic source model for the 1999 Mw 6.3 Chamoli earthquake, India

    NASA Astrophysics Data System (ADS)

    Xu, Wenbin; Bürgmann, Roland; Li, Zhiwei

    2016-04-01

    We present a distributed slip model for the 1999 Mw 6.3 Chamoli earthquake of north India using interferometric synthetic aperture radar (InSAR) data from both ascending and descending orbits, with Bayesian estimation of the confidence levels and trade-offs of the model geometry parameters. The results of fault-slip inversion in an elastic half-space show that the earthquake ruptured a northeast-dipping plane with a dip of 9° (+3.4°/−2.2°) and a maximum slip of ~1 m. The fault plane is located at a depth of ~15.9 (+1.1/−3.0) km and is ~120 km north of the Main Frontal Thrust, implying that the rupture plane was on the northernmost detachment near the mid-crustal ramp of the Main Himalayan Thrust. The InSAR-determined moment is 3.35 × 10^18 N m with a shear modulus of 30 GPa, equivalent to Mw 6.3, which is smaller than the seismic moment estimates of Mw 6.4-6.6. Possible reasons for this discrepancy include the trade-off between moment and depth, uncertainties in seismic moment tensor components for shallow dip-slip earthquakes, and the role of earth structure models in the inversions. The released seismic energy from recent earthquakes in the Garhwal region is far less than the accumulated strain energy since the 1803 Ms 7.5 earthquake, implying substantial hazard of future great earthquakes.
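
    The quoted equivalence between the geodetic moment and Mw 6.3 follows from the standard moment-magnitude relation; a quick check:

```python
import math

M0 = 3.35e18                                  # geodetic moment, N m
# Hanks & Kanamori (1979) relation, expressed for M0 in N m:
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.05)
print(f"Mw = {Mw:.2f}")                       # ~6.3, as stated
```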

  19. Locating and Modeling Regional Earthquakes with Broadband Waveform Data

    NASA Astrophysics Data System (ADS)

    Tan, Y.; Zhu, L.; Helmberger, D.

    2003-12-01

    Retrieving source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events achieves a satisfactory stage in some places with a dense seismic network, such as TriNet, Southern California, a worthy revisit to the historical events in these places or an effective, real-time investigation of small events in many other places, where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 sec for Pnl, and 8-100 sec for surface waves, except for few anomalous paths involving greater structural complexity, meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for available waveform data, which could be as less as from two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green­_s function library constructed from a regionalized 1-D model together with necessary calibration information, and adopts a grid search strategy for both hypercenter and

  20. Geodetically constrained models of viscoelastic stress transfer and earthquake triggering along the North Anatolian Fault

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Krastev, P. G.; Meade, B. J.

    2015-12-01

    Over the past 80 years, eight MW > 6.7 strike-slip earthquakes west of 40° longitude have ruptured the North Anatolian fault (NAF), largely from east to west. The series began with the 1939 Erzincan earthquake in eastern Turkey, and the most recent 1999 MW = 7.4 Izmit earthquake extended the pattern of ruptures into the Sea of Marmara in western Turkey. The mean time between seismic events in this westward progression is 8.5±11 years (67% confidence interval), much greater than the timescale of seismic wave propagation (seconds to minutes). The delayed triggering of these earthquakes may be explained by the propagation of earthquake-generated diffusive viscoelastic fronts within the upper mantle that slowly increase the Coulomb failure stress change (CFS) at adjacent hypocenters. Here we develop three-dimensional stress transfer models with an elastic upper crust coupled to a viscoelastic Burgers-rheology mantle. Both the Maxwell (ηM = 10^18.6-19.0 Pa·s) and Kelvin (ηK = 10^18.0-19.0 Pa·s) viscosities are constrained by viscoelastic block models that simultaneously explain geodetic observations of deformation before and after the 1999 Izmit earthquake. We combine this geodetically constrained rheological model with the observed sequence of large earthquakes since 1939 to calculate the time-evolution of CFS changes along the North Anatolian Fault due to viscoelastic stress transfer. Critical values of mean CFS at which the earthquakes in the eight-decade sequence occur range from −0.007 to 2.946 MPa and may exceed the magnitude of static CFS values by as much as 180%. The variability of four orders of magnitude in critical triggering stress may reflect along-strike variations in NAF strength. Based on the median and mean of these critical stress values, we infer that the NAF strand in the northern Marmara Sea near Istanbul, which previously ruptured in 1509, may reach a critical stress level between 2015 and 2032.
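
    For reference, the Coulomb failure stress change used throughout such analyses is conventionally ΔCFS = Δτ + μ′Δσn (shear stress change resolved in the slip direction plus effective friction times the normal stress change, unclamping positive). A toy evaluation with invented values:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change (MPa): d_tau is the shear stress
    change in the slip direction, d_sigma_n the normal stress change
    (unclamping positive), mu_eff the effective friction coefficient.
    All input values here are illustrative, not from the study."""
    return d_tau + mu_eff * d_sigma_n

print(coulomb_stress_change(d_tau=0.15, d_sigma_n=-0.05))   # 0.13 MPa
```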

  1. Dynamic scaling behaviors of linear fractal Langevin-type equation driven by nonconserved and conserved noise

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Xun, Zhi-Peng; Wu, Ling; Chen, Yi-Li; Xia, Hui; Hao, Da-Peng; Tang, Gang

    2016-06-01

    In order to study the effects of the microscopic details of fractal substrates on the scaling behavior of the growth model, a generalized linear fractal Langevin-type equation, ∂h/∂t = (−1)^(m+1) ν∇^(m z_rw) h + η (where z_rw is the dynamic exponent of a random walk on the substrate), driven by nonconserved and conserved noise η, is proposed and investigated theoretically, employing scaling analysis. The corresponding dynamic scaling exponents are obtained.
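
    For orientation, on a Euclidean substrate of dimension d the standard scaling analysis of such linear growth equations yields closed-form exponents; the sketch below states those textbook results as a point of comparison (it is not the paper's fractal-substrate derivation):

```latex
% Linear growth equation on a d-dimensional Euclidean substrate:
\[
  \partial_t h = -\nu\,(-\nabla^2)^{z/2} h + \eta , \qquad z = m\,z_{\mathrm{rw}} .
\]
% Nonconserved noise, <\eta\eta'> = 2D\,\delta^d(x-x')\,\delta(t-t'):
\[
  \alpha = \frac{z-d}{2}, \qquad \beta = \frac{\alpha}{z} = \frac{z-d}{2z} .
\]
% Conserved noise, <\eta\eta'> = -2D\,\nabla^2\delta^d(x-x')\,\delta(t-t'):
\[
  \alpha_c = \frac{z-d-2}{2}, \qquad \beta_c = \frac{z-d-2}{2z} .
\]
% Check: m = 1, z_rw = 2 recovers the Edwards--Wilkinson exponents
% (z = 2, \alpha = (2-d)/2); m = 2, z_rw = 2 gives the Mullins--Herring case.
```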

  2. Source models of great earthquakes from ultra low-frequency normal mode data

    NASA Astrophysics Data System (ADS)

    Lentas, Konstantinos; Ferreira, Ana; Clévédé, Eric

    2014-05-01

    We present a new earthquake source inversion technique based on normal mode data for the simultaneous determination of the rupture duration, length and moment tensor of large earthquakes with unilateral rupture. We use ultra-low-frequency (f < 1 mHz) normal mode spheroidal multiplets and the phases of split free oscillations, which are modelled using Higher Order Perturbation Theory (HOPT), taking into account the Earth's rotation, ellipticity and lateral heterogeneities. A Monte Carlo exploration of the model space is carried out, enabling the assessment of source parameter tradeoffs and uncertainties. We carry out synthetic tests for four realistic artificial earthquakes with different faulting mechanisms and magnitudes (Mw 8.1-9.3) to investigate errors in the source inversions due to: (i) unmodelled 3-D Earth structure; (ii) noise in the data; (iii) uncertainties in spatio-temporal earthquake location; and (iv) neglecting the source finiteness in point source moment tensor inversions. We find that unmodelled 3-D structure is the most serious source of errors for rupture duration and length determinations, especially for the lowest-magnitude artificial events. The errors in moment magnitude and fault mechanism are generally small, with the rake angle showing systematically larger errors (up to 20 degrees). We then carry out source inversions of five giant thrust earthquakes (Mw ≥ 8.5) and one deep event: (i) the 26 December 2004 Sumatra-Andaman earthquake; (ii) the 28 March 2005 Nias, Sumatra earthquake; (iii) the 12 September 2007 Bengkulu earthquake; (iv) the Tohoku, Japan earthquake of 11 March 2011; (v) the Maule, Chile earthquake of 27 February 2010; and (vi) the recent 24 May 2013 Mw 8.3 Okhotsk Sea, Russia, deep (607 km) earthquake. While finite source inversions for rupture length, duration, magnitude and fault mechanism are possible for the Sumatra-Andaman and Tohoku events, for all the other events their lower magnitudes do not allow stable inversions of mode

  3. Cross-cultural comparisons between the earthquake preparedness models of Taiwan and New Zealand.

    PubMed

    Jang, Li-Ju; Wang, Jieh-Jiuh; Paton, Douglas; Tsai, Ning-Yu

    2016-04-01

    Taiwan and New Zealand are both located in the Pacific Rim where 81 per cent of the world's largest earthquakes occur. Effective programmes for increasing people's preparedness for these hazards are essential. This paper tests the applicability of the community engagement theory of hazard preparedness in two distinct cultural contexts. Structural equation modelling analysis provides support for this theory. The paper suggests that the close fit between theory and data that is achieved by excluding trust supports the theoretical prediction that familiarity with a hazard negates the need to trust external sources. The results demonstrate that the hazard preparedness theory is applicable to communities that have previously experienced earthquakes and are therefore familiar with the associated hazards and the need for earthquake preparedness. The paper also argues that cross-cultural comparisons provide opportunities for collaborative research and learning as well as access to a wider range of potential earthquake risk management strategies. PMID:26282331

  4. The investigation of blind continental earthquake sources through analogue and numerical models

    NASA Astrophysics Data System (ADS)

    Bonini, L.; Toscani, G.; Seno, S.

    2012-04-01

    One of the most challenging topics in earthquake geology is the characterization of seismogenic sources, i.e. the potential causative faults of earthquakes. The main seismogenic layer is located in the upper brittle crust. Nevertheless, this does not mean that a fault takes up the whole schizosphere, i.e. from the brittle-plastic transition to the surface. Indeed, several recent damaging earthquakes were generated by blind or "hidden" faults: the 23 Oct 2011 Van earthquake (Mw 7.1, Turkey); the 3 Sep 2010 Darfield earthquake (Mw 7.1, New Zealand); the 12 January 2010 Haiti earthquake (Mw 7.0); and the 6 April 2009 L'Aquila earthquake (Mw 6.3, Italy). Therefore, understanding how a fault grows and develops is a key question in evaluating the seismogenic potential of an area. Analogue models have been used to understand the kinematics and geometry of geological structures since the beginning of modern geology; numerical modelling, on the other hand, has developed much more over the last thirty years. Nowadays we can use these two methods together, providing mutual interaction. In the last two to three years we have used both numerical and analogue models to investigate the long-term and short-term evolution of blind normal faults. To do this we equipped the Analogue Model Laboratory of the University of Pavia with a laser scanner, a stepper motor and other high-resolution tools in order to detect the distribution of the deformation induced by blind faults. The goal of this kind of approach is to mimic the effects of fault movements in a scaled model. We selected two seismogenic source cases: the causative fault of the 1908 Messina earthquake (Mw 7.1) and that of the 2009 L'Aquila earthquake (Mw 6.3). In the first case we investigated the long-term evolution of the structure using a set of analogue models, and afterwards a numerical model of our sandbox allowed us to investigate stress and strain partitioning. In the second case we performed only an analogue model of short-term evolution of

  5. Bounding Ground Motions for Hayward Fault Scenario Earthquakes Using Suites of Stochastic Rupture Models

    NASA Astrophysics Data System (ADS)

    Rodgers, A. J.; Xie, X.; Petersson, A.

    2007-12-01

    The next major earthquake in the San Francisco Bay area is likely to occur on the Hayward-Rodgers Creek Fault system. Attention on the southern Hayward section is appropriate given the upcoming 140th anniversary of the 1868 M 7 rupture coinciding with the estimated recurrence interval. This presentation will describe ground motion simulations for large (M > 6.5) earthquakes on the Hayward Fault using a recently developed elastic finite difference code and high-performance computers at Lawrence Livermore National Laboratory. Our code easily reads the recent USGS 3D seismic velocity model of the Bay Area developed in 2005 and used for simulations of the 1906 San Francisco and 1989 Loma Prieta earthquakes. Previous work has shown that the USGS model performs very well when used to model intermediate period (4-33 seconds) ground motions from moderate (M ~ 4-5) earthquakes (Rodgers et al., 2008). Ground motions for large earthquakes are strongly controlled by the hypocenter location, spatial distribution of slip, rise time and directivity effects. These are factors that are impossible to predict in advance of a large earthquake and lead to large epistemic uncertainties in ground motion estimates for scenario earthquakes. To bound this uncertainty, we are performing suites of simulations of scenario events on the Hayward Fault using stochastic rupture models following the method of Liu et al. (Bull. Seism. Soc. Am., 96, 2118-2130, 2006). These rupture models have spatially variable slip, rupture velocity, rise time and rake constrained by characterization of inferred finite fault ruptures and expert opinion. Computed ground motions show variability due to the variability in rupture models and can be used to estimate the average and spread of ground motion measures at any particular site. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. This is
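
    Stochastic rupture models of this kind typically generate spatially variable slip as a random field with a prescribed power-law spectral decay. A minimal FFT-based sketch; the k^-2 falloff and field dimensions are illustrative stand-ins, not the Liu et al. parameterization:

```python
import numpy as np

def random_slip(nx, nz, decay=2.0, seed=0):
    """Random slip distribution with power-law spectral falloff:
    filter white noise by |k|^-decay in the Fourier domain, then
    shift and normalize to a non-negative, unit-mean slip field."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[None, :]
    kz = np.fft.fftfreq(nz)[:, None]
    k = np.sqrt(kx**2 + kz**2)
    k[0, 0] = 1.0                          # placeholder, zeroed below
    spectrum = np.fft.fft2(rng.normal(size=(nz, nx))) / k**decay
    spectrum[0, 0] = 0.0                   # remove the zero wavenumber
    field = np.real(np.fft.ifft2(spectrum))
    field -= field.min()                   # non-negative slip
    return field / field.mean()            # unit mean slip

slip = random_slip(128, 64)
print(slip.shape, f"max/mean slip ratio: {slip.max():.1f}")
```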

  6. Analysing Post-Seismic Deformation of the Izmit Earthquake with InSAR, GNSS and Coulomb Stress Modelling

    NASA Astrophysics Data System (ADS)

    Alac Barut, R.; Trinder, J.; Rizos, C.

    2016-06-01

    On August 17th 1999, a Mw 7.4 earthquake struck the city of Izmit in the north-west of Turkey. This event was one of the most devastating earthquakes of the twentieth century. The epicentre of the Izmit earthquake was on the North Anatolian Fault (NAF), which is one of the most active right-lateral strike-slip faults on Earth. This earthquake offers an opportunity to study how strain is accommodated in an inter-segment region of a large strike-slip fault. In order to determine the post-seismic effects of the Izmit earthquake, the authors modelled Coulomb stress changes of the aftershocks, as well as using the deformation measurement techniques of Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS). The authors have shown that InSAR and GNSS observations over a time period of three months after the earthquake, combined with Coulomb stress change modelling, can explain the fault zone expansion, as well as the deformation of the northern region of the NAF. It was also found that there is strong agreement between the InSAR and GNSS results for the post-seismic phases of the investigation, with differences less than 2 mm and a standard deviation of the differences of less than 1 mm.

  7. Time-predictable model applicability for earthquake occurrence in northeast India and vicinity

    NASA Astrophysics Data System (ADS)

    Panthi, A.; Shanker, D.; Singh, H. N.; Kumar, A.; Paudyal, H.

    2011-03-01

    Northeast India and its vicinity is one of the seismically most active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model, using earthquake data for the period 1906-2008 as reported in the catalogues of the National Geophysical Data Centre, the National Earthquake Information Centre of the United States Geological Survey, and the book prepared by Gupta et al. (1986). The events having a surface-wave magnitude of Ms ≥ 5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified from the observed clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the magnitude of the preceding mainshock (Mp) and not on that of the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form log T = cMp + a, where c is the positive slope of the line and a is a function of the minimum magnitude of the earthquakes considered. The values of the parameters c and a are estimated to be 0.21 and 0.35 in northeast India and its adjoining regions. The smaller-than-average value of c implies that earthquake occurrence in this region differs from that at plate boundaries. The results can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
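
    With the fitted constants c = 0.21 and a = 0.35, the time-predictable relation yields repeat times directly; for example:

```python
def repeat_time_years(mp, c=0.21, a=0.35):
    """Time-predictable model: log10(T) = c * Mp + a, with the
    constants estimated for northeast India in this study."""
    return 10 ** (c * mp + a)

for mp in (5.5, 6.5, 7.5):
    print(f"Mp = {mp}: T ~ {repeat_time_years(mp):.0f} yr")
# Mp = 5.5 -> ~32 yr, Mp = 6.5 -> ~52 yr, Mp = 7.5 -> ~84 yr
```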

  8. Validation and modeling of earthquake strong ground motion using a composite source model

    NASA Astrophysics Data System (ADS)

    Zeng, Y.

    2001-12-01

    Zeng et al. (1994) have proposed a composite source model for synthetic strong ground motion prediction. In that model, the source is taken as a superposition of circular subevents with a constant stress drop. The number of subevents and their radii follow a power-law distribution equivalent to Gutenberg and Richter's magnitude-frequency relation for seismicity. The heterogeneous nature of the composite source model is characterized by its maximum subevent size and subevent stress drop. As rupture propagates through each subevent, it radiates a Brune pulse or a Sato and Hirasawa circular crack pulse. The method has proved successful in generating realistic strong-motion seismograms, in comparison with observations from earthquakes in California, the eastern US, Guerrero (Mexico), Turkey and India. The model has since been improved by including scattered waves from the small-scale heterogeneity structure of the earth, site-specific ground motion prediction using weak-motion site amplification, and nonlinear soil response using geotechnical engineering models. Last year, I introduced an asymmetric circular rupture to improve the subevent source radiation and to provide a rupture model that is consistent between the overall fault rupture process and its subevents. In this study, I revisit the Landers, Loma Prieta, Northridge, Imperial Valley and Kobe earthquakes using the improved source model. The results show that the improved subevent ruptures provide a better representation of rupture directivity than our previous studies. Additional validation includes comparison of synthetic strong ground motions with the observed ground accelerations from the Chi-Chi, Taiwan and Izmit, Turkey earthquakes. Since the method has evolved considerably since it was first proposed, I will also compare results between each major modification of the model and demonstrate its backward compatibility with any of its earlier simulation procedures.
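
    The essential construction (power-law subevent sizes, each radiating a simple pulse) can be sketched as follows; the pulse shape, size-distribution exponent and all parameter values are illustrative stand-ins, not the model's calibrated choices:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.01, 4096
t = np.arange(n) * dt
seis = np.zeros(n)

# Draw subevent radii from a power-law size distribution N(>R) ~ R^-D
# via inverse-transform sampling (D and the R bounds are invented).
D, r_min, r_max = 2.0, 0.1, 5.0
u = rng.random(200)
radii = (r_min**-D - u * (r_min**-D - r_max**-D)) ** (-1.0 / D)

for r in radii:
    t0 = rng.uniform(0.0, 30.0)     # rupture-passage time of subevent
    tau = r / 3.0                   # rise time ~ radius / rupture speed
    amp = r**2                      # subevent moment grows with size
    s = t - t0
    seis += np.where(s > 0.0, amp * s * np.exp(-s / tau), 0.0)  # Brune-like pulse

print(f"peak amplitude: {seis.max():.1f} (arbitrary units)")
```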

  9. Comparison of test and earthquake response modeling of a nuclear power plant containment building

    SciTech Connect

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1985-01-01

    The reactor building of a BWR plant was subjected to dynamic testing, a minor earthquake, and a strong earthquake at different times. Analytical models simulating each of these events were devised by previous investigators. A comparison of the characteristics of these models is made in this paper. The different modeling assumptions involved in the different simulation analyses restrict the validity of the models for general use and also narrow the comparison down to only a few modes. The dynamic tests successfully identified the first mode of the soil-structure system.

  10. Post Earthquake Investigation Of The Mw7.8 Haida Gwaii, Canada, Rupture Area And Constraints On Earthquake Source Models

    NASA Astrophysics Data System (ADS)

    Haeussler, P. J.; Witter, R. C.; Wang, K.

    2013-12-01

    The October 28, 2012 Mw 7.8 Haida Gwaii, British Columbia, earthquake was the second largest historical earthquake recorded in Canada. Earthquake seismology and GPS geodesy show this was an underthrusting event, in agreement with prior studies that indicated oblique underthrusting of the Haida Gwaii by the Pacific plate. Coseismic deformation is poorly constrained by geodesy, with only six GPS sites and two tide gauge stations anywhere near the rupture area. In order to better constrain the coseismic deformation, we measured the upper limit of sessile intertidal organisms at 26 sites relative to sea level. We predominantly measured the positions of bladder weed (Fucus distichus; 617 observations) and the common acorn barnacle (Balanus balanoides; 686 observations). Physical conditions control the upper limit of sessile intertidal organisms, so we sought the quietest water conditions, with steep but not overhanging faces, where slosh from wave motion was minimized. We focused on the western side of the islands, as rupture models indicated that the greatest displacement was there; however, we were also looking for calm-water sites in bays located as close as possible to the often tumultuous Pacific Ocean. In addition, we made 322 measurements of sea level that will be used to develop a precise tidal model and to evaluate the positions of the organisms with respect to a common sea-level datum. We anticipate the resolution of the method to be about 20-30 cm. The sites were concentrated on the western side of Haida Gwaii, from Wells Bay in the south up to Otard Bay in the north, with 5 transects across strike. We also collected data at the town of Masset, which lies outside the deformation zone of the earthquake. We observed dried and desiccated bands of Fucus and barnacles at two sites on the western coast of southern Moresby Island (Gowgia Bay and Wells Bay). Gowgia Bay had the strongest evidence of uplift, with Fucus that was dried out and apparently dead. A

  11. Damage and the Gutenberg-Richter Law: from simple models to natural earthquake fault systems

    NASA Astrophysics Data System (ADS)

    Tiampo, K. F.; Klein, W.; Rundle, J. B.; Dominguez, R.; Serino, C.

    2010-12-01

    Natural earthquake fault systems are highly nonhomogeneous in space; these inhomogeneities occur because the earth is made of a variety of materials, which hold and dissipate stress differently. One way the inhomogeneous nature of fault systems manifests itself is in the spatial patterns which emerge in seismicity graphs (Tiampo et al., 2002, 2007). Despite their inhomogeneous nature, real faults are often modeled as spatially homogeneous systems. One argument for this approach is that earthquake faults experience long-range stress transfer, and if this range is longer than the length scales associated with the inhomogeneities of the system, the dynamics of the system may be unaffected by their presence. However, it is not clear that this is the case. In this work we study the scaling of an earthquake model that is a variation of the Olami-Feder-Christensen (OFC) model, in order to explore the effect of spatial inhomogeneities on earthquake-like systems when interaction ranges are long, but not necessarily longer than the distances associated with those inhomogeneities (Rundle and Jackson, 1977; Olami et al., 1988). For long ranges and without inhomogeneities, such models have been found to produce scaling similar to the GR scaling found in real earthquake systems (Rundle and Klein, 1993). In the earthquake models discussed here, damage is distributed inhomogeneously throughout, and the interaction ranges, while long, are not longer than all of the damage length scales. We find that the scaling depends not only on the amount of damage, but also on the spatial distribution of that damage. In addition, we study the behaviour of particular natural earthquake faults and the spatial and temporal variation of GR scaling in those systems, in order to compare them with various damage cases from the simulations.
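
    A minimal, runnable version of this class of model helps fix ideas. The sketch below implements a nearest-neighbour OFC-style automaton in which a random fraction of lattice sites is permanently damaged; the abstract's models use long-range stress transfer, so this short-range toy and all of its parameter values are illustrative assumptions only.

```python
import numpy as np

def ofc_with_damage(L=64, alpha=0.2, damage=0.1, n_events=2000, seed=0):
    """Nearest-neighbour OFC-style automaton with quenched damage.

    A fraction `damage` of sites is dead: it neither holds nor receives
    stress. Returns avalanche (event) sizes.
    """
    rng = np.random.default_rng(seed)
    alive = rng.random((L, L)) >= damage
    stress = rng.random((L, L)) * alive
    sizes = []
    for _ in range(n_events):
        # Slow drive: load all live sites until the most loaded fails.
        stress += (1.0 - stress[alive].max()) + 1e-12
        stress *= alive
        size = 0
        while True:
            unstable = np.argwhere((stress >= 1.0) & alive)
            if len(unstable) == 0:
                break
            for i, j in unstable:
                s, stress[i, j] = stress[i, j], 0.0
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L and alive[ni, nj]:
                        stress[ni, nj] += alpha * s  # dissipative: alpha < 0.25
        sizes.append(size)
    return np.array(sizes)
```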

  12. PAGER-CAT: A composite earthquake catalog for calibrating global fatality models

    USGS Publications Warehouse

    Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.

    2009-01-01

    We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy to use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of Shake Maps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are

  13. Seismic mountain building: Landslides associated with the 2008 Wenchuan earthquake in the context of a generalized model for earthquake volume balance

    NASA Astrophysics Data System (ADS)

    Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Parker, Robert N.; Hilton, Robert G.

    2014-04-01

    We assess earthquake volume balance and the growth of mountains in the context of a new landslide inventory for the Mw 7.9 Wenchuan earthquake in central China. Coseismic landslides were mapped from high-resolution remote imagery using an automated algorithm and manual delineation, which allow us to distinguish clustered landslides that can bias landslide volume calculations. Employing a power-law landslide area-volume relation, we find that the volume of landslide-associated mass wasting (~2.8 +0.9/-0.7 km3) is lower than previously estimated (~5.7-15.2 km3) and comparable to the volume of rock uplift (~2.6 ± 1.2 km3) during the Wenchuan earthquake. If fluvial evacuation removes landslide debris within the earthquake cycle, then the volume addition from coseismic uplift will be effectively offset by landslide erosion. If all earthquakes in the region followed this volume budget pattern, the efficient counteraction of coseismic rock uplift raises a fundamental question about how earthquakes build mountainous topography. To provide a framework for addressing this question, we explore a group of scaling relations to assess earthquake volume balance. We predict coseismic uplift volumes for thrust-fault earthquakes based on geophysical models for coseismic surface deformation and relations between fault rupture parameters and moment magnitude, Mw. By coupling this scaling relation with landslide volume-Mw scaling, we obtain an earthquake volume balance relation in terms of moment magnitude Mw, which is consistent with the revised Wenchuan landslide volumes and observations from the 1999 Chi-Chi earthquake in Taiwan. Incorporating the Gutenberg-Richter frequency-Mw relation, we use this volume balance to derive an analytical expression for crustal thickening from coseismic deformation based on an index of seismic intensity over a defined area. This model yields reasonable rates of crustal thickening from coseismic deformation (e.g., ~0.1-0.5 km Ma-1 in
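
    The area-volume step that dominates such inventory calculations is a one-line power law. A minimal sketch, with coefficients that are indicative literature values (after Larsen et al., 2010) rather than the calibration used in this study:

```python
import numpy as np

def total_landslide_volume(areas_m2, alpha=0.146, gamma=1.332):
    """Sum landslide volumes via a power-law area-volume relation.

    V = alpha * A**gamma per mapped polygon, summed over the inventory.
    The default coefficients are indicative values for mixed
    soil/bedrock inventories, standing in for the study's calibration.
    """
    areas = np.asarray(areas_m2, float)
    return np.sum(alpha * areas**gamma)  # m^3

# e.g. 10,000 slides lognormally distributed around 10**3.5 m^2:
rng = np.random.default_rng(1)
areas = 10 ** rng.normal(3.5, 0.5, 10000)
print(total_landslide_volume(areas) / 1e9, "km^3")
```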

  14. Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie

    2014-05-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity, as well as the b-value and Omori's law, to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions, and find significant variation in the relative performance of the models depending upon the input data.
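
    The Coulomb ingredient of such hybrid models reduces, at each receiver point, to resolving a stress-change tensor onto a fault plane. A minimal sketch, assuming the 3x3 stress-change tensor has already been computed from a slip model; the effective friction value is an illustrative assumption:

```python
import numpy as np

def coulomb_stress_change(dsigma, normal, slip_dir, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault.

    dsigma   : 3x3 stress-change tensor (Pa), tension positive
    normal   : unit normal of the receiver fault plane
    slip_dir : unit vector in the rake (slip) direction
    mu_eff   : effective friction coefficient (illustrative value)

    Returns dCFS = d_tau + mu_eff * d_sigma_n; positive values bring
    the receiver fault closer to failure.
    """
    normal = np.asarray(normal, float)
    slip_dir = np.asarray(slip_dir, float)
    traction = dsigma @ normal
    d_sigma_n = traction @ normal     # normal stress change
    d_tau = traction @ slip_dir       # shear stress change along rake
    return d_tau + mu_eff * d_sigma_n
```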

  15. An exact renormalization model for earthquakes and material failure: Statics and dynamics

    SciTech Connect

    Newman, W.I.; Gabrielov, A.M.; Durand, T.A.; Phoenix, S.L.; Turcotte, D.L.

    1993-09-12

    Earthquake events are well known to present a variety of empirical scaling laws. Accordingly, renormalization methods offer some hope for understanding why earthquake statistics behave in a similar way over many orders of magnitude in energy. We review the progress made in the use of renormalization methods in approaching the earthquake problem. In particular, earthquake events have been modeled by previous investigators as hierarchically organized bundles of fibers with equal load sharing. We consider by computational and analytic means the failure properties of such bundles of fibers, a problem that may be treated exactly by renormalization methods. We show, independent of the specific properties of an individual fiber, that the stress and time thresholds for failure of fiber bundles obey universal, albeit different, scaling laws with respect to the size of the bundles. The application of these results to fracture processes in earthquake events and in engineering materials helps to provide insight into some of the observed patterns and scaling - in particular, the apparent weakening of earthquake faults and composite materials with respect to size, and the apparent emergence of relatively well-defined stresses and times when failure is seemingly assured.
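
    The non-hierarchical building block of these models, the equal-load-sharing (Daniels) fiber bundle, is simple to simulate and already shows how the failure stress depends on bundle size. The sketch below assumes i.i.d. uniform strength thresholds; it does not reproduce the hierarchical organization or the time-dependent failure analysed in the paper.

```python
import numpy as np

def bundle_strength(n_fibers, seed=None):
    """Failure stress of an equal-load-sharing fiber bundle.

    With sorted thresholds x_(1) <= ... <= x_(n), the bundle carries a
    per-fiber load sigma as long as some k satisfies
    x_(k) * (n - k + 1) / n >= sigma, so the bundle strength is
    max over k of x_(k) * (n - k + 1) / n.
    """
    rng = np.random.default_rng(seed)
    x = np.sort(rng.random(n_fibers))
    k = np.arange(1, n_fibers + 1)
    return np.max(x * (n_fibers - k + 1) / n_fibers)

# Mean strength approaches the ELS limit max_x x*(1 - F(x)) = 1/4
# for uniform thresholds as the bundle grows:
for n in (10, 100, 1000, 10000):
    s = np.mean([bundle_strength(n, seed=i) for i in range(200)])
    print(n, round(s, 4))
```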

  16. The 2007 Bengkulu earthquake, its rupture model and implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Ambikapathy, A.; Catherine, J. K.; Gahalaut, V. K.; Narsaiah, M.; Bansal, A.; Mahesh, P.

    2010-08-01

    The 12 September 2007 great Bengkulu earthquake (Mw 8.4) occurred on the west coast of Sumatra about 130 km SW of Bengkulu. The earthquake was followed by two strong aftershocks of Mw 7.9 and 7.0. We estimate coseismic offsets due to the mainshock, derived from near-field Global Positioning System (GPS) measurements from nine continuous SuGAr sites operated by the California Institute of Technology (Caltech) group. Using a forward modelling approach, we estimated the slip distribution on the causative rupture of the 2007 Bengkulu earthquake and found two patches of large slip, one located north of the mainshock epicenter and the other under the Pagai Islands. Both patches of large slip on the rupture occurred under the island belt and shallow water. Thus, despite its great magnitude, this earthquake did not generate a major tsunami. Further, we suggest that the occurrence of great earthquakes in the subduction zone on either side of the Siberut Island region might have increased the static stress in that region, where the last great earthquake occurred in 1797 and where there is evidence of strain accumulation.

  17. Modelling Psychological Responses to the Great East Japan Earthquake and Nuclear Incident

    PubMed Central

    Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O.

    2012-01-01

    The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11–13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events. PMID:22666380

  18. Modeling the 1992 Landers Earthquake with a Rate and State Friction Model.

    NASA Astrophysics Data System (ADS)

    Mohammedi, H.; Madariaga, R.; Perrin, G.

    2002-12-01

    We study rupture propagation in realistic earthquake models under rate- and state-dependent friction, and we apply it to the modeling of the 28 June 1992 Landers earthquake. In our simulations we use a modified version of rate and state friction proposed by Perrin, Rice and Zheng, the so-called PRZ law. Full inversion with PRZ is not yet possible because of the much higher numerical cost of modeling a fault under rate and state friction than under slip-weakening (SW) friction laws. PRZ also has a larger number of independent parameters than slip weakening. We obtain reasonable initial models through the use of the ratio κ between available strain energy and energy release rate. Because PRZ friction has more parameters than SW, we have not yet been able to identify all the relevant non-dimensional numbers that control rupture in this model, but a very important one is a logarithmic map that controls whether unstable slip may occur. This map has the form log(Ḋ/v0) = λ Ḋ/v0, where λ is a nondimensional number akin to κ; it combines the parameters of the friction law and the characteristic length of the initial stress, velocity or state fields. Here Ḋ is the slip velocity and v0 is a reference speed that defines the initial stress field. Using the results of dynamic inversion from Peyrat et al., we find reasonable rupture models for the initiation of the Landers earthquake. The slip-weakening distance Dc in rate and state, as defined by Bizzarri and Cocco, is of the order of a few tens of cm. Dc is determined from L, the relaxation length in rate and state, as a by-product of the logarithmic map cited above.
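
    Read as a one-dimensional fixed-point condition, the quoted map log(Ḋ/v0) = λ Ḋ/v0 has real solutions only for λ < 1/e, which is easy to verify numerically. The sketch below merely locates the roots; treating that threshold as the boundary between stable and unstable slip is our illustrative reading, not a statement from the authors.

```python
import numpy as np
from scipy.optimize import brentq

def log_map_roots(lam):
    """Solve log(x) = lam * x for x = D_dot / v0.

    For 0 < lam < 1/e there are two roots bracketing the maximum of
    g(x) = log(x) - lam*x at x = 1/lam; for lam >= 1/e the function
    returns no (transversal) roots.
    """
    if lam >= 1.0 / np.e:
        return ()
    g = lambda x: np.log(x) - lam * x
    x_peak = 1.0 / lam                 # location of the maximum of g
    lower = brentq(g, 1e-12, x_peak)   # root below the peak
    upper = brentq(g, x_peak, 1e12)    # root above the peak
    return (lower, upper)

print(log_map_roots(0.2))  # two roots
print(log_map_roots(0.5))  # () -- beyond the 1/e threshold
```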

  19. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active-fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of the potentially higher seismic activity in these zones. For one group, the tectonic moment rate is lower than the seismic moment rate forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from the tectonic moment rate, but lower than what the historical data show. For the other two
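
    The conversion from strain rate to tectonic moment rate that underlies this comparison can be sketched with a Kostrov-style formula. The rigidity, seismogenic thickness, full coupling and the Gutenberg-Richter moment-balance approximation below are all stated assumptions, not the paper's actual recipe:

```python
import numpy as np

def tectonic_moment_rate(eps_dot, area_km2, mu=3.0e10, h_km=12.0):
    """Kostrov-style tectonic moment rate from a geodetic strain rate.

    Uses the simplification M0_dot = 2 * mu * H * A * eps_dot, with
    eps_dot the largest principal strain rate (1/yr). Rigidity mu,
    seismogenic thickness H and full seismic coupling are assumptions.
    """
    return 2.0 * mu * (h_km * 1e3) * (area_km2 * 1e6) * eps_dot  # N*m/yr

def rate_above(m0_dot, mw_ref=5.0, mw_max=7.5, b=1.0):
    """Rate of events with Mw >= mw_ref if m0_dot is released by a
    truncated Gutenberg-Richter population (approximation valid when
    mw_max is well above mw_ref)."""
    m0 = lambda mw: 10 ** (1.5 * mw + 9.05)
    beta = 2.0 * b / 3.0
    mean_m0 = beta / (1.0 - beta) * m0(mw_ref) ** beta * m0(mw_max) ** (1 - beta)
    return m0_dot / mean_m0

m0_dot = tectonic_moment_rate(5e-9, 1.0e5)  # illustrative zone group
print(m0_dot, "N*m/yr ->", rate_above(m0_dot), "events/yr above Mw 5")
```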

  20. Near-real Time Interpretation of Micro-earthquake Data for Reservoir Modeling

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Boyle, K.; Bonner, B. P.

    2009-12-01

    Geothermal, CO2 sequestration, and oil and gas reservoir modeling depend on identifying reservoir geology, fractures, fluids, and permeable zones. We present an approach that utilizes passive seismic methods to update reservoir models in near-real time. Recent developments in inexpensive micro-earthquake recorders and sensors, high-performance desktop computing, high-resolution tomographic imaging techniques, high-resolution micro-earthquake location programs, and new developments in interpretation can significantly improve reservoir exploration, exploitation, and management at reasonable costs in time and dollars. We have developed a rapid and inexpensive reservoir modeling package based on interpretation of micro-earthquake recordings. The package includes an automated P- and S-wave picker, high-resolution double-difference earthquake locations, 3-D tomographic inversions for P- and S-wave velocity structure and attenuation (Qp and Qs) structure, and seismic moments and stress drops. We utilize a three-dimensional visualization program to examine spatial associations and correlations of reservoir properties, and apply rock physics (including effective medium theories) in interpretation. Modeling is typically in the depth range of reservoirs of interest, usually surface to 5 km depth, and depends upon sufficient numbers of earthquakes, usually 100-500 events. This can be updated regularly to monitor temporal changes. We demonstrate this package with The Geysers and Salton Sea geothermal fields.
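
    Of the quantities listed, the stress-drop step is compact enough to sketch. A minimal Brune (1970) calculation from seismic moment and corner frequency, with illustrative parameter values:

```python
import numpy as np

def brune_stress_drop(m0, fc, beta=3500.0):
    """Brune (1970) stress drop from moment and corner frequency.

    m0 in N*m, corner frequency fc in Hz, shear-wave speed beta in m/s.
    Source radius r = 2.34 * beta / (2 * pi * fc); stress drop
    dsigma = 7 * m0 / (16 * r**3). Values here are illustrative.
    """
    r = 2.34 * beta / (2.0 * np.pi * fc)
    return 7.0 * m0 / (16.0 * r**3)

# Example: an M ~ 2 microearthquake (m0 ~ 1.3e12 N*m) with fc = 20 Hz
print(brune_stress_drop(1.3e12, 20.0) / 1e6, "MPa")
```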

  1. "Slimplectic" Integrators: Variational Integrators for General Nonconservative Systems

    NASA Astrophysics Data System (ADS)

    Tsang, David; Galley, Chad R.; Stein, Leo C.; Turner, Alec

    2015-08-01

    Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
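
    For background, the snippet below shows a standard semi-implicit (symplectic) Euler step for a harmonic oscillator with a damping term simply bolted on; this is the kind of ad hoc treatment of nonconservative forces that slimplectic integrators are designed to replace. It is not the algorithm of the Letter, whose reference implementation the authors have released publicly.

```python
import numpy as np

def semi_implicit_euler(q0, p0, dt, n, omega=1.0, gamma=0.0):
    """Semi-implicit Euler for q'' = -omega**2 q - 2 gamma q'.

    With gamma = 0 the scheme is symplectic and the oscillator energy
    error stays bounded for all time. With gamma > 0 the damping is
    simply added to the force, which is the naive treatment that
    slimplectic integrators improve on in a principled way.
    """
    q, p = np.empty(n + 1), np.empty(n + 1)
    q[0], p[0] = q0, p0
    for k in range(n):
        p[k + 1] = p[k] - dt * (omega**2 * q[k] + 2.0 * gamma * p[k])
        q[k + 1] = q[k] + dt * p[k + 1]
    return q, p

q, p = semi_implicit_euler(1.0, 0.0, dt=0.1, n=10000, gamma=0.05)
energy = 0.5 * p**2 + 0.5 * q**2
# With gamma = 0 the energy stays bounded; with damping one can compare
# the decay of `energy` against the exact exp(-2*gamma*t) envelope.
```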

  2. Earthquake Hazard and Risk in Sub-Saharan Africa: current status of the Global Earthquake model (GEM) initiative in the region

    NASA Astrophysics Data System (ADS)

    Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray

    2013-04-01

    Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is low compared to other earthquake-prone areas of the globe. However, the risk level is high enough to warrant the attention of African governments and the donor community. The latest earthquake hazard map for sub-Saharan Africa was produced in 1999, and an update is long overdue, as construction activity is booming all over sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and is assessing all possible sources of data for the catalogue, as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model for the region. Current progress indicates that the region is more seismically active than previously thought. This demands a coordinated effort by regional experts to systematically compile all available information so as to mitigate earthquake risk in sub-Saharan Africa.

  3. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
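
    The unknown-last-event calculation can be sketched directly: averaging the conditional renewal probability over the stationary distribution of elapsed time, (1 - F(t))/mu, collapses the forecast probability to (1/mu) times the integral of F(t + T) - F(t). The lognormal recurrence model and parameter values below are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import lognorm

def prob_unknown_last(T, mean_ri=200.0, aperiodicity=0.5):
    """P(event within T yr) when the date of the last event is unknown.

    Averages the conditional renewal probability over the stationary
    elapsed-time density (1 - F(t)) / mu, which collapses to
    (1/mu) * integral of [F(t + T) - F(t)] dt. A lognormal recurrence
    model is assumed here for illustration.
    """
    # Lognormal with given mean and coefficient of variation:
    sigma = np.sqrt(np.log(1.0 + aperiodicity**2))
    scale = mean_ri / np.exp(sigma**2 / 2.0)
    F = lambda t: lognorm.cdf(t, s=sigma, scale=scale)
    val, _ = quad(lambda t: F(t + T) - F(t), 0.0, 20.0 * mean_ri, limit=200)
    return val / mean_ri

T = 30.0
print("renewal :", prob_unknown_last(T))
print("poisson :", 1.0 - np.exp(-T / 200.0))
```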

  4. Breit interaction and parity nonconservation in many-electron atoms

    SciTech Connect

    Dzuba, V. A.; Flambaum, V. V.; Safronova, M. S.

    2006-02-15

    We present accurate ab initio nonperturbative calculations of the Breit correction to the parity nonconserving (PNC) amplitudes of the 6s-7s and 6s-5d{sub 3/2} transitions in Cs, the 7s-8s and 7s-6d{sub 3/2} transitions in Fr, the 6s-5d{sub 3/2} transition in Ba{sup +}, the 7s-6d{sub 3/2} transition in Ra{sup +}, and the 6p{sub 1/2}-6p{sub 3/2} transition in Tl. The results for the 6s-7s transition in Cs and the 7s-8s transition in Fr are in good agreement with other calculations. We demonstrate that higher-order many-body corrections to the Breit interaction are especially important for the s-d PNC amplitudes. We confirm good agreement of the PNC measurements for cesium and thallium with the standard model.

  5. Rapid Assessment of Earthquakes with Radar and Optical Geodetic Imaging and Finite Fault Models (Invited)

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Sladen, A.; Simons, M.; Rosen, P. A.; Yun, S.; Li, Z.; Avouac, J.; Leprince, S.

    2010-12-01

    Earthquake responders need to know where the earthquake has caused damage and what is the likely intensity of damage. The earliest information comes from global and regional seismic networks, which provide the magnitude and locations of the main earthquake hypocenter and moment tensor centroid and also the locations of aftershocks. Location accuracy depends on the availability of seismic data close to the earthquake source. Finite fault models of the earthquake slip can be derived from analysis of seismic waveforms alone, but the results can have large errors in the location of the fault ruptures and spatial distribution of slip, which are critical for estimating the distribution of shaking and damage. Geodetic measurements of ground displacements with GPS, LiDAR, or radar and optical imagery provide key spatial constraints on the location of the fault ruptures and distribution of slip. Here we describe the analysis of interferometric synthetic aperture radar (InSAR) and sub-pixel correlation (or pixel offset tracking) of radar and optical imagery to measure ground coseismic displacements for recent large earthquakes, and lessons learned for rapid assessment of future events. These geodetic imaging techniques have been applied to the 2010 Leogane, Haiti; 2010 Maule, Chile; 2010 Baja California, Mexico; 2008 Wenchuan, China; 2007 Tocopilla, Chile; 2007 Pisco, Peru; 2005 Kashmir; and 2003 Bam, Iran earthquakes, using data from ESA Envisat ASAR, JAXA ALOS PALSAR, NASA Terra ASTER and CNES SPOT5 satellite instruments and the NASA/JPL UAVSAR airborne system. For these events, the geodetic data provided unique information on the location of the fault or faults that ruptured and the distribution of slip that was not available from the seismic data and allowed the creation of accurate finite fault source models. In many of these cases, the fault ruptures were on previously unknown faults or faults not believed to be at high risk of earthquakes, so the area and degree of

  6. Interevent times in a new alarm-based earthquake forecasting model

    NASA Astrophysics Data System (ADS)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion, or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
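
    Taking the abstract's definition at face value, the alarm index is the inverse index of dispersion of interevent times. A minimal sketch; the published model's sampling scheme (the ERS algorithm) and normalisation details may differ:

```python
import numpy as np

def moment_ratio(event_times):
    """Moment ratio (MR) alarm index from earthquake interevent times.

    Per the abstract, MR is the inverse of the index of dispersion
    (Fano factor) of interevent times: mean / variance. More regular
    sequences give larger MR; clustered sequences push MR lower.
    """
    dt = np.diff(np.sort(np.asarray(event_times, float)))
    return dt.mean() / dt.var()

# A Poisson sequence has exponential interevent times (var = mean**2),
# so MR ~ 1/mean of the interevent time:
rng = np.random.default_rng(1)
poisson_times = np.cumsum(rng.exponential(1.0, 500))
print(moment_ratio(poisson_times))
```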

  7. Seismic hazard assessment for Myanmar: Earthquake model database, ground-motion scenarios, and probabilistic assessments

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.

    2015-12-01

    We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate ground shaking behaviour in different tectonic regimes, we conducted a series of tests matching modelled ground motions to the felt intensities of earthquakes. The case of the 1975 Bagan earthquake showed that scenarios using the ground motion prediction equations (GMPEs) of Atkinson and Boore (2003) best fit the behaviour of subduction events, while the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) fit crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear velocity down to 30 m depth), derived from analysis of topographic slope and microtremor array measurements, to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as seismic sources there produce earthquakes at short intervals and/or their last events occurred a long time ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazard for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.

  8. Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide

    USGS Publications Warehouse

    Miles, Scott B.; Keefer, David K.

    2007-01-01

    This report describes the complete design of a comprehensive areal model of earthquake-induced landslides (CAMEL), presenting the design process and the technical specification of CAMEL. It also provides a guide to using the CAMEL source code and a template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates the areal landslide concentration (number of landslides per square kilometer) of six aggregated types of earthquake-induced landslides - three types each for rock and soil.

  9. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  10. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

    The present paper applies hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by allowing the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and the confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval calculated.
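
    A toy generator clarifies what the semi-Markov assumption buys: sojourn times in each hidden stress state follow a general (here gamma) distribution rather than the exponential implied by a Markov chain. Everything numeric below is invented for illustration:

```python
import numpy as np

def simulate_hsmm(t_max=500.0, seed=0):
    """Toy two-state hidden semi-Markov process for strong-event rates.

    Hidden states stand for stress-field levels; gamma-distributed
    sojourn times make the chain semi-Markov, and each state emits
    events at its own Poisson rate. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    shape = {0: 2.0, 1: 4.0}   # gamma sojourn shapes per state
    scale = {0: 5.0, 1: 2.0}   # gamma sojourn scales (years)
    rate = {0: 0.05, 1: 0.4}   # strong-event rate (events/yr)
    t, state, record = 0.0, 0, []
    while t < t_max:
        dwell = rng.gamma(shape[state], scale[state])
        record.append((state, dwell, rng.poisson(rate[state] * dwell)))
        t += dwell
        state = 1 - state      # deterministic two-state switching
    return record

for state, dwell, n_events in simulate_hsmm()[:5]:
    print(f"state {state}: {dwell:5.1f} yr, {n_events} strong events")
```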

  11. Earthquake vulnerability and risk modeling for the area of Greater Cairo, Egypt

    NASA Astrophysics Data System (ADS)

    Tyagunov, S.; Abdel-Rahman, K.; El-Hady, S.; El-Ela Mohamed, A.; Stempniewski, L.; Liesch, T.; Zschau, J.

    2009-04-01

    Egypt is a country of low-to-moderate earthquake hazard. However, the earthquake risk potential (in terms of both probable economic and human losses) is rather high. The population of Egypt (according to the Central Agency for Public Mobilisation and Statistics - CAPMAS) is about 80 million. At the same time, the distribution of the population in the country is far from uniform. In particular, the area of Greater Cairo attracts migrants from the whole country and the metropolitan area faces the problem of unplanned urbanization. Due to the high density of population and the vulnerability of the existing building stock, the potential for earthquake damage and loss in the area is a problem of great concern. The area under study covers 43 administrative districts of Greater Cairo (including the City of Cairo, El-Giza and Shubra El-Kheima), where field investigations were conducted aimed at identifying representative building types and assessing their seismic vulnerability. On the basis of the collected information, combining the findings of the field investigations in different districts with available statistical data on the distribution of buildings in the districts, we constructed vulnerability composition models (in terms of the vulnerability classes of the European Macroseismic Scale, EMS-98) for all the considered districts of Greater Cairo. The vulnerability models are applicable to the analysis of potential damage and losses from damaging earthquakes in the region, including zonation of the seismic risk in the area, generation of probable earthquake scenarios, and rapid damage and loss assessment for the purposes of emergency management.

  12. How Fault Geometry Affects Dynamic Rupture Models of Earthquakes in San Gorgonio Pass, CA

    NASA Astrophysics Data System (ADS)

    Tarnowski, J. M.; Oglesby, D. D.; Cooke, M. L.; Kyriakopoulos, C.

    2015-12-01

    We use 3D dynamic finite element models to investigate potential rupture paths of earthquakes propagating along faults in the western San Gorgonio Pass (SGP) region of California. The SGP is a structurally complex area along the southern California portion of the San Andreas fault system (SAF). It has long been suspected that this structural knot, which consists of the intersection of various non-planar strike-slip and thrust fault segments, may inhibit earthquake rupture propagation between the San Bernardino and Banning strands of the SAF. The above condition may limit the size of potential earthquakes in the region. Our focus is on the San Bernardino strand of the SAF and the San Gorgonio Pass Fault zone, where the fault connectivity is not well constrained. We use the finite element code FaultMod (Barall, 2009) to investigate how fault connectivity, nucleation location, and initial stresses influence rupture propagation and ground motion, including the likelihood of through-going rupture in this region. Preliminary models indicate that earthquakes that nucleate on the San Bernardino strand and propagate southward do not easily transfer rupture to the thrust faults of the San Gorgonio Pass fault zone. However, under certain assumptions, earthquakes that nucleate along the San Gorgonio Pass fault zone can transfer rupture to the San Bernardino strand.

  13. Modeling a Wide Spectrum of Fault Slip Behavior in Cascadia With the Earthquake Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Richards-Dinger, K. B.; Dieterich, J. H.

    2014-12-01

    Through the judicious use of approximations, earthquake simulators hope to accurately model the evolution of fault slip over long time periods (tens of thousands to hundreds of thousands of years) in complicated regional- to plate-boundary-scale systems of faults. RSQSim is one such simulator which, through its use of an approximate form of rate- and state-dependent friction, is able to capture the observed short-term power-law clustering behavior of earthquakes as well as model the two dominant observed modes of non-seismic slip: steady creep and slow slip events (SSEs). The creeping sections of the fault system are modeled as always at steady state, such that the slip speed is a simple function of the applied stresses, while SSE-generating sections use (an approximate form of) the mechanism of Shibazaki and Iio (2003). The work we will present here on the Cascadia subduction system is part of a larger project to perform unified simulations of the entire western US plate boundary region. In it we use realistic plate interface (and upper-plate fault system) geometries and distributions of frictional properties to address issues such as: the relationship between the short-term phenomena of earthquake triggering and clustering and the long-term recurrence of large earthquakes implied by steady tectonic forcing; the interaction between fault sections with different modes of slip prior to and in response to earthquakes (specifically including possible interactions between SSEs and large subduction earthquakes); interactions between the main subduction thrust and upper-plate faults; and the effects of quenched versus dynamical heterogeneities on rupture processes.

  14. Bounded solutions for nonconserving-parity pseudoscalar potentials

    SciTech Connect

    Castro, Antonio S. de; Malheiro, Manuel; Lisboa, Ronai

    2004-12-02

    The Dirac equation is analyzed for nonconserving-parity pseudoscalar radial potentials in 3+1 dimensions. It is shown that, despite the nonconservation of parity, this general problem can be reduced to a Sturm-Liouville problem for nonrelativistic fermions in spherically symmetric effective potentials. The search for bounded solutions is carried out for the power-law and Yukawa potentials. The effective-potential methodology allows us to conclude that the existence of bound-state solutions depends on whether the potential leads to a definite effective potential-well structure or to an effective potential less singular than -1/(4r^2).

  15. Gutenberg-Richter and characteristic earthquake behavior in Simple Models of Heterogeneous Faults

    NASA Astrophysics Data System (ADS)

    Dahmen, K. A.; Fisher, D. S.; Ben-Zion, Y.; Ertas, D.; Ramanathan, S.

    2001-05-01

    The statistics of earthquakes has been a subject of research for a long time. One spectacular feature is the wide range of observed earthquake sizes, spanning over ten orders of magnitude. Gutenberg and Richter found that the size distribution of regional earthquakes follows a power law over the entire range of observed events. Recently, enough data have been collected to extract statistics on individual narrow earthquake fault zones. Wesnousky and coworkers found that fault zones with highly irregular geometry, such as the San Jacinto fault in California, which has many offsets and branches, display universal Gutenberg-Richter type power-law statistics over the entire range of observed magnitudes. On the other hand, the available data show that faults with more regular geometry (presumably generated progressively with increasing cumulative slip), such as the San Andreas fault in California, display power-law distributions only for small events, which occur between approximately periodically recurring events of a much larger characteristic size, which rupture the entire fault. There are practically no earthquakes of intermediate magnitudes observed on these faults. Two important questions emerge immediately: (1) Why does one find earthquakes of all sizes even on a single earthquake fault? (One might have expected, for example, a typical size with small variations instead.) (2) Why do not all fault zones display the same distribution of earthquake magnitudes, but rather separate into the two classes described? We have studied simple models for ruptures along a heterogeneous earthquake fault zone, focusing on the interplay between the roles of disorder and dynamical effects. A class of models was found to operate naturally at a critical point whose properties yield power-law scaling of earthquake statistics. The analytically computed critical exponent for the power-law distribution lies well within the error bars of the observed Gutenberg-Richter power law
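
    The Gutenberg-Richter side of this comparison is routinely quantified with the Aki (1965) maximum-likelihood b-value, sketched below on a synthetic catalogue; the estimator is standard, the data are invented.

```python
import numpy as np

def gr_b_value(magnitudes, m_c):
    """Maximum-likelihood b-value (Aki 1965) with its standard error.

    b = log10(e) / (mean(M) - m_c) for magnitudes M >= completeness m_c.
    """
    m = np.asarray(magnitudes, float)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - m_c)
    return b, b / np.sqrt(len(m))

# Synthetic Gutenberg-Richter catalogue with true b = 1: magnitudes
# above m_c are exponential with rate b * ln(10).
rng = np.random.default_rng(3)
mags = 2.0 + rng.exponential(1.0 / np.log(10.0), 20000)
print(gr_b_value(mags, m_c=2.0))  # ~ (1.0, 0.007)
```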

  16. GEM1: First-year modeling and IT activities for the Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Anderson, G.; Giardini, D.; Wiemer, S.

    2009-04-01

    GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global

  17. Toward a Global Model for Predicting Earthquake-Induced Landslides in Near-Real Time

    NASA Astrophysics Data System (ADS)

    Nowicki, M. A.; Wald, D. J.; Hamburger, M. W.; Hearne, M.; Thompson, E.

    2013-12-01

    We present a newly developed statistical model for estimating the distribution of earthquake-triggered landslides in near-real time, which is designed for use in the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) and ShakeCast systems. We use standardized estimates of ground shaking from the USGS ShakeMap Atlas 2.0 to develop an empirical landslide probability model by combining shaking estimates with broadly available landslide susceptibility proxies, including topographic slope, surface geology, and climatic parameters. While the initial model was based on four earthquakes for which digitally mapped landslide inventories and well-constrained ShakeMaps are available - the Guatemala (1976), Northridge, California (1994), Chi-Chi, Taiwan (1999), and Wenchuan, China (2008) earthquakes - our improved model includes observations from approximately ten other events from a variety of tectonic and geomorphic settings for which we have obtained landslide inventories. Using logistic regression, this database is used to build a predictive model of the probability of landslide occurrence. We assess the performance of the regression model using statistical goodness-of-fit metrics to determine which combination of the tested landslide proxies provides the optimum prediction of observed landslides while minimizing 'false alarms' in non-landslide zones. Our initial results indicate strong correlations with peak ground acceleration and maximum slope, and weaker correlations with surface geological and soil wetness proxies. In terms of the original four events included, the global model predicts landslides most accurately when applied to the Wenchuan and Chi-Chi events, and less accurately when applied to the Northridge and Guatemala datasets. Combined with near-real time ShakeMaps, the model can be used to make generalized predictions of whether or not landslides are likely to occur (and if so, where) for future earthquakes around the globe, and these estimates
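
    The regression machinery described is standard and easy to sketch. The snippet below fits a logistic model to synthetic grid cells with invented PGA, slope and wetness predictors; the coefficients and data are placeholders, not the USGS model's calibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the described workflow: grid cells with
# ShakeMap PGA (g), slope (deg) and a wetness index as predictors of
# mapped landslide occurrence.
rng = np.random.default_rng(7)
n = 5000
pga = rng.uniform(0.05, 1.2, n)
slope = rng.uniform(0.0, 45.0, n)
wetness = rng.uniform(0.0, 1.0, n)
logit = -6.0 + 4.0 * pga + 0.12 * slope + 0.8 * wetness  # invented truth
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([pga, slope, wetness])
model = LogisticRegression().fit(X, y)
p_landslide = model.predict_proba(X)[:, 1]  # per-cell probabilities
print(model.coef_, model.intercept_)
```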

  18. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically, explosions have been discriminated from natural earthquakes using regional amplitude-ratio techniques, such as P/S ratios in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
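
    The basic discriminant is a band-passed P/S amplitude ratio. A minimal sketch, with window length and frequency band as illustrative choices; operational use applies MDAC-style distance and magnitude corrections before forming the ratio:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def p_s_ratio(trace, dt, t_p, t_s, win=5.0, band=(6.0, 8.0)):
    """Band-passed P/S RMS amplitude ratio.

    trace : waveform samples; dt : sample interval (s);
    t_p, t_s : P and S arrival times (s) relative to trace start.
    """
    sos = butter(4, band, btype="bandpass", fs=1.0 / dt, output="sos")
    x = sosfiltfilt(sos, np.asarray(trace, float))
    def rms(t0):
        i0, i1 = int(t0 / dt), int((t0 + win) / dt)
        return np.sqrt(np.mean(x[i0:i1] ** 2))
    return rms(t_p) / rms(t_s)
```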

  19. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and the moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.

  20. The Negative Binomial Distribution as a Renewal Model for the Recurrence of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Tejedor, Alejandro; Gómez, Javier B.; Pacheco, Amalio F.

    2015-01-01

    The negative binomial distribution is presented as the waiting-time distribution of a cyclic Markov model. This cycle simulates the seismic cycle of a fault. As an example, this model, which can describe recurrences with aperiodicities between 0 and 0.5, is used to fit the Parkfield, California earthquake series on the San Andreas Fault. The forecasting performance of the model is expressed in terms of error diagrams and compared with other recurrence models from the literature.
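
    A moment-matching fit of the negative binomial family to a recurrence series takes a few lines. The sketch below uses the commonly quoted Parkfield interval series for illustration; the paper's cyclic-Markov parameterisation may differ in detail:

```python
import numpy as np

def fit_negbin_moments(intervals):
    """Moment-matching fit of a negative binomial renewal model.

    For NB(r, p) counting failures, mean = r(1-p)/p and
    var = r(1-p)/p**2, so p = mean/var and r = mean*p/(1-p). The
    aperiodicity (coefficient of variation of recurrence) is then
    1/sqrt(r*(1-p)); the abstract's construction covers 0 to 0.5.
    """
    x = np.asarray(intervals, float)
    mean, var = x.mean(), x.var(ddof=1)
    p = mean / var                 # requires var > mean (overdispersion)
    r = mean * p / (1.0 - p)
    return r, p, 1.0 / np.sqrt(r * (1.0 - p))

# Parkfield recurrence intervals (yr) from the 1857-2004 mainshocks:
intervals = [24, 20, 21, 12, 32, 38]
print(fit_negbin_moments(intervals))
```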

  1. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the east-west-trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain, and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We performed modelling of wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauge records. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.

  2. Joint earthquake source inversions using seismo-geodesy and 3-D earth models

    NASA Astrophysics Data System (ADS)

    Weston, J.; Ferreira, A. M. G.; Funning, G. J.

    2014-08-01

    A joint earthquake source inversion technique is presented that uses InSAR and long-period teleseismic data, and, for the first time, takes 3-D Earth structure into account when modelling seismic surface and body waves. Ten average source parameters (Moment, latitude, longitude, depth, strike, dip, rake, length, width and slip) are estimated; hence, the technique is potentially useful for rapid source inversions of moderate magnitude earthquakes using multiple data sets. Unwrapped interferograms and long-period seismic data are jointly inverted for the location, fault geometry and seismic moment, using a hybrid downhill Powell-Monte Carlo algorithm. While the InSAR data are modelled assuming a rectangular dislocation in a homogeneous half-space, seismic data are modelled using the spectral element method for a 3-D earth model. The effect of noise and lateral heterogeneity on the inversions is investigated by carrying out realistic synthetic tests for various earthquakes with different faulting mechanisms and magnitude (Mw 6.0-6.6). Synthetic tests highlight the improvement in the constraint of fault geometry (strike, dip and rake) and moment when InSAR and seismic data are combined. Tests comparing the effect of using a 1-D or 3-D earth model show that long-period surface waves are more sensitive than long-period body waves to the change in earth model. Incorrect source parameters, particularly incorrect fault dip angles, can compensate for systematic errors in the assumed Earth structure, leading to an acceptable data fit despite large discrepancies in source parameters. Three real earthquakes are also investigated: Eureka Valley, California (1993 May 17, Mw 6.0), Aiquile, Bolivia (1998 February 22, Mw 6.6) and Zarand, Iran (2005 May 22, Mw 6.5). These events are located in different tectonic environments and show large discrepancies between InSAR and seismically determined source models. Despite the 40-50 km discrepancies in location between previous geodetic and
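
    The hybrid optimiser described (random starting models polished by derivative-free Powell descent) can be sketched generically; the toy misfit below stands in for the joint InSAR-plus-seismic objective:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_powell_mc(misfit, bounds, n_starts=50, seed=0):
    """Hybrid downhill Powell / Monte Carlo search.

    Starting models are drawn uniformly within parameter bounds, each
    is polished with derivative-free Powell descent, and the best local
    minimum is kept. Parameter names and counts are placeholders.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)  # Monte Carlo starting model
        res = minimize(misfit, x0, method="Powell",
                       bounds=list(zip(lo, hi)))
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# Toy multimodal misfit in two "source parameters":
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + np.sin(5 * x[0]) ** 2
print(hybrid_powell_mc(f, bounds=[(-5, 5), (-5, 5)]))
```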

  3. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significantly better performance for
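    A minimal sketch of the spatial component just described: isotropic Gaussian kernel densities of past epicentres and fault-trace points, combined with a fixed weight. The bandwidth, the weight w, the total rate and the toy coordinates are illustrative assumptions; in the actual model these are optimized with retrospective likelihood experiments and the fault density is moment-rate weighted.

    ```python
    import numpy as np

    def kernel_density(points, grid, bandwidth_km):
        """Isotropic Gaussian kernel density of point locations on a grid."""
        d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        dens = np.exp(-0.5 * d2 / bandwidth_km**2).sum(axis=1)
        return dens / dens.sum()          # spatial probability mass per cell

    # Toy inputs (km coordinates): past epicentres, fault-trace sample points
    # and a 5-km forecast grid.
    rng = np.random.default_rng(1)
    quakes = rng.uniform(0.0, 100.0, (200, 2))
    faults = rng.uniform(0.0, 100.0, (50, 2))
    gx, gy = np.meshgrid(np.arange(0.0, 100.0, 5.0), np.arange(0.0, 100.0, 5.0))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    w = 0.6                                # weight of the seismicity density
    density = w * kernel_density(quakes, grid, 10.0) \
            + (1.0 - w) * kernel_density(faults, grid, 10.0)

    total_rate = 5.0                       # annual rate of M >= Mmin from the G-R fit
    cell_rates = total_rate * density
    print("max cell rate: %.4f events/yr" % cell_rates.max())
    ```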

  4. Global database of InSAR earthquake source models: A tool for independently assessing seismic catalogues

    NASA Astrophysics Data System (ADS)

    Ferreira, A. M.; Weston, J. M.; Funning, G. J.

    2011-12-01

    Earthquake source models are routinely determined using seismic data and are reported in many seismic catalogues, such as the Global Centroid Moment Tensor (GCMT) catalogue. Recent advances in space geodesy, such as InSAR, have enabled the estimation of earthquake source parameters from the measurement of deformation of the Earth's surface, independently of seismic information. The absence of an earthquake catalogue based on geodetic data prompted the compilation of a large InSAR database of CMT parameters from the literature (Weston et al., 2011, hereafter referred to as the ICMT database). Information provided in published InSAR studies of earthquakes is used to obtain earthquake source parameters, and equivalent CMT parameters. Multiple studies of the same earthquake are included in the database, as they are valuable to assess uncertainties in source models. Here, source parameters for 70 earthquakes in an updated version of the ICMT database are compared with those reported in global and regional seismic catalogues. There is overall good agreement between parameters, particularly in fault strike, dip and rake. However, InSAR centroid depths are systematically shallower (5-10 km) than those in the EHB catalogue, but this is reduced for depths from inversions of InSAR data that use a layered half-space. Estimates of the seismic moment generally agree well between the two datasets, but for thrust earthquakes there is a slight tendency for the InSAR-determined seismic moment to be larger. Centroid locations from the ICMT database are in good agreement with those from regional seismic catalogues with a median distance of ~6 km between them, which is smaller than for comparisons with global catalogues (17.0 km and 9.2 km for the GCMT and ISC catalogues, respectively). Systematic tests of GCMT-like inversions have shown that similar mislocations occur for several different degree 20 Earth models (Ferreira et al., 2011), suggesting that higher resolution Earth models

  5. Earthquake triggering in the peri-adriatic regions induced by stress diffusion: insights from numerical modelling

    NASA Astrophysics Data System (ADS)

    D'Onza, F.; Viti, M.; Mantovani, E.; Albarello, D.

    2003-04-01

    Significant evidence suggests that major earthquakes in the peri-Adriatic Balkan zones may influence the seismicity pattern in the Italian area. In particular, a seismic correlation has been recognized between major earthquakes in the southern Dinaric belt and those in southern Italy. It is widely recognized that such regularities may be an effect of postseismic relaxation triggered by strong earthquakes. In this note, we describe an attempt to quantitatively investigate, by numerical modelling, the reliability of the above interpretation. In particular, we have explored the possibility of explaining the last example of the presumed correlation (triggering event: April 1979 Montenegro earthquake, MS=6.7; induced event: November 1980 Irpinia event, MS=6.9) as an effect of postseismic relaxation through the Adriatic plate. The triggering event is modelled by imposing a sudden dislocation on the Montenegro seismic fault, taking into account the fault parameters (length and average slip) recognized from seismological observations. The perturbation induced by the seismic source in the neighbouring lithosphere is obtained by the Elsasser diffusion equation for an elastic lithosphere coupled with a viscous asthenosphere. The results obtained by numerical experiments indicate that the strain regime induced by the Montenegro event in southern Italy is compatible with the tensional strain field observed there, that the amplitude of the induced strain is significantly higher than that induced by Earth tides, and that this amplitude is comparable with the strain perturbation recognized as responsible for earthquake triggering. The time delay between the triggering and the induced
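    In one dimension the Elsasser model reduces to a diffusion equation for the stress perturbation. The explicit finite-difference sketch below only illustrates that mechanism; the diffusivity, domain, boundary treatment and initial stress step are assumed values, not those of the study.

    ```python
    import numpy as np

    # d(sigma)/dt = D * d2(sigma)/dx2, the 1-D form the Elsasser model takes
    # for an elastic lithosphere over a viscous asthenosphere. D is assumed.
    D = 1000.0                      # km^2 / yr (illustrative)
    L, nx = 800.0, 161              # domain length (km), grid points
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D            # stable explicit time step
    x = np.linspace(0.0, L, nx)

    sigma = np.exp(-0.5 * (x / 30.0) ** 2)   # coseismic stress step near x = 0

    t, t_end = 0.0, 1.5             # ~ Montenegro (1979) to Irpinia (1980)
    while t < t_end:
        sigma[1:-1] += D * dt / dx**2 * (sigma[2:] - 2 * sigma[1:-1] + sigma[:-2])
        sigma[0] = sigma[1]         # no-flux boundaries
        sigma[-1] = sigma[-2]
        t += dt

    i = np.argmin(np.abs(x - 400.0))
    print("relative stress at 400 km after %.1f yr: %.3f" % (t, sigma[i]))
    ```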

  6. Modeling of the coseismic electromagnetic fields observed during the 2004 Mw 6.0 Parkfield earthquake

    NASA Astrophysics Data System (ADS)

    Gao, Yongxin; Harris, Jerry M.; Wen, Jian; Huang, Yihe; Twardzik, Cedric; Chen, Xiaofei; Hu, Hengshan

    2016-01-01

    The coseismic electromagnetic signals observed during the 2004 Mw 6 Parkfield earthquake are simulated using electrokinetic theory. By using a finite fault source model obtained via kinematic inversion, we calculate the electric and magnetic responses to the earthquake rupture. The result shows that the synthetic electric signals agree with the observed data in both amplitude and wave shape, especially for the early portions of the records (first 9 s) after the earthquake, supporting the electrokinetic effect as a plausible mechanism for the generation of the coseismic electric fields. More work is needed to explain the magnetic fields and the later portions of the electric fields. Analysis shows that the coseismic electromagnetic (EM) signals are sensitive to both the material properties at the location of the EM sensors and the electrochemical heterogeneity in the vicinity of the EM sensors, and can be used to characterize the underground electrochemical properties.

  7. Precursory measure of interoccurrence time associated with large earthquakes in the Burridge-Knopoff model

    SciTech Connect

    Hasumi, Tomohiro

    2008-11-13

    We studied the statistical properties of interoccurrence times, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the critical state of a natural fault system changes from subcritical to supercritical prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
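    A sketch of the proposed measure: fit the empirical survivor function with a Zipf-Mandelbrot-type power law and track the misfit. The parameterisation S(tau) = (1 + tau/c)^(-b) and the Pareto-distributed toy interoccurrence times are assumptions for illustration, not the paper's exact form or data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # One common parameterisation of the Zipf-Mandelbrot survivor function.
    def zm_survivor(tau, b, c):
        return (1.0 + tau / c) ** (-b)

    # Toy interoccurrence times; in the application these are waits between
    # successive model events above a magnitude threshold.
    rng = np.random.default_rng(2)
    tau = np.sort(rng.pareto(1.5, 2000))
    s_emp = 1.0 - np.arange(1, tau.size + 1) / tau.size   # empirical survivor

    popt, _ = curve_fit(zm_survivor, tau, s_emp, p0=[1.5, 1.0],
                        bounds=([0.1, 1e-3], [10.0, 100.0]))
    rms = np.sqrt(np.mean((s_emp - zm_survivor(tau, *popt)) ** 2))
    print("b=%.2f c=%.2f rms misfit=%.4f" % (popt[0], popt[1], rms))
    ```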

  8. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold, and the remaining ones, considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by failure processes that allow a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
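    To make the bathtub-shaped hazard concrete, the sketch below uses an additive Weibull hazard, one member of the generalized Weibull family: a decreasing branch for aftershocks of the previous leader plus an increasing branch for foreshocks of the next. All parameter values are illustrative, not fitted to the Italian catalogue.

    ```python
    import numpy as np

    def weibull_hazard(t, k, lam):
        # h(t) = (k / lam) * (t / lam)**(k - 1)
        return (k / lam) * (t / lam) ** (k - 1.0)

    # Additive Weibull hazard: k1 < 1 gives a decreasing (aftershock) branch,
    # k2 > 1 an increasing (foreshock) branch; their sum is bathtub-shaped.
    def bathtub_hazard(t, k1=0.5, lam1=2.0, k2=3.0, lam2=20.0):
        return weibull_hazard(t, k1, lam1) + weibull_hazard(t, k2, lam2)

    t = np.linspace(0.01, 30.0, 300)   # years since the last leader event
    h = bathtub_hazard(t)
    print("hazard minimum at t = %.1f yr" % t[np.argmin(h)])
    ```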

  9. A model for earthquakes near Palisades Reservoir, southeast Idaho

    USGS Publications Warehouse

    Schleicher, David

    1975-01-01

    The Palisades Reservoir seems to be triggering earthquakes: epicenters are concentrated near the reservoir, and quakes are concentrated in spring, when the reservoir level is highest or is rising most rapidly, and in fall, when the level is lowest. Both spring and fall quakes appear to be triggered by minor local stresses superposed on regional tectonic stresses; faulting is postulated to occur when the effective normal stress across a fault is decreased by a local increase in pore-fluid pressure. The spring quakes tend to occur when the reservoir level suddenly rises: increased pore pressure pushes apart the walls of the graben flooded by the reservoir, thus decreasing the effective normal stress across faults in the graben. The fall quakes tend to occur when the reservoir level is lowest: water that gradually infiltrated poorly permeable (fault-gouge?) zones during high reservoir stands is then under anomalously high pressure, which decreases the effective normal stress across faults in the poorly permeable zones.
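    The triggering mechanism invoked here is the standard effective-stress form of the Coulomb criterion, in which a local pore-pressure rise lowers the effective normal stress and moves a fault toward failure. A minimal numerical illustration with assumed stress values (not measurements from Palisades):

    ```python
    # Coulomb failure function with pore pressure: slip is promoted when
    # tau - mu * (sigma_n - p) - c >= 0. All values are illustrative (MPa).
    def coulomb_failure(tau, sigma_n, p, mu=0.6, cohesion=0.0):
        """Less negative / positive values mean the fault is closer to failure."""
        return tau - mu * (sigma_n - p) - cohesion

    print(coulomb_failure(10.0, 20.0, 2.0))   # ambient pore pressure
    print(coulomb_failure(10.0, 20.0, 2.5))   # after a 0.5 MPa pore-pressure rise
    ```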

  10. Steady-state statistical mechanics of model and real earthquakes (Invited)

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Naylor, M.

    2010-12-01

    We derive an analytical expression for entropy production in earthquake populations based on Dewar’s formulation, including flux (tectonic forcing) and source (earthquake population) terms, and apply it to the Olami-Feder-Christensen (OFC) numerical model for earthquake dynamics. Assuming the commonly-observed power-law rheology between driving stress and remote strain rate, we test the hypothesis that maximum entropy production (MEP) is a thermodynamic driver for self-organized ‘criticality’ (SOC) in the model. MEP occurs when the global elastic strain is near-critical but strictly sub-critical, with small relative fluctuations in macroscopic strain energy expressed by a low seismic efficiency, and broad-bandwidth power-law scaling of frequency and rupture area. These phenomena, all as observed in natural earthquake populations, are hallmarks of the broad conceptual definition of SOC, which to date has often in practice included self-organizing systems in a near- but strictly sub-critical state. In contrast, the precise critical point represents a state of minimum entropy production in the model. In the MEP state the strain field retains some memory of past events, expressed as coherent ‘domains’, implying a degree of predictability, albeit strongly limited in practice by the proximity to criticality, our inability to map the stress field at a resolution equivalent to the numerical model, and finite temporal sampling effects in real data.

  11. Neural network models for earthquake magnitude prediction using multiple seismicity indicators.

    PubMed

    Panakkat, Ashif; Adeli, Hojjat

    2007-02-01

    Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region. PMID:17393560
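    The four evaluation measures follow directly from a binary contingency table; the sketch below computes them, equating the R score with the standard true skill score as the abstract does. The counts are invented for illustration.

    ```python
    # Contingency-table skill measures: a = hits, b = false alarms,
    # c = misses, d = correct negatives.
    def skill_scores(a, b, c, d):
        pod = a / (a + c)                    # probability of detection
        far = b / (a + b)                    # false alarm ratio
        bias = (a + b) / (a + c)             # frequency bias
        tss = a / (a + c) - b / (b + d)      # true skill score (R score)
        return pod, far, bias, tss

    # Invented counts for one magnitude-threshold experiment:
    print("POD=%.2f FAR=%.2f bias=%.2f TSS=%.2f" % skill_scores(18, 7, 5, 70))
    ```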

  12. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.

  13. Nonconservative current-driven dynamics: beyond the nanoscale.

    PubMed

    Cunningham, Brian; Todorov, Tchavdar N; Dundas, Daniel

    2015-01-01

    Long metallic nanowires combine crucial factors for nonconservative current-driven atomic motion. These systems have degenerate vibrational frequencies, clustered about a Kohn anomaly in the dispersion relation, that can couple under current to form nonequilibrium modes of motion growing exponentially in time. Such motion is made possible by nonconservative current-induced forces on atoms, and we refer to it generically as the waterwheel effect. Here the connection between the waterwheel effect and the stimulated directional emission of phonons propagating along the electron flow is discussed in an intuitive manner. Nonadiabatic molecular dynamics show that waterwheel modes self-regulate by reducing the current and by populating modes in nearby frequency, leading to a dynamical steady state in which nonconservative forces are counter-balanced by the electronic friction. The waterwheel effect can be described by an appropriate effective nonequilibrium dynamical response matrix. We show that the current-induced parts of this matrix in metallic systems are long-ranged, especially at low bias. This nonlocality is essential for the characterisation of nonconservative atomic dynamics under current beyond the nanoscale. PMID:26665086
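    The essence of the waterwheel instability can be sketched with two near-degenerate modes whose effective response matrix acquires an antisymmetric (curl) part under current: the first-order system then has an eigenvalue with positive real part, i.e., an exponentially growing mode. The matrix and numbers below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Minimal two-mode sketch: equations of motion x'' = -K x with a
    # NON-symmetric effective response matrix K; its antisymmetric part
    # encodes the current-induced, nonconservative forces.
    omega2 = 1.0          # squared frequency of the near-degenerate pair
    theta = 0.05          # nonconservative (curl) coupling under current
    K = np.array([[omega2, theta],
                  [-theta, omega2]])

    # First-order form d/dt [x, v] = A [x, v]; Re(eigenvalue) > 0 => growth.
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-K, np.zeros((2, 2))]])
    growth = np.linalg.eigvals(A).real.max()
    print("max growth rate: %.4f (positive => waterwheel mode)" % growth)
    ```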

  14. Quasi-hidden Markov model and its applications in cluster analysis of earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Wu, Zhengxiao

    2011-12-01

    We identify a broad class of models, quasi-hidden Markov models (QHMMs), which include hidden Markov models (HMMs) as special cases. Applying the QHMM framework, this paper studies how an earthquake cluster propagates statistically. Two QHMMs are used to describe two different propagating patterns. The "mother-and-kids" model regards the first shock in an earthquake cluster as the "mother" and the aftershocks as "kids," which occur in a neighborhood centered on the mother. In the "domino" model, however, the next aftershock strikes in a neighborhood centered on the most recent previous earthquake in the cluster, so aftershocks act like dominoes. As the likelihood of QHMMs can be efficiently computed via the forward algorithm, likelihood-based model selection criteria can be calculated to compare these two models. We demonstrate this procedure using data from the central New Zealand region. For this data set, the mother-and-kids model yields a higher likelihood as well as smaller AIC and BIC. In other words, in the aforementioned area the next aftershock is more likely to occur near the first shock than near the latest aftershock in the cluster. This provides an answer, though not an entirely satisfactory one, to the question "where will the next aftershock be?". The asymptotic consistency of the model selection procedure is duly established: as the number of observations goes to infinity, the procedure picks, with probability one, the model with the smaller deviation from the true model (in terms of relative entropy rate).
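    For reference, the forward algorithm that makes the likelihood (and hence AIC/BIC) computable is sketched below for a plain discrete HMM; in the QHMM application the emission law additionally conditions on past observations, but the same scaled recursion and model-selection comparison apply. The toy model and sequence are invented.

    ```python
    import numpy as np

    def forward_loglik(pi, A, B, obs):
        """Scaled forward algorithm: log-likelihood of a discrete HMM.

        pi: initial state probabilities (n,), A: transition matrix (n, n),
        B: emission probabilities (n, m), obs: observation symbol indices.
        """
        alpha = pi * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum()
            loglik += np.log(s)
            alpha = alpha / s
        return loglik

    # Invented two-state toy model and observation sequence.
    pi = np.array([0.5, 0.5])
    A = np.array([[0.9, 0.1], [0.3, 0.7]])
    B = np.array([[0.8, 0.2], [0.2, 0.8]])
    obs = np.array([0, 0, 1, 0, 1, 1, 0, 0, 0, 1])

    ll = forward_loglik(pi, A, B, obs)
    k, n = 4, obs.size                   # approximate free-parameter count
    print("logL=%.3f AIC=%.3f BIC=%.3f" % (ll, 2 * k - 2 * ll, k * np.log(n) - 2 * ll))
    ```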

  15. Modelling of Strong Ground Motions from 1991 Uttarkashi, India, Earthquake Using a Hybrid Technique

    NASA Astrophysics Data System (ADS)

    Kumar, Dinesh; Teotia, S. S.; Sriram, V.

    2011-10-01

    We present a simple and efficient hybrid technique for simulating earthquake strong ground motion. This procedure is a combination of the envelope function technique (Midorikawa et al., Tectonophysics 218:287-295, 1993) and the composite source model (Zeng et al., Geophys Res Lett 21:725-728, 1994). The first step of the technique is the construction of the envelope function of the large earthquake by superposition of envelope functions for smaller earthquakes. The smaller earthquakes (sub-events) of varying sizes are distributed randomly on the fault plane, instead of distributing same-size sub-events uniformly. The accelerogram of the large event is then obtained by combining the envelope function with band-limited white noise. The low-cut frequency of the band-limited white noise is chosen to correspond to the corner frequency for the target earthquake magnitude, and the high-cut to Boore's fmax or a desired frequency for the simulation. Below the low-cut frequency, the fall-off slope is 2, in accordance with the ω² earthquake source model. The technique requires parameters such as fault area, orientation of the fault, hypocenter, size of the sub-events, stress drop, rupture velocity, duration, source-site distance and attenuation parameter. The fidelity of the technique has been demonstrated by successful modeling of the 1991 Uttarkashi, Himalaya earthquake (Ms 7). The acceptable locations of the sub-events on the fault plane have been determined using a genetic algorithm. The main characteristics of the simulated accelerograms, comprising the duration of strong ground shaking, peak ground acceleration and Fourier and response spectra, are, in general, in good agreement with those observed at most of the sites. At some of the sites the simulated accelerograms differ from the observed ones by a factor of 2-3. The local site geology and topography may cause such a difference, as these effects have not been considered in the present technique.
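    A stripped-down sketch of the second step: shaping band-limited white noise with an empirical envelope e(t) = a·t·exp(-b·t). For brevity the spectrum is simply zeroed below the low-cut rather than given the ω² fall-off, and the corner values, envelope constants and duration are all assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt, n = 0.01, 4096
    t = np.arange(n) * dt

    # Band-limited white noise built in the frequency domain: flat between
    # f_low (target-event corner frequency) and f_high (fmax), zero outside.
    f = np.fft.rfftfreq(n, dt)
    spec = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)
    f_low, f_high = 0.1, 15.0              # Hz, illustrative corners
    spec[(f < f_low) | (f > f_high)] = 0.0
    noise = np.fft.irfft(spec, n)
    noise /= np.abs(noise).max()

    # Empirical envelope e(t) = a * t * exp(-b * t): sharp rise, slow decay.
    # In the hybrid method this envelope is the superposition of randomly
    # distributed sub-event envelopes on the fault plane.
    a, b = np.e * 0.5, 0.5                 # unit peak near t = 2 s
    envelope = a * t * np.exp(-b * t)

    accel = envelope * noise               # synthetic accelerogram (arb. units)
    print("peak amplitude (arbitrary units): %.3f" % np.abs(accel).max())
    ```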

  16. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike slip faults, and is underlain by the currently locked west-dipping subduction interface between the down-going Pacific Plate and the over-riding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations of large historical subduction events worldwide translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki (Japan) earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground

  17. Simulation of the Burridge-Knopoff Model of Earthquakes with Variable Range Stress Transfer

    SciTech Connect

    Xia Junchao; Gould, Harvey; Klein, W.; Rundle, J.B.

    2005-12-09

    Simple models of earthquake faults are important for understanding the mechanisms behind their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods. Although cellular automaton models have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff model, which includes more realistic friction forces and inertia. We find that the latter model with long-range stress transfer exhibits qualitatively different behavior from both the long-range cellular automaton models and the usual Burridge-Knopoff model with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of earthquakes and other driven dissipative systems.
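    For orientation, below is a crude explicit integration of a uniform 1-D Burridge-Knopoff chain with a regularized velocity-weakening friction law. Production BK codes resolve the stick-slip switching exactly, and the variable-range version replaces the nearest-neighbour coupling with a long-range stress-transfer kernel; all parameters and the event detector here are illustrative assumptions.

    ```python
    import numpy as np

    # m x_i'' = kc (x_{i+1} - 2 x_i + x_{i-1}) + kp (v_d t - x_i) - F(x_i')
    N, kc, kp, m = 64, 3.0, 1.0, 1.0
    v_d, F0, vc, eps = 0.01, 1.0, 0.5, 1e-3   # drive speed, friction scales
    dt, nsteps = 0.005, 120000

    rng = np.random.default_rng(4)
    x = rng.uniform(-0.05, 0.05, N)           # small random initial offsets
    v = np.zeros(N)

    def friction(vel):
        # Opposes motion, weakens with slip speed; smoothed through zero so
        # the sticking phase is only approximated.
        return F0 * np.tanh(vel / eps) / (1.0 + np.abs(vel) / vc)

    t = 0.0
    sliding = np.zeros(nsteps, dtype=bool)
    for step in range(nsteps):
        lap = np.roll(x, -1) - 2.0 * x + np.roll(x, 1)   # periodic chain
        a = (kc * lap + kp * (v_d * t - x) - friction(v)) / m
        v += a * dt
        x += v * dt
        t += dt
        sliding[step] = np.abs(v).max() > 20.0 * v_d     # crude event flag

    n_events = int(np.count_nonzero(np.diff(sliding.astype(int)) == 1))
    print("distinct sliding episodes:", n_events)
    ```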

  18. Assessing the benefit of 3D a priori models for earthquake location

    NASA Astrophysics Data System (ADS)

    Tilmann, F. J.; Manzanares, A.; Peters, K.; Kahle, R. L.; Lange, D.; Saul, J.; Nooshiri, N.

    2014-12-01

    Earthquake location in 1D Earth models is a routine procedure. Particularly in environments such as subduction zones, where the network geometry is biased and lateral velocity variations are large, the use of a 1D model can lead to strongly biased solutions. This is well known, and it is therefore usually preferred to use three-dimensional models, e.g. from local earthquake tomography. Efficient codes for earthquake location in 3D models, for example NonLinLoc, are available for routine use. However, tomographic studies are time-consuming to carry out, and sufficient data might not always be available. In many cases, though, information about the three-dimensional velocity structure is available in the form of refraction surveys or other constraints such as gravity-based or receiver-function-based models; failing that, global or regional scale crustal models could be employed. It is not obvious, however, that models derived from different types of data lead to better location results than an optimised 1D velocity model. On the other hand, correct interpretation of seismicity patterns often requires comparison and exact positioning within pre-existing velocity models. In this presentation we draw on examples from the Chilean and Sumatran margins as well as a mid-ocean ridge environment, using both data and synthetic examples to investigate under what conditions the use of a priori 3D models is expected to improve location results and modify interpretation. Furthermore, we introduce MATLAB tools that facilitate the creation of three-dimensional models suitable for earthquake location from refraction profiles, CRUST1 and SLAB1.0 and other model types.

  19. Fault Interaction and Earthquake Migration in Mid-Continents: Insights from Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Liu, M.; Lu, Y.; Chen, L.; Luo, G.; Wang, H.

    2011-12-01

    Historical records in North China and other mid-continents show large earthquakes migrating among widespread fault systems. Mechanical coupling of these faults is indicated by complementary seismic moment release on these faults. In a conceptual model (Liu et al., 2011, Lithosphere), the long-distance fault interaction and earthquake migration are explained as the consequences of regional stress readjustment among a system of intraplate faults that collectively accommodates tectonic loading at the plate boundaries. In such a system, failure of one fault (a large earthquake) can cause stress shifting on all other faults. Here we report preliminary results of numerical investigations of such long-distance fault interaction in mid-continents. In a set of elastic models, we have a model crust with internal faults loaded from the boundaries, and calculate the stress distribution on the faults when the system reaches equilibrium. We compare the results with those of a new model that has one or more of the faults weakened (ruptured). The results show that failure of one fault can cause up to a few MPa of stress change on other faults over a large distance; the magnitude of the stress change and the radius of the impacted area are much greater than those of the static Coulomb stress changes associated with dislocation on the fault plane. In time-dependent viscoelasto-plastic models, we found that variations of seismicity on one fault can significantly affect the loading rates on other faults that share the same tectonic loading. Similar fault interactions are also found in complex plate boundary fault systems, such as between the San Andreas Fault and the San Jacinto Fault in southern California. The spatially migrating earthquakes resulting from the long-distance fault interactions in mid-continents can cause different spatial patterns of seismicity when observed through different time-windows. These results have important implications for assessing earthquake hazards in

  1. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
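    For concreteness, the ETAS conditional intensity of Ogata (1988) evaluated on a toy catalog; the parameter values (mu, K, alpha, c, p) below are illustrative, not the optimized ones described in the abstract.

    ```python
    import numpy as np

    # ETAS conditional intensity (Ogata, JASA, 1988):
    #   lambda(t) = mu + sum_{t_i < t} K exp(alpha (M_i - Mc)) (t - t_i + c)**(-p)
    def etas_intensity(t, times, mags, mu=0.2, K=0.02, alpha=1.0,
                       c=0.01, p=1.2, Mc=3.0):
        past = times < t
        dt = t - times[past]
        return mu + np.sum(K * np.exp(alpha * (mags[past] - Mc)) * (dt + c) ** (-p))

    # Toy catalog: occurrence times (days) and magnitudes.
    times = np.array([1.0, 1.2, 5.0, 5.1, 5.15, 20.0])
    mags = np.array([4.2, 3.1, 5.0, 3.4, 3.2, 4.0])
    for t in (2.0, 6.0, 30.0):
        print("t=%5.1f d  lambda=%.3f events/day" % (t, etas_intensity(t, times, mags)))
    ```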

  2. Physics-Based Predictive Simulation Models for Earthquake Generation at Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Matsu'Ura, M.

    2002-12-01

    In the last decade there has been great progress in the physics of earthquake generation; that is, the introduction of laboratory-based fault constitutive laws as a basic equation governing earthquake rupture and the quantitative description of tectonic loading driven by plate motion. Incorporating a fault constitutive law into continuum mechanics, we can develop a physics-based simulation model for the entire earthquake generation process. For realistic simulation of earthquake generation, however, we need a very large, high-speed computer system. In Japan, fortunately, the Earth Simulator, which is a high performance, massively parallel-processing computer system with 10 TB of memory and 40 TFLOPS peak speed, has been completed. The completion of the Earth Simulator and advances in numerical simulation methodology are bringing our vision within reach. In general, the earthquake generation cycle consists of tectonic loading due to relative plate motion, quasi-static rupture nucleation, dynamic rupture propagation and arrest, and restoration of fault strength. The basic equations governing the entire earthquake generation cycle consist of an elastic/viscoelastic slip-response function that relates fault slip to shear stress change and a fault constitutive law that prescribes the change in shear strength with fault slip and contact time. The shear stress and the shear strength are related to each other through the boundary conditions on the fault. The driving force of this system is the observed relative plate motion. The system describing the earthquake generation cycle is conceptually quite simple; the complexity in practical modelling mainly comes from the complexity in the structure of the real earth. Since 1998 our group has conducted the Crustal Activity Modelling Program (CAMP), which is one of the three main programs composing the Solid Earth Simulator project. The aim of CAMP is to develop a physics-based predictive simulation model for the entire earthquake generation

  3. Statistical Analysis of the Surface Slip Profiles and Slip Models for the 2008 Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Lavallee, D.; Shao, G.; Ji, C.

    2009-12-01

    The 2008 Wenchuan earthquake provides a remarkable opportunity to study the statistical properties of slip profiles recorded at the surface. During the M 8 Wenchuan earthquake, the surface ruptured over 300 km along the Longmenshan fault system. The surface slip profiles have been measured along the fault for a distance of the order of 270 km without any significant change in the strike direction. Field investigations suggest that the earthquake generated a 240 km surface rupture along the Beichuan segment and a 72 km surface rupture along the Guanxian segment. Maximum vertical and horizontal slips of 10 m and 4.9 m have been observed along the Beichuan fault. Measurements include the displacement parallel and perpendicular to the fault as well as the width of the rupture zone. However, the recorded earthquake slip profiles are irregularly sampled. Traditional algorithms used to compute the discrete Fourier transform are developed for data sampled at regularly spaced intervals. Interpolating the slip profile over a regular grid is not appropriate when investigating the functional behavior of the spectrum or when computing the discrete Fourier transform: interpolation introduces bias in the estimation of the Fourier transform that adds artificial correlation to the original data. To avoid this problem, we developed an algorithm to compute the Fourier transform of irregularly sampled data. It consists essentially of determining the coefficients that best fit the data to the sine and cosine functions at a given wave number. We compute the power spectrum of the slip profiles of the Wenchuan earthquake. In addition, we compute the power spectrum for the slip inversions computed for the Wenchuan earthquake. To model the functional behavior of the spectrum curves, we consider two functions: the power law function and the von Karman function. For all the slip models, we compute the parameters of the power law function and the von Karman function that
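    A minimal version of the algorithm described: least-squares fitting of sine and cosine coefficients at each wavenumber, applied directly to the irregular sample positions. The toy slip profile and wavenumber range are assumptions for illustration.

    ```python
    import numpy as np

    def ls_fourier_power(x, y, k):
        """Least-squares sine/cosine fit at wavenumber k for irregular samples.

        Solves y ~ a*cos(k x) + b*sin(k x) and returns the power a**2 + b**2.
        """
        G = np.column_stack([np.cos(k * x), np.sin(k * x)])
        (a, b), *_ = np.linalg.lstsq(G, y, rcond=None)
        return a * a + b * b

    # Toy slip profile at irregular positions along strike (km).
    rng = np.random.default_rng(5)
    x = np.sort(rng.uniform(0.0, 270.0, 180))
    y = 3.0 * np.sin(2 * np.pi * x / 90.0) + 0.5 * rng.standard_normal(x.size)

    ks = 2 * np.pi / np.linspace(200.0, 10.0, 100)     # wavenumbers (1/km)
    power = np.array([ls_fourier_power(x, y, k) for k in ks])
    print("spectral peak at wavelength %.1f km" % (2 * np.pi / ks[np.argmax(power)]))
    ```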

  4. Numerical earthquake models of the 2013 Nantou, Taiwan, earthquake series: Characteristics of source rupture processes, strong ground motions and their tectonic implication

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Yeh, Te-Yang; Huang, Hsin-Hua; Lin, Cheng-Horng

    2015-11-01

    On 27 March and 2 June 2013, two large earthquakes with magnitudes of ML 6.2 and ML 6.5, named the Nantou earthquake series, struck central Taiwan. These two events were located at depths of 15-20 km, which implies that the mid-crust of central Taiwan is an active seismogenic area even though the subsurface structures have not been well established. To determine the origins of the Nantou earthquake series, we investigated both the rupture processes and seismic wave propagation by employing inverse and forward numerical simulation techniques. Source inversion results indicated that one event ruptured from the middle to the shallow crust in the northwest direction, while the other ruptured towards the southwest. Simulations of 3-D wave propagation showed that the rupture characteristics of the two events result in distinct directivity effects with different amplified shaking patterns. From the results of numerical earthquake modeling, we deduce that the occurrence of the Nantou earthquake series may be related to stress release from the easternmost edge of a pre-existing strong basement in central Taiwan.

  5. Family number non-conservation induced by the supersymmetric mixing of scalar leptons

    SciTech Connect

    Levine, M.J.S.

    1987-08-01

    The most egregious aspect of (N = 1) supersymmetric theories is that each particle state is accompanied by a 'super-partner', a state with identical quantum numbers save that it differs in spin by one half unit. For the leptons these are scalars and are called ''sleptons'', or scalar leptons. These consist of the charged sleptons (selectron, smuon, stau) and the scalar neutrinos ('sneutrinos'). We examine a model of supersymmetry with soft breaking terms in the electroweak sector. Explicit mixing among the scalar leptons results in a number of effects, principally non-conservation of lepton family number. Comparison with experiment permits us to place constraints upon the model. 49 refs., 12 figs.

  6. Planning a Preliminary program for Earthquake Loss Estimation and Emergency Operation by Three-dimensional Structural Model of Active Faults

    NASA Astrophysics Data System (ADS)

    Ke, M. C.

    2015-12-01

    Large-scale earthquakes often cause serious economic losses and many deaths. Because the magnitude, time and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operations are essential for reducing earthquake damage. To understand earthquake disaster risk, earthquake-simulation technology is usually used to build earthquake scenarios, with point sources, fault-line sources and fault-plane sources commonly serving as the seismic sources of those scenarios. The resulting assessments support risk analysis and emergency operations well, but their accuracy can still be improved. This program invites experts and scholars from Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records, geological data and geophysical data to build three-dimensional subsurface structural planes of active faults. The purpose is to replace projected fault planes with subsurface fault planes that are closer to reality. This database can improve the accuracy of earthquake-mitigation analyses, and the three-dimensional data will then be applied at different stages of disaster prevention. Before a disaster, risk analyses based on the three-dimensional fault-plane data yield results closer to the actual damage. During a disaster, the three-dimensional fault-plane data can help estimate the aftershock distribution and the areas of serious damage. In 2015 the program used 14 geological profiles to build three-dimensional data for the Hsinchu and Hsincheng faults; other active faults will be completed in 2018 and applied to earthquake disaster prevention.

  7. Irreversible thermodynamic model for accelerated moment release and atmospheric radon concentration prior to large earthquakes

    NASA Astrophysics Data System (ADS)

    Kawada, Y.; Nagahama, H.; Omori, Y.; Yasuoka, Y.; Shinogi, M.

    2006-12-01

    Accelerated moment release often precedes large earthquakes and is defined by the rate of cumulative Benioff strain following a power-law time-to-failure relation. This temporal seismicity pattern is investigated in terms of an irreversible thermodynamic model. The model is regulated by the Helmholtz free energy, defined by the macroscopic stress-strain relation and internal state variables (generalized coordinates). Damage and damage evolution are represented by the internal state variables. In this formulation, each of a huge number of internal state variables has its own specific relaxation time, while their collective time evolution shows temporal power-law behavior. The irreversible thermodynamic model reduces to a fiber-bundle model and an experimentally based constitutive law for rocks, and predicts the form of accelerated moment release. Based on the model, we can also discuss the increase in atmospheric radon concentration prior to the 1995 Kobe earthquake.
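    The accelerating signal is conventionally fitted with the power-law time-to-failure form Omega(t) = A - B (t_f - t)^m; the sketch below recovers the exponent and failure time from noisy synthetic Benioff-strain data. All values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Power-law time-to-failure form for cumulative Benioff strain.
    def amr(t, A, B, m, tf):
        return A - B * (tf - t) ** m

    rng = np.random.default_rng(6)
    t = np.sort(rng.uniform(0.0, 9.5, 120))           # event times (yr)
    obs = amr(t, 10.0, 3.0, 0.3, 10.0) + 0.05 * rng.standard_normal(t.size)

    p0 = [8.0, 2.0, 0.5, 10.5]                        # guess with t_f > max(t)
    bounds = ([0.0, 0.0, 0.05, t.max() + 1e-3], [50.0, 50.0, 1.0, 20.0])
    popt, _ = curve_fit(amr, t, obs, p0=p0, bounds=bounds)
    print("fitted exponent m=%.2f, failure time t_f=%.2f yr" % (popt[2], popt[3]))
    ```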

  8. Under the hood of the earthquake machine: toward predictive modeling of the seismic cycle.

    PubMed

    Barbot, Sylvain; Lapusta, Nadia; Avouac, Jean-Philippe

    2012-05-11

    Advances in observational, laboratory, and modeling techniques open the way to the development of physical models of the seismic cycle with potentially predictive power. To explore that possibility, we developed an integrative and fully dynamic model of the Parkfield segment of the San Andreas Fault. The model succeeds in reproducing a realistic earthquake sequence of irregular moment magnitude (Mw) 6.0 main shocks--including events similar to the ones in 1966 and 2004--and provides an excellent match for the detailed interseismic, coseismic, and postseismic observations collected along this fault during the most recent earthquake cycle. Such calibrated physical models provide new ways to assess seismic hazards and forecast seismicity response to perturbations of natural or anthropogenic origins. PMID:22582259

  9. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    USGS Publications Warehouse

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
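    A compact sketch of the rigid-block algorithm itself (in Python rather than the report's Java): the block acquires velocity relative to the ground whenever ground acceleration exceeds the critical acceleration, and displacement accumulates until the relative velocity returns to zero. The toy acceleration history and critical acceleration are assumptions, not part of the USGS programs.

    ```python
    import numpy as np

    def newmark_displacement(accel, dt, ac):
        """One-way rigid-block Newmark analysis.

        accel: ground acceleration history (m/s^2), dt: sample interval (s),
        ac: critical (yield) acceleration of the block (m/s^2).
        Returns cumulative downslope displacement (m).
        """
        v, d = 0.0, 0.0
        for a in accel:
            if v > 0.0 or a > ac:
                v += (a - ac) * dt      # block accelerates relative to ground
                v = max(v, 0.0)         # sliding stops when rel. velocity = 0
                d += v * dt
        return d

    # Toy acceleration history: decaying 1 Hz pulses (replace with a recorded
    # strong-motion trace); ac corresponds to a modest-strength slope.
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    accel = 3.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
    print("Newmark displacement: %.3f m" % newmark_displacement(accel, dt, 1.0))
    ```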

  10. Collective properties of injection-induced earthquake sequences: 1. Model description and directivity bias

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny

    2016-05-01

    Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
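    Using one common definition of the directivity ratio, D = (L1 - L2)/(L1 + L2) for rupture tip extents L1 and L2 measured from the hypocenter (the paper's exact convention may differ), a toy catalog shows how a bias would be read off:

    ```python
    import numpy as np

    # D ~ 0 indicates bilateral rupture; |D| ~ 1 indicates unilateral rupture.
    # The sign convention (toward vs away from the well) is an assumption.
    def directivity_ratio(L_toward, L_away):
        return (L_toward - L_away) / (L_toward + L_away)

    rng = np.random.default_rng(7)
    L1 = rng.lognormal(0.0, 0.5, 1000)     # toy tip half-lengths (km)
    L2 = rng.lognormal(0.0, 0.5, 1000)
    D = directivity_ratio(L1, L2)
    print("mean D = %.3f (positive bias => toward-well ruptures)" % D.mean())
    ```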