Simulation models for conservative and nonconservative solute transport in streams
Runkel, R.L.
1995-01-01
Solute transport in streams is governed by a suite of hydrologic and chemical processes. Interactions between hydrologic processes and chemical reactions may be quantified through a combination of field-scale experimentation and simulation modeling. Two mathematical models that simulate conservative and nonconservative solute transport in streams are presented. A model for conservative solutes that considers One Dimensional Transport with Inflow and Storage (OTIS) may be used in conjunction with tracer-dilution methods to quantify hydrologic transport processes (advection, dispersion, lateral inflow and transient storage). For nonconservative solutes, a model known as OTEQ may be used to quantify chemical processes within the context of hydrologic transport. OTEQ combines the transport mechanisms in OTIS with a chemical equilibrium sub-model that considers complexation, precipitation/dissolution and sorption. OTEQ has been used to quantify processes affecting trace metals in two streams in the Rocky Mountains of Colorado, USA.
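The hydrologic processes OTIS quantifies can be illustrated with a schematic discretization of its governing equations (this is an illustrative sketch, not the USGS OTIS code; all parameter values and the simple explicit scheme are invented for demonstration):

```python
import numpy as np

# Illustrative sketch of one-dimensional stream transport with transient
# storage, the process structure behind OTIS. Explicit upwind finite
# differences; parameter names follow the usual OTIS notation, values are
# invented.
Q, A = 0.05, 0.5          # discharge (m^3/s), channel cross-section (m^2)
D = 0.1                   # dispersion coefficient (m^2/s)
alpha, As = 1e-4, 0.2     # storage exchange rate (1/s), storage area (m^2)
dx, dt, nx = 10.0, 5.0, 200
u = Q / A                 # advective velocity (m/s)

C = np.zeros(nx)          # main-channel concentration
Cs = np.zeros(nx)         # storage-zone concentration
C[0] = 1.0                # continuous upstream injection (boundary condition)

for _ in range(2000):
    adv = -u * (C - np.roll(C, 1)) / dx                      # upwind advection
    dsp = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    exch = alpha * (Cs - C)                                  # storage exchange
    C_new = C + dt * (adv + dsp + exch)
    Cs = Cs + dt * alpha * (A / As) * (C - Cs)               # storage zone
    C = C_new
    C[0] = 1.0            # hold the upstream boundary
    C[-1] = C[-2]         # open outflow boundary

print(round(float(C[50]), 3), round(float(Cs[50]), 3))
```

The storage-zone concentration lags behind the channel, which is the mechanism that produces the long tails seen in tracer-dilution breakthrough curves.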
Improvements in Nonconservative Force Modelling for TOPEX/POSEIDON
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Rowlands, David D.; Chinn, Douglas S.; Kubitschek, Daniel G.; Luthcke, Scott B.; Zelensky, Nikita B.; Born, George H.
1999-01-01
It was recognized prior to the launch of TOPEX/POSEIDON that the most important source of orbit error, other than the gravity field, was nonconservative force modelling. Accordingly, an intensive effort was undertaken to study the nonconservative forces acting on the spacecraft using detailed finite element modelling (Antreasian, 1992; Antreasian and Rosborough, 1992). However, this detailed modelling was not suitable for orbit determination, and a simplified eight-plate "box-wing" model was developed that took into account the aggregate effect of the various materials and associated thermal properties of each spacecraft surface. The a priori model was later tuned post-launch with actual tracking data [Nerem et al., 1994; Marshall and Luthcke, 1994; Marshall et al., 1995]. More recently, Kubitschek [1997] developed a newer box-wing model for TOPEX/POSEIDON, which included updated material properties, accounted for solar array deflection, and modelled solar array warping due to thermal effects. We have used this updated model as a basis to retune the macromodel for TOPEX/POSEIDON, and report on preliminary results using at least 36 cycles (one year) of SLR and DORIS data in 1993.
Phase Diagram and Density Large Deviations of a Nonconserving ABC Model
NASA Astrophysics Data System (ADS)
Cohen, O.; Mukamel, D.
2012-02-01
The effect of particle-nonconserving processes on the steady state of driven diffusive systems is studied within the context of a generalized ABC model. It is shown that in the limit of slow nonconserving processes, the large deviation function of the overall particle density can be computed by making use of the steady-state density profile of the conserving model. In this limit one can define a chemical potential and identify first order transitions via Maxwell’s construction, similarly to what is done in equilibrium systems. This method may be applied to other driven models subjected to slow nonconserving dynamics.
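The Maxwell construction invoked above can be demonstrated on a toy, van der Waals-like chemical potential (this generic example is purely illustrative and is not the ABC model itself): the coexistence value mu* is the one for which the signed area between mu(rho) and the horizontal line mu = mu* vanishes between the outer crossings.

```python
import numpy as np

# Toy Maxwell construction: find mu* such that the integral of
# (mu(rho) - mu*) between the outermost crossings is zero.
rho = np.linspace(0.05, 0.95, 2001)
mu = (rho - 0.5) ** 3 - 0.05 * (rho - 0.5)   # non-monotonic toy mu(rho)
drho = rho[1] - rho[0]

def area_imbalance(mu_star):
    diff = mu - mu_star
    cross = np.where(np.diff(np.sign(diff)) != 0)[0]
    lo, hi = cross[0], cross[-1]             # outermost crossings
    return float(np.sum(diff[lo:hi + 1])) * drho

# Bisection on mu* inside the non-monotonic loop
a, b = -0.004, 0.004
fa = area_imbalance(a)
for _ in range(50):
    mid = 0.5 * (a + b)
    fm = area_imbalance(mid)
    if fa * fm <= 0:
        b = mid
    else:
        a, fa = mid, fm
mu_star = 0.5 * (a + b)
print(round(mu_star, 6))   # ~0 by the symmetry of this toy mu(rho)
```

For this symmetric toy function the construction returns mu* near zero; in the nonconserving ABC model the same equal-area logic identifies first order transitions of the overall density.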
Further seismic properties of a spring-block earthquake model
NASA Astrophysics Data System (ADS)
Angulo-Brown, F.; Muñoz-Diosdado, A.
1999-11-01
Within the context of self-organized critical systems, Olami et al. (OFC) (1992) proposed a spring-block earthquake model. This model is non-conservative and reproduces some seismic properties, such as the Gutenberg-Richter law for the size distribution of earthquakes. In this paper we study further seismic properties of the OFC model and find the stair-shaped curves of cumulative seismicity. We also find that in the long term these curves have a characteristic straight-line envelope of constant slope that works as an attractor of the cumulative seismicity, and that these slopes depend on the system size and cannot be arbitrarily large. Finally, we report that in the OFC model the recurrence time distribution for large events follows a log-normal behaviour for some non-conservation levels.
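A minimal version of the OFC automaton can be written in a few lines (this sketch uses an invented lattice size, conservation level and cycle count; it reproduces the model's toppling rule, not the paper's analysis):

```python
import numpy as np

# Minimal Olami-Feder-Christensen (OFC) spring-block automaton.
# alpha < 0.25 means stress is lost at each toppling (non-conservative).
rng = np.random.default_rng(0)
L, alpha, Fth = 20, 0.20, 1.0
F = rng.uniform(0, Fth, (L, L))          # initial random stress
sizes = []                               # avalanche (earthquake) sizes

for _ in range(2000):                    # earthquake cycles
    F += Fth - F.max()                   # uniform drive to threshold
    F[np.unravel_index(np.argmax(F), F.shape)] = Fth  # guard round-off
    size = 0
    unstable = np.argwhere(F >= Fth)
    while unstable.size:
        for i, j in unstable:
            f = F[i, j]
            F[i, j] = 0.0                # topple: relax the site
            size += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < L and 0 <= nj < L:      # open boundaries
                    F[ni, nj] += alpha * f           # pass on alpha*f
        unstable = np.argwhere(F >= Fth)
    sizes.append(size)

print(max(sizes))   # broad distribution of avalanche sizes
```

Histogramming `sizes` for suitable `alpha` gives the power-law-like (Gutenberg-Richter) size statistics mentioned above.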
An uncertainty inclusive un-mixing model to identify tracer non-conservativeness
NASA Astrophysics Data System (ADS)
Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; Ó hUallacháin, Daire
2015-04-01
Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain, including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers, relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data, before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset, one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data were collected in an 11 km² river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time-integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (<40°C), sieved (125 µm) and analysed for mineral magnetic susceptibility, anhysteretic remanence and isothermal remanence, and the geochemical elements Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn (following microwave-assisted acid digestion). Discriminant analysis was used to reduce the number of tracers before un-mixing.
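The core un-mixing step can be sketched with synthetic data (this is a generic illustration of source apportionment, not the FR2000 algorithm; the tracer values and source names are hypothetical):

```python
import numpy as np

# Illustrative un-mixing: given tracer signatures of three sources, find the
# non-negative mixture proportions, summing to one, that best reproduce the
# target sediment signature. Brute-force search on the simplex for clarity.
sources = np.array([[12.0, 3.5, 40.0],    # hypothetical topsoil tracer means
                    [ 8.0, 6.0, 55.0],    # hypothetical channel bank
                    [15.0, 2.0, 30.0]])   # hypothetical road verge
true_p = np.array([0.5, 0.3, 0.2])
target = true_p @ sources                 # synthetic 'pristine' target

best, best_err = None, np.inf
grid = np.linspace(0, 1, 101)
for p1 in grid:
    for p2 in grid:
        p3 = 1.0 - p1 - p2
        if p3 < 0:
            continue                      # stay on the simplex
        p = np.array([p1, p2, p3])
        err = float(np.sum((p @ sources - target) ** 2))
        if err < best_err:
            best, best_err = p, err

print(best)   # recovers approximately [0.5, 0.3, 0.2]
```

Corrupting one `target` value, as in the synthetic experiments described above, pulls `best` away from `true_p`, which is exactly the non-conservativeness signal the study sets out to detect.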
NASA Technical Reports Server (NTRS)
Luthcke, S. B.; Marshall, J. A.
1992-01-01
The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model is evaluated.
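The per-plate bookkeeping can be sketched as follows (a schematic of the box-wing idea using the standard flat-plate radiation-pressure formula; the areas, optical properties and mass below are illustrative, not TOPEX values):

```python
import numpy as np

# Schematic 'box-wing' radiation-pressure sum: each flat plate contributes
# an acceleration from its area, orientation and optical properties
# (specular rho, diffuse delta); contributions sum at the center-of-mass.
P = 4.56e-6                      # solar radiation pressure at 1 AU (N/m^2)
mass = 2400.0                    # spacecraft mass (kg), illustrative
sun = np.array([1.0, 0.0, 0.0])  # unit vector spacecraft -> Sun

plates = [                       # (area m^2, unit normal, rho, delta)
    (10.0, np.array([1.0, 0.0, 0.0]), 0.2, 0.3),      # box face toward Sun
    (25.0, np.array([0.966, 0.259, 0.0]), 0.1, 0.1),  # solar array, tilted
]

a = np.zeros(3)
for area, n, rho, delta in plates:
    cos_t = float(sun @ n)
    if cos_t <= 0.0:             # plate not illuminated
        continue
    # standard flat-plate formula: absorbed + specular + diffuse terms
    a -= (P * area * cos_t / mass) * (
        (1 - rho) * sun + 2 * (rho * cos_t + delta / 3.0) * n)

print(a)   # net acceleration (m/s^2), directed away from the Sun
```

Adjusting the `rho`/`delta` values per plate against tracking data is, in spirit, the parameter estimation the study describes.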
NASA Astrophysics Data System (ADS)
Charpentier, Arthur; Durand, Marilou
2015-07-01
In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, whose parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
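The pseudo-causal chain can be sketched as follows (the functional forms tying each parameter to the previous event are invented for illustration; they are not the relationships fitted in the paper):

```python
import numpy as np

# Sketch of pseudo-causal earthquake dynamics: a Pareto magnitude whose tail
# index depends on the previous waiting time, alternating with a Gamma
# waiting time whose scale depends on the previous magnitude.
rng = np.random.default_rng(1)

def next_magnitude(prev_wait, m0=4.0):
    # Pareto above threshold m0; tail index grows with prior quiescence
    # (illustrative dependence)
    alpha = 1.5 + 0.1 * np.log1p(prev_wait)
    return m0 * rng.random() ** (-1.0 / alpha)

def next_wait(prev_mag):
    # Gamma waiting time; scale depends on previous magnitude (illustrative)
    scale = 30.0 * np.exp(0.2 * (prev_mag - 4.0))
    return rng.gamma(shape=0.8, scale=scale)

mags, waits = [5.0], [10.0]
for _ in range(999):
    mags.append(next_magnitude(waits[-1]))
    waits.append(next_wait(mags[-1]))

frac_big = sum(m >= 6.0 for m in mags) / len(mags)
print(round(frac_big, 3))   # fraction of simulated events with m >= 6
```

Running many such chains and counting events above a magnitude of interest within a fixed cumulative waiting time gives the occurrence-probability estimates the abstract refers to.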
NASA Astrophysics Data System (ADS)
Tailleux, Rémi
2015-04-01
This paper seeks to elucidate the fundamental differences between the nonconservation of potential temperature and that of Conservative Temperature, in order to better understand the relative merits of each quantity for use as the heat variable in numerical ocean models. The main result is that potential temperature is found to behave similarly to entropy, in the sense that its nonconservation primarily reflects production/destruction by surface heat and freshwater fluxes; in contrast, the nonconservation of Conservative Temperature is found to reflect primarily the overall compressible work of expansion/contraction. This paper then shows how this can be exploited to constrain the nonconservation of potential temperature and entropy from observed surface heat fluxes, and the nonconservation of Conservative Temperature from published estimates of the mechanical energy budgets of ocean numerical models. Finally, the paper shows how to modify the evolution equation for potential temperature so that it is exactly equivalent to using an exactly conservative evolution equation for Conservative Temperature, as was recently recommended by IOC et al. (2010). This result should in principle allow ocean modellers to test the equivalence between the two formulations, and to indirectly investigate to what extent the budget of derived nonconservative quantities such as buoyancy and entropy can be expected to be accurately represented in ocean models.
NASA Astrophysics Data System (ADS)
Long, Zi-Xuan; Zhang, Yi
2014-11-01
This paper focuses on the Noether symmetries and the conserved quantities for both holonomic and nonholonomic systems based on a new non-conservative dynamical model introduced by El-Nabulsi. First, the El-Nabulsi dynamical model which is based on a fractional integral extended by periodic laws is introduced, and El-Nabulsi-Hamilton's canonical equations for non-conservative Hamilton system with holonomic or nonholonomic constraints are established. Second, the definitions and criteria of El-Nabulsi-Noether symmetrical transformations and quasi-symmetrical transformations are presented in terms of the invariance of El-Nabulsi-Hamilton action under the infinitesimal transformations of the group. Finally, Noether's theorems for the non-conservative Hamilton system under the El-Nabulsi dynamical system are established, which reveal the relationship between the Noether symmetry and the conserved quantity of the system.
Non-conservative GNSS satellite modeling: long-term orbit behavior
NASA Astrophysics Data System (ADS)
Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.; Sosnica, K.; Fritsche, M.
2012-04-01
Modeling of non-conservative forces is a key issue for precise orbit determination of GNSS satellites. Furthermore, mismodeling of these forces has the potential to explain orbit-related frequencies found in GPS-derived station coordinates and geocenter, as well as the observed bias in the SLR-GPS residuals. Due to the complexity of the non-conservative forces, they have usually been compensated by empirical models based on the real in-orbit behavior of the satellites. Recent studies have focused on the physical/analytical modeling of solar radiation pressure, Earth radiation pressure, thermal effects and antenna thrust, among other effects. However, it has been demonstrated that pure physical models fail to predict the real orbit behavior with sufficient accuracy. In this study we use a recently developed solar radiation pressure model, called the adjustable box-wing model, which is based on the physical interaction between solar radiation and the satellite but is also capable of fitting the GNSS tracking data. Furthermore, Earth radiation pressure and antenna thrust are included as a priori accelerations. The adjustable parameters of the box-wing model are surface optical properties, the so-called Y-bias, and a parameter capable of compensating for non-nominal orientation of the solar panels. Using the adjustable box-wing model, a multi-year GPS/GLONASS solution has been computed, using a processing scheme derived from CODE (Center for Orbit Determination in Europe). This multi-year solution allows studying the long-term behavior of satellite orbits, box-wing parameters and geodetic parameters like station coordinates and geocenter. Moreover, the accuracy of GNSS orbits is assessed by using SLR data. This evaluation also allows testing whether the current SLR-GPS bias could be further reduced.
Nonextensive models for earthquakes
NASA Astrophysics Data System (ADS)
Silva, R.; França, G. S.; Vilar, C. S.; Alcaniz, J. S.
2006-02-01
We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragments, γ ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.
Two models for earthquake forerunners
Mjachkin, V.I.; Brace, W.F.; Sobolev, G.A.; Dieterich, J.H.
1975-01-01
Similar precursory phenomena have been observed before earthquakes in the United States, the Soviet Union, Japan, and China. Two quite different physical models are used to explain these phenomena. According to a model developed by US seismologists, the so-called dilatancy diffusion model, the earthquake occurs near maximum stress, following a period of dilatant crack expansion. Diffusion of water in and out of the dilatant volume is required to explain the recovery of seismic velocity before the earthquake. According to a model developed by Soviet scientists, growth of cracks is also involved, but diffusion of water in and out of the focal region is not required. With this model, the earthquake is assumed to occur during a period of falling stress, and recovery of velocity here is due to crack closure as stress relaxes. In general, the dilatancy diffusion model gives a peaked precursor form, whereas the dry model gives a bay form, in which recovery is well under way before the earthquake. A number of field observations should help to distinguish between the two models: study of post-earthquake recovery, time variation of stress and pore pressure in the focal region, the occurrence of pre-existing faults, and any changes in direction of precursory phenomena during the anomalous period. © 1975 Birkhäuser Verlag.
NASA Astrophysics Data System (ADS)
Wilusz, D. C.; Harman, C. J.; Ball, W. P.
2014-12-01
Modeling the dynamics of chemical transport from the landscape to streams is necessary for water quality management. Previous work has shown that estimates of the distribution of water age in streams, the transit time distribution (TTD), can improve prediction of the concentration of conservative tracers (i.e., ones that "follow the water") based on upstream watershed inputs. A major challenge, however, has been accounting for climate and transport variability when estimating TTDs at the catchment scale. In this regard, Harman (2014, in review) proposed the Omega modeling framework, capable of using watershed hydraulic fluxes to approximate the time-varying TTD. The approach was previously applied to the Plynlimon research watershed in Wales to simulate stream concentration dynamics of a conservative tracer (chloride), including 1/f attenuation of the power spectral density. In this study we explore the extent to which TTDs estimated by the Omega model vary with the concentration of non-conservative tracers (i.e., ones whose concentrations are also affected by transformations and interactions with other phases). First we test the hypothesis that the TTD calibrated in Plynlimon can explain a large part of the variation in non-conservative stream water constituents associated with storm flow (acidity, Al, DOC, Fe) and base flow (Ca, Si). While controlling for discharge, we show a correlation between the percentage of water of different ages and constituent concentration. Second, we test the hypothesis that TTDs help explain variation in stream nitrate concentration, which is of particular interest for pollution control but can be highly non-conservative. We compare simulation runs from Plynlimon and the agricultural Choptank watershed in Maryland, USA. Following a top-down approach, we estimate nitrate concentration as if it were a conservative tracer and examine the structure of residuals at different temporal resolutions. Finally, we consider model modifications to
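The basic TTD idea referenced above can be sketched with a steady-state convolution (a deliberate simplification: a fixed exponential TTD with an assumed mean transit time, whereas the Omega framework estimates time-varying TTDs):

```python
import numpy as np

# Conservative-tracer sketch: stream concentration is the catchment input
# history convolved with the transit time distribution (TTD). An exponential
# TTD with assumed mean transit time tau is used purely for illustration.
tau = 60.0                                       # mean transit time (days)
t = np.arange(0, 365.0)                          # daily steps for one year
ttd = np.exp(-t / tau) / tau                     # exponential TTD (sums ~1)
inp = 1.0 + 0.5 * np.sin(2 * np.pi * t / 365)    # seasonal input signal

conc = np.convolve(inp, ttd)[:len(t)]            # causal discrete convolution
print(round(float(np.ptp(inp)), 2),
      round(float(np.ptp(conc[240:])), 2))       # output variation is damped
```

The convolution damps and lags the seasonal input, which is the attenuation behaviour (including the 1/f spectral signature in richer formulations) that makes TTDs useful predictors of stream chemistry.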
Bayesian kinematic earthquake source models
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
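The tempering-and-resampling idea behind the algorithm can be illustrated on a one-parameter toy problem (this sketch demonstrates the generic tempered-sampling mechanism, not the authors' finite-fault implementation; the data and schedule are invented):

```python
import numpy as np

# Toy tempered sampler: approach the posterior through targets
# prior * likelihood**beta, raising beta from 0 to 1, resampling so that
# poorly fitting models are preferentially eliminated, then applying short
# MCMC moves at each stage.
rng = np.random.default_rng(2)
sigma = 0.5
obs = rng.normal(3.0, sigma, 50)                 # synthetic data

def loglike(theta):
    return -0.5 * np.sum((obs - theta) ** 2) / sigma ** 2

betas = [0.0, 0.02, 0.1, 0.4, 1.0]               # tempering schedule
samples = rng.uniform(-10, 10, 400)              # draws from a flat prior
ll = np.array([loglike(t) for t in samples])

for b_prev, b in zip(betas, betas[1:]):
    w = np.exp((b - b_prev) * (ll - ll.max()))   # incremental weights
    w /= w.sum()
    idx = rng.choice(len(samples), size=len(samples), p=w)
    samples, ll = samples[idx], ll[idx]          # resample the winners
    for _ in range(15):                          # short MCMC at this beta
        prop = samples + rng.normal(0, 0.3, len(samples))
        ll_prop = np.array([loglike(p) for p in prop])
        inside = np.abs(prop) <= 10              # flat prior support
        accept = inside & (np.log(rng.random(len(samples)))
                           < b * (ll_prop - ll))
        samples = np.where(accept, prop, samples)
        ll = np.where(accept, ll_prop, ll)

print(round(float(samples.mean()), 2))           # near the data mean
```

The final ensemble approximates the full posterior PDF, so quantities like parameter uncertainty or overlap probabilities can be read off directly, which is the advantage over a single regularized solution.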
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze
2016-02-01
The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
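The weighted least-squares principle invoked above can be shown generically (a schematic of observation weighting, not the Tongji-GRACE02 processing chain; the design matrix and noise levels are invented):

```python
import numpy as np

# Generic weighted least-squares adjustment: observations with unequal
# accuracies are combined as x = (A^T W A)^{-1} A^T W y, with W the inverse
# of the observation covariance, so noisier data count for less.
rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0])
A = rng.normal(size=(100, 2))
sig = np.where(np.arange(100) < 50, 0.01, 1.0)   # two accuracy classes
y = A @ x_true + rng.normal(0, sig)

W = np.diag(1.0 / sig ** 2)                      # inverse-variance weights
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(np.round(x_hat, 3))                        # close to [2, -1]
```

Estimating observation errors (here encoded in `sig`) jointly with the parameters, rather than assuming them known, is the refinement the study applies to the accelerometer and attitude data.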
Modeling, Forecasting and Mitigating Extreme Earthquakes
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
GEM - The Global Earthquake Model
NASA Astrophysics Data System (ADS)
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only to experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public at large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had not been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
New geological perspectives on earthquake recurrence models
Schwartz, D.P.
1997-02-01
In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured once, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine--the cycle of stress build-up and release.
Asperity Model of an Earthquake - Dynamic Problem
Johnson, Lane R.; Nadeau, Robert M.
2003-05-02
We develop an earthquake asperity model that explains previously determined empirical scaling relationships for repeating earthquakes along the San Andreas fault in central California. The model assumes that motion on the fault is resisted primarily by a patch of small strong asperities that interact with each other to increase the amount of displacement needed to cause failure. This asperity patch is surrounded by a much weaker fault that continually creeps in response to tectonic stress. Extending outward from the asperity patch into the creeping part of the fault is a shadow region where a displacement deficit exists. Starting with these basic concepts, together with the analytical solution for the exterior crack problem, the consideration of incremental changes in the size of the asperity patch leads to differential equations that can be solved to yield a complete static model of an earthquake. Equations for scalar seismic moment, the radius of the asperity patch, and the radius of the displacement shadow are all specified as functions of the displacement deficit that has accumulated on the asperity patch. The model predicts that the repeat time for earthquakes should be proportional to the scalar moment to the 1/6 power, which is in agreement with empirical results for repeating earthquakes. The model has two free parameters, a critical slip distance dc and a scaled radius of a single asperity. Numerical values of 0.20 and 0.17 cm, respectively, for these two parameters will reproduce the empirical results, but this choice is not unique. Assuming that the asperity patches are distributed on the fault surface in a random fractal manner leads to a frequency-size distribution of earthquakes that agrees with the Gutenberg-Richter formula and a simple relationship between the b-value and the fractal dimension. We also show that the basic features of the theoretical model can be simulated with numerical calculations employing the boundary integral method.
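The central scaling prediction is easy to check numerically (the moments and proportionality constant below are arbitrary; only the exponent comes from the model):

```python
# Repeat time scales as the 1/6 power of scalar seismic moment, so a
# factor-of-10 increase in moment lengthens the repeat time by only
# 10**(1/6), i.e. about 47 percent.
M0_small, M0_large = 1.0e13, 1.0e14      # scalar moments (N*m), factor 10 apart
k = 2.5                                  # arbitrary proportionality constant
T_small = k * M0_small ** (1 / 6)
T_large = k * M0_large ** (1 / 6)
ratio = T_large / T_small
print(round(ratio, 3))                   # 10**(1/6), about 1.468
```

This weak dependence of repeat time on moment is the empirical signature of the repeating San Andreas events that the asperity model was built to explain.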
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic
Role of Bioindicators In Earthquake Modelling
NASA Astrophysics Data System (ADS)
Zelinsky, I. P.; Melkonyan, D. V.; Astrova, N. G.
On the basis of experimental research on the influence of sound waves on indicator bacteria, a model of earthquakes is constructed. It is revealed that the growth in the number of bacteria depends on the frequency of the sound wave acting on the bacterium (the lower the frequency of the sound wave, the faster the growth). It is shown that absorption of sound-wave energy by a bacterium increases the concentration of isopotential lines of the biodynamic field in the bacterium. This process leads to braking and heating of the bacterium. From the pattern of deformation of the biodynamic field lines it is possible to predict various geodynamic processes, including earthquakes.
On the earthquake predictability of fault interaction models
Marzocchi, W; Melini, D
2014-01-01
Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, such modeling often does not significantly increase earthquake predictability. The earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643
Aftershocks in a frictional earthquake model.
Braun, O M; Tosatti, Erio
2014-09-01
Inspired by spring-block models, we elaborate a "minimal" physical model of earthquakes which reproduces two main empirical seismological laws, the Gutenberg-Richter law and the Omori aftershock law. Our point is to demonstrate that the simultaneous incorporation of aging of contacts in the sliding interface and of elasticity of the sliding plates constitutes the minimal ingredients to account for both laws within the same frictional model. PMID:25314453
A simplified spring-block model of earthquakes
Brown, S.R.; Rundle, J.B.; Scholz, C.H.
1991-02-01
The time interval between earthquakes is much larger than the actual time involved during slip in an individual event. The authors have used this fact to construct a cellular automaton model of earthquakes. This model describes the time evolution of a 2-D system of coupled masses and springs sliding on a frictional surface. The model exhibits power law frequency-size relations and can exhibit large earthquakes with the same scatter in the recurrence time observed for actual earthquakes.
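The cellular-automaton idea summarized above, skipping the slow interseismic loading and resolving only the slip events, can be illustrated with a minimal Olami-Feder-Christensen-style sketch. The lattice size, transfer fraction, and threshold below are illustrative choices, not the authors' parameters.

```python
import random

def ofc_step(stress, threshold=1.0, alpha=0.2):
    """Advance the lattice by one event: load every cell uniformly until
    the most-stressed cell fails, then redistribute stress until stable.
    Returns the avalanche size (number of cell failures)."""
    n = len(stress)
    # Uniform loading up to the most-stressed cell's failure point.
    bump = threshold - max(max(row) for row in stress)
    for row in stress:
        for j in range(n):
            row[j] += bump
    size = 0
    unstable = [(i, j) for i in range(n) for j in range(n)
                if stress[i][j] >= threshold]
    while unstable:
        i, j = unstable.pop()
        if stress[i][j] < threshold:
            continue          # already relaxed by an earlier visit
        s = stress[i][j]
        stress[i][j] = 0.0    # cell fails and sheds its stress
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                stress[ni][nj] += alpha * s   # alpha < 0.25: dissipative
                if stress[ni][nj] >= threshold:
                    unstable.append((ni, nj))
    return size

random.seed(0)
n = 16
grid = [[random.random() for _ in range(n)] for _ in range(n)]
sizes = [ofc_step(grid) for _ in range(2000)]
print(len(sizes), max(sizes))
```

Collecting many event sizes this way is how such automata are probed for the power-law frequency-size behavior the abstract reports.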
Extreme Earthquake Risk Estimation by Hybrid Modeling
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.
2012-12-01
The estimation of the hazard and the economic consequences, i.e. the risk, associated with the occurrence of extreme-magnitude earthquakes near urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge, as it involves the propagation of seismic waves through large volumes of the Earth's crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and huge economic losses observed for such earthquakes, some of which have recurrence intervals of hundreds or thousands of years, calls for the development of new paradigms and methodologies in order to generate better estimates of both the seismic hazard and its consequences, and if possible to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), so as to implement technological and economic policies that mitigate and reduce those consequences as much as possible. Here we propose a hybrid modeling approach that uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite-difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green's function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of plausible extreme-earthquake scenarios corresponding to synthetic seismic sources, and to enlarge those samples by using feed-forward NNs. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application to the estimation of the hazard and the economic consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican
Modeling coupled avulsion and earthquake timescale dynamics
NASA Astrophysics Data System (ADS)
Reitz, M. D.; Steckler, M. S.; Paola, C.; Seeber, L.
2014-12-01
River avulsions and earthquakes can be hazardous events, and many researchers work to better understand and predict their timescales. Improvements in the understanding of the intrinsic processes of deposition and strain accumulation that lead to these events have resulted in better constraints on the timescales of each process individually. There are however several mechanisms by which these two systems may plausibly become linked. River deposition and avulsion can affect the stress on underlying faults through differential loading by sediment or water. Conversely, earthquakes can affect river avulsion patterns through altering the topography. These interactions may alter the event recurrence timescales, but this dynamic has not yet been explored. We present results of a simple numerical model, in which two systems have intrinsic rates of approach to failure thresholds, but the state of one system contributes to the other's approach to failure through coupling functions. The model is first explored for the simplest case of two linear approaches to failure, and linearly proportional coupling terms. Intriguing coupling dynamics emerge: the system settles into cycles of repeating earthquake and avulsion timescales, which are approached at an exponential decay rate that depends on the coupling terms. The ratio of the number of events of each type and the timescale values also depend on the coupling coefficients and the threshold values. We then adapt the model to a more complex and realistic scenario, in which a river avulses between either side of a fault, with parameters corresponding to the Brahmaputra River / Dauki fault system in Bangladesh. Here the tectonic activity alters the topography by gradually subsiding during the interseismic time, and abruptly increasing during an earthquake. The river strengthens the fault by sediment loading when in one path, and weakens it when in the other. We show this coupling can significantly affect earthquake and avulsion
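The simplest case the abstract describes, two linear approaches to failure with linearly proportional coupling terms, can be sketched directly. All rates, thresholds, and coupling coefficients below are illustrative, not values from the study.

```python
# Toy version of the coupled-threshold model described above: two systems
# (fault strain, river superelevation) each grow toward a failure threshold
# of 1.0, and each one's state nudges the other's growth rate.

dt = 0.01
s1, s2 = 0.0, 0.3          # states: fault strain, river superelevation
r1, r2 = 1.0, 0.7          # intrinsic loading rates (illustrative)
c12, c21 = 0.3, 0.2        # linear coupling coefficients (illustrative)
events = []

t = 0.0
while t < 100.0:
    # Each system's approach to failure depends on the other's state.
    s1 += (r1 + c12 * s2) * dt
    s2 += (r2 + c21 * s1) * dt
    if s1 >= 1.0:
        events.append(("earthquake", round(t, 2)))
        s1 = 0.0           # coseismic reset
    if s2 >= 1.0:
        events.append(("avulsion", round(t, 2)))
        s2 = 0.0           # avulsion reset
    t += dt

quakes = [tt for kind, tt in events if kind == "earthquake"]
avulsions = [tt for kind, tt in events if kind == "avulsion"]
print(len(quakes), len(avulsions))
```

Tracking the inter-event times of each series in a run like this is how the repeating-cycle behavior reported in the abstract would be diagnosed.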
Strong Earthquake Modelling in Cuban Territory
NASA Astrophysics Data System (ADS)
Moreno Toiran, B.; Alvarez Gomez, J.; Vaccari, F.
2013-05-01
A seismic hazard map for the Cuban territory was obtained by using waveform modelling methods. The input data set consists of seismogenic zones, focal mechanisms, seismic wave velocity models and an earthquake catalogue. Several maps were generated with the predominant periods corresponding to the maximum displacement (Dmax) and maximum velocity (Vmax) as well as the design ground acceleration (DGA). In order to obtain this result, thousands of synthetic seismograms were computed using knowledge of the physical processes of earthquake generation, the levels of seismicity, and wave propagation in anelastic media. The synthetic seismograms were generated at a frequency of 1 Hz on a regular grid of 0.2 x 0.2 degrees with the modal summation technique. Considering the strongest earthquake in the catalogue (Richter magnitude 7.3), the DGA maximum amplitudes are between 0.30g and 0.45g in the Santiago de Cuba region. If the maximum possible earthquake (Richter magnitude 8.0) is considered, the DGA can range between 0.6g and 0.9g in the same zone. For the first case (magnitude 7.3) the maximum velocities are between 60-84 cm/sec at periods between 1-3 seconds and the maximum displacements are between 15-28 cm at periods between 4-5 seconds.
ERIC Educational Resources Information Center
Markle, Sandra
1987-01-01
A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)
An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...
CP nonconservation in dynamically broken gauge theories
Lane, K.
1981-01-01
The recent proposal of Eichten, Lane, and Preskill for CP nonconservation in electroweak gauge theories with dynamical symmetry breaking is reviewed. Through the alignment of the vacuum with the explicit chiral-symmetry-breaking Hamiltonian, these theories provide a natural way to understand the dynamical origin of CP nonconservation. Special attention is paid to the problem of strong CP violation. Even though all vacuum angles are zero, this problem is not automatically avoided. In the absence of strong CP violation, the neutron electric dipole moment is expected to be 10^-24 to 10^-26 e-cm. A new class of models is proposed in which both strong CP violation and large |ΔS| = 2 effects may be avoided. In these models, |ΔC| = 2 processes such as D⁰-D̄⁰ mixing may be large enough to observe.
Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.
2011-01-01
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355
Human casualties in earthquakes: modelling and mitigation
Spence, R.J.S.; So, E.K.M.
2011-01-01
Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.
Probabilistic earthquake location and 3-D velocity models in routine earthquake location
NASA Astrophysics Data System (ADS)
Lomax, A.; Husen, S.
2003-12-01
Earthquake monitoring agencies, such as local networks or CTBTO, are faced with the dilemma of providing routine earthquake locations in near real-time with high precision and meaningful uncertainty information. Traditionally, routine earthquake locations are obtained from linearized inversion using layered seismic velocity models. This approach is fast and simple. However, uncertainties derived from a linear approximation to a set of non-linear equations can be imprecise, unreliable, or even misleading. In addition, 1-D velocity models are a poor approximation to real Earth structure in tectonically complex regions. In this paper, we discuss the routine location of earthquakes in near real-time with high precision using non-linear, probabilistic location methods and 3-D velocity models. The combination of non-linear, global search algorithms with probabilistic earthquake location provides a fast and reliable tool for earthquake location that can be used with any kind of velocity model. The probabilistic solution to the earthquake location includes a complete description of location uncertainties, which may be irregular and multimodal. We present applications of this approach to determine seismicity in Switzerland and in Yellowstone National Park, WY. Comparing our earthquake locations to earthquake locations obtained using linearized inversion and 1-D velocity models clearly demonstrates the advantages of probabilistic earthquake location and 3-D velocity models. For example, the more complete and reliable uncertainty information of non-linear, probabilistic earthquake location greatly facilitates the identification of poorly constrained hypocenters. Such events are often not identified in linearized earthquake location, since the location uncertainties are determined with a simplified, localized and approximate Gaussian statistic.
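The contrast with linearized location can be illustrated by the core of any non-linear, global-search locator: evaluate a probabilistic misfit over the whole search volume instead of iterating from a starting point. The sketch below assumes a toy 2-D geometry, a uniform velocity, and noise-free picks, none of which come from the paper (which uses 3-D models).

```python
import math

# Hypothetical station geometry and a uniform velocity; a constant
# 5 km/s stands in for the paper's 3-D models, for illustration only.
stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]
v = 5.0          # km/s
sigma = 0.1      # assumed arrival-time uncertainty, s
true_src = (18.0, 31.0)

def travel_time(src, sta):
    return math.dist(src, sta) / v

observed = [travel_time(true_src, s) for s in stations]

# Global grid search: evaluate the (unnormalised) likelihood everywhere,
# rather than linearising around a trial hypocenter.
best, best_like = None, -1.0
for xi in range(51):
    for yi in range(51):
        src = (float(xi), float(yi))
        misfit = sum((travel_time(src, s) - t) ** 2
                     for s, t in zip(stations, observed))
        like = math.exp(-misfit / (2 * sigma ** 2))
        if like > best_like:
            best, best_like = src, like

print(best)  # (18.0, 31.0)
```

In a real probabilistic locator the full likelihood map, not just its maximum, is retained, which is what makes irregular or multimodal uncertainty regions visible.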
Foreshock and aftershocks in simple earthquake models.
Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R
2015-02-27
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism. PMID:25768785
Foreshock and Aftershocks in Simple Earthquake Models
NASA Astrophysics Data System (ADS)
Kazemian, J.; Tiampo, K. F.; Klein, W.; Dominguez, R.
2015-02-01
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
ERIC Educational Resources Information Center
Walter, Edward J.
1977-01-01
Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)
ERIC Educational Resources Information Center
Pakiser, Louis C.
One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
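The commonly used process the abstract criticizes, filtered white noise multiplied by an envelope function, is easy to sketch. The filter corner, damping, and envelope shape below are illustrative stand-ins, not the paper's values.

```python
import math
import random

random.seed(1)
dt, n = 0.01, 4000            # 40 s synthetic record
f0, zeta = 2.5, 0.6           # hypothetical site filter: 2.5 Hz, 60% damping

# White noise passed through a single-degree-of-freedom filter,
# implemented as a damped oscillator driven by the noise.
x, xdot, filtered = 0.0, 0.0, []
w0 = 2 * math.pi * f0
for _ in range(n):
    drive = random.gauss(0.0, 1.0)
    xddot = drive - 2 * zeta * w0 * xdot - w0 ** 2 * x
    xdot += xddot * dt        # semi-implicit Euler step (stable here)
    x += xdot * dt
    filtered.append(x)

# Multiplicative envelope: quadratic rise, constant strong phase,
# exponential decay. Shape parameters are illustrative, not fitted.
def envelope(t, t1=2.0, t2=10.0, decay=0.3):
    if t < t1:
        return (t / t1) ** 2
    if t < t2:
        return 1.0
    return math.exp(-decay * (t - t2))

accel = [envelope(i * dt) * xi for i, xi in enumerate(filtered)]
print(len(accel))  # 4000
```

Because the envelope multiplies the process after filtering, long-period content is shaped by the (short) envelope duration, which is the source of the long-period response errors the abstract describes.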
Slip complexity in earthquake fault models.
Rice, J R; Ben-Zion, Y
1996-01-01
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669
Modeling fast and slow earthquakes at various scales
IDE, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138
NASA Astrophysics Data System (ADS)
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, using past damage observations in the country. The Benouar (1994) ground-motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
Quasiperiodic Events in an Earthquake Model
Ramos, O.; Maaloey, K.J.; Altshuler, E.
2006-03-10
We introduce a modification of the Olami-Feder-Christensen earthquake model [Phys. Rev. Lett. 68, 1244 (1992)] in order to improve the resemblance with the Burridge-Knopoff mechanical model and with possible laboratory experiments. A constant and finite force continually drives the system, resulting in instantaneous relaxations. Dynamical disorder is added to the thresholds following a narrow distribution. We find quasiperiodic behavior in the avalanche time series with a period proportional to the degree of dissipation of the system. Periodicity is not as robust as criticality when the threshold force distribution widens, or when an increasing noise is introduced in the values of the dissipation.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
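The paper's central point, that an under-clustered null model inflates apparent significance, can be illustrated with a toy Monte Carlo: the same alarm set is scored against Poisson and clustered synthetic catalogs, and the clustered null shows the wider spread of hit counts. All rates and window choices below are illustrative.

```python
import random

random.seed(2)
T = 1000.0                 # catalog length, days
# Ten 10-day alarm windows, covering 10% of the catalog duration.
alarms = [(i * 100.0, i * 100.0 + 10.0) for i in range(10)]

def hits(times):
    """Count events falling inside any alarm window."""
    return sum(any(a <= t < b for a, b in alarms) for t in times)

def poisson_catalog(n):
    """Events uniformly distributed in time (Poisson-like null)."""
    return [random.uniform(0, T) for _ in range(n)]

def clustered_catalog(n_clusters, per_cluster):
    """Events bunched around randomly placed cluster centers."""
    times = []
    for _ in range(n_clusters):
        c = random.uniform(0, T)
        times += [min(T, c + random.expovariate(2.0))
                  for _ in range(per_cluster)]
    return times

# Null distributions of the hit count under each model (50 events each).
poisson_hits = [hits(poisson_catalog(50)) for _ in range(2000)]
clustered_hits = [hits(clustered_catalog(10, 5)) for _ in range(2000)]

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Clustering widens the null distribution, so the same observed hit
# count is less significant against the clustered null.
print(spread(poisson_hits) < spread(clustered_hits))  # True
```

With the wider clustered null, a given observed hit count sits at a lower percentile, which is the mechanism behind the drop in significance the study reports.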
ERIC Educational Resources Information Center
Roper, Paul J.; Roper, Jere Gerard
1974-01-01
Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)
CP nonconservation without elementary scalar fields
Eichten, E.; Lane, K.; Preskill, J.
1980-07-28
Dynamically broken gauge theories of electroweak interactions provide a natural mechanism for generating CP nonconservation. Even if all vacuum angles are unobservable, strong CP nonconservation is not automatically avoided. In the absence of strong CP nonconservation, the neutron electric dipole moment is expected to be of the order of 10^-24 e-cm.
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that was used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS) Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall earthquake rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the
Jaiswal, Kishor; Wald, David J.; Earle, Paul; Porter, Keith A.; Hearne, Mike
2011-01-01
Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
Mechanical model of an earthquake fault
NASA Technical Reports Server (NTRS)
Carlson, J. M.; Langer, J. S.
1989-01-01
The dynamic behavior of a simple mechanical model of an earthquake fault is studied. This model, introduced originally by Burridge and Knopoff (1967), consists of an elastically coupled chain of masses in contact with a moving rough surface. The present version of the model retains the full Newtonian dynamics with inertial effects and contains no externally imposed stochasticity or spatial inhomogeneity. The only nonlinear feature is a velocity-weakening stick-slip friction force between the masses and the moving surface. This system is being driven persistently toward a slipping instability and therefore exhibits noisy sequences of earthquakelike events. These events are observed in numerical simulations, and many of their features can be predicted analytically.
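A minimal numerical sketch of a Burridge-Knopoff-style chain follows. This is not the Carlson-Langer parametrization: the spring constants, plate speed, and the velocity-weakening friction form f0/(1 + alpha*v) are illustrative choices, and slip is simplified to be one-sided.

```python
import numpy as np


def bk_step(x, v, t, dt, kc=1.0, kp=1.0, vp=0.05, f0=1.0, alpha=2.0):
    """One explicit Euler step of a 1-D slider-block chain: blocks at
    positions x are coupled by springs kc (periodic chain), dragged through
    leaf springs kp by a plate moving at speed vp, and feel a
    velocity-weakening kinetic friction f0/(1 + alpha*v) while sliding."""
    elastic = kc * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1)) + kp * (vp * t - x)
    friction = np.where(v > 0.0, f0 / (1.0 + alpha * v), 0.0)
    a = elastic - friction
    stuck = (v <= 0.0) & (np.abs(elastic) <= f0)  # static friction holds
    a[stuck] = 0.0
    v_new = np.maximum(v + dt * a, 0.0)           # one-sided slip
    return x + dt * v, v_new


def simulate(n=32, steps=5000, dt=0.01):
    """Drive the chain and count time steps during which any block slides."""
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.01, n)   # small heterogeneity to seed events
    v = np.zeros(n)
    sliding_steps = 0
    for i in range(steps):
        x, v = bk_step(x, v, i * dt, dt)
        sliding_steps += int((v > 0.0).any())
    return x, sliding_steps
```

As the plate loads the leaf springs past the static threshold, blocks unstick and slide in bursts, the "earthquakelike events" of the abstract.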
Conservative perturbation theory for nonconservative systems
NASA Astrophysics Data System (ADS)
Shah, Tirth; Chattopadhyay, Rohitashwa; Vaidya, Kedar; Chakraborty, Sagar
2015-12-01
In this paper, we show how to use canonical perturbation theory for dissipative dynamical systems capable of showing limit-cycle oscillations. Thus, our work surmounts the hitherto perceived barrier for canonical perturbation theory that it can be applied only to a class of conservative systems, viz., Hamiltonian systems. In the process, we also find a Hamiltonian structure for an important subset of Liénard systems, a paradigmatic class for modeling isolated and asymptotic oscillatory states. We discuss the possibility of extending our method to encompass an even wider range of nonconservative systems.
Strain-softening instability model for the san fernando earthquake
Stuart, W.D.
1979-01-01
Changes in the ground elevation observed before and immediately after the 1971 San Fernando, California, earthquake are consistent with a theoretical model in which fault zone rocks are strain-softening after peak stress. The model implies that the slip rate of the fault increased to about 0.1 meter per year near the focus before the earthquake.
Classical mechanics of nonconservative systems.
Galley, Chad R
2013-04-26
Hamilton's principle of stationary action lies at the foundation of theoretical physics and is applied in many other disciplines from pure mathematics to economics. Despite its utility, Hamilton's principle has a subtle pitfall that often goes unnoticed in physics: it is formulated as a boundary value problem in time but is used to derive equations of motion that are solved with initial data. This subtlety can have undesirable effects. I present a formulation of Hamilton's principle that is compatible with initial value problems. Remarkably, this leads to a natural formulation for the Lagrangian and Hamiltonian dynamics of generic nonconservative systems, thereby filling a long-standing gap in classical mechanics. Thus, dissipative effects, for example, can be studied with new tools that may have applications in a variety of disciplines. The new formalism is demonstrated by two examples of nonconservative systems: an object moving in a fluid with viscous drag forces and a harmonic oscillator coupled to a dissipative environment. PMID:23679733
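The doubled-variable construction can be sketched for the damped harmonic oscillator example mentioned in the abstract. The general pattern below follows the paper; the specific coupling K chosen here is our illustrative reconstruction, not necessarily the paper's exact expression.

```latex
% Double each degree of freedom, q \to (q_1, q_2), and form the action
S[q_1, q_2] = \int_{t_i}^{t_f} dt \left[ L(q_1, \dot q_1) - L(q_2, \dot q_2)
              + K(q_1, q_2, \dot q_1, \dot q_2) \right],
% where K \neq 0 encodes the nonconservative forces. Vary the action with
% suitable equality conditions at t_f, then impose the "physical limit"
% q_1 = q_2 = q. For L = \tfrac{m}{2}\dot q^2 - \tfrac{k}{2} q^2 with
% linear drag, choosing (with q_- = q_1 - q_2, q_+ = \tfrac12 (q_1 + q_2))
K = -\lambda \, \dot q_+ \, q_-
% yields, in the physical limit, the dissipative equation of motion
m \ddot q + \lambda \dot q + k q = 0,
% which no ordinary single-variable Lagrangian in q alone reproduces.
```

The sign and form of K are the illustrative assumptions here; the point is that the drag force enters as the derivative of K with respect to the difference variable, evaluated in the physical limit.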
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may
NASA Astrophysics Data System (ADS)
Hovius, Niels; Marc, Odin; Meunier, Patrick
2016-04-01
Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that above a critical magnitude, earthquakes would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, had not yet been considered. A new seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We have compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters, compared with the fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes insufficiently steep or earthquake sources sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We have explored the long-term topographic contribution of earthquake sequences, with a Gutenberg-Richter distribution or with a repeating, characteristic earthquake magnitude. In these models, the seismogenic layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently dipping earthquakes). Our results indicate that earthquakes probably have a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a
NASA Technical Reports Server (NTRS)
April, G. C.; Liu, H. A.
1975-01-01
Total coliform group bacteria were selected to expand the mathematical modeling capabilities of the hydrodynamic and salinity models to understand their relationship to commercial fishing ventures within bay waters and to gain a clear insight into the effect that rivers draining into the bay have on water quality conditions. Parametric observations revealed that temperature factors and river flow rate have a pronounced effect on the concentration profiles, while wind conditions showed only slight effects. An examination of coliform group loading concentrations at constant river flow rates and temperature shows these loading changes have an appreciable influence on total coliform distribution within Mobile Bay.
NASA Astrophysics Data System (ADS)
Evje, Steinar; Wang, Wenjun; Wen, Huanyao
2016-09-01
In this paper, we consider a compressible two-fluid model with constant viscosity coefficients and unequal pressure functions P⁺ ≠ P⁻. As mentioned in the seminal work by Bresch, Desjardins, et al. (Arch Rational Mech Anal 196:599-629, 2010) for the compressible two-fluid model, where P⁺ = P⁻ (common pressure) is used and capillarity effects are accounted for in terms of a third-order derivative of density, the case of constant viscosity coefficients cannot be handled in their settings. Besides, their analysis relies on a special choice for the density-dependent viscosity [refer also to another reference (Commun Math Phys 309:737-755, 2012) by Bresch, Huang and Li for a study of the same model in one dimension but without capillarity effects]. In this work, we obtain the global solution and its optimal decay rate (in time) with constant viscosity coefficients and some smallness assumptions. In particular, capillary pressure is taken into account in the sense that ΔP = P⁺ − P⁻ = f ≠ 0, where the difference function f is assumed to be a strictly decreasing function near the equilibrium relative to the fluid corresponding to P⁻. This assumption plays a key role in the analysis and appears to have an essential stabilization effect on the model in question.
First Results of the Regional Earthquake Likelihood Models Experiment
Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.
2010-01-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U. S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).
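Forecast-versus-observation comparisons of the kind RELM/CSEP performs include a Poisson number test ("N-test") on the total earthquake count. The sketch below follows that idea under the assumption that the forecast count is Poisson-distributed; the two-quantile form mirrors common CSEP usage, but details of the official tests may differ.

```python
from math import exp


def poisson_cdf(n, lam):
    """P(N <= n) for N ~ Poisson(lam), by direct summation of the pmf."""
    if n < 0:
        return 0.0
    term = exp(-lam)   # pmf at k = 0
    total = term
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total


def n_test(n_obs, lam):
    """Number test: probabilities of observing at least / at most n_obs
    events under a forecast whose expected count is lam."""
    delta1 = 1.0 - poisson_cdf(n_obs - 1, lam)  # P(N >= n_obs)
    delta2 = poisson_cdf(n_obs, lam)            # P(N <= n_obs)
    return delta1, delta2


# A forecast is inconsistent with the observation (at a chosen level) when
# either quantile is very small, i.e. the count sits in an extreme tail.
d1, d2 = n_test(12, lam=8.0)
```

The same machinery extends bin-by-bin to spatial-magnitude likelihood tests, which is where the model-ranking results in the abstract come from.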
Radiation reaction as a non-conservative force
NASA Astrophysics Data System (ADS)
Aashish, Sandeep; Haque, Asrarul
2016-09-01
We study a system of a finite size charged particle interacting with a radiation field by exploiting Hamilton’s principle for a non-conservative system recently introduced by Galley [1]. This formulation leads to the equation of motion of the charged particle that turns out to be the same as that obtained by Jackson [2]. We show that the radiation reaction stems from the non-conservative part of the effective action for a charged particle. We notice that a charge interacting with a radiation field modeled as a heat bath affords a way to justify that the radiation reaction is a non-conservative force. The topic is suitable for graduate courses on advanced electrodynamics and classical theory of fields.
Tullis, T E
1996-04-30
The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668
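The constitutive equations referred to above are commonly the rate- and state-dependent friction laws of Dieterich and Ruina. A minimal sketch follows; the parameter values are illustrative, not those used in the Parkfield model.

```python
import math


def rs_friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=1e-5):
    """Rate-and-state friction coefficient (Dieterich-Ruina form):
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc),
    where v is slip speed, theta a state variable (contact age), v0 a
    reference speed, and dc the characteristic slip distance."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)


def theta_aging(theta, v, dt, dc=1e-5):
    """One Euler step of the aging-law state evolution:
    d(theta)/dt = 1 - v*theta/dc."""
    return theta + dt * (1.0 - v * theta / dc)


def steady_state_friction(v, mu0=0.6, a=0.01, b=0.015, v0=1e-6):
    """At steady state theta = dc/v, so mu_ss = mu0 + (a - b)*ln(v/v0);
    a - b < 0 gives the velocity weakening needed for stick-slip."""
    return mu0 + (a - b) * math.log(v / v0)
```

With b > a, steady-state friction falls as slip accelerates, which is what lets models of this kind nucleate the accelerating premonitory slip discussed in the abstract.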
Parity nonconservation in hydrogen.
Dunford, R. W.; Holt, R. J.
2011-01-01
We discuss the prospects for parity violation experiments in atomic hydrogen and deuterium to contribute to testing the Standard Model (SM). We find that, if parity experiments in hydrogen can be done, they remain highly desirable because there is negligible atomic-physics uncertainty and low energy tests of weak neutral current interactions are needed to probe for new physics beyond the SM. Analysis of a generic APV experiment in deuterium indicates that a 0.3% measurement of C₁D requires development of a slow (77 K) metastable beam of ≈5 × 10¹⁴ D(2S) s⁻¹ per hyperfine component. The advent of UV radiation from free electron laser (FEL) technology could allow production of such a beam.
Hutchings, L.
1992-01-01
This report outlines a method of using empirical Green's functions in an earthquake simulation program EMPSYN that provides realistic seismograms from potential earthquakes. The theory for using empirical Green's functions is developed, implementation of the theory in EMPSYN is outlined, and an example is presented where EMPSYN is used to synthesize observed records from the 1971 San Fernando earthquake. To provide useful synthetic ground motion data from potential earthquakes, synthetic seismograms should model frequencies from 0.5 to 15.0 Hz, the full wave-train energy distribution, and absolute amplitudes. However, high-frequency arrivals are stochastically dependent upon the inhomogeneous geologic structure and irregular fault rupture. The fault rupture can be modeled, but the stochastic nature of faulting is largely an unknown factor in the earthquake process. The effect of inhomogeneous geology can readily be incorporated into synthetic seismograms by using small earthquakes to obtain empirical Green's functions. Small earthquakes with source corner frequencies higher than the site recording limit f_max, or much higher than the frequency of interest, effectively have impulsive point-fault dislocation sources, and their recordings are used as empirical Green's functions. Since empirical Green's functions are actual recordings at a site, they include the effects on seismic waves from all geologic inhomogeneities and include all recordable frequencies, absolute amplitudes, and all phases. They scale only in amplitude with differences in seismic moment. They can provide nearly the exact integrand to the representation relation. Furthermore, since their source events have spatial extent, they can be summed to simulate fault rupture without loss of information, thereby potentially computing the exact representation relation for an extended source earthquake.
ERIC Educational Resources Information Center
Hernandez, Hildo
2000-01-01
Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling the emergency. The problem of loose asbestos is addressed. (GR)
FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.
Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.
1985-01-01
The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
An earthquake model with interacting asperities
NASA Astrophysics Data System (ADS)
Johnson, Lane R.
2010-09-01
A model is presented that treats an earthquake as the failure of asperities in a manner consistent with modern concepts of sliding friction. The mathematical description of the model includes results for elliptical and circular asperities, oblique tectonic slip, static and dynamic solutions for slip on the fault, stress intensity factors, strain energy and second-order moment tensor. The equations that control interaction of asperities are derived and solved both in a quasi-static tectonic mode when none of the asperities are in the process of failing and a dynamic failure mode when asperities are failing and sending out slip pulses that can trigger failure of additional asperities. The model produces moment rate functions for each asperity failure so that, given an appropriate Green function, the radiation of elastic waves is a straightforward calculation. The model explains an observed scaling relationship between repeat time and seismic moment for repeating seismic events and is consistent with the properties of pseudo-tachylites treated as fossil asperities. Properties of the model are explored with simulations of seismic activity that results when a section of the fault containing a spatial distribution of asperities is subjected to tectonic slip. The simulations show that the failure of a group of strongly interacting asperities satisfies the same scaling relationship as the failure of individual asperities, and that realistic distributions of asperities on a fault plane lead to seismic activity consistent with probability estimates for the interaction of asperities and predicted values of the Gutenberg-Richter a and b values. General features of the model are the exterior crack solution as a theoretical foundation, a heterogeneous state of stress and strength on the fault, dynamic effects controlled by propagating slip pulses and radiated elastic waves with a broad frequency band.
Gershenzon, N I; Bykov, V G; Bambakidis, G
2009-05-01
The one-dimensional Frenkel-Kontorova (FK) model, well known from the theory of dislocations in crystal materials, is applied to the simulation of the process of nonelastic stress propagation along transform faults. Dynamic parameters of plate boundary earthquakes as well as slow earthquakes and afterslip are quantitatively described, including propagation velocity along the strike, plate boundary velocity during and after the strike, stress drop, displacement, extent of the rupture zone, and spatiotemporal distribution of stress and strain. The three fundamental speeds of plate movement, earthquake migration, and seismic waves are shown to be connected in the framework of the continuum FK model. The magnitude of the strain wave velocity is a strong (almost exponential) function of accumulated stress or strain. It changes from a few km/s during earthquakes to a few dozen km per day, month, or year during afterslip and interearthquake periods. Results of the earthquake parameter calculation based on real data are in reasonable agreement with measured values. The distributions of aftershocks in this model are consistent with the Omori law for the temporal distribution and a 1/r decay for the spatial distribution. PMID:19518576
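In the continuum limit, the FK model reduces to the sine-Gordon equation, whose kink solution is the propagating strain pulse the abstract describes. The check below verifies numerically that the static kink u(x) = 4·arctan(exp(x)) satisfies the dimensionless static equation u'' = sin(u); the normalization is ours.

```python
import numpy as np


def kink(x, v=0.0):
    """Sine-Gordon kink, the continuum FK soliton; v is the propagation
    speed in units of the characteristic wave speed (|v| < 1), entering
    through the Lorentz-like contraction factor."""
    gamma = 1.0 / np.sqrt(1.0 - v ** 2)
    return 4.0 * np.arctan(np.exp(gamma * x))


# Finite-difference check that the static kink solves u'' = sin(u):
x = np.linspace(-10.0, 10.0, 2001)
u = kink(x)
h = x[1] - x[0]
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2   # second derivative
residual = np.max(np.abs(upp - np.sin(u[1:-1])))
```

The kink interpolates between the substrate minima 0 and 2π, i.e. it carries exactly one lattice period of slip across the fault, which is why kink propagation serves as a model for rupture-front migration.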
Earthquake research: Premonitory models and the physics of crustal distortion
NASA Technical Reports Server (NTRS)
Whitcomb, J. H.
1981-01-01
Seismic, gravity, and electrical resistivity data, believed to be most relevent to development of earthquake premonitory models of the crust, are presented. Magnetotellurics (MT) are discussed. Radon investigations are reviewed.
Earthquake Forecasting in Northeast India using Energy Blocked Model
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, D. K.
2009-12-01
In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) for the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong plateau earthquake (Mw = 8.7), the 1934 Bihar-Nepal earthquake (Mw = 8.3) and the 1950 Upper Assam earthquake (Mw = 8.7), signifies the possibility of great earthquakes in the future in this region. The regional seismicity map for the study region is prepared by plotting the earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by a subduction zone, developed at the junction of the Indian Plate and the Eurasian Plate. It shows dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis. This zone was affected by great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults like the Dauki fault and exhibits its own style of prominent tectonic features. The seismicity and hazard potential of the Shillong Plateau is distinct from that of the Himalayan thrust. Using the energy blocked model of Tsuboi, the forecasting of major earthquakes for each source zone is estimated. As per the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be utilized for the forecasting of major earthquakes
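Energy-budget accounting of this kind typically converts magnitudes to energy with the Gutenberg-Richter relation log₁₀E = 1.5·Ms + 4.8 (E in joules) and compares cumulative release against an assumed uniform supply. The sketch below follows that recipe; the supply rate and the mini-catalog are hypothetical, not the NE India data or Tsuboi's values.

```python
def seismic_energy_joules(m):
    """Gutenberg-Richter energy-magnitude relation: log10(E) = 1.5*M + 4.8."""
    return 10.0 ** (1.5 * m + 4.8)


def energy_blocked(catalog, supply_rate, t0, t1):
    """Difference between an assumed uniform energy supply (J/yr) and the
    cumulative energy released over [t0, t1]; catalog is (year, magnitude)
    pairs. A large positive value suggests accumulated, unreleased energy."""
    released = sum(seismic_energy_joules(m)
                   for yr, m in catalog if t0 <= yr <= t1)
    return supply_rate * (t1 - t0) - released


# Hypothetical mini-catalog (year, M), for illustration only:
cat = [(1897, 8.7), (1934, 8.3), (1950, 8.7)]
blocked = energy_blocked(cat, supply_rate=2e17, t0=1897, t1=2007)
```

Note the steepness of the relation: each unit of magnitude multiplies the released energy by 10^1.5 ≈ 31.6, so a single great earthquake dominates the release budget of a century of moderate events.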
The Nonconservation of Potential Vorticity by a Dynamical Core
NASA Astrophysics Data System (ADS)
Saffin, Leo; Methven, John; Gray, Sue
2016-04-01
Numerical models of the atmosphere combine a dynamical core, which approximates solutions to the adiabatic, frictionless governing equations for fluid dynamics, with tendencies arising from the parametrization of other physical processes. Since potential vorticity (PV) is conserved following fluid flow in adiabatic, frictionless circumstances, it is possible to isolate the effects of non-conservative processes by accumulating PV changes in an air-mass relative framework. This "PV tracer technique" is used to accumulate separately the effects on PV of each of the different non-conservative processes represented in a numerical model of the atmosphere. Dynamical cores are not exactly conservative because they introduce, explicitly or implicitly, some level of dissipation and adjustment of prognostic model variables which acts to modify PV. Here, the PV tracer technique is extended to diagnose the cumulative effect of the non-conservation of PV by a dynamical core and its characteristics relative to the PV modification by parametrized physical processes. Quantification using the Met Office Unified Model reveals that the magnitude of the non-conservation of PV by the dynamical core is comparable to those from physical processes. Moreover, the residual of the PV budget, when tracing the effects of the dynamical core and physical processes, is at least an order of magnitude smaller than the PV tracers associated with the most active physical processes. The implication of this work is that the non-conservation of PV by a dynamical core can be assessed in case studies with a full suite of physics parametrizations and directly compared with the PV modification by parametrized physical processes.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
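The BPT distribution is the inverse Gaussian with mean μ and shape parameter λ = μ/α², so its density, CDF, and hazard have closed forms. A minimal sketch using the standard inverse-Gaussian CDF expression:

```python
from math import erf, exp, pi, sqrt


def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))


def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean mu and
    aperiodicity (coefficient of variation) alpha."""
    return sqrt(mu / (2.0 * pi * alpha ** 2 * t ** 3)) * \
        exp(-(t - mu) ** 2 / (2.0 * mu * alpha ** 2 * t))


def bpt_cdf(t, mu, alpha):
    """Closed-form inverse-Gaussian CDF with shape lam = mu / alpha**2."""
    lam = mu / alpha ** 2
    a = sqrt(lam / t)
    return norm_cdf(a * (t / mu - 1.0)) + \
        exp(2.0 * lam / mu) * norm_cdf(-a * (t / mu + 1.0))


def bpt_hazard(t, mu, alpha):
    """Instantaneous failure rate of survivors, f(t) / (1 - F(t))."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))
```

With μ = 1 and the generic α = 0.5, the hazard is essentially zero just after an event and close to 2/μ near the mean recurrence time, matching the properties listed in the abstract.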
Scaling and Nucleation in Models of Earthquake Faults
Klein, W.; Ferguson, C.; Rundle, J.
1997-05-01
We present an analysis of a slider block model of an earthquake fault which indicates the presence of metastable states ending in spinodals. We identify four parameters whose values determine the size and statistical distribution of the "earthquake" events. For values of these parameters consistent with real faults we obtain scaling of events associated not with critical point fluctuations but with the presence of nucleation events. © 1997 The American Physical Society.
NASA Astrophysics Data System (ADS)
Razafindrakoto, Hoby N. T.; Mai, P. Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran K. S.
2015-07-01
Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
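An MDS configuration like the one described can be computed from any symmetric matrix of pairwise model distances. The sketch below implements classical (Torgerson) MDS via double centering; the paper's specific normalized squared and grey-scale metrics are not reproduced here, so the toy distance matrix stands in for them.

```python
import numpy as np


def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed an (n, n) symmetric distance
    matrix d into k dimensions. Double centering turns squared distances
    into a Gram matrix whose top-k eigenvectors give the coordinates."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j              # Gram matrix of centered points
    w, v = np.linalg.eigh(b)                 # ascending eigenvalues
    order = np.argsort(w)[::-1][:k]          # take the k largest
    lam = np.clip(w[order], 0.0, None)       # guard tiny negative eigenvalues
    return v[:, order] * np.sqrt(lam)


# Three toy "rupture models" that differ along one axis; their pairwise
# distances come from 1-D positions, standing in for inter-model metrics.
pos = np.array([0.0, 1.0, 3.0])
d = np.abs(pos[:, None] - pos[None, :])
xy = classical_mds(d, k=2)
```

For distances that are exactly Euclidean, as in this toy case, the embedded point cloud reproduces the input distances; for the paper's image-based metrics the configuration is only an approximation, and its spread is what quantifies model variability.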
Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data
NASA Astrophysics Data System (ADS)
Funning, G. J.; Cockett, R.
2012-12-01
InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median
Analysing earthquake slip models with the spatial prediction comparison test
NASA Astrophysics Data System (ADS)
Zhang, Ling; Mai, P. Martin; Thingbaijam, Kiran K. S.; Razafindrakoto, Hoby N. T.; Genton, Marc G.
2015-01-01
Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (`model') and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
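The core quantity in an SPCT-style comparison is the field of loss differentials between two competing models measured against a reference. A minimal sketch, assuming a pointwise squared-error loss (the actual test also corrects the variance of the mean differential for spatial correlation, which is omitted here):

```python
import numpy as np

def loss_field(model, ref):
    """Pointwise squared-error loss between a slip model and the reference."""
    return (model - ref) ** 2

def mean_loss_differential(model_a, model_b, ref):
    """Average difference of the two loss fields; negative favours model_a.
    The naive standard error below ignores spatial correlation, which the
    full SPCT accounts for."""
    d = loss_field(model_a, ref) - loss_field(model_b, ref)
    return d.mean(), d.std(ddof=1) / np.sqrt(d.size)

rng = np.random.default_rng(2)
ref = rng.standard_normal((32, 32))
close = ref + 0.1 * rng.standard_normal((32, 32))   # model near the reference
far = ref + 1.0 * rng.standard_normal((32, 32))     # model far from it
dbar, se = mean_loss_differential(close, far, ref)  # dbar < 0: `close` wins
```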
Retrospective tests of hybrid operational earthquake forecasting models for Canterbury
NASA Astrophysics Data System (ADS)
Rhoades, D. A.; Liukis, M.; Christophersen, A.; Gerstenberger, M. C.
2016-01-01
The Canterbury, New Zealand, earthquake sequence, which began in September 2010, occurred in a region of low crustal deformation and previously low seismicity. Because the ensuing seismicity in the region is likely to remain above previous levels for many years, a hybrid operational earthquake forecasting model for Canterbury was developed to inform decisions on building standards and urban planning for the rebuilding of Christchurch. The model estimates occurrence probabilities for magnitudes M ≥ 5.0 in the Canterbury region for each of the next 50 yr. It combines two short-term, two medium-term and four long-term forecasting models. The weight accorded to each individual model in the operational hybrid was determined by an expert elicitation process. A retrospective test of the operational hybrid model and of an earlier informally developed hybrid model in the whole New Zealand region has been carried out. The individual and hybrid models were installed in the New Zealand Earthquake Forecast Testing Centre and used to make retrospective annual forecasts of earthquakes with magnitude M > 4.95 from 1986 on, for time-lags up to 25 yr. All models underpredict the number of earthquakes due to an abnormally large number of earthquakes in the testing period since 2008 compared to those in the learning period. However, the operational hybrid model is more informative than any of the individual time-varying models for nearly all time-lags. Its information gain relative to a reference model of least information decreases as the time-lag increases, becoming zero at a time-lag of about 20 yr. An optimal hybrid model with the same mathematical form as the operational hybrid model was computed for each time-lag from the 26-yr test period. The time-varying component of the optimal hybrid is dominated by the medium-term models for time-lags up to 12 yr and has hardly any impact on the optimal hybrid model for greater time-lags. The optimal hybrid model is considerably more
An empirical model for global earthquake fatality estimation
Jaiswal, Kishor; Wald, David
2010-01-01
We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. [DOI: 10.1193/1.3480331]
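The fatality calculation described above (lognormal-CDF fatality rate times exposed population, summed over intensity levels) can be written in a few lines. The parameter values and the exposure table below are purely illustrative, not PAGER's fitted country coefficients:

```python
import math

def fatality_rate(intensity, theta, beta):
    """Two-parameter lognormal CDF of shaking intensity. theta and beta are
    country/region-specific; the values used below are illustrative only."""
    return 0.5 * (1.0 + math.erf(math.log(intensity / theta) / (beta * math.sqrt(2.0))))

def expected_fatalities(exposure_by_mmi, theta, beta):
    """Sum over intensity levels: exposed population times fatality rate."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure_by_mmi.items())

# Hypothetical exposure: population counts at MMI levels VI through IX.
exposure = {6.0: 500_000, 7.0: 200_000, 8.0: 50_000, 9.0: 5_000}
deaths = expected_fatalities(exposure, theta=12.0, beta=0.2)
```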
Parity nonconservation in atomic Zeeman transitions
Angstmann, E. J.; Dinh, T. H.; Flambaum, V. V.
2005-11-15
We discuss the possibility of measuring nuclear anapole moments in atomic Zeeman transitions and perform the necessary calculations. Advantages of using Zeeman transitions include variable transition frequencies and the possibility of enhancement of parity nonconservation effects.
Modeling the behavior of an earthquake base-isolated building.
Coveney, V. A.; Jamil, S.; Johnson, D. E.; Kulak, R. F.; Uras, R. A.
1997-11-26
Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.
Cowen, A R; Denney, J P
1994-04-01
On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439
Assessing a 3D smoothed seismicity model of induced earthquakes
NASA Astrophysics Data System (ADS)
Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan
2016-04-01
As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
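The space-time smoothing with extra weight on recent events can be sketched as a kernel forecast. This is a generic reconstruction, not the authors' model: the Gaussian spatial kernel, the exponential temporal weight, and all parameter values are assumptions.

```python
import numpy as np

def smoothed_rate(events, t_now, grid, sigma_km=0.2, tau_days=1.0):
    """Forecast density on grid points: Gaussian kernel in space, exponential
    down-weighting in time so recent events count more. Parameters are
    illustrative, not calibrated to Basel or Soultz-sous-Forets."""
    rate = np.zeros(len(grid))
    for (x, y, t) in events:
        w = np.exp(-(t_now - t) / tau_days)                   # temporal weight
        d2 = (grid[:, 0] - x) ** 2 + (grid[:, 1] - y) ** 2
        rate += w * np.exp(-d2 / (2.0 * sigma_km ** 2))       # spatial kernel
    return rate / rate.sum()                                  # normalize to a density

# Toy induced-seismicity catalog: (x_km, y_km, t_days).
events = [(0.0, 0.0, 0.0), (0.1, 0.0, 2.0), (0.1, 0.1, 2.5)]
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 21), np.linspace(-0.5, 0.5, 21))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = smoothed_rate(events, t_now=3.0, grid=grid)
```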
NASA Astrophysics Data System (ADS)
Ellsworth, W. L.; Matthews, M. V.; Simpson, R. W.
2001-12-01
A statistical mechanical description of elastic rebound is used to study earthquake interaction and stress transfer effects in a point process model of earthquakes. The model is a Brownian Relaxation Oscillator (BRO) in which a random walk (standard Brownian motion) is added to a steady tectonic loading to produce a stochastic load state process. Rupture occurs in this model when the load state reaches a critical value. The load state is a random variable and may be described at any point in time by its probability density. Load state evolves toward the failure threshold due to tectonic loading (drift), and diffuses due to Brownian motion (noise) according to a diffusion equation. The Brownian perturbation process formally represents the sum total of all factors, aside from tectonic loading, that govern rupture. Physically, these factors may include effects of earthquakes external to the source, aseismic loading, interaction effects within the source itself, healing, pore pressure evolution, etc. After a sufficiently long time, load state always evolves to a steady state probability density that is independent of the initial condition and completely described by the drift rate and noise scale. Earthquake interaction and stress transfer effects are modeled by an instantaneous change in the load state. A negative step reduces the probability of failure, while a positive step may either immediately trigger rupture or increase the failure probability (hazard). When the load state is far from failure, the effects are well-approximated by ``clock advances'' that shift the unperturbed hazard down or up, as appropriate for the sign of the step. However, when the load state is advanced in the earthquake cycle, the response is a sharp, temporally localized decrease or increase in hazard. Recovery of the hazard is characteristically ``Omori like'' ( ~ 1/t), which can be understood in terms of equilibrium thermodynamical considerations since state evolution is diffusion with
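The Brownian Relaxation Oscillator is simple to simulate directly: the load state drifts upward at the tectonic rate, diffuses with the noise scale, and resets on reaching the failure threshold. A minimal sketch (all parameter values are illustrative); for this first-passage process the mean recurrence interval is threshold/drift regardless of the noise scale:

```python
import numpy as np

def bro_intervals(n_events, drift=1.0, noise=0.3, threshold=1.0, dt=1e-3, seed=0):
    """Simulate a Brownian Relaxation Oscillator: load state rises at `drift`,
    diffuses with scale `noise`, and resets to 0 when it reaches `threshold`.
    Returns the inter-event (recurrence) times."""
    rng = np.random.default_rng(seed)
    intervals, state, t = [], 0.0, 0.0
    while len(intervals) < n_events:
        state += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if state >= threshold:          # rupture: record interval, reset load
            intervals.append(t)
            state, t = 0.0, 0.0
    return np.array(intervals)

times = bro_intervals(200)
mean_T = times.mean()   # expect about threshold / drift = 1.0
```

A positive step in the load state is modeled by adding a constant to `state`, which either triggers rupture immediately or shortens the expected time to failure.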
A Godunov scheme for solving hyperbolic systems in a nonconservative form
NASA Astrophysics Data System (ADS)
Zalzali, I.; Abbas, H.
2005-05-01
In this paper, we develop a Godunov scheme for solving nonconservative systems. The main idea of this method is a new type of projection, which illustrates the essential role of the numerical viscosity in determining the solution with shocks for systems in nonconservative form. We apply our study to a system modelling elasticity and observe complete agreement between the theory and the numerical results.
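To make the notion of a nonconservative product concrete, here is a first-order upwind (Godunov-type) step for the nonconservative equation u_t + a(x) u_x = 0, where a(x) varies in space so the equation cannot be written in divergence form. This is a textbook illustration, not the projection scheme of the paper:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind step for u_t + a(x) u_x = 0 (nonconservative
    form): the stencil is biased into the wind at each point."""
    un = u.copy()
    for i in range(1, len(u) - 1):
        if a[i] >= 0:
            un[i] = u[i] - a[i] * dt / dx * (u[i] - u[i - 1])
        else:
            un[i] = u[i] - a[i] * dt / dx * (u[i + 1] - u[i])
    return un

nx = 200
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
a = 0.5 + 0.4 * np.sin(2 * np.pi * x)        # spatially varying speed, > 0
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial pulse at x = 0.3
dt = 0.5 * dx / np.abs(a).max()              # CFL condition for stability
for _ in range(100):
    u = upwind_step(u, a, dx, dt)
```

Because the CFL number is below one, each update is a convex combination of neighbouring values, so the scheme is monotone: the pulse advects to the right without new extrema, and the built-in numerical viscosity selects the viscous solution, the role the paper's projection makes explicit for shocks.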
Earthquake nucleation mechanisms and periodic loading: Models, Experiments, and Observations
NASA Astrophysics Data System (ADS)
Dahmen, K.; Brinkman, B.; Tsekenis, G.; Ben-Zion, Y.; Uhl, J.
2010-12-01
The project has two main goals: (a) Improve the understanding of how earthquakes are nucleated, with specific focus on seismic response to periodic stresses (such as tidal or seasonal variations); (b) Use the results of (a) to infer the possible existence of precursory activity before large earthquakes. A number of mechanisms have been proposed for the nucleation of earthquakes, including frictional nucleation (Dieterich 1987) and fracture (Lockner 1999, Beeler 2003). We study the relation between the observed rates of triggered seismicity and the period and amplitude of cyclic loadings, and ask whether the observed seismic activity in response to periodic stresses can be used to identify the correct nucleation mechanism (or combination of mechanisms). A generalized version of the Ben-Zion and Rice model for disordered fault zones and results from related recent studies on dislocation dynamics and magnetization avalanches in slowly magnetized materials are used in the analysis (Ben-Zion et al. 2010; Dahmen et al. 2009). The analysis makes predictions for the statistics of macroscopic failure events of sheared materials in the presence of added cyclic loading, as a function of the period, amplitude, and noise in the system. The employed tools include analytical methods from statistical physics, the theory of phase transitions, and numerical simulations. The results will be compared to laboratory experiments and observations. References: Beeler, N.M., D.A. Lockner (2003). Why earthquakes correlate weakly with the solid Earth tides: effects of periodic stress on the rate and probability of earthquake occurrence. J. Geophys. Res.-Solid Earth 108, 2391-2407. Ben-Zion, Y. (2008). Collective Behavior of Earthquakes and Faults: Continuum-Discrete Transitions, Evolutionary Changes and Corresponding Dynamic Regimes, Rev. Geophysics, 46, RG4006, doi:10.1029/2008RG000260. Ben-Zion, Y., Dahmen, K. A. and J. T. Uhl (2010). A unifying phase diagram for the dynamics of sheared solids
Numerical modeling of shallow fault creep triggered by nearby earthquakes
NASA Astrophysics Data System (ADS)
Wei, M.; Liu, Y.; McGuire, J. J.
2011-12-01
The 2010 El Mayor-Cucapah Mw 7.2 earthquake is the largest earthquake to strike southern California in the last 18 years. It triggered shallow fault creep on many faults in the Salton Trough, Southern California, making it at least the 8th time in the last 42 years that a local or regional earthquake has done so. However, the triggering mechanism of fault creep and its implications for seismic hazard and fault mechanics are still poorly understood. For example, what determines the relative importance of static triggering and dynamic triggering of fault creep? What can we learn about the local frictional properties and normal stress from the triggering of fault creep? To understand the triggering mechanism and constrain fault frictional properties, we simulate the triggered fault creep on the Superstition Hills Fault (SHF), Salton Trough, Southern California. We use realistic static and dynamic shaking due to nearby earthquakes as stress perturbations to a 2D (in a 3D medium) planar fault model with rate-and-state frictional property variations both in depth and along strike. Unlike many previous studies, we focus on the simulation of triggered shallow fault creep instead of earthquakes. Our fault model can reproduce the triggering process by static, dynamic, and combined stress perturbations. Preliminary results show that the magnitude of the perturbation relative to the original stress level is an important parameter. In the static case, a perturbation of 1% of normal stress triggers delayed fault creep, whereas 10% of normal stress generates instantaneous creep. In the dynamic case, a factor-of-two change in the magnitude of the perturbation can result in differences of several orders of magnitude in the triggered creep. We explore combined triggering with different ratios of static and dynamic perturbation. The timing of triggering within an earthquake cycle is also important. With measurements of triggered creep on the SHF, we constrain the local stress level and frictional parameters, which
Dynamic models of an earthquake and tsunami offshore Ventura, California
Kenny J. Ryan; Geist, Eric L.; Barall, Michael; David D. Oglesby
2015-01-01
The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses final seafloor displacement from the rupture model as initial conditions to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by the State of California's current reference inundation line.
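The quoted slip and magnitude are consistent with the standard moment relations, which makes a quick check possible. The rigidity (30 GPa) and fault area (~2000 km^2) below are assumed typical values, not numbers taken from the paper:

```python
import math

def moment_magnitude(area_m2, mean_slip_m, mu=3.0e10):
    """Moment magnitude from seismic moment M0 = mu * A * D
    (Hanks-Kanamori scale). mu = 30 GPa is an assumed crustal rigidity."""
    m0 = mu * area_m2 * mean_slip_m              # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 7.4 m of mean slip over a fault area of order 2000 km^2 lands near Mw 7.7,
# the magnitude reported in the abstract.
mw = moment_magnitude(area_m2=2000e6, mean_slip_m=7.4)
```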
Slimplectic Integrators: Variational Integrators for Nonconservative systems
NASA Astrophysics Data System (ADS)
Tsang, David
2016-05-01
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. Here we present the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to a newly developed principle of stationary nonconservative action (Galley, 2013, Galley et al 2014). As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
Slimplectic Integrators: Variational Integrators for Nonconservative systems
NASA Astrophysics Data System (ADS)
Tsang, David
2016-01-01
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the "slimplectic" integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting-Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
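The damped harmonic oscillator, one of the example systems above, makes a compact benchmark for any nonconservative integrator because its trajectory and energy decay are known in closed form. The sketch below is that analytic reference solution, not the slimplectic algorithm itself; an integrator that tracks the Noether current well should reproduce this energy decay over long times:

```python
import math

def damped_oscillator(t, x0=1.0, v0=0.0, omega0=1.0, gamma=0.1):
    """Exact underdamped solution of x'' + 2*gamma*x' + omega0^2 * x = 0,
    a benchmark trajectory for testing nonconservative integrators."""
    wd = math.sqrt(omega0 ** 2 - gamma ** 2)     # damped frequency
    c1 = x0
    c2 = (v0 + gamma * x0) / wd
    ex = math.exp(-gamma * t)
    x = ex * (c1 * math.cos(wd * t) + c2 * math.sin(wd * t))
    v = ex * ((-gamma * c1 + wd * c2) * math.cos(wd * t)
              - (gamma * c2 + wd * c1) * math.sin(wd * t))
    return x, v

def energy(x, v, omega0=1.0):
    """Mechanical energy, the quantity whose decay an integrator must track."""
    return 0.5 * v ** 2 + 0.5 * omega0 ** 2 * x ** 2

e0 = energy(*damped_oscillator(0.0))
x1, v1 = damped_oscillator(10.0)
e1 = energy(x1, v1)     # decays roughly like exp(-2*gamma*t) on average
```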
Stochastic-Dynamic Earthquake Models and Tsunami Generation
NASA Astrophysics Data System (ADS)
Oglesby, D. D.; Geist, E. L.
2013-12-01
Dynamic models are now understood to provide physically plausible faulting scenarios for ground motion prediction, but their use in tsunami hazard analysis is in its infancy. Typical tsunami model generation methods rely on kinematic or dislocation models of the earthquake source, in which the seismic moment, rupture path, and slip distribution are assumed a priori, typically based on models of prior earthquakes, aftershock distributions, and/or some sort of stochastic slip model. However, such models are not guaranteed to be consistent with any physically plausible faulting scenario and may span a range of parameter space far outside of what is physically realistic. In contrast, in dynamic models the earthquake rupture and slip process (including the final size of the earthquake, the spatiotemporal evolution of slip, and the rupture path on complex fault geometry) are calculated results of the models. Utilizing the finite element method, a self-affine stochastic stress field, and a shallow-water hydrodynamic code, we calculate a suite of dynamic slip models and near-source tsunamis from a megathrust/splay fault system motivated by the geometry in the Nankai region of Japan. Different stress realizations produce different spatial patterns of slip, including different partitioning between the megathrust and splay segments. Because the final moment from different stress realizations can differ, and because partitioning of slip between fault segments has a first-order effect on the surface deformation and tsunami generation, the modeled near-source tsunamis are also highly variable. Models whose stress amplitudes have been scaled to produce equivalent seismic moments (but with the same spatial variability and relative fault strength as the previous unscaled models) have less variability in tsunami amplitude in regions far from the fault, but greater variability in amplitude in the near-fault region.
Hybrid Modelling of the Economical Consequences of Extreme Magnitude Earthquakes
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.
2013-05-01
A hybrid modelling methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economical consequences (PEDEC). The hybrid modelling uses 3D seismic wave propagation (3DWP) combined with empirical Green function (EGF) and Neural Network (NN) techniques in order to estimate the seismic hazard (PEIs) of extreme (plausible) earthquake scenarios corresponding to synthetic seismic sources. The 3DWP modelling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK. The PEDEC are computed by using appropriate vulnerability functions combined with the scenario intensity samples and Monte Carlo simulation. The methodology is validated for Mw 8 subduction events, and examples of its application are shown for the estimation of the hazard and the economical consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources on the Mexican Pacific Coast. The results obtained with the proposed methodology, such as the PEDECs in terms of the joint event "damage cost (C) - maximum ground intensities" and the conditional return period of C given that the maximum intensity exceeds a certain value, could be used by decision makers to allocate funds or implement policies to mitigate the impact associated with the plausible occurrence of future extreme magnitude earthquakes.
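The PEDEC step (vulnerability functions applied to sampled intensities, then Monte Carlo estimation of exceedance probabilities) can be sketched generically. The lognormal intensity distribution, the toy vulnerability curve, and the exposure value below are all assumptions for illustration, not the paper's calibrated inputs:

```python
import numpy as np

def cost_exceedance(intensity_samples, vulnerability, exposure, cost_levels):
    """Monte Carlo estimate of P(damage cost > c): each sampled intensity is
    mapped to a damage ratio by the vulnerability function and scaled by the
    exposed value; exceedance is the fraction of samples above each level."""
    costs = exposure * vulnerability(intensity_samples)
    return np.array([(costs > c).mean() for c in cost_levels])

rng = np.random.default_rng(1)
# Hypothetical scenario intensities (PGA in g) and a toy vulnerability curve.
pga = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=100_000)
vuln = lambda a: 1.0 - np.exp(-2.0 * a)     # damage ratio in [0, 1)
exposure = 1.0e9                             # exposed value, currency units
pe = cost_exceedance(pga, vuln, exposure, cost_levels=[1e8, 3e8, 5e8, 7e8])
```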
Non-conservative mass transfers in Algols
NASA Astrophysics Data System (ADS)
Erdem, A.; Öztürk, O.
2014-06-01
We applied a revised model for non-conservative mass transfer in semi-detached binaries to 18 Algol-type binaries showing orbital period increase or decrease in their parabolic O-C diagrams. The combined effect of mass transfer and magnetic braking due to stellar wind was considered when interpreting the orbital period changes of these 18 Algols. Mass transfer was found to be the dominant mechanism for the increase in orbital period of 10 Algols (AM Aur, RX Cas, DK Peg, RV Per, WX Sgr, RZ Sct, BS Sct, W Ser, BD Vir, XZ Vul) while magnetic braking appears to be the responsible mechanism for the decrease in that of 8 Algols (FK Aql, S Cnc, RU Cnc, TU Cnc, SX Cas, TW Cas, V548 Cyg, RY Gem). The peculiar behaviour of orbital period changes in three W Ser-type binary systems (W Ser, itself a prototype, RX Cas and SX Cas) is discussed. The empirical linear relation between orbital period (P) and its rate of change (dP/dt) was also revised.
Numerical Modeling and Forecasting of Strong Sumatra Earthquakes
NASA Astrophysics Data System (ADS)
Xing, H. L.; Yin, C.
2007-12-01
ESyS-Crustal, a finite element based computational model and software package, has been developed and applied to simulate complex nonlinear interacting fault systems, with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results using the developed software clearly indicate that the shallow part of the subduction zone in the Sumatra region between latitudes 6S and 2N has been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event resulting in the infamous Boxing Day tsunami. Two strong earthquakes that occurred in the distant past in this region (between 6S and 1S), in 1797 (M8.2) and 1833 (M9.0) respectively, are indicative of the high potential for very large destructive earthquakes to occur in this region with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the 2007 Sumatra earthquakes occurred, which fell exactly into the predicted zone (see the ACES2006 web site and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far show that there seem to be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented at the meeting.
NASA Astrophysics Data System (ADS)
Setyonegoro, W.
2016-05-01
Earthquake disasters have caused considerable casualties and material losses. This research aims to predict the return period of earthquakes through identification of earthquake mechanisms, with Sumatra as the case-study area. Earthquakes are predicted by training an ANFIS (adaptive neuro-fuzzy inference system) on historical earthquake data, with the historical data set compiled into intervals of daily average earthquake occurrence per year. The output is a model of the return period of daily average earthquake occurrence per year. Once the return-period model has been learned by ANFIS, polarity recognition is performed on the focal sphere through image recognition using the principal component analysis (PCA) method. As a result, the model's prediction of the return period of earthquake events for the monthly average showed a correlation coefficient of 0.014562.
Combined GPS and InSAR models of postseismic deformation from the Northridge Earthquake
NASA Technical Reports Server (NTRS)
Donnellan, A.; Parker, J. W.; Peltzer, G.
2002-01-01
Models of combined Global Positioning System and Interferometric Synthetic Aperture Radar data collected in the region of the Northridge earthquake indicate that significant afterslip on the main fault occurred following the earthquake.
Understanding earthquake source processes with spatial random field models
NASA Astrophysics Data System (ADS)
Song, S.
2011-12-01
Earthquake rupture is a complex mechanical process that can be formulated as a dynamically running shear crack on a frictional interface embedded in an elastic continuum. This type of dynamic description of earthquake rupture is often preferred among researchers because they believe the kinematic description is likely to miss physical constraints introduced by dynamic approaches and to lead to arbitrary and nonphysical kinematic fault motions. However, dynamic rupture modeling, although it produces physically consistent models, often uses arbitrary input parameters, e.g., stress and fracture energy, partially because they are more difficult to constrain with data than kinematic ones. I propose to describe earthquake rupture as a stochastic model with a set of random variables (e.g., a random field) that represent the spatial distribution of kinematic source parameters such as slip, rupture velocity, slip duration and velocity. This is a kinematic description of earthquake rupture in the sense that the model is formulated with kinematic parameters, but since the model can be constrained by both rupture dynamics and data, it may incorporate both physical and observational constraints. The stochastic model is formulated by quantifying the 1-point and 2-point statistics of the kinematic parameters. 1-point statistics define a marginal probability density function for a certain source parameter at a given point on a fault. For example, a probability distribution for earthquake slip at a given point can control the possible range of values taken by earthquake slip and their likelihood. In the same way, we can control the existence of supershear rupture with the 1-point variability of the rupture velocity. Two-point statistics, i.e. auto- and cross-coherence between source parameters, control the heterogeneity of each source parameter and their coupling, respectively. Several interesting features of earthquake rupture have been found by investigating cross
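A field with prescribed 1-point and 2-point statistics can be generated spectrally: random phases are imposed on a target power spectrum (controlling the 2-point coherence), and the result is transformed pointwise to set the 1-point marginal. The smooth power-law spectrum below is a stand-in for a von Karman spectrum, and the exponentiation to a positive, lognormal-like slip marginal is an assumption for illustration:

```python
import numpy as np

def correlated_slip(n=64, corr_len=8.0, seed=0):
    """Generate a 2-D random slip field: the spectral envelope fixes the
    2-point structure, the final pointwise transform fixes the 1-point
    marginal (positive, lognormal-like). Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    k2 = kx ** 2 + ky ** 2
    psd = 1.0 / (1.0 + (corr_len ** 2) * k2 * (2 * np.pi) ** 2) ** 2
    phase = np.exp(2j * np.pi * rng.random((n, n)))       # random phases
    field = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    field = (field - field.mean()) / field.std()          # standardize 1-point stats
    return np.exp(0.8 * field)                            # positive slip values

slip = correlated_slip()
```

Increasing `corr_len` produces smoother, more coherent slip patches; changing the exponent in the pointwise transform changes the 1-point variability without touching the spatial coherence, which is exactly the separation of controls the abstract describes.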
NASA Astrophysics Data System (ADS)
Zschau, J.
2009-04-01
Earthquake risk, like natural risks in general, has become a highly dynamic and globally interdependent phenomenon. Due to the "urban explosion" in the Third World, an increasingly complex cross-linking of critical infrastructure and lifelines in the industrial nations, and the growing globalisation of the world's economies, we are presently facing a dramatic increase in our society's vulnerability to earthquakes in practically all seismic regions of the globe. Such fast, global changes can no longer be captured with conventional earthquake risk models. The sciences in this field are therefore asked to come up with new solutions that no longer aim exclusively at the best possible quantification of present risks, but also track their changes with time and allow these to be projected into the future. This applies not only to the vulnerability component of earthquake risk, but also to its hazard component, which has been recognized to be time-dependent as well. The challenges of earthquake risk dynamics and globalisation have recently been taken up by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD-GSF), which initiated the "Global Earthquake Model (GEM)", a public-private partnership for establishing an independent standard to calculate, monitor and communicate earthquake risk globally, raise awareness and promote mitigation.
Physical model for earthquakes, 1. Fluctuations and interactions
Rundle, J.B.
1988-06-10
This is the first of a series of papers whose purpose is to develop the apparatus needed to understand the problem of earthquake occurrence in a more physical context than has often been the case. To begin, it is necessary to introduce the idea that earthquakes represent a fluctuation about the long-term motion of the plates. This idea is made mathematically explicit by the introduction of a concept called the fluctuation hypothesis. Under this hypothesis, all physical quantities which pertain to the occurrence of earthquakes are required to depend on a physically meaningful quantity called the offset phase, the difference between the present state of slip on the fault and its long-term average. For the mathematical treatment of the fluctuation problem it is most convenient to introduce a spatial averaging, or "coarse-graining" operation, dividing the fault plane into a lattice of N patches. In this way, integrals are replaced by sums, and differential equations are replaced by algebraic equations. As a result of these operations the physics of earthquake occurrence can be stated in terms of a physically meaningful energy functional: an "external potential" W_E. W_E is a functional potential for the stress on the fault plane acting from the external medium and characterizes the energy contained within the medium external to the fault plane which is available to produce earthquakes. A simple example is discussed which involves the dynamics of a one-dimensional fault model. To gain some understanding, a simple friction law and a failure algorithm are assumed. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state.
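A minimal numerical analogue of such a coarse-grained lattice of N patches, with a simple friction threshold and failure algorithm, can be sketched as follows. This is a generic slider-block-style cellular model assembled for illustration, not Rundle's actual formulation; the threshold and stress-transfer fraction are arbitrary:

```python
import random

def run_fault(n=64, steps=2000, threshold=1.0, transfer=0.4, seed=1):
    """Minimal 1-D lattice fault: uniform tectonic loading, a static
    friction threshold, and nearest-neighbor stress redistribution on
    failure. Returns the size (failed patch count) of each event."""
    rng = random.Random(seed)
    stress = [rng.uniform(0.0, threshold) for _ in range(n)]
    sizes = []
    for _ in range(steps):
        # load all patches uniformly until the most stressed one fails
        load = threshold - max(stress)
        stress = [s + load for s in stress]
        failed, queue = set(), [stress.index(max(stress))]
        while queue:
            i = queue.pop()
            if i in failed:
                continue
            failed.add(i)
            drop = stress[i]
            stress[i] = 0.0
            for j in (i - 1, i + 1):          # pass a fraction to neighbors
                if 0 <= j < n:
                    stress[j] += transfer * drop
                    if stress[j] >= threshold:
                        queue.append(j)
        sizes.append(len(failed))
    return sizes

sizes = run_fault()
```

Events range from single-patch failures to system-spanning cascades; varying the `transfer` fraction moves the model between weakly and strongly interacting regimes, loosely echoing the ordered/disordered transition discussed in the abstract.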
Physical model for earthquakes, 2. Application to southern California
Rundle, J.B.
1988-06-10
The purpose of this paper is to apply ideas developed in a previous paper to the construction of a detailed model for earthquake dynamics in southern California. The basis upon which the approach is formulated is that earthquakes are perturbations on, or more specifically fluctuations about, the long-term motions of the plates. This concept is made mathematically precise by means of a ''fluctuation hypothesis,'' which states that all physical quantities associated with earthquakes can be expressed as integral expansions in a fluctuating quantity called the ''offset phase.'' While in general, the frictional stick-slip properties of the complex, interacting faults should properly come out of the underlying physics, a simplification is made here, and a simple, spatially varying friction law is assumed. Together with the complex geometry of the major active faults, an assumed, spatially varying Earth rheology, the average rates of long-term offsets on all the major faults, and the friction coefficients, one can generate synthetic earthquake histories for comparison to the real data.
Ionosphere TEC disturbances before strong earthquakes: observations, physics, modeling (Invited)
NASA Astrophysics Data System (ADS)
Namgaladze, A. A.
2013-12-01
The phenomenon of pre-earthquake ionospheric disturbances is discussed. A number of typical TEC (Total Electron Content) relative disturbances are presented for several recent strong earthquakes that occurred under different ionospheric conditions. Stable, typical TEC deviations from the quiet background state are observed a few days before strong seismic events in the vicinity of the earthquake epicenter and are treated as ionospheric earthquake precursors. They do not move away from the source, in contrast to disturbances related to geomagnetic activity. In the sunlit ionosphere the disturbances weaken, up to their full disappearance, and the effects regenerate at night. The TEC disturbances are often observed in the magnetically conjugate areas as well. At low latitudes they are accompanied by equatorial anomaly modifications. The hypothesis of an electromagnetic channel for the creation of pre-earthquake ionospheric disturbances is discussed. The lithosphere and ionosphere are coupled by vertical external electric currents resulting from ionization of the near-Earth air layer and vertical transport of charged particles through the atmosphere over the fault. External electric current densities exceeding the regular fair-weather currents by several orders of magnitude are required to produce stable, long-lived seismogenic electric fields such as those observed over seismically active zones by onboard measurements of the 'Intercosmos-Bulgaria 1300' satellite. Numerical calculations using the Upper Atmosphere Model demonstrate the ability of external electric currents with densities of 10^-8 to 10^-9 A/m^2 to produce such electric fields. The simulations reproduce the basic features of typical pre-earthquake TEC relative disturbances. It is shown that plasma ExB drift under the action of the seismogenic electric field leads to changes of the F2-region electron number density and TEC. The upward drift velocity component enhances NmF2 and TEC and
The Common Forces: Conservative or Nonconservative?
ERIC Educational Resources Information Center
Keeports, David
2006-01-01
Of the forces commonly encountered when solving problems in Newtonian mechanics, introductory texts usually limit illustrations of the definitions of conservative and nonconservative forces to gravity, spring forces, kinetic friction and fluid resistance. However, at the expense of very little class time, the question of whether each of the common…
Desk-top model buildings for dynamic earthquake response demonstrations
Brady, A. Gerald
1992-01-01
Models of buildings that illustrate dynamic resonance behavior when excited by hand are designed and built. Two types of buildings are considered: one with columns stronger than floors, the other with columns weaker than floors. Combinations and variations of these two types are possible. Floor masses and column stiffnesses are chosen so that the frequency of the second mode is approximately five cycles per second, allowing the first and second modes to be excited manually. The models are intended to be resonated by hand by schoolchildren or others unfamiliar with the dynamic resonant response of tall buildings, to give them an understanding of structural behavior during earthquakes. Among other things, this experience should give the builder and experimenter some confidence, should they find themselves in a high-rise building during an earthquake, in recognizing both these resonances and other violent shaking.
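For the stronger-column type, the classic two-story shear-building idealization (rigid floors of mass m on stories of lateral stiffness k) gives the two mode frequencies in closed form, which is how floor masses and column stiffnesses can be tuned to put the second mode near five cycles per second. A sketch (the specific m and k values are illustrative, not from the paper):

```python
import math

def shear_building_modes(m, k):
    """Natural frequencies (Hz) of a 2-story shear building with equal
    floor masses m and equal story stiffnesses k. The 2x2 eigenvalue
    problem has the closed-form solution
        omega^2 = (k/m) * (3 -/+ sqrt(5)) / 2."""
    w2 = [(k / m) * (3 - math.sqrt(5)) / 2,
          (k / m) * (3 + math.sqrt(5)) / 2]
    return [math.sqrt(w) / (2 * math.pi) for w in w2]

# choose the stiffness-to-mass ratio so the second mode lands at 5 Hz,
# the target the abstract describes
ratio = (2 * math.pi * 5.0) ** 2 * 2 / (3 + math.sqrt(5))
f1, f2 = shear_building_modes(m=1.0, k=ratio)
```

With the second mode at 5 Hz, the first mode falls near 1.9 Hz, so both are slow enough to excite by shaking the base or a floor by hand.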
NASA Astrophysics Data System (ADS)
Ito, Keisuke
1995-09-01
Bak and Sneppen proposed a self-organized-criticality model to explain the punctuated equilibrium of biological evolution. The model, as it is, is a good self-organized-criticality model of earthquakes. Real earthquakes satisfy the required conditions of criticality; that is, power laws in (1) the size distribution of earthquakes, and (2) both the spatial and the temporal correlation functions.
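The Bak-Sneppen update rule is simple enough to state in a few lines: repeatedly find the site with the minimum fitness and replace it and its two neighbors with fresh random values. A minimal 1-D sketch (lattice size and step count are illustrative):

```python
import random

def bak_sneppen(n=128, steps=20000, seed=42):
    """Minimal 1-D Bak-Sneppen model with periodic boundaries: at each
    step, the smallest fitness and its two neighbors get new uniform
    random values. The system self-organizes so that almost all fitness
    values end up above a critical threshold (~0.667 in 1-D)."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = f.index(min(f))                   # weakest site
        for j in ((i - 1) % n, i, (i + 1) % n):
            f[j] = rng.random()               # "mutate" site and neighbors
    return f

fitness = bak_sneppen()
```

After many updates the fitness distribution develops the step-like form characteristic of the self-organized critical state, with activity confined to avalanches below the threshold.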
Parity nonconservation in the hydrogen atom
Chupp, T.E.
1983-01-01
The development of experiments to detect parity nonconserving (PNC) mixing of the 2s_{1/2} and 2p_{1/2} levels of the hydrogen atom in a 570 Gauss magnetic field is described. The technique involves observation of an asymmetry in the rate of microwave-induced transitions at 1608 MHz due to the interference of two amplitudes, one produced by applied microwave and static electric fields and the other produced by an applied microwave field and the 2s_{1/2}-2p_{1/2} mixing induced by a PNC Hamiltonian. These investigations, underway since 1977, have led to an experiment in which the two amplitudes are produced in two independently phased microwave cavities. The apparatus has the great advantage that all applied fields are cylindrically symmetric; thus false PNC effects can be generated only by departures from cylindrical symmetry, which enter as the product of two small misalignment angles. The apparatus also has great diagnostic power, since the sectioned microwave cavities can be used to produce static electric fields over short, well-localized regions of space. This permits alignment of the apparatus and provides a sensitive probe of cylindrical symmetry. A phase regulation loop greatly reduces phase noise due to instabilities of the magnetic field, microwave generators, and resonant cavities. A preliminary measurement following alignment of the apparatus sets an upper limit of 575 on the parameter C_2p, which gives the strength of the PNC-induced mixing of the beta_0 (2s_{1/2}) and e_0 (2p_{1/2}) states. The prediction of the standard model, including radiative corrections, is C_2p = 0.08 +/- 0.037.
NASA Astrophysics Data System (ADS)
Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.
2014-12-01
Most time-dependent rupture probability (TDRP) models are designed for a single-mode rupture, i.e., a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historic earthquake data do not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been used successfully to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model formulated from the subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment-balance concept calibrated against historic earthquake data. This information, in conjunction with estimates of coseismic slip from past earthquakes, is used to formulate time-dependent rupture probability models for the cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and the regional and local earthquake history, as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model developed in this study allows rupture probabilities to be estimated for large earthquakes other than just a single characteristic-magnitude earthquake. This provides a general framework for formulating physically
Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps
NASA Astrophysics Data System (ADS)
Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.
2014-12-01
Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal of such a system is to reduce or minimize the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, the University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and, most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator doors or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and the funding required to implement their automated controls. Models and mobile apps are beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.
Theory and application of experimental model analysis in earthquake engineering
NASA Astrophysics Data System (ADS)
Moncarz, P. D.
The feasibility and limitations of small-scale model studies in earthquake engineering research and practice are considered, with emphasis on dynamic modeling theory, a study of the mechanical properties of model materials, the development of suitable model construction techniques, and an evaluation of the accuracy of prototype response prediction through model case studies on components and simple steel and reinforced concrete structures. It is demonstrated that model analysis can be used in many cases to obtain quantitative information on the seismic behavior of complex structures which cannot be analyzed confidently by conventional techniques. Methodologies for model testing and response evaluation are developed in the project, and applications of model analysis in seismic response studies on various types of civil engineering structures (buildings, bridges, dams, etc.) are evaluated.
Nonconservative dynamics of optically trapped high-aspect-ratio nanowires
NASA Astrophysics Data System (ADS)
Toe, Wen Jun; Ortega-Piwonka, Ignacio; Angstmann, Christopher N.; Gao, Qiang; Tan, Hark Hoe; Jagadish, Chennupati; Henry, Bruce I.; Reece, Peter J.
2016-02-01
We investigate the dynamics of high-aspect-ratio nanowires trapped axially in a single gradient force optical tweezers. A power spectrum analysis of the dynamics reveals a broad spectral resonance of the order of kHz with peak properties that are strongly dependent on the input trapping power. A dynamical model incorporating linear restoring optical forces, a nonconservative asymmetric coupling between translational and rotational degrees of freedom, viscous drag, and white noise provides an excellent fit to experimental observations. A persistent low-frequency cyclical motion around the equilibrium trapping position, with a frequency distinct from the spectral resonance, is observed from the time series data.
Renormalized dissipation in the nonconservatively forced Burgers equation
Krommes, J.A.
2000-01-19
A previous calculation of the renormalized dissipation in the nonconservatively forced one-dimensional Burgers equation, which encountered a catastrophic long-wavelength divergence scaling approximately as k_min^(-3), is reconsidered. In the absence of velocity shear, analysis of the eddy-damped quasi-normal Markovian closure predicts only a benign logarithmic dependence on k_min. The original divergence is traced to an inconsistent resonance-broadening type of diffusive approximation, which fails in the present problem. Ballistic scaling of renormalized pulses is retained, but such scaling does not, by itself, imply a paradigm of self-organized criticality. An improved scaling formula for a model with velocity shear is also given.
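For readers who want a concrete feel for the forced Burgers equation itself, a minimal explicit integration (upwind advection, periodic boundaries) is sketched below. This numerical toy has nothing to do with the closure calculation in the abstract; the forcing, viscosity, and grid are illustrative choices:

```python
import numpy as np

def burgers_step(u, dt, dx, nu, force):
    """One explicit step of the forced 1-D Burgers equation
    u_t + u u_x = nu * u_xx + f, periodic boundaries, upwind advection."""
    ux_back = (u - np.roll(u, 1)) / dx
    ux_fwd = (np.roll(u, -1) - u) / dx
    adv = np.where(u > 0, u * ux_back, u * ux_fwd)      # upwind u*u_x
    diff = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    return u + dt * (-adv + diff + force)

n, L, nu = 128, 2 * np.pi, 0.05
x = np.linspace(0.0, L, n, endpoint=False)
dx, dt = L / n, 1e-3
u = np.sin(x)                      # initial condition
for _ in range(2000):
    u = burgers_step(u, dt, dx, nu, force=0.1 * np.sin(x))
```

With a steady sinusoidal force the solution steepens into the familiar sawtooth-like fronts while viscosity keeps the gradients finite; a stochastic force in place of `0.1*np.sin(x)` gives the nonconservatively forced setting the paper analyzes.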
Modeling warning times for the Israel's earthquake early warning system
NASA Astrophysics Data System (ADS)
Pinsky, Vladimir
2015-01-01
In June 2012, the Israeli government approved a proposal for the creation of an earthquake early warning system (EEWS) that would provide timely alarms for schools and colleges in Israel. A network configuration was chosen, consisting of a staggered line of ~100 stations along the main regional faults, the Dead Sea fault and the Carmel fault, and an additional ~40 stations spread more or less evenly over the country. A hybrid approach to the EEWS alarm was suggested, in which a P-wave-based system is combined with the S-threshold method. The former utilizes first arrivals at the several stations closest to the event for prompt location and determination of the earthquake's magnitude from the first 3 s of the waveform data. The latter issues alarms when the acceleration of the surface movement exceeds a threshold at two or more neighboring stations. The threshold is chosen as the peak acceleration level corresponding to a magnitude 5 earthquake at short range (5-10 km). The warning times, or lead times, i.e., the times between arrival of the alarm signal and arrival of the damaging S-waves, are considered for the P, S, and hybrid EEWS methods. For each approach, the P- and S-wave travel times and the alarm times were calculated using a standard 1D velocity model and some assumptions regarding the EEWS data latencies. A definition of alarm effectiveness was then introduced as a measure of the trade-off between warning time and shaking intensity. A number of strong earthquake scenarios, together with anticipated shaking intensities at important targets, namely cities with high populations, are considered. The scenarios demonstrate in probabilistic terms how alarm effectiveness varies with target distance from the epicenter and event magnitude.
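The lead-time bookkeeping described above reduces to simple travel-time arithmetic: the S wave must reach the target after the alert has been issued. A sketch under assumed crustal velocities, processing window, and latency (illustrative values, not the study's calibrated model):

```python
import math

VP, VS = 6.2, 3.5          # assumed crustal P and S velocities, km/s

def lead_time(epi_to_station_km, epi_to_target_km, depth_km=8.0,
              proc_window=3.0, latency=1.0):
    """Warning (lead) time for a P-wave-based alert: S-wave arrival at
    the target minus (P arrival at the nearest station + 3 s waveform
    window + telemetry/processing latency). Negative means no warning.
    All parameter values here are illustrative assumptions."""
    d_sta = math.hypot(epi_to_station_km, depth_km)   # hypocentral distances
    d_tgt = math.hypot(epi_to_target_km, depth_km)
    alert_time = d_sta / VP + proc_window + latency
    s_arrival = d_tgt / VS
    return s_arrival - alert_time

lt = lead_time(epi_to_station_km=10.0, epi_to_target_km=100.0)
```

The example shows the basic trade-off: a target 100 km from the epicenter gets tens of seconds of warning, while a target sitting near the epicenter (inside the "blind zone") gets none.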
Modeling of regional earthquakes, aseismic deformation and fault patterns
NASA Astrophysics Data System (ADS)
Lyakhovsky, V.; Ben-Zion, Y.
2005-12-01
We study the coupled evolution of earthquakes and faults in a 3-D lithospheric model consisting of a weak sedimentary layer over a crystalline crust and upper mantle. The total strain tensor in each layer is the sum of (1) elastic strain, (2) damage-related inelastic strain, and (3) ductile strain. We use a visco-elastic damage rheology model (Lyakhovsky et al., 1997; Hamiel et al., 2004) to calculate elastic strain coupled with evolving material damage and damage-related inelastic strain accumulation. A thermodynamically based equation for damage evolution accounts for degradation and healing as a function of the elastic strain tensor and material properties (rate coefficients and the ratio of strain invariants separating states of degradation and healing). Analyses of stress-strain, acoustic emission and frictional data provide constraints on the damage model parameters. The ductile strain in the sedimentary layer is governed by Newtonian viscosity, while power-law rheology is used for the ductile strain in the lower crust and upper mantle. Each mechanism of strain and damage evolution is associated with its own timescale. In our previous study of earthquakes and faults in a 2-D model with stress averaged over the seismogenic zone (thin-sheet approximation) we demonstrated effects associated with the ratio between the timescales for damage healing and for tectonic loading. The results indicated that a low ratio leads to the development of geometrically regular fault systems and characteristic frequency-size earthquake statistics, while a high ratio leads to a network of disordered fault systems and Gutenberg-Richter statistics. Stress relaxation through ductile creep and damage-related strain mechanisms is associated with two additional timescales. In contrast to the previous 2-D model, the thickness of the seismogenic zone is not prescribed by the model set-up, but is a function of the ratio between the timescale of damage accumulation
Comparison of Short-term and Long-term Earthquake Forecast Models for Southern California
NASA Astrophysics Data System (ADS)
Helmstetter, A.; Kagan, Y. Y.; Jackson, D. D.
2004-12-01
Many earthquakes are triggered in part by preceding events. Aftershocks are the most obvious examples, but many large earthquakes are preceded by smaller ones. The large fluctuations of seismicity rate due to earthquake interactions thus provide a way to improve earthquake forecasting significantly. We have developed a model to estimate daily earthquake probabilities in Southern California, using the Epidemic Type Earthquake Sequence model [Kagan and Knopoff, 1987; Ogata, 1988]. The forecasted seismicity rate is the sum of a constant external loading and of the aftershocks of all past earthquakes. The background rate is estimated by smoothing past seismicity. Each earthquake triggers aftershocks with a rate that increases exponentially with its magnitude and decreases with time following Omori's law. We use an isotropic kernel to model the spatial distribution of aftershocks for small (M≤5.5) mainshocks, and a smoothing of the locations of early aftershocks for larger mainshocks. The model also assumes that all earthquake magnitudes follow the Gutenberg-Richter law with a uniform b-value. We use a maximum likelihood method to estimate the model parameters and test the short-term and long-term forecasts. A retrospective test using a daily update of the forecasts between 1985/1/1 and 2004/3/10 shows that the short-term model decreases the uncertainty of an earthquake occurrence by a factor of about 10.
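The forecasted rate described above, a constant background plus magnitude-weighted Omori decay from every past event, can be written compactly. A temporal-only sketch with illustrative parameter values (the study's actual model is spatio-temporal and its parameters are fit by maximum likelihood):

```python
import math

def etas_rate(t, events, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.2, m0=3.0):
    """Instantaneous seismicity rate of a temporal ETAS-type model:
        lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) * (t - t_i + c)^(-p)
    summed over past events (t_i, m_i). Each event's contribution grows
    exponentially with magnitude and decays with time (Omori's law).
    Parameter values here are illustrative, not fitted."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** -p
    return rate

events = [(0.0, 5.5), (1.0, 4.0)]       # (origin time in days, magnitude)
r_just_after = etas_rate(0.02, events)  # shortly after the M5.5 mainshock
r_later = etas_rate(10.0, events)       # ten days later
```

The rate is orders of magnitude above the background immediately after a mainshock and relaxes back toward `mu` as the Omori terms decay, which is precisely why short-term forecasts gain so much over time-independent ones.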
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power-law relationship and the three optimized models we proposed, respectively. Two data-fitting methods, i.e., log-transformed-based linear and original-data-based nonlinear least squares, were applied to the four models. Results show that original-data-based nonlinear least squares, combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect, shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The result from the relationship between quake magnitude and entire landslide volume for individual earthquakes is much less than that from this study, which highlights the need to update the power-law relationship. PMID:27404212
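The baseline "volume-area" scaling law V = a * A^b is conventionally fit by linear least squares in log space. A sketch of that log-transformed fit on synthetic data (the coefficients 0.05 and 1.3 are made up for the demonstration and are not the paper's values; the paper's preferred method was a nonlinear fit on the original data):

```python
import numpy as np

def fit_volume_area(areas, volumes):
    """Fit the landslide scaling law V = a * A^b by least squares in
    log space (the 'log-transformed linear' method): a straight-line
    fit of log V against log A gives slope b and intercept log a."""
    b, log_a = np.polyfit(np.log(areas), np.log(volumes), 1)
    return np.exp(log_a), b

# synthetic check: data generated from a known power law is recovered
rng = np.random.default_rng(0)
A = rng.uniform(1e2, 1e5, 500)                       # areas, m^2
V = 0.05 * A ** 1.3 * np.exp(rng.normal(0, 0.1, 500))  # lognormal scatter
a_hat, b_hat = fit_volume_area(A, V)
```

Because the log transform re-weights the residuals, a log-space fit and an original-data nonlinear fit generally give different coefficients on real, heavy-tailed landslide data, which is the comparison the study makes.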
Recurrence time distributions of large earthquakes in conceptual model studies
NASA Astrophysics Data System (ADS)
Zoeller, G.; Hainzl, S.
2007-12-01
The recurrence time distribution of large earthquakes in seismically active regions is a crucial ingredient for seismic hazard assessment. However, due to sparse observational data and a lack of knowledge of the precise mechanisms controlling seismicity, this distribution is unknown. In many practical applications of seismic hazard assessment, the Brownian passage time (BPT) distribution (or a different distribution) is fitted to a small number of observed recurrence times. Here, we study various aspects of recurrence time distributions in conceptual models of individual faults and fault networks. First, the dependence of the recurrence time distribution on fault interaction is investigated by means of a network of Brownian relaxation oscillators. Second, the Brownian relaxation oscillator is modified towards a model for large earthquakes that also takes the statistics of intermediate events into account in a more appropriate way. This model simulates seismicity in a fault zone consisting of a major fault and some surrounding smaller faults with Gutenberg-Richter type seismicity. It can be used for more realistic and robust estimation of the real recurrence time distribution in seismic hazard assessment.
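The BPT distribution mentioned above is the inverse Gaussian, parameterized by the mean recurrence time mu and the aperiodicity alpha. A direct transcription of its density (the example mean and aperiodicity are illustrative):

```python
import math

def bpt_pdf(t, mean, alpha):
    """Brownian passage time (inverse Gaussian) probability density with
    mean recurrence time `mean` and aperiodicity (coefficient of
    variation) `alpha`:
        f(t) = sqrt(mean / (2*pi*alpha^2*t^3))
               * exp(-(t - mean)^2 / (2*mean*alpha^2*t))  for t > 0."""
    if t <= 0:
        return 0.0
    return (math.sqrt(mean / (2 * math.pi * alpha ** 2 * t ** 3))
            * math.exp(-((t - mean) ** 2) / (2 * mean * alpha ** 2 * t)))

# e.g., a fault with a 150-year mean recurrence and aperiodicity 0.5
density_at_mean = bpt_pdf(150.0, 150.0, 0.5)
```

Unlike an exponential (Poisson) model, the BPT density is near zero immediately after an event and rises toward a mode below the mean, which is what makes it "time-dependent" for hazard purposes.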
NASA Astrophysics Data System (ADS)
Urrutia, J. D.; Bautista, L. A.; Baccay, E. B.
2014-04-01
The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons, affected families and total cost of damage. Regression models were constructed to quantify the direct damage from earthquakes to human beings and property, given the magnitude, intensity, depth of focus, location of epicentre and time duration. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines in the years 1968 to 2012. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology. Data on damages and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. The mathematical models made are as follows: This study will be of great value in emergency planning, initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
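"Regression analysis using matrices," as the study describes, amounts to solving the normal equations for the least-squares coefficients. A sketch with hypothetical predictors and casualty figures (the numbers below are placeholders for illustration, not the paper's data or fitted models):

```python
import numpy as np

def fit_regression(X, y):
    """Ordinary least squares via the normal equations,
        beta = (X^T X)^(-1) X^T y,
    the matrix formulation of multiple regression. An intercept column
    of ones is prepended to the predictor matrix."""
    Xa = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.solve(Xa.T @ Xa, Xa.T @ y)
    return beta

# hypothetical predictors: magnitude, intensity, depth of focus (km)
X = np.array([[6.5, 7.0, 10.0],
              [7.2, 8.0, 33.0],
              [7.8, 9.0, 25.0],
              [6.9, 7.0, 15.0]])
y = np.array([120.0, 980.0, 2430.0, 310.0])   # e.g., reported deaths
beta = fit_regression(X, y)
```

With the coefficient vector in hand, a casualty estimate for a new event is just the dot product of beta with (1, magnitude, intensity, depth); in practice one would also check the significance of each coefficient at the study's α = 0.01 level.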
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.
2015-12-01
Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of the earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined to a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to along-dip rheological segmentation, including a fault segment of intermediate rheology between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of concentrated stress loading rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed with deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such close distance to the rupture, as attested by strong-motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity.
Newmark displacement model for landslides induced by the 2013 Ms 7.0 Lushan earthquake, China
NASA Astrophysics Data System (ADS)
Yuan, Renmao; Deng, Qinghai; Cunningham, Dickson; Han, Zhujun; Zhang, Dongli; Zhang, Bingliang
2016-01-01
Predicting approximate earthquake-induced landslide displacements is helpful for assessing earthquake hazards and designing slopes to withstand future earthquake shaking. In this work, the basic methodology outlined by Jibson (1993) is applied to derive the Newmark displacement of landslides based on strong ground-motion recordings of the 2013 Lushan Ms 7.0 earthquake. By analyzing the relationships between Arias intensity, Newmark displacement, and critical acceleration for the Lushan earthquake, the Jibson93 formula and its modified models are shown to be applicable to the Lushan earthquake dataset. Different empirical equations with new fitting coefficients for estimating Newmark displacement are then developed for comparative analysis. The results indicate that a modified model achieves a better goodness of fit and a smaller estimation error than the Jibson93 formula, suggesting that the modified model may be more suitable for the Lushan earthquake dataset. The results also suggest that a global equation is not ideally suited to directly estimating the Newmark displacements of landslides induced by one specific earthquake; rather, it is empirically better to perform a new multivariate regression analysis to derive new coefficients for the global equation using the dataset of the specific earthquake. The results presented in this paper can be applied to future co-seismic landslide hazard assessment to inform reconstruction efforts in the area affected by the 2013 Lushan Ms 7.0 earthquake, and to future disaster prevention and mitigation.
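A sketch of the kind of empirical equation being refitted. The coefficients below are the widely cited global Jibson93 values (quoted from memory, so treat them as illustrative), not the new Lushan-specific coefficients derived in the paper:

```python
import math

def newmark_displacement_cm(arias_intensity, critical_acc):
    """Jibson93-style empirical estimate of Newmark displacement (cm)
    from Arias intensity Ia (m/s) and critical acceleration ac (g):
        log10(Dn) = 1.460*log10(Ia) - 6.642*ac + 1.546
    Coefficients are the commonly cited global values; the paper refits
    the same functional form to the Lushan strong-motion dataset."""
    log_dn = 1.460 * math.log10(arias_intensity) - 6.642 * critical_acc + 1.546
    return 10.0 ** log_dn

# Example: Ia = 2.0 m/s, ac = 0.2 g.
dn = newmark_displacement_cm(2.0, 0.2)
```

The refitting step in the paper amounts to re-estimating the three coefficients by regression over (Ia, ac, Dn) triples computed from the Lushan recordings.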
Sahoo, Bijaya K.; Chaudhuri, Rajat; Das, B. P.; Mukherjee, Debashis
2006-04-28
We report the result of our ab initio calculation of the 6s ²S₁/₂ → 5d ²D₃/₂ parity nonconserving electric dipole transition amplitude in ¹³⁷Ba⁺ based on relativistic coupled-cluster theory. Considering single, double, and partial triple excitations, we have achieved an accuracy of less than 1%. If the accuracy of our calculation can be matched by the proposed parity nonconservation experiment in Ba⁺ for the above transition, then the combination of the two results would provide an independent nonaccelerator test of the standard model of particle physics.
VLF subionospheric disturbances associated with earthquakes: Observations and numerical modeling
NASA Astrophysics Data System (ADS)
Hobara, Y.; Iwamoto, M.; Ohta, K.; Hayakawa, M.
2011-12-01
Recently many experimental results have been reported concerning ionospheric perturbations associated with major earthquakes. VLF/LF transmitter signals received by network observations are used to detect seismo-ionospheric signatures such as amplitude and phase anomalies. These signatures are due to ionospheric perturbations located around the transmitter and receivers. However, the physical properties of the perturbation, such as electron density, spatial scale, and location, have not been well understood. In this paper we performed numerical modeling of the subionospheric VLF/LF signals under various conditions of seismo-ionospheric perturbation by using a two-dimensional finite-difference time-domain (FDTD) method to determine the perturbation properties. The amplitude and phase for various cases of an ionospheric perturbation are calculated relative to the normal condition (without perturbation) as functions of distance from the transmitter and distance between the transmitter and the perturbation. These numerical results are compared with our observations. As a result, we found that the received transmitter amplitude depends greatly on the distance between the transmitter and the ionospheric perturbation, and on the spatial scale and height of the perturbation. Moreover, results of the modeled ionospheric perturbation for the 2011 off the Pacific coast of Tohoku earthquake are compared with those from our VLF network experiment.
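The paper's model is a two-dimensional Earth-ionosphere waveguide with a perturbed conductivity profile; as a minimal sketch of the FDTD update structure involved, here is a one-dimensional free-space Yee scheme with a soft source (grid size, step count, and source parameters are arbitrary choices for the example, and no ionosphere model is included):

```python
import numpy as np

# Minimal 1D free-space FDTD (Yee scheme). Units are normalized so that
# the Courant number S = c*dt/dx = 0.5 (stable in 1D for S <= 1).
nx, nsteps, S = 200, 400, 0.5
ez = np.zeros(nx)      # electric field at integer grid points
hy = np.zeros(nx - 1)  # magnetic field, staggered half a cell

for n in range(nsteps):
    hy += S * (ez[1:] - ez[:-1])              # Faraday's law update
    ez[1:-1] += S * (hy[1:] - hy[:-1])        # Ampere's law update
    ez[50] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source
```

The 2D waveguide version adds a ground boundary, an exponential ionospheric conductivity profile, and a localized perturbation of that profile above the epicentral region.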
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Huang, Hsin-Hua; Shyu, J. Bruce H.; Yeh, Te-Yang; Lin, Tzu-Chi
2014-12-01
We build a numerical earthquake model, including numerical source and wave propagation models, to understand the rupture process and the ground motion time history of the 2013 ML 6.4 Ruisui earthquake in Taiwan. This moderately large event was located in the Longitudinal Valley, a suture zone of the Philippine Sea Plate and the Eurasia Plate. A joint source inversion analysis using teleseismic body wave, GPS coseismic displacement and near-field ground motion data was performed first. The inversion results derived from a west-dipping fault plane indicate that the slip occurred at depths between 10 and 20 km. The rupture propagated from south to north and two asperities were resolved. The largest one was located approximately 15 km north of the epicenter with a maximum slip of about 1 m. A 3D seismic wave propagation simulation based on the spectral-element method was then carried out using the inverted source model. A strong rupture directivity effect in the northern area of the Longitudinal Valley was found, which was due to the northward rupture process. Forward synthetic waveforms could explain most of the near-field ground motion data for frequencies between 0.05 and 0.2 Hz. This numerical earthquake model not only helps us confirm the detailed rupture processes on the Central Range Fault but also contributes to regional seismic hazard mitigation for future large earthquakes.
NASA Astrophysics Data System (ADS)
Hovius, Niels; Marc, Odin; Meunier, Patrick
2015-04-01
Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that above a critical magnitude earthquakes would induce more erosion than uplift. Other parameters such as fault geometry or earthquake depth have not yet been considered. A new, seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. In order to assess the earthquake mass balance for various scenarios, we have compared the expected eroded volume with co-seismic surface uplift computed with Okada's deformation theory. We have found earthquake depth and landscape steepness to be dominant parameters compared to the fault geometry (dip and rake). In contrast with previous studies, we have found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. We have explored the long-term evolution of topography under seismic forcing, with a Gutenberg-Richter distribution or a characteristic earthquake model, on fault systems with different geometries and tectonic styles, such as transpressive or flat-and-ramp geometry, with a thinned or thickened seismogenic layer.
The Global Earthquake Model and Disaster Risk Reduction
NASA Astrophysics Data System (ADS)
Smolka, A. J.
2015-12-01
Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito, Ecuador. In agreement with GEM's collaborative approach, all
A Hidden Markov Approach to Modeling Interevent Earthquake Times
NASA Astrophysics Data System (ADS)
Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.
2003-12-01
A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k by k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k=2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event had the first exponential distribution and three fourths of the time it had the second. Three and four state models were also fit to the data; the data were inconsistent with a three state model but were well fit by a four state model.
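A simplified sketch of the estimation idea: fitting a two-component exponential mixture to synthetic interevent times by EM. The full Baum-Welch procedure additionally estimates the hidden-state transition matrix; the ~5-day and ~95-day means reported above are used here only to generate the synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic interevent times (days) from a two-component exponential
# mixture, mimicking the ~5-day / ~95-day means found for New England.
times = np.concatenate([rng.exponential(5.0, 60),
                        rng.exponential(95.0, 50)])

# EM for a mixture of two exponentials: a stand-in for Baum-Welch,
# which also re-estimates the Markov transition probabilities.
w, m1, m2 = 0.5, 10.0, 50.0          # initial weight and means
for _ in range(200):
    p1 = w * np.exp(-times / m1) / m1
    p2 = (1 - w) * np.exp(-times / m2) / m2
    r = p1 / (p1 + p2)                # E-step: responsibilities
    w = r.mean()                      # M-step: weight and means
    m1 = (r * times).sum() / r.sum()
    m2 = ((1 - r) * times).sum() / (1 - r).sum()
```

With the hidden Markov structure added, the mixture weight is replaced by state occupancy probabilities that evolve according to the transition matrix.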
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.
2014-01-01
Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
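The Gutenberg-Richter law mentioned above is commonly quantified through the b-value; here is a sketch using the standard Aki maximum-likelihood estimator on a synthetic catalog (not the Taiwan catalog used in the paper):

```python
import math
import random

def aki_b_value(magnitudes, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965) for events at or above the
    completeness magnitude mc; dm is the magnitude-binning correction."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with b = 1:
# P(M >= m) = 10**(-(m - mc)), i.e. exceedances are exponential in m.
random.seed(1)
mc = 4.0
mags = [mc + random.expovariate(math.log(10.0)) for _ in range(20000)]
b = aki_b_value(mags, mc, dm=0.0)
```

On a catalog this size the estimate recovers the generating b = 1 to within about one percent.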
Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)
NASA Astrophysics Data System (ADS)
Can, Ceren; Ergun, Gul; Gokceoglu, Candan
2014-09-01
Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, and consequently earthquake parameters such as the peak ground acceleration occurring in the focused area can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss along with an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes that occurred within a radius of 100 km centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled by using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
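A minimal sketch of a two-state Poisson hidden Markov model of annual event counts, with the forward algorithm used to filter the hidden state and forecast next year's expected count. All rates, transition probabilities, and observed counts below are invented for the example, not the fitted Bilecik values:

```python
import numpy as np
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

# Two-state Poisson-HMM: a "quiet" state with a low annual rate of
# M>=4 events and an "active" state with a high one (illustrative).
rates = [1.0, 6.0]                       # per-state annual event rates
P = np.array([[0.9, 0.1], [0.3, 0.7]])   # hidden-state transition matrix
counts = [0, 2, 1, 7, 5, 1, 0]           # observed annual counts

# Forward algorithm: filtered state probabilities given counts so far.
alpha = np.array([0.5, 0.5])
for c in counts:
    alpha = (alpha @ P) * [poisson_pmf(c, r) for r in rates]
    alpha = alpha / alpha.sum()          # normalize to avoid underflow

# One-year-ahead forecast of the expected annual count.
forecast = float((alpha @ P) @ rates)
```

A full analysis would estimate the rates and transition matrix by maximum likelihood (e.g. Baum-Welch) before forecasting.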
Parity nonconservation in the hydrogen atom
Chupp, T.E.
1983-01-01
The development of experiments to detect parity nonconserving (PNC) mixing of the 2s₁/₂ and 2p₁/₂ levels of the hydrogen atom in a 570 Gauss magnetic field is described. The technique involves observation of an asymmetry in the rate of microwave-induced transitions at 1608 MHz due to the interference of two amplitudes, one produced by applied microwave and static electric fields and the other produced by an applied microwave field and the 2s₁/₂-2p₁/₂ mixing induced by a PNC Hamiltonian.
Phase response curves for models of earthquake fault dynamics
NASA Astrophysics Data System (ADS)
Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen
2016-06-01
We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
Regional Earthquake Likelihood Models: A realm on shaky grounds?
NASA Astrophysics Data System (ADS)
Kossobokov, V.
2005-12-01
Seismology is juvenile and its appropriate statistical tools to date may have a "medieval flavor" for those who hurry to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, cannot objectively quantify a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models on empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for 'tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2010-12-01
We show results of an ongoing project that aims at better understanding the earthquake interactions within fault networks in order to improve seismic hazard estimations, with application to the test region of the Lower Rhine Embayment (Germany). Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, physics-based fault models would also allow for physical interpretations of the described seismicity. First we present a comparison of two different quasi-dynamic earthquake simulators for a setup of two parallel strike-slip faults. The first fault model is based on a simple quasi-static approach with a Coulomb failure criterion (Ben-Zion and Rice, 1993), extended to include a finite communication speed of stress transfer as introduced by Zöller (2004). The finite communication speed introduces a time scale and therefore makes the model quasi-dynamic. The second fault model is based on rate- and state-dependent friction as introduced by Dieterich (1995). We compare the spatio-temporal behavior of both models. In a second step we show results for a setup of a graben structure (two parallel normal faults). Finally, we show characteristics of a fault system located in the Lower Rhine Embayment using the rate-and-state model. We also test the ability of statistical recurrence time models (Brownian relaxation oscillators and stress release models) to capture the first-order characteristics of the earthquake simulations.
A nonconservative scheme for isentropic gas dynamics
Chen, Gui-Qiang |; Liu, Jian-Guo
1994-05-01
In this paper, we construct a second-order nonconservative scheme for the system of isentropic gas dynamics to capture the physical invariant regions for preventing negative density, to treat the vacuum singularity, and to control the local entropy from dramatically increasing near shock waves. The main difference in the construction of the scheme discussed here is that we use piecewise linear functions to approximate the Riemann invariants w and z instead of the physical variables ρ and m. Our scheme is a natural extension of the schemes for scalar conservation laws and can be implemented numerically with ease because the system is diagonalized in this coordinate system. Another advantage of using Riemann invariants is that the Hessian matrix of any weak entropy has no singularity in the Riemann invariant plane w-z, whereas the Hessian matrices of the weak entropies have singularities at the vacuum points in the physical plane ρ-m. We prove that this scheme converges to an entropy solution for the Cauchy problem with L∞ initial data. By convergence here we mean that there is a subsequence converging to a generalized solution satisfying the entropy condition. As long as the entropy solution is unique, the whole sequence converges to a physical solution. This shows that this kind of scheme is quite reliable from a theoretical point of view. In addition to being interested in the scheme itself, we wish to provide an approach to rigorously analyze nonconservative finite difference schemes.
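A sketch of the change of variables the scheme relies on, for a polytropic pressure law p = κρ^γ: mapping the physical variables (ρ, m) to the Riemann invariants (w, z) and back. The γ and κ values are arbitrary choices for the example:

```python
import numpy as np

GAMMA, KAPPA = 1.4, 1.0  # polytropic gas: p = KAPPA * rho**GAMMA

def to_riemann(rho, m):
    """Map physical variables (rho, m = rho*u) to Riemann invariants
    w = u + 2c/(gamma-1), z = u - 2c/(gamma-1)."""
    u = m / rho
    c = np.sqrt(GAMMA * KAPPA * rho**(GAMMA - 1.0))  # sound speed
    return u + 2.0 * c / (GAMMA - 1.0), u - 2.0 * c / (GAMMA - 1.0)

def from_riemann(w, z):
    """Inverse map back to (rho, m); note the vacuum w = z maps
    smoothly to rho = 0, with no singularity."""
    u = 0.5 * (w + z)
    c = 0.25 * (GAMMA - 1.0) * (w - z)
    rho = (c * c / (GAMMA * KAPPA)) ** (1.0 / (GAMMA - 1.0))
    return rho, rho * u

rho0, m0 = 1.2, 0.6
w0, z0 = to_riemann(rho0, m0)
rho1, m1 = from_riemann(w0, z0)  # round trip recovers (rho0, m0)
```

The scheme reconstructs piecewise linear profiles of w and z, which keeps the update diagonal in these coordinates.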
A graph theoretic approach to global earthquake sequencing: A Markov chain model
NASA Astrophysics Data System (ADS)
Vasudevan, K.; Cavers, M. S.
2012-12-01
We construct a directed graph to represent a Markov chain of global earthquake sequences and analyze the statistics of transition probabilities linked to earthquake zones. For earthquake zonation, we consider the simplified plate boundary template of Kagan, Bird, and Jackson (KBJ template, 2010). We demonstrate the applicability of the directed graph approach to hazard-related forecasting using some of the properties of graphs that represent the finite Markov chain. We extend the present study to consider Bird's 52-plate zonation (2003) describing the global earthquakes at and within plate boundaries to gain further insight into the usefulness of digraphs corresponding to a Markov chain model.
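A sketch of the basic construction: estimating the transition matrix of the directed graph from a sequence of earthquake zone labels and computing its stationary distribution. The zone sequence below is invented; a real application would use zones from the KBJ plate-boundary template:

```python
import numpy as np

# Hypothetical sequence of earthquake zone labels (digraph nodes).
sequence = [0, 1, 1, 2, 0, 1, 2, 2, 0, 0, 1, 2, 1, 0, 2, 1, 1, 0]
n = 3

# Count directed transitions i -> j and row-normalize into a
# stochastic matrix, i.e. the weighted adjacency matrix of the digraph.
counts = np.zeros((n, n))
for i, j in zip(sequence[:-1], sequence[1:]):
    counts[i, j] += 1
T = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of T for eigenvalue 1,
# giving the long-run fraction of events attributed to each zone.
vals, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
```

Graph-theoretic properties of T (strong connectivity, mean first-passage times) are what make the digraph view useful for hazard-related forecasting.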
NASA Astrophysics Data System (ADS)
Daniell, James
2010-05-01
This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure has been developed to provide a framework for optimisation of a Global Earthquake Modelling process through: 1) Overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost and technology); 2) Preliminary research, acquisition and familiarisation with all available ELE software packages; 3) Assessment of these 30+ software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4) Loss analysis for a deterministic earthquake (Mw7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment), a capacity spectrum based method HAZUS (HAZards United States) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach) software which was adapted for use in order to compare the different processes needed for the production of damage, economic and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data. Keywords: OPAL, displacement-based, DBELA, earthquake loss estimation, earthquake loss assessment, open source, HAZUS
Cooling magma model for deep volcanic long-period earthquakes
NASA Astrophysics Data System (ADS)
Aso, Naofumi; Tsai, Victor C.
2014-11-01
Deep long-period events (DLP events) or deep low-frequency earthquakes (deep LFEs) are deep earthquakes that radiate low-frequency seismic waves. While tectonic deep LFEs on plate boundaries are thought to be slip events, there have only been a limited number of studies on the physical mechanism of volcanic DLP events around the Moho (crust-mantle boundary) beneath volcanoes. One reasonable mechanism capable of producing their initial fractures is the effect of thermal stresses. Since ascending magma diapirs tend to stagnate near the Moho, where the vertical gradient of density is high, we suggest that cooling magma may play an important role in volcanic DLP event occurrence. Assuming an initial thermal perturbation of 400°C within a tabular magma of half width 41 m or a cylindrical magma of 74 m radius, thermal strain rates within the intruded magma are higher than tectonic strain rates of ~10⁻¹⁴ s⁻¹ and produce a total strain of 2 × 10⁻⁴. Shear brittle fractures generated by the thermal strains can produce a compensated linear vector dipole mechanism as observed and potentially also explain the harmonic seismic waveforms from an excited resonance. In our model, we predict correlation between the particular shape of the cluster and the orientation of focal mechanisms, which is partly supported by observations of Aso and Ide (2014). To assess the generality of our cooling magma model as a cause for volcanic DLP events, additional work on relocations and focal mechanisms is essential and would be important to understanding the physical processes causing volcanic DLP events.
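An order-of-magnitude check of the strain-rate argument above. The expansivity and diffusivity values are generic assumptions chosen here so that the total strain matches the quoted 2 × 10⁻⁴; they are not parameters taken from the paper:

```python
# Cooling-magma thermal strain rate, order-of-magnitude sketch.
alpha = 5e-7          # effective linear contraction per degC (assumed)
dT = 400.0            # initial thermal perturbation (degC, from abstract)
half_width = 41.0     # tabular magma half width (m, from abstract)
kappa = 1e-6          # thermal diffusivity (m^2/s, typical rock value)

total_strain = alpha * dT            # ~2e-4, matching the abstract
tau = half_width**2 / kappa          # conductive cooling time scale (s)
strain_rate = total_strain / tau     # average thermal strain rate (1/s)
tectonic_rate = 1e-14                # reference tectonic rate (1/s)
```

With these numbers the cooling time is of order decades and the average strain rate is roughly 10⁻¹³ s⁻¹, an order of magnitude above the tectonic reference rate, consistent with the abstract's claim.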
NASA Astrophysics Data System (ADS)
Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq
2015-12-01
We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) with a time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, we found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dropped sharply from background values after the Chi-Chi shock and then gradually recovered. This observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) for assessing the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. Receiver Operating Characteristics (ROC) curves (Swets, 1988) demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes shortly after the Chi-Chi shock.
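The RJ rate that the MRJ model modifies has the standard Omori-Gutenberg-Richter form. A minimal sketch, with generic parameter values chosen for illustration only (not the values fitted to the Chi-Chi sequence):

```python
import math

# Sketch of the Reasenberg-Jones (RJ) aftershock-hazard model:
#   rate(t, M) = 10**(a + b*(Mm - M)) * (t + c)**(-p)
# The parameter values below are illustrative, not the paper's fit.
def rj_rate(t, M, Mm=7.7, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Daily rate of aftershocks with magnitude >= M, t days after the mainshock."""
    return 10.0 ** (a + b * (Mm - M)) * (t + c) ** (-p)

def prob_one_or_more(M, t_start, t_end, n=10000):
    """P(>=1 aftershock with magnitude >= M in [t_start, t_end] days),
    via 1 - exp(-integral of the rate), trapezoidal quadrature."""
    dt = (t_end - t_start) / n
    ts = [t_start + i * dt for i in range(n + 1)]
    rates = [rj_rate(t, M) for t in ts]
    integral = dt * (sum(rates) - 0.5 * (rates[0] + rates[-1]))
    return 1.0 - math.exp(-integral)

print(f"P(M>=6 in days 1-30) ~ {prob_one_or_more(6.0, 1.0, 30.0):.2f}")
```

A modified model in the spirit of the paper would additionally let the b value vary with time since the mainshock.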
Modeling subduction megathrust earthquakes: Insights from a visco-elasto-plastic analog model
NASA Astrophysics Data System (ADS)
Dominguez, Stéphane; Malavieille, Jacques; Mazzotti, Stéphane; Martin, Nicolas; Caniven, Yannick; Cattin, Rodolphe; Soliva, Roger; Peyret, Michel; Lallemand, Serge
2015-04-01
As illustrated recently by the 2004 Sumatra-Andaman and 2011 Tohoku earthquakes, subduction megathrust earthquakes generate heavy economic and human losses. Better constraining how such destructive seismic events nucleate and generate crustal deformation is a major societal issue, but it also poses a difficult scientific challenge. Indeed, several limiting factors must first be overcome, related to the difficulty of analyzing deformation undersea, of accessing the deep sources of earthquakes, and of integrating the characteristic time scales of seismic processes. With this aim, we have developed an experimental approach to complement the numerical modeling techniques classically used to analyze available geological and geophysical observations of subduction earthquakes. The objective was to design a kinematically and mechanically first-order scaled analog model of a subduction zone capable of reproducing megathrust earthquakes as well as realistic seismic-cycle deformation phases. The model rheology is based on multi-layered visco-elasto-plastic materials that take into account the mechanical behavior of the overriding lithospheric plate. The elastic deformation of the subducting oceanic plate is also simulated. The seismogenic zone is characterized by a frictional plate interface whose mechanical properties can be adjusted to modify seismic coupling. Preliminary results show that this subduction model succeeds in reproducing the main deformation phases associated with the seismic cycle (interseismic elastic loading, coseismic rupture and post-seismic relaxation). By studying the model's kinematics and mechanical behavior, we expect to improve our understanding of seismic deformation processes and better constrain the role of physical parameters (fault friction, rheology, ...) as well as boundary conditions (loading rate, ...) on the seismic cycle and megathrust earthquake dynamics. We expect that results of this project will lead to significant improvement on interpretation of
Stein, Ross S.
2008-01-01
The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M = 6.65 (A = 537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
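The fault-area-to-magnitude step can be sketched as follows. The coefficients are the commonly quoted forms of the two relations and should be treated as illustrative; check the original papers before reuse:

```python
import math

# Sketch of the area -> moment-magnitude step. Coefficients are the
# commonly quoted forms of the Ellsworth B and Hanks & Bakun relations
# and are assumptions here, not values taken from this report.
def ellsworth_b(area_km2):
    """Ellsworth B: Mw = log10(A) + 4.2."""
    return math.log10(area_km2) + 4.2

def hanks_bakun(area_km2):
    """Hanks & Bakun bilinear relation with a slope change near A = 537 km^2."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

def moment_magnitude(length_km, width_km):
    """Equal-weight average of the two relations, mirroring the Working Group's weighting."""
    area = length_km * width_km
    return 0.5 * (ellsworth_b(area) + hanks_bakun(area))

# Example: a 100 km long rupture with a 12 km down-dip width.
print(f"Mw ~ {moment_magnitude(100.0, 12.0):.2f}")
```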
Finite-Source Modeling for Parkfield and Anza Earthquakes
NASA Astrophysics Data System (ADS)
Wooddell, K. E.; Taira, T.; Dreger, D. S.
2014-12-01
Repeating earthquakes occur in the vicinity of creeping sections along the Parkfield section of the San Andreas fault (Nadeau et al., 1995) and the Anza section of the San Jacinto fault (Taira, 2013). Utilizing an empirical Green's function (eGF) approach for both the Parkfield and Anza events, we conduct a comparative study of the resulting slip distributions and source parameters to examine differences in the scaling of fault dimension, average slip, and peak slip with magnitude. Following the approach of Dreger et al. (2007), moment rate functions (MRFs) are obtained at each station for both Parkfield and Anza earthquakes using spectral-domain deconvolution, in which the complex spectrum of the eGF is divided out of the complex spectrum of the target event. Spatial distributions of fault slip are derived by inverting the MRFs, and the coseismic stress change is computed following the method of Ripperger and Mai (2004). Initial results are based on the analysis of several Parkfield target events ranging in magnitude from Mw 1.8 to 6.0 (Dreger et al., 2011) and a Mw 4.7 Anza event. Parkfield peak slips are consistent with the Nadeau and Johnson (1998) tectonic loading model, while average slips tend to scale self-similarly. Results for the Anza event show very high peak and average slips, exceeding 50 cm and 10 cm, respectively. Directivity for this event is in the northwest direction, and preliminary sensitivity analyses suggest that the rupture velocity is near the shear wave velocity and the rise time is short (~0.03 s). Multiple eGFs for the Anza event have been evaluated and the results appear robust.
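The spectral-division step can be sketched with a generic water-level stabilization. This is a textbook scheme assumed here for illustration, not necessarily the exact regularization of Dreger et al. (2007):

```python
import numpy as np

# Sketch of spectral-domain eGF deconvolution: divide the target event's
# spectrum by the eGF spectrum, stabilized with a water level. The
# water-level scheme is a generic assumption for illustration.
def egf_deconvolve(target, egf, water_level=0.01):
    """Estimate the moment-rate function via water-level spectral division."""
    n = max(len(target), len(egf))
    T = np.fft.rfft(target, n)
    G = np.fft.rfft(egf, n)
    power = np.abs(G) ** 2
    floor = water_level * power.max()  # water level stabilizes small |G| bins
    mrf_spec = T * np.conj(G) / np.maximum(power, floor)
    return np.fft.irfft(mrf_spec, n)

# Synthetic check: if the target equals the eGF convolved with a boxcar
# source, the recovered MRF should be approximately that boxcar.
rng = np.random.default_rng(0)
egf = rng.standard_normal(256)
source = np.zeros(256)
source[:10] = 0.1
target = np.convolve(egf, source)[:256]
mrf = egf_deconvolve(target, egf)
print(f"recovered source peak near sample {np.argmax(mrf)}")
```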
Earthquake Response Modeling for a Parked and Operating Megawatt-Scale Wind Turbine
Prowell, I.; Elgamal, A.; Romanowitz, H.; Duggan, J. E.; Jonkman, J.
2010-10-01
Demand parameters for turbines, such as tower moment demand, are primarily driven by wind excitation and dynamics associated with operation. For that purpose, computational simulation platforms have been developed, such as FAST, maintained by the National Renewable Energy Laboratory (NREL). For seismically active regions, building codes also require the consideration of earthquake loading. Historically, it has been common to use simple building code approaches to estimate the structural demand from earthquake shaking as an independent loading scenario. Currently, International Electrotechnical Commission (IEC) design requirements include the consideration of earthquake shaking while the turbine is operating. Numerical and analytical tools used to consider earthquake loads for buildings and other static civil structures are not well suited for modeling simultaneous wind and earthquake excitation in conjunction with operational dynamics. Through the addition of seismic loading capabilities to FAST, it is possible to simulate earthquake shaking in the time domain, which allows consideration of non-linear effects such as structural nonlinearities, aerodynamic hysteresis, control system influence, and transients. This paper presents a FAST model of a modern 900-kW wind turbine, which is calibrated based on field vibration measurements. With this calibrated model, both coupled and uncoupled simulations are conducted to examine the structural demand on the turbine tower. Response is compared under conditions of normal operation and potential emergency shutdown due to earthquake-induced vibrations. The results highlight the availability of a numerical tool for conducting such studies, and provide insights into the combined wind-earthquake loading mechanism.
Parity nonconservation in radioactive atoms: An experimental perspective
Vieira, D.
1994-11-01
The measurement of parity nonconservation (PNC) in atoms constitutes an important test of electroweak interactions in nuclei. Great progress has been made over the last 20 years in performing these measurements with ever increasing accuracies. To date the experimental accuracies have reached a level of 1 to 2%. In all cases, except for cesium, the theoretical atomic structure uncertainties now limit the comparison of these measurements to the predictions of the standard model. New measurements involving the ratio of Stark interference transition rates for a series of Cs or Fr radioisotopes are foreseen as a way of eliminating these atomic structure uncertainties. The use of magneto-optical traps to collect and concentrate the much smaller number of radioactive atoms that are produced is considered to be one of the key steps in realizing these measurements. Plans for how these measurements will be done and progress made to date are outlined.
Acceleration modeling of moderate to large earthquakes based on realistic fault models
NASA Astrophysics Data System (ADS)
Arvidsson, R.; Toral, J.
2003-04-01
Strong motion is affected by distance to the earthquake, local crustal structure, focal mechanism, and azimuth to the source. The faulting process is also important: the development of rupture (i.e., directivity), the slip distribution on the fault, the fault extent, and the rupture velocity. We have modelled these parameters for earthquakes that occurred in three tectonic zones close to the Panama Canal, including in the modeling directivity, distributed slip, discrete faulting, fault depth and the expected focal mechanism. The distributed slip is based on fault models that we previously produced for other earthquakes in the region. These previous examples show that maximum intensities in some cases coincide with areas of high slip on the fault. Our acceleration modeling also gives values similar to the few observations that have been made for moderate to small earthquakes in the range M = 5-6.2. The modeling indicates that events located in the Caribbean might cause strong motion in the lower-frequency part of the spectrum, where high-frequency Rayleigh waves dominate.
NASA Astrophysics Data System (ADS)
Esmer, Özcan
2006-11-01
This paper first evaluates the earthquake prediction method (1999) used by the US Geological Survey as the lead example, and also reviews the recent models. Secondly, it points out the ongoing debate on the predictability of earthquake recurrence and lists the main claims of both sides. The traditional methods and the "frequentist" approach used in determining earthquake probabilities cannot end the complaints that earthquakes are unpredictable. It is argued that the prevailing "crisis" in seismic research corresponds to the pre-Maxent age of the current situation. The period of Kuhnian "crisis" should give rise to a new paradigm based on the information-theoretic framework, including the inverse problem, Maxent and Bayesian methods. The paper aims to show that information-theoretic methods can provide the required "Methodica Firma" for earthquake prediction models.
NASA Astrophysics Data System (ADS)
Galvez, P.; Ampuero, J.-P.; Dalguer, L. A.; Somala, S. N.; Nissen-Meyer, T.
2014-08-01
An important goal of computational seismology is to simulate dynamic earthquake rupture and strong ground motion in realistic models that include crustal heterogeneities and complex fault geometries. To accomplish this, we incorporate dynamic rupture modelling capabilities in a spectral element solver on unstructured meshes, the 3-D open source code SPECFEM3D, and employ state-of-the-art software for the generation of unstructured meshes of hexahedral elements. These tools provide high flexibility in representing fault systems with complex geometries, including faults with branches and non-planar faults. The domain size is extended with progressive mesh coarsening to maintain an accurate resolution of the static field. Our implementation of dynamic rupture does not affect the parallel scalability of the code. We verify our implementation by comparing our results to those of two finite element codes on benchmark problems including branched faults. Finally, we present a preliminary dynamic rupture model of the 2011 Mw 9.0 Tohoku earthquake including a non-planar plate interface with heterogeneous frictional properties and initial stresses. Our simulation reproduces qualitatively the depth-dependent frequency content of the source and the large slip close to the trench observed for this earthquake.
NASA Astrophysics Data System (ADS)
Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.
2011-09-01
We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy of the earthquake coordinates. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.
Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L
2007-02-09
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
Mark, Robert K.
1977-01-01
Correlation or linear regression estimates of earthquake magnitude from data on historical magnitude and length of surface rupture should be based upon the correct regression. For example, the regression of magnitude on the logarithm of the length of surface rupture L can be used to estimate magnitude, but the regression of log L on magnitude cannot. Regression estimates are most probable values, and estimates of maximum values require consideration of one-sided confidence limits.
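A small synthetic demonstration of why the regression direction matters; the "catalog" below is invented for illustration:

```python
import numpy as np

# Numerical illustration of the point above: the least-squares line for
# M on log10(L) is NOT the inverse of the line for log10(L) on M, so only
# the former should be used to estimate magnitude from rupture length.
# The synthetic data (slopes, noise level) are made-up values.
rng = np.random.default_rng(1)
logL = rng.uniform(0.5, 2.5, 200)                  # log10 rupture length, km
M = 5.0 + 1.2 * logL + rng.normal(0, 0.3, 200)     # synthetic magnitudes

b1, a1 = np.polyfit(logL, M, 1)   # correct direction: regress M on log L
b2, a2 = np.polyfit(M, logL, 1)   # wrong direction: regress log L on M

# Inverting the wrong-direction fit always gives a steeper line, because
# each regression minimizes residuals only in its own dependent variable.
print(f"M-on-logL slope:          {b1:.3f}")
print(f"inverted logL-on-M slope: {1.0 / b2:.3f}")
```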
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
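The quoted 30-yr probabilities can be translated to equivalent Poisson rates with the standard identity P = 1 − exp(−rate × T). This is just the conversion, not the empirical model itself:

```python
import math

# Standard conversion between an occurrence rate and the probability of
# one or more events in a window, assuming a Poisson process.
def prob_to_rate(p, years=30.0):
    """Equivalent Poisson rate (events/yr) for probability p over `years`."""
    return -math.log(1.0 - p) / years

def rate_to_prob(rate, years=30.0):
    """P(>=1 event in `years`) for a Poisson process with the given rate."""
    return 1.0 - math.exp(-rate * years)

# The 30-yr probabilities quoted in the abstract, as equivalent rates.
for p in (0.42, 0.60, 0.67):
    r = prob_to_rate(p)
    print(f"P(30 yr) = {p:.2f}  ->  ~{r:.4f} events/yr  (recurrence ~{1 / r:.0f} yr)")
```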
Fault slip model of the historical 1797 earthquake on the Mentawai segment of the Sunda Megathrust
NASA Astrophysics Data System (ADS)
Lubis, A.; Hill, E. M.; Philibosian, B.; Meltzner, A. J.; Barbot, S.; Sieh, K. E.
2012-12-01
Paleogeodetic observations from coral reef studies have provided estimates of coseismic deformation associated with two great earthquakes on the Sunda megathrust, in 1797 and 1833. Since the corals die when they are uplifted, they do not record the full coseismic displacement. In previous work [Natawidjaja et al., 2006], co-seismic offsets were estimated by linear extrapolation of inter-seismic coral data, which ignored any post-seismic deformation despite the fact that it could contribute significantly to displacement of the coral. Here, we use the earthquake cycle model (Sato et al., 2006) to estimate a slip distribution for the historical 1797 earthquake on the Mentawai segment. After calculating model parameters related to the earthquake cycle model, with an assumed earth structure, we use the ABIC inversion algorithm (Yabuki and Matsuura, 1992) to invert the coral datasets. We find that the slip distribution is concentrated in two main asperities and is consistent with the present coupling area (Chlieh et al., 2008). A significant slip patch of ~5 m is imaged at the northern end of the slip region of the September 2007 Mw 8.4 earthquake, with a second, smaller asperity beneath Siberut Island. Based on this model, the moment magnitude (Mw) of the 1797 earthquake is estimated to be 8.4. To validate our source model, we will use a 3-D finite element model of the earthquake cycle to reconstruct the displacement time series measured by the corals.
Precise measurement of parity nonconservation in atomic thallium
Hunter, L.R.
1981-05-01
Observation of parity non-conservation in the 6P₁/₂ - 7P₁/₂ transition in ²⁰³,²⁰⁵Tl is reported. The transition is nominally forbidden M1 with amplitude M. Due to the violation of parity in the electron-nucleon interaction, the transition acquires an additional (parity-nonconserving) amplitude E_p. In the presence of an electric field, incident 293 nm circularly polarized light results in a polarization of the 7P₁/₂ state through interference of the Stark amplitude with M and E_p. This polarization is observed by selective excitation of the 7P₁/₂ - 8S₁/₂ transition with circularly polarized 2.18 μm light and observation of the subsequent fluorescence at 323 nm. By utilizing this technique and carefully determining possible systematic contributions through auxiliary measurements, the circular dichroism δ = 2 Im(E_p)/M is observed: δ_exp = (2.8 +1.0/-0.9) × 10⁻³. In addition, measurements of A(6D₃/₂ - 7P₁/₂) = (5.97 ± 0.78) × 10⁵ s⁻¹, A(7P₁/₂ - 7S₁/₂) = (1.71 ± 0.07) × 10⁷ s⁻¹ and A(7P₃/₂ - 7S₁/₂) = (2.37 ± 0.09) s⁻¹ are reported. These values are employed in a semiempirical determination of δ based on the Weinberg-Salam model. The result of this calculation for sin²θ_W = 0.23 is δ_theo = (1.7 ± 0.8) × 10⁻³.
Numerical modelling of iceberg calving force responsible for glacial earthquakes
NASA Astrophysics Data System (ADS)
Sergeant, Amandine; Yastrebov, Vladislav; Castelnau, Olivier; Mangeney, Anne; Stutzmann, Eleonore; Montagner, Jean-Paul
2016-04-01
Glacial earthquakes are a class of seismic events of magnitude up to 5, occurring primarily in Greenland, at the margins of large marine-terminated glaciers with near-grounded termini. They are caused by the calving of cubic-kilometer-scale unstable icebergs which penetrate the full glacier thickness and, driven by buoyancy forces, capsize against the calving front. These phenomena produce seismic energy, including surface waves with dominant energy at periods of 10-150 s, whose seismogenic source is compatible with the contact force exerted on the terminus by the iceberg while it capsizes. A reverse motion and posterior rebound of the terminus have also been measured and associated with the fluctuation of this contact force. Using a finite element model of the iceberg and glacier terminus coupled with a simplified fluid-structure interaction model, we simulate the calving and capsize of icebergs. Contact and frictional forces are measured on the terminus and compared with laboratory experiments. We also study the influence of geometric factors on the force history, amplitude and duration at the laboratory and field scales. We present first insights into the force and the generated seismic waves, exploring different scenarios for iceberg capsize.
Simulational Studies of a Two-Dimensional Burridge-Knopoff Model for Earthquakes
NASA Astrophysics Data System (ADS)
Ross, John Bernard
1993-01-01
A two-dimensional cellular automaton version of the Burridge-Knopoff (BK) model for earthquakes is studied. The model consists of a lattice of blocks connected by springs, subject to static friction and driven at a rate v by an externally applied force. A block ruptures provided that its total stress matches or exceeds static friction. The distance it moves is proportional to the total stress, a fraction α of which it releases to each of its neighbors, while 1 - qα leaves the system, where q is the number of neighbors. The BK model with nearest-neighbor (q = 4) and long-range (q = 24) interactions is simulated for spatially uniform and random static friction on lattices with periodic, open, closed, and fixed boundary conditions. In the nearest-neighbor model, the system appears to have a spinodal critical point at v = v_c in all cases except for closed boundaries and uniform thresholds, where the system appears to be self-organized critical. The dynamics of the model is always periodic or quasiperiodic for non-closed boundaries and uniform thresholds; in this case the stress is "quantized" in multiples of the loader force. A mean-field theory is presented from which v_c and the dominant period of oscillation are derived, both of which agree well with the data. v_c varies inversely with the number of neighbors to which each block is attached and, as a result, goes to zero as the range of the springs goes to infinity. This is consistent with the behavior of a spinodal critical point as the range of interactions goes to infinity. The quasistatic limit of tectonic loading is thus recovered.
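The rupture rule described above can be sketched as a small cellular automaton. The lattice size, α value and loading scheme below are illustrative choices, not the thesis's:

```python
import numpy as np

# Minimal sketch of the BK cellular-automaton rule: blocks accumulate
# stress under uniform loading; a block whose stress reaches the
# static-friction threshold ruptures, passing a fraction alpha of its
# stress to each of its q = 4 nearest neighbors while 1 - q*alpha leaves
# the system. Parameters are illustrative (alpha < 1/4 keeps it dissipative).
def run_bk(L=32, alpha=0.2, threshold=1.0, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, threshold, (L, L))
    sizes = []
    for _ in range(steps):
        # load uniformly until one block is just critical (epsilon guards rounding)
        stress += threshold - stress.max() + 1e-9
        size = 0
        while True:
            ruptured = np.argwhere(stress >= threshold)
            if len(ruptured) == 0:
                break
            for i, j in ruptured:
                s = stress[i, j]
                stress[i, j] = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    stress[(i + di) % L, (j + dj) % L] += alpha * s  # periodic boundaries
                size += 1
        sizes.append(size)
    return sizes

sizes = run_bk()
print(f"mean avalanche size: {sum(sizes) / len(sizes):.1f}, largest: {max(sizes)}")
```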
Use of GPS and InSAR Technology and its Further Development in Earthquake Modeling
NASA Technical Reports Server (NTRS)
Donnellan, A.; Lyzenga, G.; Argus, D.; Peltzer, G.; Parker, J.; Webb, F.; Heflin, M.; Zumberge, J.
1999-01-01
Global Positioning System (GPS) data are useful for understanding both interseismic and postseismic deformation. Models of GPS data suggest that the lower crust, lateral heterogeneity, and fault slip all play a role in the earthquake cycle.
Integrated seismic source model of the 2015 Gorkha, Nepal, earthquake
NASA Astrophysics Data System (ADS)
Yagi, Yuji; Okuwaki, Ryo
2015-08-01
We compared spatiotemporal slip-rate and high-frequency (around 1 Hz) radiation distributions from teleseismic P wave data to infer the seismic rupture process of the 2015 Gorkha, Nepal, earthquake. For these estimates, we applied a novel waveform inversion formulation that mitigates the effect of uncertainty in the Green's functions, and a hybrid backprojection method that mitigates contamination by depth phases. Our model showed that the dynamic rupture front propagated eastward from the hypocenter at 3.0 km/s and triggered a large-slip event centered about 50 km to the east. It also showed that the large-slip event included a rapid rupture acceleration event and an irregular deceleration of rupture propagation before the rupture termination. Heterogeneity of the stress drop or fracture energy in the eastern part of the rupture area, where aftershock activity was high, inhibited rupture growth. High-frequency radiation sources tended to be in the deeper part of the large-slip area, which suggests that heterogeneity of the stress drop or fracture energy there may have contributed to the damage in and around Kathmandu.
So, Emily; Spence, Robin
2013-01-01
Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid post-event casualty estimation for humanitarian response. Both of these events resulted in surprisingly high numbers of deaths, injuries and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
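The semi-empirical chain (building stock → damage rates → fatality rates) reduces to a weighted sum over building classes. All numbers below are invented placeholders, not CEQID-derived values:

```python
# Schematic of the semi-empirical casualty chain described above:
# building inventory -> damage rates by class -> fatality rates by class.
# Every number here is an invented placeholder for illustration.
building_stock = {               # buildings of each (hypothetical) class
    "adobe": 10000,
    "reinforced_concrete": 5000,
}
collapse_rate = {                # P(collapse | class) at the scenario intensity
    "adobe": 0.20,
    "reinforced_concrete": 0.03,
}
fatality_rate = {                # P(death | collapse, class), incl. occupancy
    "adobe": 0.06,
    "reinforced_concrete": 0.10,
}

expected_deaths = sum(
    building_stock[c] * collapse_rate[c] * fatality_rate[c]
    for c in building_stock
)
print(f"expected fatalities: {expected_deaths:.0f}")  # prints 135 for these inputs
```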
Modeling of electromagnetic E-layer waves before earthquakes
NASA Astrophysics Data System (ADS)
Meister, Claudia-Veronika; Hoffmann, Dieter H. H.
2013-04-01
A dielectric model for electromagnetic (EM) waves in the Earth's E-layer is developed. It is assumed that these waves are driven by acoustic-type waves caused by earthquake precursors. The dynamics of the plasma system and the EM waves are described using multi-component magnetohydrodynamic (MHD) theory. The acoustic waves are introduced as a neutral gas wind. Momentum transfer between the charged particles in the MHD system occurs mainly via collisions with the neutral gas. From the MHD system, relations for the velocity fluctuations of the particles are found, which consist of the electric field fluctuations multiplied by coefficients α that depend only on the plasma background parameters. A fast FORTRAN program is developed to calculate these coefficients (solving 9×9 matrix equations). Models of the altitudinal scales of the background plasma parameters and of the fluctuations of the plasma parameters and the EM field are introduced. In addition, for the electric wave field, a method is obtained to calculate the altitudinal scale ? of the amplitude (based on the Poisson equation and the known coefficients α). Finally, a general dispersion relation is found, in which α, ? and the altitudinal profile of ? appear as parameters (found beforehand in the numerical model). Thus, the dispersion relations of EM waves driven by acoustic-type waves during times of seismic activity may be studied numerically. Moreover, an expression for the related temperature fluctuations is derived, which depends on the dispersion of the excited EM waves, α, ? and the background plasma parameters. In this way, heating processes in the atmosphere may be investigated.
NASA Astrophysics Data System (ADS)
Furlong, Kevin P.; Govers, Rob; Herman, Matthew
2016-04-01
Subduction zone megathrusts host the largest and deadliest earthquakes on the planet. Over the past decades (primarily since the 2004 Sumatra event), our ability to observe the build-up of slip deficit along these plate boundary zones has improved substantially with the development of relatively dense observing systems along major subduction zones. One, perhaps unexpected, result from these observations is a range of present-day behavior along the boundaries. Some regions show displacements (almost always observed on the upper plate along the boundary) that are consistent with elastic deformation driven by a fully locked plate interface, while other plate boundary segments (oftentimes along the same plate boundary system) show little or no plate-motion-directed displacement. The latter case is often interpreted as reflecting little to no coupling along the plate boundary interface. What is unclear is whether this spatial variation in apparent plate boundary interface behavior reflects true spatial differences in plate interface properties and mechanics, or rather reflects temporal behavior of the plate boundary during the earthquake cycle. In our integrated observational and modeling analyses, we have come to the conclusion that much of what is seen as diverse behavior along subduction margins represents different times in the earthquake cycle (relative to recurrence rate and material properties) rather than fundamental differences in subduction zone mechanics. Our model-constrained conceptual model accounts for the following generalized observations: 1. Coseismic displacements are enhanced in the "near-trench" region. 2. Post-seismic relaxation varies with time and position landward, i.e., there is a propagation of the transition point from "post" (i.e. trenchward) to "inter" (i.e. landward) seismic displacement behavior. 3. Displacements occur immediately post-EQ (interpreted to be associated with "after slip" on the megathrust?). 4. The post-EQ transient response can
Earthquake sequencing: chimera states with Kuramoto model dynamics on directed graphs
NASA Astrophysics Data System (ADS)
Vasudevan, K.; Cavers, M.; Ware, A.
2015-09-01
Earthquake sequencing studies allow us to investigate empirical relationships among spatio-temporal parameters describing the complexity of earthquake properties. We have recently studied the relevance of Markov chain models to draw information from global earthquake catalogues. In these studies, we considered directed graphs as graph theoretic representations of the Markov chain model and analyzed their properties. Here, we look at earthquake sequencing itself as a directed graph. In general, earthquakes are occurrences resulting from significant stress interactions among faults. As a result, stress-field fluctuations evolve continuously. We propose that they are akin to the dynamics of the collective behavior of weakly coupled non-linear oscillators. Since mapping of global stress-field fluctuations in real time at all scales is an impossible task, we consider an earthquake zone as a proxy for a collection of weakly coupled oscillators, the dynamics of which would be appropriate for the ubiquitous Kuramoto model. In the present work, we apply the Kuramoto model with phase lag to the non-linear dynamics on a directed graph of a sequence of earthquakes. For directed graphs with certain properties, the Kuramoto model yields synchronization, and inclusion of non-local effects evokes the occurrence of chimera states, i.e. the co-existence of synchronous and asynchronous behavior of oscillators. In this paper, we show how we build the directed graphs derived from global seismicity data. Then, we present conditions under which chimera states could occur and, subsequently, point out the role of the Kuramoto model in understanding the evolution of synchronous and asynchronous regions. We surmise that the emergence of chimera states will motivate detailed investigation of the present and other mathematical models, with the aim of generating global chimera-state maps, analogous to global seismicity maps, for earthquake forecasting studies.
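The oscillator dynamics described above can be illustrated with a minimal Euler integration of the Kuramoto model with phase lag on a random directed graph. This is a hedged toy sketch, not the authors' construction: the adjacency matrix, coupling strength K, lag alpha, and frequency spread are arbitrary illustrative choices, whereas the paper derives its graph from global seismicity data.

```python
import numpy as np

def kuramoto_step(theta, omega, A, K, alpha, dt):
    """One Euler step of the Kuramoto model with phase lag alpha on a
    directed graph with adjacency matrix A (A[i, j] = 1 if j drives i)."""
    diff = theta[None, :] - theta[:, None]        # diff[i, j] = theta_j - theta_i
    coupling = (A * np.sin(diff - alpha)).sum(axis=1)
    k_in = np.maximum(A.sum(axis=1), 1.0)         # in-degree, avoiding divide-by-zero
    return theta + dt * (omega + (K / k_in) * coupling)

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.2).astype(float)      # random directed graph (illustrative)
np.fill_diagonal(A, 0.0)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
omega = rng.normal(0.0, 0.1, n)                   # near-identical natural frequencies
for _ in range(5000):
    theta = kuramoto_step(theta, omega, A, K=1.0, alpha=1.4, dt=0.01)

# Order parameter r in [0, 1]: r ~ 1 means full synchrony; persistent
# intermediate r can signal chimera-like partial synchronization.
r = float(abs(np.exp(1j * theta).mean()))
print(round(r, 3))
```

Whether a chimera state actually appears depends on the graph topology and on alpha being close to pi/2, which is what the paper investigates on seismicity-derived graphs.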
Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software (OPAL)
NASA Astrophysics Data System (ADS)
Daniell, J. E.
2011-07-01
This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure was created to provide a framework for optimisation of a Global Earthquake Modelling process through: 1. overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost, and technology); 2. preliminary research, acquisition, and familiarisation with available ELE software packages; 3. assessment of these software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4. loss analysis for a deterministic earthquake (Mw = 7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment, Crowley et al., 2006), a capacity-spectrum-based method, HAZUS (HAZards United States, FEMA, USA, 2003), and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach, Lindholm et al., 2007) software, which was adapted in order to compare the different processes needed for the production of damage, economic, and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data.
NASA Astrophysics Data System (ADS)
Crowley, H.; Modica, A.
2009-04-01
Loss estimates have been shown in various studies to be highly sensitive to the methodology employed, the seismicity and ground-motion models, the vulnerability functions, and assumed replacement costs (e.g. Crowley et al., 2005; Molina and Lindholm, 2005; Grossi, 2000). It is clear that future loss models should explicitly account for these epistemic uncertainties. Indeed, a cause of frequent concern in the insurance and reinsurance industries is precisely the fact that for certain regions and perils, available commercial catastrophe models often yield significantly different loss estimates. Of equal relevance to many users is the fact that updates of the models sometimes lead to very significant changes in the losses compared to the previous version of the software. In order to model the epistemic uncertainties that are inherent in loss models, a number of different approaches for the hazard, vulnerability, exposure and loss components should be clearly and transparently applied, with the shortcomings and benefits of each method clearly exposed by the developers, such that the end-users can begin to compare the results and the uncertainty in these results from different models. This paper looks at an application of a logic-tree-type methodology to model the epistemic uncertainty in the vulnerability component of a loss model for Tunisia. Unlike in other countries that have been subjected to damaging earthquakes, there has not been a significant effort to undertake vulnerability studies for the building stock in Tunisia. Hence, when presented with the need to produce a loss model for a country like Tunisia, a number of different approaches can and should be applied to model the vulnerability. These include empirical procedures which utilise observed damage data, and mechanics-based methods where both the structural characteristics and response of the buildings are analytically modelled. Some preliminary applications of the methodology are presented and discussed.
Nonconservative current-induced forces: A physical interpretation
Dundas, Daniel; Paxton, Anthony T; Horsfield, Andrew P
2011-01-01
Summary We give a physical interpretation of the recently demonstrated nonconservative nature of interatomic forces in current-carrying nanostructures. We start from the analytical expression for the curl of these forces, and evaluate it for a point defect in a current-carrying system. We obtain a general definition of the capacity of electrical current flow to exert a nonconservative force, and thus do net work around closed paths, by a formal noninvasive test procedure. Second, we show that the gain in atomic kinetic energy over time, generated by nonconservative current-induced forces, is equivalent to the uncompensated stimulated emission of directional phonons. This connection with electron–phonon interactions quantifies explicitly the intuitive notion that nonconservative forces work by angular momentum transfer. PMID:22259754
Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2
Field, Edward H.; Weldon, Ray J., II; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.
2008-01-01
This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.
Instability model for recurring large and great earthquakes in southern California
Stuart, W.D.
1985-01-01
The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.
An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling
Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.
2009-01-01
We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes; and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records
Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.
2013-01-01
This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
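The rigid-block (Newmark) analysis that the program automates can be illustrated with a minimal sketch: the block accumulates relative velocity whenever ground acceleration exceeds the critical (yield) acceleration, and permanent displacement is the time-integral of that velocity. The synthetic single-pulse record and the yield acceleration of 1 m/s² below are illustrative assumptions only; SLAMMER itself operates on real strong-motion records and also provides decoupled and fully coupled methods.

```python
import numpy as np

def newmark_displacement(accel, dt, ac):
    """Rigid-block (Newmark) sliding displacement for an acceleration
    history `accel` (m/s^2) sampled at interval `dt` (s), with critical
    (yield) acceleration `ac` (m/s^2). Returns displacement in metres."""
    v = 0.0   # sliding velocity of the block relative to the slope
    d = 0.0   # accumulated permanent displacement
    for a in accel:
        # The block slides while ground acceleration exceeds the yield
        # value, or while it still has relative velocity from before.
        if v > 0.0 or a > ac:
            v += (a - ac) * dt
            v = max(v, 0.0)   # sliding stops when relative velocity reaches zero
            d += v * dt
    return d

# Synthetic "record": a single 0.5 s pulse of 3 m/s^2 (an assumption for illustration)
dt = 0.005
t = np.arange(0.0, 2.0, dt)
accel = np.where((t > 0.5) & (t < 1.0), 3.0, 0.0)
disp = newmark_displacement(accel, dt, ac=1.0)
print(f"{disp:.3f} m")
```

For this pulse the block gains about 1 m/s during shaking and then decelerates at the yield level, accumulating roughly three quarters of a metre of permanent displacement.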
Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami
NASA Astrophysics Data System (ADS)
Showstack, Randy
2011-04-01
Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.
Modeling And Economics Of Extreme Subduction Earthquakes: Two Case Studies
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Emerson, D.; Perea, N.; Moulinec, C.
2008-05-01
The destructive effects of large magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) have been demonstrated in recent centuries. For example, two TSS earthquakes with Ms 7+ and 8.1 occurred on the coasts of the states of Guerrero and Michoacan on 7 April 1845 and 19 September 1985, respectively; the economic losses for the latter were about 7 billion US dollars. Also, the largest instrumentally observed TSS earthquake in Mexico, Ms 8.2, occurred in the Colima-Jalisco region on 3 June 1932, and on 9 October 1995 another similar, Ms 7.4, event occurred in the same region; the latter produced economic losses of hundreds of thousands of US dollars. The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it might vary from decades to centuries [1]. There is therefore a lack of strong-ground-motion records for extreme TSS earthquakes in Mexico, which, as mentioned above, recently had an important economic impact on MC and could potentially have one on G. In this work we obtained samples of broadband synthetics [2,3] expected in MC and G, associated with extreme (plausible) magnitude Mw 8.5 TSS scenario earthquakes with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economic impacts of the proposed extreme TSS earthquake scenarios for MC and G were assessed as follows: for MC, by using a risk-acceptability criterion, the probabilities of exceedance of the maximum seismic responses of its construction stock under the assumed scenarios, and the economic losses observed for the 19 September 1985 earthquake; and for G, by estimating the expected economic losses based on a seismic vulnerability assessment of its construction stock under the extreme seismic scenario considered. [1] Nishenko S.P. and Singh S.K., BSSA 77, 6, 1987. [2] Cabrera E., Chavez M., Madariaga R., Mai M., Frisenda M., Perea N., AGU Fall Meeting, 2005. [3] Chavez M., Olsen K
ARMA models for earthquake ground motions. Seismic safety margins research program
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.; Oliver, R. M.; Pister, K. S.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
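The discrete time-domain simulation the report describes can be sketched as follows. The ARMA orders, coefficients, and the deterministic envelope used to mimic the build-up and decay of shaking are illustrative assumptions, not the parameters estimated for the four California records.

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, seed=0):
    """Simulate an ARMA(p, q) process:
    x_t = sum_i phi_i * x_{t-i} + e_t + sum_j theta_j * e_{t-j},
    with e_t ~ N(0, sigma^2) white noise."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    burn = max(p, q)
    e = rng.normal(0.0, sigma, n + burn)
    x = np.zeros(n + burn)
    for t in range(burn, len(x)):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p))
        ma = sum(theta[j] * e[t - 1 - j] for j in range(q))
        x[t] = ar + e[t] + ma
    return x[burn:]

# Stationary ARMA(2,1) modulated by an envelope as a crude ground-motion surrogate
x = simulate_arma(phi=[1.2, -0.5], theta=[0.4], n=2000)
t = np.arange(len(x)) * 0.01                      # 100 samples/s
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)      # builds up, then decays
accel = envelope * x
print(len(accel), round(float(np.std(accel)), 3))
```

In practice the ARMA parameters would be fitted to digitized acceleration data by maximum likelihood, as the report describes, rather than chosen by hand.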
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
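One simple instance of the kind of probabilistic fusion a central mediator might perform is precision-weighted combination of independent Gaussian magnitude estimates (the posterior under a flat prior). This is a hedged sketch only: the three reports and their uncertainties are hypothetical, and the actual CDM design combines full earthquake reports and observed shaking, not just magnitudes.

```python
import numpy as np

def combine_gaussian_estimates(means, sigmas):
    """Fuse independent Gaussian estimates of one quantity into a single
    posterior (flat prior): precision-weighted mean and combined sigma."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # precisions
    mu = float((w * np.asarray(means, dtype=float)).sum() / w.sum())
    sigma = float(1.0 / np.sqrt(w.sum()))            # always <= min(sigmas)
    return mu, sigma

# Hypothetical magnitude reports from three algorithms for the same event
mu, sigma = combine_gaussian_estimates(means=[6.1, 6.4, 6.3],
                                       sigmas=[0.4, 0.3, 0.2])
print(round(mu, 2), round(sigma, 2))
```

The combined uncertainty is smaller than any individual report's, which is the basic payoff of fusing independent algorithms.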
An improved geodetic source model for the 1999 Mw 6.3 Chamoli earthquake, India
NASA Astrophysics Data System (ADS)
Xu, Wenbin; Bürgmann, Roland; Li, Zhiwei
2016-04-01
We present a distributed slip model for the 1999 Mw 6.3 Chamoli earthquake of north India using interferometric synthetic aperture radar (InSAR) data from both ascending and descending orbits and Bayesian estimation of confidence levels and trade-offs of the model geometry parameters. The results of fault-slip inversion in an elastic half-space show that the earthquake ruptured a 9° (+3.4°/-2.2°) northeast-dipping plane with a maximum slip of ~1 m. The fault plane is located at a depth of ~15.9 (+1.1/-3.0) km and is ~120 km north of the Main Frontal Thrust, implying that the rupture plane was on the northernmost detachment near the mid-crustal ramp of the Main Himalayan Thrust. The InSAR-determined moment is 3.35 × 10^18 N·m with a shear modulus of 30 GPa, equivalent to Mw 6.3, which is smaller than the seismic moment estimates of Mw 6.4-6.6. Possible reasons for this discrepancy include the trade-off between moment and depth, uncertainties in seismic moment tensor components for shallow dip-slip earthquakes and the role of earth structure models in the inversions. The released seismic energy from recent earthquakes in the Garhwal region is far less than the accumulated strain energy since the 1803 Ms 7.5 earthquake, implying substantial hazard of future great earthquakes.
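The equivalence between the InSAR-determined moment and Mw quoted above follows the standard Hanks-Kanamori relation; a quick check:

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979):
    Mw = (2/3) * log10(M0) - 6.07, with M0 in N·m."""
    return (2.0 / 3.0) * math.log10(m0_newton_metres) - 6.07

# Geodetic moment reported in the abstract: 3.35e18 N·m
mw = moment_magnitude(3.35e18)
print(round(mw, 2))  # ~6.28, i.e. Mw 6.3 as stated
```

The seismic estimates of Mw 6.4-6.6 correspond to moments roughly 1.5-3 times larger, which is the discrepancy the authors discuss.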
Dynamic Models of Earthquakes and Tsunamis in the Santa Barbara Channel, California
NASA Astrophysics Data System (ADS)
Oglesby, David; Ryan, Kenny; Geist, Eric
2016-04-01
The Santa Barbara Channel and the adjacent Ventura Basin in California are the location of a number of large faults that extend offshore and could potentially produce earthquakes of magnitude greater than 7. The area is also home to hundreds of thousands of coastal residents. To properly evaluate the earthquake and tsunami hazard in this region requires the characterization of possible earthquake sources as well as the analysis of tsunami generation, propagation and inundation. Toward this end, we perform spontaneous dynamic earthquake rupture models of potential events on the Pitas Point/Lower Red Mountain faults, a linked offshore thrust fault system. Using the 3D finite element method, a realistic nonplanar fault geometry, and rate-state friction, we find that this fault system can produce an earthquake of up to magnitude 7.7, consistent with estimates from geological and paleoseismological studies. We use the final vertical ground deformation from our models as initial conditions for the generation and propagation of tsunamis to the shore, where we calculate inundation. We find that path and site effects lead to large tsunami amplitudes northward and eastward of the fault system, and in particular we find significant tsunami inundation in the low-lying cities of Ventura and Oxnard. The results illustrate the utility of dynamic earthquake modeling to produce physically plausible slip patterns and associated seafloor deformation that can be used for tsunami generation.
Locating and Modeling Regional Earthquakes with Broadband Waveform Data
NASA Astrophysics Data System (ADS)
Tan, Y.; Zhu, L.; Helmberger, D.
2003-12-01
Retrieving source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory stage in some places with a dense seismic network, such as TriNet in Southern California, revisiting historical events in these places, or effective real-time investigation of small events in many other places, where normally only a few local waveforms plus some short-period recordings are available, remains a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 sec for Pnl and 8-100 sec for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which could come from as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green's function library constructed from a regionalized 1-D model together with necessary calibration information, and adopts a grid search strategy for both hypocenter and
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Krastev, P. G.; Meade, B. J.
2015-12-01
Over the past 80 years, eight Mw > 6.7 strike-slip earthquakes west of 40° longitude have ruptured the North Anatolian fault (NAF), largely from east to west. The series began with the 1939 Erzincan earthquake in eastern Turkey, and the most recent 1999 Mw = 7.4 Izmit earthquake extended the pattern of ruptures into the Sea of Marmara in western Turkey. The mean time between seismic events in this westward progression is 8.5±11 years (67% confidence interval), much greater than the timescale of seismic wave propagation (seconds to minutes). The delayed triggering of these earthquakes may be explained by the propagation of earthquake-generated diffusive viscoelastic fronts within the upper mantle that slowly increase the Coulomb failure stress change (CFS) at adjacent hypocenters. Here we develop three-dimensional stress transfer models with an elastic upper crust coupled to a viscoelastic Burgers rheology mantle. Both the Maxwell (η_M = 10^18.6-10^19.0 Pa·s) and Kelvin (η_K = 10^18.0-10^19.0 Pa·s) viscosities are constrained by viscoelastic block models that simultaneously explain geodetic observations of deformation before and after the 1999 Izmit earthquake. We combine this geodetically constrained rheological model with the observed sequence of large earthquakes since 1939 to calculate the time-evolution of CFS changes along the North Anatolian Fault due to viscoelastic stress transfer. Critical values of mean CFS at which the earthquakes in the eight-decade sequence occur range from -0.007 to 2.946 MPa and may exceed the magnitude of static CFS values by as much as 180%. The variability of four orders of magnitude in critical triggering stress may reflect along-strike variations in NAF strength. Based on the median and mean of these critical stress values, we infer that the NAF strand in the northern Marmara Sea near Istanbul, which previously ruptured in 1509, may reach a critical stress level between 2015 and 2032.
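A back-of-the-envelope check on the proposed mechanism: the Maxwell relaxation time τ = η/μ for the quoted viscosity range, with an assumed shear modulus of 30 GPa (my assumption for illustration, not a value stated here), comes out at years to a decade, the same order as the 8.5-year mean inter-event time.

```python
import math

# Maxwell relaxation time tau = eta / mu: the characteristic timescale on
# which earthquake-generated stress fronts relax through the upper mantle.
SECONDS_PER_YEAR = 3.156e7
mu = 30e9                                 # Pa, assumed shear modulus
for eta in (10**18.6, 10**19.0):          # Pa·s, Maxwell viscosity range from the abstract
    tau_years = eta / (mu * SECONDS_PER_YEAR)
    print(f"eta = 10^{math.log10(eta):.1f} Pa·s -> tau ~ {tau_years:.1f} yr")
```

The Burgers rheology used in the study adds a shorter-timescale Kelvin element on top of this steady-state Maxwell behavior, so this single-timescale estimate is only an order-of-magnitude sketch.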
NASA Astrophysics Data System (ADS)
Zhang, Zhe; Xun, Zhi-Peng; Wu, Ling; Chen, Yi-Li; Xia, Hui; Hao, Da-Peng; Tang, Gang
2016-06-01
In order to study the effects of the microscopic details of fractal substrates on the scaling behavior of the growth model, a generalized linear fractal Langevin-type equation, ∂h/∂t = (-1)^(m+1) ν∇^(m z_rw) h (where z_rw is the dynamic exponent of a random walk on the substrate), driven by nonconserved and conserved noise, is proposed and investigated theoretically employing scaling analysis. The corresponding dynamic scaling exponents are obtained.
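For linear Langevin equations of this type, the scaling exponents follow from a standard Fourier-space analysis. The sketch below treats the nonconserved-noise case on a substrate of fractal dimension d_f; it is the textbook route for linear growth equations, offered here as a plausible reading of the abstract's scaling analysis rather than the paper's own derivation (the conserved-noise case modifies the noise correlator and hence the exponents).

```latex
% Fourier transform of the linear equation; each mode relaxes independently:
\partial_t \hat h(\mathbf{k},t)
  = -\nu\,|\mathbf{k}|^{m z_{rw}}\,\hat h(\mathbf{k},t) + \hat\eta(\mathbf{k},t),
\qquad
\langle \hat\eta(\mathbf{k},t)\,\hat\eta(\mathbf{k}',t')\rangle
  = 2D\,\delta^{d_f}(\mathbf{k}+\mathbf{k}')\,\delta(t-t').
% The mode relaxation time \tau(k) \sim k^{-m z_{rw}} fixes the dynamic exponent:
z = m\,z_{rw}.
% The stationary structure factor S(k) = D/(\nu k^{m z_{rw}}) then gives the
% roughness and growth exponents on a d_f-dimensional substrate:
\alpha = \frac{m\,z_{rw} - d_f}{2},
\qquad
\beta = \frac{\alpha}{z}.
```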
Source models of great earthquakes from ultra low-frequency normal mode data
NASA Astrophysics Data System (ADS)
Lentas, Konstantinos; Ferreira, Ana; Clévédé, Eric
2014-05-01
We present a new earthquake source inversion technique based on normal mode data for the simultaneous determination of the rupture duration, length and moment tensor of large earthquakes with unilateral rupture. We use ultra low-frequency (f < 1 mHz) normal mode spheroidal multiplets and the phases of split free oscillations, which are modelled using Higher Order Perturbation Theory (HOPT), taking into account the Earth's rotation, ellipticity and lateral heterogeneities. A Monte Carlo exploration of the model space is carried out, enabling the assessment of source parameter trade-offs and uncertainties. We carry out synthetic tests for four different realistic artificial earthquakes with different faulting mechanisms and magnitudes (Mw 8.1-9.3) to investigate errors in the source inversions due to: (i) unmodelled 3-D Earth structure; (ii) noise in the data; (iii) uncertainties in spatio-temporal earthquake location; and (iv) neglecting the source finiteness in point source moment tensor inversions. We find that unmodelled 3-D structure is the most serious source of errors for rupture duration and length determinations, especially for the lowest magnitude artificial events. The errors in moment magnitude and fault mechanism are generally small, with the rake angle showing systematically larger errors (up to 20 degrees). We then carry out source inversions of five giant thrust earthquakes (Mw ≥ 8.5): (i) the 26 December 2004 Sumatra-Andaman earthquake; (ii) the 28 March 2005 Nias, Sumatra earthquake; (iii) the 12 September 2007 Bengkulu earthquake; (iv) the Tohoku, Japan earthquake of 11 March 2011; and (v) the Maule, Chile earthquake of 27 February 2010; as well as (vi) the recent 24 May 2013 Mw 8.3 Okhotsk Sea, Russia, deep (607 km) earthquake. While finite source inversions for rupture length, duration, magnitude and fault mechanism are possible for the Sumatra-Andaman and Tohoku events, for all the other events their lower magnitudes do not allow stable inversions of mode
Cross-cultural comparisons between the earthquake preparedness models of Taiwan and New Zealand.
Jang, Li-Ju; Wang, Jieh-Jiuh; Paton, Douglas; Tsai, Ning-Yu
2016-04-01
Taiwan and New Zealand are both located in the Pacific Rim where 81 per cent of the world's largest earthquakes occur. Effective programmes for increasing people's preparedness for these hazards are essential. This paper tests the applicability of the community engagement theory of hazard preparedness in two distinct cultural contexts. Structural equation modelling analysis provides support for this theory. The paper suggests that the close fit between theory and data that is achieved by excluding trust supports the theoretical prediction that familiarity with a hazard negates the need to trust external sources. The results demonstrate that the hazard preparedness theory is applicable to communities that have previously experienced earthquakes and are therefore familiar with the associated hazards and the need for earthquake preparedness. The paper also argues that cross-cultural comparisons provide opportunities for collaborative research and learning as well as access to a wider range of potential earthquake risk management strategies. PMID:26282331
The investigation of blind continental earthquake sources through analogue and numerical models
NASA Astrophysics Data System (ADS)
Bonini, L.; Toscani, G.; Seno, S.
2012-04-01
One of the most challenging topics in earthquake geology is characterizing seismogenic sources, i.e. the potential causative faults of earthquakes. The main seismogenic layer is located in the upper brittle crust. Nevertheless, this does not mean that a fault takes up the whole schizosphere, i.e. from the brittle-plastic transition to the surface. Indeed, several recent damaging earthquakes were generated by blind or "hidden" faults: the 23 October 2011 Van earthquake (Mw 7.1, Turkey); the 3 September 2010 Darfield earthquake (Mw 7.1, New Zealand); the 12 January 2010 Haiti earthquake (Mw 7.0); and the 6 April 2009 L'Aquila earthquake (Mw 6.3, Italy). Understanding how a fault grows and develops is therefore a key question in evaluating the seismogenic potential of an area. Analogue models have been used to understand the kinematics and geometry of geological structures since the beginning of modern geology, while numerical modelling has developed greatly over the last thirty years. Nowadays the two methods can be used together, each informing the other. Over the last two to three years we have used both numerical and analogue models to investigate the long-term and short-term evolution of a blind normal fault. To do this we equipped the Analogue Model Laboratory of the University of Pavia with a laser scanner, a stepper motor and other high-resolution tools in order to detect the distribution of deformation induced mainly by blind faults. The goal of this approach is to mimic the effects of fault movements in a scaled model. We selected two seismogenic source cases: the causative fault of the 1908 Messina earthquake (Mw 7.1) and that of the 2009 L'Aquila earthquake (Mw 6.3). In the first case we investigated the long-term evolution of the structure using a set of analogue models, after which a numerical model of our sandbox allowed us to investigate stress and strain partitioning. In the second case we performed only an analogue model of short-term evolution of
NASA Astrophysics Data System (ADS)
Rodgers, A. J.; Xie, X.; Petersson, A.
2007-12-01
The next major earthquake in the San Francisco Bay area is likely to occur on the Hayward-Rodgers Creek Fault system. Attention on the southern Hayward section is appropriate given the upcoming 140th anniversary of the 1868 M 7 rupture coinciding with the estimated recurrence interval. This presentation will describe ground motion simulations for large (M > 6.5) earthquakes on the Hayward Fault using a recently developed elastic finite difference code and high-performance computers at Lawrence Livermore National Laboratory. Our code easily reads the recent USGS 3D seismic velocity model of the Bay Area developed in 2005 and used for simulations of the 1906 San Francisco and 1989 Loma Prieta earthquakes. Previous work has shown that the USGS model performs very well when used to model intermediate period (4-33 seconds) ground motions from moderate (M ~ 4-5) earthquakes (Rodgers et al., 2008). Ground motions for large earthquakes are strongly controlled by the hypocenter location, spatial distribution of slip, rise time and directivity effects. These are factors that are impossible to predict in advance of a large earthquake and lead to large epistemic uncertainties in ground motion estimates for scenario earthquakes. To bound this uncertainty, we are performing suites of simulations of scenario events on the Hayward Fault using stochastic rupture models following the method of Liu et al. (Bull. Seism. Soc. Am., 96, 2118-2130, 2006). These rupture models have spatially variable slip, rupture velocity, rise time and rake constrained by characterization of inferred finite fault ruptures and expert opinion. Computed ground motions show variability due to the variability in rupture models and can be used to estimate the average and spread of ground motion measures at any particular site. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No.W-7405-Eng-48. This is
Analysing Post-Seismic Deformation of Izmit Earthquake with InSAR, GNSS and Coulomb Stress Modelling
NASA Astrophysics Data System (ADS)
Alac Barut, R.; Trinder, J.; Rizos, C.
2016-06-01
On August 17th 1999, a Mw 7.4 earthquake struck the city of Izmit in north-west Turkey. This event was one of the most devastating earthquakes of the twentieth century. The epicentre of the Izmit earthquake was on the North Anatolian Fault (NAF), one of the most active right-lateral strike-slip faults on Earth. The earthquake offers an opportunity to study how strain is accommodated in an inter-segment region of a large strike-slip fault. In order to determine the post-seismic effects of the Izmit earthquake, the authors modelled Coulomb stress changes of the aftershocks and used the deformation measurement techniques of Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite Systems (GNSS). The authors have shown that InSAR and GNSS observations over a period of three months after the earthquake, combined with Coulomb stress change modelling, can explain the fault zone expansion, as well as the deformation of the northern region of the NAF. There is also strong agreement between the InSAR and GNSS results for the post-seismic phases of investigation, with differences of less than 2 mm and a standard deviation of the differences of less than 1 mm.
Time-predictable model applicability for earthquake occurrence in northeast India and vicinity
NASA Astrophysics Data System (ADS)
Panthi, A.; Shanker, D.; Singh, H. N.; Kumar, A.; Paudyal, H.
2011-03-01
Northeast India and its vicinity is one of the most seismically active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model using earthquake data reported in the catalogues of the National Geophysical Data Center, the National Earthquake Information Center and the United States Geological Survey, and in the book prepared by Gupta et al. (1986), for the period 1906-2008. Events with a surface-wave magnitude of Ms≥5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified from the observed clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the magnitude of the preceding mainshock (Mp) and not on that of the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form log T = cMp + a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The parameters "c" and "a" are estimated to be 0.21 and 0.35 in northeast India and its adjoining regions. The lower-than-average value of c implies that earthquake occurrence in this region differs from that at plate boundaries. These results can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
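The repeat-time relation quantified in this abstract is simple enough to evaluate directly. A minimal sketch, using the reported c = 0.21 and a = 0.35; the function name is ours, and the units of T follow the study's convention (presumably years):

```python
def repeat_time(mp, c=0.21, a=0.35):
    """Repeat time T from the time-predictable relation
    log10(T) = c*Mp + a, with the regional estimates c=0.21, a=0.35.
    Mp is the magnitude of the preceding mainshock."""
    return 10 ** (c * mp + a)

# Expected interval following an Ms 7.0 mainshock in the region
t_after_m7 = repeat_time(7.0)
```

Because c is positive, larger preceding mainshocks imply longer waits until the next event, which is the defining feature of the time-predictable model.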
Validation and modeling of earthquake strong ground motion using a composite source model
NASA Astrophysics Data System (ADS)
Zeng, Y.
2001-12-01
Zeng et al. (1994) proposed a composite source model for synthetic strong ground motion prediction. In that model, the source is taken as a superposition of circular subevents with a constant stress drop. The number of subevents and their radii follow a power-law distribution equivalent to the Gutenberg-Richter magnitude-frequency relation for seismicity. The heterogeneous nature of the composite source model is characterized by its maximum subevent size and subevent stress drop. As rupture propagates through each subevent, it radiates a Brune pulse or a Sato and Hirasawa circular crack pulse. The method has proved successful in generating realistic strong-motion seismograms, as judged by comparison with observations from earthquakes in California, the eastern US, Guerrero (Mexico), Turkey and India. The model has since been improved by including waves scattered from small-scale heterogeneities in the Earth, site-specific ground motion prediction using weak-motion site amplification, and nonlinear soil response using geotechnical engineering models. Last year, I introduced an asymmetric circular rupture to improve the subevent source radiation and to provide a rupture model consistent between the overall fault rupture process and its subevents. In this study, I revisit the Landers, Loma Prieta, Northridge, Imperial Valley and Kobe earthquakes using the improved source model. The results show that the improved subevent ruptures capture rupture directivity better than our previous studies. Additional validation includes comparison of synthetic strong ground motions with the observed ground accelerations from the Chi-Chi (Taiwan) and Izmit (Turkey) earthquakes. Since the method has evolved considerably since it was first proposed, I will also compare results between each major modification of the model and demonstrate its backward compatibility with its earlier simulation procedures.
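The power-law subevent population at the heart of the composite source model can be generated by inverse-transform sampling of a truncated power law. A hedged sketch; the exponent d ~ 2 and the function name are illustrative assumptions, not Zeng's parameter values:

```python
import random

def sample_subevent_radii(n, r_min, r_max, d=2.0, seed=0):
    """Draw n subevent radii from a truncated power law with
    N(>R) ~ R**-d between r_min and r_max; an exponent near d = 2
    mimics Gutenberg-Richter-like magnitude-frequency statistics.
    Uses inverse-transform sampling of the truncated Pareto CDF."""
    rng = random.Random(seed)
    a, b = r_min ** -d, r_max ** -d
    radii = []
    for _ in range(n):
        u = rng.random()
        # inverse of F(r) = (r_min**-d - r**-d) / (r_min**-d - r_max**-d)
        radii.append((a - u * (a - b)) ** (-1.0 / d))
    return radii
```

The sample is dominated by small subevents, with rare large ones up to the maximum subevent size that, together with the subevent stress drop, characterizes the model's heterogeneity.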
Comparison of test and earthquake response modeling of a nuclear power plant containment building
Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.
1985-01-01
The reactor building of a BWR plant was subjected to dynamic testing, a minor earthquake, and a strong earthquake at different times. Analytical models simulating each of these events were devised by previous investigators. A comparison of the characteristics of these models is made in this paper. The different modeling assumptions involved in the different simulation analyses restrict the validity of the models for general use and also narrow the comparison down to only a few modes. The dynamic tests successfully identified the first mode of the soil-structure system.
NASA Astrophysics Data System (ADS)
Haeussler, P. J.; Witter, R. C.; Wang, K.
2013-12-01
The October 28, 2012 Mw 7.8 Haida Gwaii, British Columbia, earthquake was the second largest historical earthquake recorded in Canada. Earthquake seismology and GPS geodesy show this was an underthrusting event, in agreement with prior studies that indicated oblique underthrusting of the Haida Gwaii by the Pacific plate. Coseismic deformation is poorly constrained by geodesy, with only six GPS sites and two tide gauge stations anywhere near the rupture area. In order to better constrain the coseismic deformation, we measured the upper limit of sessile intertidal organisms at 26 sites relative to sea level. We dominantly measured the positions of bladder weed (Fucus distichus - 617 observations) and the common acorn barnacle (Balanus balanoides - 686 observations). Physical conditions control the upper limit of sessile intertidal organisms, so we tried to find the quietest water conditions, with steep, but not overhanging faces, where slosh from wave motion was minimized. We focused on the western side of the islands as rupture models indicated that the greatest displacement was there. However, we were also looking for calm water sites in bays located as close as possible to the often tumultuous Pacific Ocean. In addition, we made 322 measurements of sea level that will be used to develop a precise tidal model and to evaluate the position of the organisms with respect to a common sea level datum. We anticipate the resolution of the method will be about 20-30 cm. The sites were focused on the western side of the Haida Gwaii from Wells Bay on the south up to Otard Bay to the north, with 5 transects across strike. We also collected data at the town of Masset, which lies outside of the deformation zone of the earthquake. We observed dried and desiccated bands of fucus and barnacles at two sites on the western coast of southern Moresby Island (Gowgia Bay and Wells Bay). Gowgia Bay had the strongest evidence of uplift with fucus that was dried out and apparently dead. A
Damage and the Gutenberg-Richter Law: from simple models to natural earthquake fault systems
NASA Astrophysics Data System (ADS)
Tiampo, K. F.; Klein, W.; Rundle, J. B.; Dominguez, R.; Serino, C.
2010-12-01
Natural earthquake fault systems are highly nonhomogeneous in space; these inhomogeneities arise because the Earth is made of a variety of materials that hold and dissipate stress differently. One way that the inhomogeneous nature of fault systems manifests itself is in the spatial patterns which emerge in seismicity graphs (Tiampo et al., 2002, 2007). Despite their inhomogeneous nature, real faults are often modeled as spatially homogeneous systems. One argument for this approach is that earthquake faults experience long-range stress transfer, and if this range is longer than the length scales associated with the inhomogeneities of the system, the dynamics of the system may be unaffected by their presence. However, it is not clear that this is the case. In this work we study the scaling of an earthquake model that is a variation of the Olami-Feder-Christensen (OFC) model, in order to explore the effect of spatial inhomogeneities on earthquake-like systems when interaction ranges are long, but not necessarily longer than the distances associated with those inhomogeneities (Rundle and Jackson, 1977; Olami et al., 1988). For long ranges and without inhomogeneities, such models have been found to produce scaling similar to the GR scaling found in real earthquake systems (Rundle and Klein, 1993). In the earthquake models discussed here, damage is distributed inhomogeneously throughout and the interaction ranges, while long, are not longer than all of the damage length scales. We find that the scaling depends not only on the amount of damage, but also on the spatial distribution of that damage. In addition, we study the behaviour of particular natural earthquake faults and the spatial and temporal variation of GR scaling in those systems, in order to compare them with various damage cases from the simulations.
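For readers unfamiliar with the OFC model, the homogeneous nearest-neighbour baseline (without the damage sites or long interaction ranges studied here) can be sketched in a few lines. All names, the grid size and the threshold are our illustrative choices; the redistribution parameter alpha < 0.25, together with open boundaries, is what makes the dynamics nonconservative:

```python
import random

def ofc_avalanche(L=16, alpha=0.2, n_events=50, seed=1):
    """Minimal homogeneous Olami-Feder-Christensen cellular automaton.
    Each event: load all sites uniformly until the most-loaded site reaches
    the threshold (1.0), then let it topple, passing a fraction alpha of its
    stress to each nearest neighbour. Returns the avalanche sizes, whose
    statistics mimic Gutenberg-Richter scaling for suitable parameters."""
    rng = random.Random(seed)
    stress = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(n_events):
        # drive: find the most-loaded site and raise every site by its gap
        i0, j0 = max(((i, j) for i in range(L) for j in range(L)),
                     key=lambda ij: stress[ij[0]][ij[1]])
        gap = 1.0 - stress[i0][j0]
        for row in stress:
            for j in range(L):
                row[j] += gap
        stress[i0][j0] = 1.0  # guard against floating-point round-off
        # relax: topple until every site is back below threshold
        size = 0
        unstable = [(i0, j0)]
        while unstable:
            i, j = unstable.pop()
            if stress[i][j] < 1.0:
                continue
            s = stress[i][j]
            stress[i][j] = 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # open (dissipating) boundary
                    stress[ni][nj] += alpha * s
                    if stress[ni][nj] >= 1.0:
                        unstable.append((ni, nj))
        sizes.append(size)
    return sizes
```

The damage variants studied in the abstract would delete a fraction of sites (so their share of redistributed stress is lost), which is a small modification of the neighbour loop above.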
PAGER-CAT: A composite earthquake catalog for calibrating global fatality models
Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.
2009-01-01
We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy-to-use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of ShakeMaps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are
NASA Astrophysics Data System (ADS)
Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Parker, Robert N.; Hilton, Robert G.
2014-04-01
We assess earthquake volume balance and the growth of mountains in the context of a new landslide inventory for the Mw 7.9 Wenchuan earthquake in central China. Coseismic landslides were mapped from high-resolution remote imagery using an automated algorithm and manual delineation, which allow us to distinguish clustered landslides that can bias landslide volume calculations. Employing a power-law landslide area-volume relation, we find that the volume of landslide-associated mass wasting (~2.8 + 0.9/-0.7 km3) is lower than previously estimated (~5.7-15.2 km3) and comparable to the volume of rock uplift (~2.6 ± 1.2 km3) during the Wenchuan earthquake. If fluvial evacuation removes landslide debris within the earthquake cycle, then the volume addition from coseismic uplift will be effectively offset by landslide erosion. If all earthquakes in the region followed this volume budget pattern, the efficient counteraction of coseismic rock uplift raises a fundamental question about how earthquakes build mountainous topography. To provide a framework for addressing this question, we explore a group of scaling relations to assess earthquake volume balance. We predict coseismic uplift volumes for thrust-fault earthquakes based on geophysical models for coseismic surface deformation and relations between fault rupture parameters and moment magnitude, Mw. By coupling this scaling relation with landslide volume-Mw scaling, we obtain an earthquake volume balance relation in terms of moment magnitude Mw, which is consistent with the revised Wenchuan landslide volumes and observations from the 1999 Chi-Chi earthquake in Taiwan. Incorporating the Gutenberg-Richter frequency-Mw relation, we use this volume balance to derive an analytical expression for crustal thickening from coseismic deformation based on an index of seismic intensity over a defined area. This model yields reasonable rates of crustal thickening from coseismic deformation (e.g., ~0.1-0.5 km Ma-1 in
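The landslide volume estimate rests on a power-law area-volume relation applied landslide by landslide. A sketch with illustrative scaling parameters, of the kind fitted to bedrock-landslide inventories rather than the calibration used in this paper:

```python
def total_landslide_volume(areas_m2, alpha=0.146, gamma=1.332):
    """Sum per-landslide volumes (m^3) from the power-law scaling
    V = alpha * A**gamma, with A the mapped landslide area in m^2.
    alpha and gamma here are illustrative, not the paper's values."""
    return sum(alpha * a ** gamma for a in areas_m2)
```

Because gamma > 1, amalgamating k adjacent landslides of area A into a single polygon of area k*A inflates the inferred volume by a factor k**(gamma - 1), which is why the mapping effort above takes care to distinguish clustered landslides.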
Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting
NASA Astrophysics Data System (ADS)
Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie
2014-05-01
Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions and find significant variation in the relative performance of the models depending upon the input data.
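For a given receiver-fault orientation, the Coulomb stress change that underlies such hybrid models reduces to a one-line combination of resolved stresses. A minimal sketch; the sign convention and the effective friction value are common choices in the Coulomb-stress literature, not parameters taken from this paper:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n,
    where d_tau is the shear stress change resolved in the slip
    direction, d_sigma_n is the normal stress change (positive in
    extension, i.e. unclamping), and mu' is the effective friction
    coefficient. Positive dCFS brings the fault closer to failure.
    Any consistent stress unit (MPa is typical) may be used."""
    return d_tau + mu_eff * d_sigma_n
```

The slip-model sensitivity studied in the abstract enters through d_tau and d_sigma_n, which are computed from the (uncertain) source geometry and slip distribution before being resolved onto each receiver fault.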
The 2007 Bengkulu earthquake, its rupture model and implications for seismic hazard
NASA Astrophysics Data System (ADS)
Ambikapathy, A.; Catherine, J. K.; Gahalaut, V. K.; Narsaiah, M.; Bansal, A.; Mahesh, P.
2010-08-01
The 12 September 2007 great Bengkulu earthquake (Mw 8.4) occurred on the west coast of Sumatra about 130 km SW of Bengkulu. The earthquake was followed by two strong aftershocks of Mw 7.9 and 7.0. We estimate coseismic offsets due to the mainshock, derived from near-field Global Positioning System (GPS) measurements from nine continuous SuGAr sites operated by the California Institute of Technology (Caltech) group. Using a forward modelling approach, we estimated the slip distribution on the causative rupture of the 2007 Bengkulu earthquake and found two patches of large slip, one located north of the mainshock epicenter and the other under the Pagai Islands. Both patches of large slip on the rupture occurred under the island belt and shallow water. Thus, despite its great magnitude, this earthquake did not generate a major tsunami. Further, we suggest that the occurrence of great earthquakes in the subduction zone on either side of the Siberut Island region might have increased the static stress in that region, where the last great earthquake occurred in 1797 and where there is evidence of strain accumulation.
Modelling Psychological Responses to the Great East Japan Earthquake and Nuclear Incident
Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O.
2012-01-01
The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great East Japan earthquake and nuclear incident, with data collected 11–13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events. PMID:22666380
An exact renormalization model for earthquakes and material failure: Statics and dynamics
Newman, W.I.; Gabrielov, A.M.; Durand, T.A.; Phoenix, S.L.; Turcotte, D.L.
1993-09-12
Earthquake events are well known to display a variety of empirical scaling laws. Accordingly, renormalization methods offer some hope for understanding why earthquake statistics behave in a similar way over orders of magnitude of energy. We review the progress made in the use of renormalization methods in approaching the earthquake problem. In particular, earthquake events have been modeled by previous investigators as hierarchically organized bundles of fibers with equal load sharing. We consider by computational and analytic means the failure properties of such bundles of fibers, a problem that may be treated exactly by renormalization methods. We show, independent of the specific properties of an individual fiber, that the stress and time thresholds for failure of fiber bundles obey universal, albeit different, scaling laws with respect to the size of the bundles. The application of these results to fracture processes in earthquake events and in engineering materials helps to provide insight into some of the observed patterns and scaling: in particular, the apparent weakening of earthquake faults and composite materials with respect to size, and the apparent emergence of relatively well-defined stresses and times when failure is seemingly assured.
Modeling the 1992 Landers Earthquake with a Rate and State Friction Model.
NASA Astrophysics Data System (ADS)
Mohammedi, H.; Madariaga, R.; Perrin, G.
2002-12-01
We study rupture propagation in realistic earthquake models under rate- and state-dependent friction and apply it to the modeling of the 28 June 1992 Landers earthquake. In our simulations we use a modified version of rate and state friction proposed by Perrin, Rice and Zheng, the so-called PRZ law. Full inversion with PRZ is not yet possible because modeling a fault under rate and state friction is numerically much more expensive than with slip-weakening (SW) friction laws, and PRZ has a larger number of independent parameters than slip weakening. We obtain reasonable initial models through the use of the ratio κ between the available strain energy and the energy release rate. Because PRZ friction has more parameters than SW, we have not yet been able to identify all the relevant nondimensional numbers that control rupture in this model, but a very important one enters a logarithmic map that controls whether unstable slip may occur. This map has the form log(Ḋ/v0) = λ Ḋ/v0, where λ is a nondimensional number akin to κ. It includes the parameters of the friction law and the characteristic length of the initial stress, velocity or state fields. Ḋ is the slip velocity and v0 a reference speed that defines the initial stress field. Using the results of dynamic inversion from Peyrat et al., we find reasonable rupture models for the initiation of the Landers earthquake. The slip-weakening distance Dc in rate and state, as defined by Bizzarri and Cocco, is of the order of a few tens of cm. Dc is determined from L, the relaxation length in rate and state, as a by-product of the logarithmic map cited above.
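The logarithmic map described above can be examined numerically. Assuming a natural logarithm, log(x) = λx with x = Ḋ/v0 has two roots for 0 < λ < 1/e (they merge at x = e when λ = 1/e, and none exist above), since the left side grows slower than any line through the origin. A hedged bisection sketch; the function names and the search interval are ours:

```python
import math

def log_map_roots(lam, hi=1.0e6, tol=1e-10):
    """Roots of log(x) = lam * x, assuming a natural log.
    f(x) = log(x) - lam*x peaks at x = 1/lam, so for 0 < lam < 1/e there
    is one root on each side of the peak; for lam >= 1/e there are none.
    The upper root is sought below `hi`, which suffices for moderate lam."""
    if lam >= 1.0 / math.e:
        return []
    f = lambda x: math.log(x) - lam * x

    def bisect(a, b):
        # sign-change bisection; f(a) and f(b) have opposite signs
        fa_pos = f(a) > 0.0
        while b - a > tol:
            m = 0.5 * (a + b)
            if (f(m) > 0.0) == fa_pos:
                a = m
            else:
                b = m
        return 0.5 * (a + b)

    x_peak = 1.0 / lam  # f(1) = -lam < 0 while f(x_peak) > 0
    return [bisect(1.0, x_peak), bisect(x_peak, hi)]
```

The existence or disappearance of these intersections as λ varies is what makes the map a useful instability criterion, playing a role analogous to κ.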
"Slimplectic" Integrators: Variational Integrators for General Nonconservative Systems
NASA Astrophysics Data System (ADS)
Tsang, David; Galley, Chad R.; Stein, Leo C.; Turner, Alec
2015-08-01
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
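The Letter's publicly available code implements the slimplectic algorithm itself; as a much simpler point of reference, one can integrate its first example, the damped harmonic oscillator, with an ordinary scheme and check the numerical energy against the analytic decay envelope. This sketch is only that baseline check, not a slimplectic integrator, and all parameter choices are ours:

```python
import math

def damped_oscillator_energy(omega=1.0, gamma=0.05, dt=0.01, steps=2000):
    """Integrate x'' + 2*gamma*x' + omega**2 * x = 0 from x=1, v=0 with
    semi-implicit Euler and return (numerical energy, analytic envelope),
    where E = (v**2 + omega**2 * x**2) / 2 and the envelope is
    E0 * exp(-2*gamma*t). For an underdamped oscillator the true energy
    oscillates within O(gamma/omega) of this envelope."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + omega ** 2 * x * x)
    for _ in range(steps):
        v += dt * (-2.0 * gamma * v - omega ** 2 * x)
        x += dt * v
    t = steps * dt
    e_num = 0.5 * (v * v + omega ** 2 * x * x)
    return e_num, e0 * math.exp(-2.0 * gamma * t)
```

The point of the slimplectic construction is precisely that this kind of energy (Noether-current) tracking comes out of the variational formalism with long-term fidelity, rather than having to be verified case by case against a known solution.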
NASA Astrophysics Data System (ADS)
Rong, Y.; Bird, P.; Jackson, D. D.
2016-04-01
The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, indicating that the authors of SHARE were aware of the potentially higher seismic activity in the zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from the tectonic moment rate, but lower than what the historical data show. For the other two
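The chain from strain rate to forecast seismicity rate can be sketched with a Kostrov-style moment rate and a deliberately crude moment balance. All parameter values, the geometric factor 2, and the assumption that tectonic moment is released by events near m_max are illustrative simplifications, not SHARE or Global Strain Rate Map choices:

```python
def moment_rate_from_strain(strain_rate_per_yr, area_m2,
                            mu=3.0e10, h_m=1.1e4):
    """Kostrov-style tectonic moment rate in N*m/yr:
    M0_rate = 2 * mu * H * A * e_dot, with shear modulus mu (Pa),
    seismogenic thickness H (m), zone area A (m^2) and a scalar
    strain rate e_dot per year. Assumes full seismic coupling."""
    return 2.0 * mu * h_m * area_m2 * strain_rate_per_yr

def rate_above(m, moment_rate, m_max=7.5, b=1.0):
    """Annual rate of earthquakes with Mw >= m if the tectonic moment
    were released entirely by events at m_max, extrapolated downward
    with a Gutenberg-Richter b-value. Uses M0 = 10**(1.5*Mw + 9.05) N*m.
    A crude sketch: sharing moment across the full magnitude range
    would lower the m_max rate by a factor of order (1 - b/1.5)."""
    m0_max = 10 ** (1.5 * m_max + 9.05)
    rate_max = moment_rate / m0_max
    return rate_max * 10 ** (b * (m_max - m))
```

Comparing such strain-derived rates against a source model's magnitude-frequency distributions, group by group, is the essence of the consistency check performed in the abstract.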
Near-real Time Interpretation of Micro-earthquake Data for Reservoir Modeling
NASA Astrophysics Data System (ADS)
Hutchings, L. J.; Boyle, K.; Bonner, B. P.
2009-12-01
Geothermal, CO2 sequestration, and oil and gas reservoir modeling depends on identifying reservoir geology, fractures, fluids, and permeable zones. We present an approach that utilizes passive seismic methods to update reservoir models in near-real time. Recent developments in inexpensive micro-earthquake recorders and sensors, high-performance desktop computing, high-resolution tomographic imaging techniques, high-resolution micro-earthquake location programs, and new developments in interpretation can significantly improve reservoir exploration, exploitation, and management at reasonable costs in time and dollars. We have developed a rapid and inexpensive reservoir modeling package based on the analysis of micro-earthquake recordings. The package includes an automated P- and S-wave picker, high-resolution double-difference earthquake locations, 3-D tomographic inversions for P- and S-wave velocity structure and attenuation (Qp and Qs) structure, and seismic moments and stress drops. We utilize a three-dimensional visualization program to examine spatial associations and correlations of reservoir properties, and apply rock physics (including effective medium theories) in interpretation. Modeling is typically in the depth range of reservoirs of interest, usually surface to 5 km depth, and depends upon sufficient numbers of earthquakes, usually 100-500 events. This can be updated regularly to monitor temporal changes. We demonstrate this package with The Geysers and Salton Sea geothermal fields.
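Of the quantities produced by such a package, the seismic moment and stress drop illustrate the kind of per-event computation involved. A hedged Brune-model sketch; the shear-wave speed and function names are our assumptions, not the package's internals:

```python
import math

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
    """Brune-model stress drop (Pa) from seismic moment M0 (N*m) and
    corner frequency fc (Hz): the source radius is
    r = 2.34 * beta / (2*pi*fc), with beta the shear-wave speed (m/s),
    and the stress drop is d_sigma = (7/16) * M0 / r**3."""
    r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)
    return 7.0 * m0_nm / (16.0 * r ** 3)
```

For a micro-earthquake with M0 = 1e15 N*m and a 2 Hz corner frequency this gives a stress drop of order 1 MPa, a typical tectonic value; systematic spatial variations in such estimates are one of the reservoir properties examined in the 3-D visualization step.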
NASA Astrophysics Data System (ADS)
Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray
2013-04-01
Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is lower than in other earthquake-prone areas of the globe. However, the risk level is high enough to warrant the attention of African governments and the donor community. The latest earthquake hazard map for sub-Saharan Africa was produced in 1999, and an update is long overdue as construction activity is booming all over sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and assessing all possible sources of data for the catalogue, as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model for the region. Progress to date indicates that the region is more seismically active than previously thought. This demands a coordinated effort by regional experts to systematically compile all available information so as to mitigate earthquake risk in sub-Saharan Africa.
Breit interaction and parity nonconservation in many-electron atoms
Dzuba, V. A.; Flambaum, V. V.; Safronova, M. S.
2006-02-15
We present accurate ab initio nonperturbative calculations of the Breit correction to the parity nonconserving (PNC) amplitudes of the 6s-7s and 6s-5d3/2 transitions in Cs, the 7s-8s and 7s-6d3/2 transitions in Fr, the 6s-5d3/2 transition in Ba+, the 7s-6d3/2 transition in Ra+, and the 6p1/2-6p3/2 transition in Tl. The results for the 6s-7s transition in Cs and the 7s-8s transition in Fr are in good agreement with other calculations. We demonstrate that higher-order many-body corrections to the Breit interaction are especially important for the s-d PNC amplitudes. We confirm the good agreement of the PNC measurements for cesium and thallium with the standard model.
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
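The closed form used here is a general property of equilibrium renewal processes rather than anything specific to this paper: when the date of the last event is unknown, the elapsed time can be taken to follow the stationary age distribution, and the probability of at least one event in the next interval T reduces to (1/mean) * integral from 0 to T of S(t) dt, where S is the interevent-time survival function. A minimal sketch, assuming a lognormal renewal model (the choice of distribution and all function names are illustrative, not taken from the paper):

```python
import math

def lognormal_survival(t, mean, alpha):
    """Survival function S(t) of a lognormal interevent-time distribution
    with the given mean and aperiodicity (coefficient of variation) alpha."""
    if t <= 0:
        return 1.0
    sigma2 = math.log(1.0 + alpha * alpha)
    mu = math.log(mean) - 0.5 * sigma2          # log-scale location parameter
    z = (math.log(t) - mu) / math.sqrt(sigma2)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def renewal_prob_unknown_date(T, mean, alpha, n=4000):
    """P(at least one event in the next T) when the date of the last event
    is unknown: (1/mean) * integral_0^T S(t) dt, by trapezoidal quadrature."""
    h = T / n
    s = 0.5 * (lognormal_survival(0.0, mean, alpha) +
               lognormal_survival(T, mean, alpha))
    for i in range(1, n):
        s += lognormal_survival(i * h, mean, alpha)
    return s * h / mean

def poisson_prob(T, mean):
    """Time-independent Poisson probability customarily used instead."""
    return 1.0 - math.exp(-T / mean)
```

For a forecast duration of 30% of the mean recurrence interval and aperiodicity 0.3, the renewal probability is about 0.30 versus a Poisson value of about 0.26, an excess of more than 10%, consistent with the abstract.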
NASA Astrophysics Data System (ADS)
Fielding, E. J.; Sladen, A.; Simons, M.; Rosen, P. A.; Yun, S.; Li, Z.; Avouac, J.; Leprince, S.
2010-12-01
Earthquake responders need to know where an earthquake has caused damage and what the likely intensity of that damage is. The earliest information comes from global and regional seismic networks, which provide the magnitude and locations of the mainshock hypocenter and moment tensor centroid, and also the locations of aftershocks. Location accuracy depends on the availability of seismic data close to the earthquake source. Finite fault models of the earthquake slip can be derived from analysis of seismic waveforms alone, but the results can have large errors in the location of the fault ruptures and in the spatial distribution of slip, which are critical for estimating the distribution of shaking and damage. Geodetic measurements of ground displacements with GPS, LiDAR, or radar and optical imagery provide key spatial constraints on the location of the fault ruptures and the distribution of slip. Here we describe the analysis of interferometric synthetic aperture radar (InSAR) and sub-pixel correlation (or pixel offset tracking) of radar and optical imagery to measure coseismic ground displacements for recent large earthquakes, and lessons learned for rapid assessment of future events. These geodetic imaging techniques have been applied to the 2010 Leogane, Haiti; 2010 Maule, Chile; 2010 Baja California, Mexico; 2008 Wenchuan, China; 2007 Tocopilla, Chile; 2007 Pisco, Peru; 2005 Kashmir; and 2003 Bam, Iran earthquakes, using data from the ESA Envisat ASAR, JAXA ALOS PALSAR, NASA Terra ASTER and CNES SPOT5 satellite instruments and the NASA/JPL UAVSAR airborne system. For these events, the geodetic data provided unique information on the location of the fault or faults that ruptured and the distribution of slip that was not available from the seismic data, and allowed the creation of accurate finite fault source models. In many of these cases, the fault ruptures were on previously unknown faults or on faults not believed to be at high risk of earthquakes, so the area and degree of
Interevent times in a new alarm-based earthquake forecasting model
NASA Astrophysics Data System (ADS)
Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed
2013-09-01
This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion, or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing forecasting error, defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
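The MR statistic itself is simple to compute. A sketch, assuming MR is taken as the mean of the interevent times divided by their variance (the inverse of the index of dispersion; the exact normalisation and windowing used in the paper may differ):

```python
from statistics import mean, pvariance

def moment_ratio(event_times):
    """MR alarm index: inverse of the index of dispersion (variance/mean)
    of interevent times.  Higher scores indicate more regular,
    background-like seismicity; lower scores indicate clustering."""
    dt = [b - a for a, b in zip(event_times, event_times[1:])]
    return mean(dt) / pvariance(dt)

# Quasi-periodic background seismicity vs. a strongly clustered sequence
regular = [i + 0.05 * (-1) ** i for i in range(12)]            # ~1.0 spacing
clustered = [0.0, 0.1, 0.2, 4.0, 4.1, 4.2, 9.0, 9.1, 9.2, 14.0]
```

In a forecasting setting the statistic would be evaluated on a per-cell basis over a spatial grid, with an alarm declared where the score crosses a threshold optimised on past data.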
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.
2015-12-01
We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate the ground shaking behaviours in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equations (GMPEs) of Atkinson and Boore (2003) best fit the behaviour of subduction events. Also, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) fit crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear-wave velocity down to 30 m depth) from analysis of topographic slope and microtremor array measurements to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as the seismic sources there produce earthquakes at short intervals and/or their last events occurred a long time ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide
Miles, Scott B.; Keefer, David K.
2007-01-01
This report describes the complete design of a comprehensive areal model of earthquake-induced landslides (CAMEL). This report presents the design process and technical specification of CAMEL. It also provides a guide to using the CAMEL source code and a template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates areal landslide concentration (number of landslides per square kilometer) of six aggregated types of earthquake-induced landslides - three types each for rock and soil.
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation, as used in rapid tsunami models, are examined, with particular attention to local versus far-field application of those models. First, the validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes are similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, large-magnitude earthquakes will sometimes exhibit a high degree of spatial heterogeneity, such that tsunami sources are composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings, and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models
NASA Astrophysics Data System (ADS)
Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.
2012-04-01
The present paper aims at the application of hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by considering the hidden process to actually form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and the confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.
Bounded solutions for nonconserving-parity pseudoscalar potentials
Castro, Antonio S. de; Malheiro, Manuel; Lisboa, Ronai
2004-12-02
The Dirac equation is analyzed for nonconserving-parity pseudoscalar radial potentials in 3+1 dimensions. It is shown that, despite the nonconservation of parity, this general problem can be reduced to a Sturm-Liouville problem of nonrelativistic fermions in spherically symmetric effective potentials. The search for bounded solutions is carried out for power-law and Yukawa potentials. The methodology of effective potentials allows us to conclude that the existence of bound-state solutions depends on whether the potential leads to a definite effective potential-well structure or to an effective potential less singular than -1/(4r^2).
How Fault Geometry Affects Dynamic Rupture Models of Earthquakes in San Gorgonio Pass, CA
NASA Astrophysics Data System (ADS)
Tarnowski, J. M.; Oglesby, D. D.; Cooke, M. L.; Kyriakopoulos, C.
2015-12-01
We use 3D dynamic finite element models to investigate potential rupture paths of earthquakes propagating along faults in the western San Gorgonio Pass (SGP) region of California. The SGP is a structurally complex area along the southern California portion of the San Andreas fault system (SAF). It has long been suspected that this structural knot, which consists of the intersection of various non-planar strike-slip and thrust fault segments, may inhibit earthquake rupture propagation between the San Bernardino and Banning strands of the SAF. The above condition may limit the size of potential earthquakes in the region. Our focus is on the San Bernardino strand of the SAF and the San Gorgonio Pass Fault zone, where the fault connectivity is not well constrained. We use the finite element code FaultMod (Barall, 2009) to investigate how fault connectivity, nucleation location, and initial stresses influence rupture propagation and ground motion, including the likelihood of through-going rupture in this region. Preliminary models indicate that earthquakes that nucleate on the San Bernardino strand and propagate southward do not easily transfer rupture to the thrust faults of the San Gorgonio Pass fault zone. However, under certain assumptions, earthquakes that nucleate along the San Gorgonio Pass fault zone can transfer rupture to the San Bernardino strand.
Earthquake vulnerability and risk modeling for the area of Greater Cairo, Egypt
NASA Astrophysics Data System (ADS)
Tyagunov, S.; Abdel-Rahman, K.; El-Hady, S.; El-Ela Mohamed, A.; Stempniewski, L.; Liesch, T.; Zschau, J.
2009-04-01
Egypt is a country of low-to-moderate earthquake hazard. However, the earthquake risk potential (in terms of both probable economic and human losses) is rather high. The population of Egypt (according to the Central Agency for Public Mobilisation and Statistics, CAPMAS) is about 80 million. At the same time, the distribution of the population in the country is far from uniform. In particular, the area of Greater Cairo attracts migrants from the whole country, and the metropolitan area faces the problem of unplanned urbanization. Due to the high density of population and the vulnerability of the existing building stock, the potential for earthquake damage and loss in the area is a problem of great concern. The area under study covers 43 administrative districts of Greater Cairo (including the City of Cairo, El-Giza and Shubra El-Kheima), where field investigations were conducted to identify representative building types and assess their seismic vulnerability. On the basis of the collected information, combining the findings of the field investigations in different districts with available statistical data on the distribution of buildings in the districts, we constructed vulnerability composition models (in terms of the vulnerability classes of the European Macroseismic Scale, EMS-98) for all the considered districts of Greater Cairo. The vulnerability models are applicable to the analysis of potential damage and losses from damaging earthquakes in the region, including zonation of the seismic risk in the area, generation of probable earthquake scenarios, and rapid damage and loss assessment for the purposes of emergency management.
Modeling a Wide Spectrum of Fault Slip Behavior in Cascadia With the Earthquake Simulator RSQSim
NASA Astrophysics Data System (ADS)
Richards-Dinger, K. B.; Dieterich, J. H.
2014-12-01
Through the judicious use of approximations, earthquake simulators hope to accurately model the evolution of fault slip over long time periods (tens of thousands to hundreds of thousands of years) in complicated regional- to plate-boundary-scale systems of faults. RSQSim is one such simulator which, through its use of an approximate form of rate- and state-dependent friction, is able to capture the observed short-term power-law clustering behavior of earthquakes as well as model the two dominant observed modes of non-seismic slip: steady creep and slow slip events (SSEs). The creeping sections of the fault system are modeled as always at steady state, such that the slip speed is a simple function of the applied stresses, while SSE-generating sections use (an approximate form of) the mechanism of Shibazaki and Iio (2003). The work we will present here on the Cascadia subduction system is part of a larger project to perform unified simulations of the entire western US plate boundary region. In it we use realistic plate interface (and upper-plate fault system) geometries and distributions of frictional properties to address issues such as: the relationship between the short-term phenomena of earthquake triggering and clustering and the long-term recurrence of large earthquakes implied by steady tectonic forcing; the interaction between fault sections with different modes of slip prior to and in response to earthquakes (specifically including possible interactions between SSEs and large subduction earthquakes); interactions between the main subduction thrust and upper-plate faults; and the effects of quenched versus dynamical heterogeneities on rupture processes.
Gutenberg-Richter and characteristic earthquake behavior in Simple Models of Heterogeneous Faults
NASA Astrophysics Data System (ADS)
Dahmen, K. A.; Fisher, D. S.; Ben-Zion, Y.; Ertas, D.; Ramanathan, S.
2001-05-01
The statistics of earthquakes has been a subject of research for a long time. One spectacular feature is the wide range of observed earthquake sizes, spanning over ten orders of magnitude. Gutenberg and Richter found that the size distribution of regional earthquakes follows a power law over the entire range of observed events. Recently, enough data has been collected to be able to extract statistics on individual narrow earthquake fault zones. Wesnousky and coworkers found that fault zones with highly irregular geometry, such as the San Jacinto fault in California, which has many offsets and branches, display universal Gutenberg-Richter-type power law statistics over the entire range of observed magnitudes. On the other hand, the available data show that faults with more regular geometry (presumably generated progressively with increasing cumulative slip), such as the San Andreas fault in California, display power law distributions only for small events, which occur between approximately periodically recurring events of a much larger characteristic size that rupture the entire fault. Practically no earthquakes of intermediate magnitudes are observed on these faults. Two important questions emerge immediately: (1) Why does one find earthquakes of all sizes even on a single earthquake fault? (One might, for example, have expected a typical size with small variations instead.) (2) Why do fault zones not all display the same distribution of earthquake magnitudes, but rather separate into the two classes described above? We have studied simple models for ruptures along a heterogeneous earthquake fault zone, focusing on the interplay between the roles of disorder and dynamical effects. A class of models was found to operate naturally at a critical point whose properties yield power law scaling of earthquake statistics. The analytically computed critical exponent for the power law distribution lies well within the error bars of the observed Gutenberg-Richter power law
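The Gutenberg-Richter law, log10 N(≥M) = a - bM, implies that magnitudes above a completeness threshold are exponentially distributed, which makes both sampling and b-value estimation straightforward. A short sketch (the Aki maximum-likelihood estimator is standard; the function names here are illustrative):

```python
import math
import random

def sample_gr_magnitudes(n, b=1.0, m_min=2.0, rng=None):
    """Draw magnitudes from the Gutenberg-Richter law
    log10 N(>=M) = a - b*M, i.e. M - m_min is exponential with
    rate beta = b * ln(10) (inverse-transform sampling)."""
    rng = rng or random.Random(0)
    beta = b * math.log(10.0)
    return [m_min - math.log(1.0 - rng.random()) / beta for _ in range(n)]

def aki_b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value estimator:
    b = log10(e) / (mean(M) - m_min)."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)
```

Fitting a synthetic catalogue drawn with b = 1 recovers the input b-value to within sampling error, which is the usual sanity check before applying the estimator to a real, completeness-limited catalogue.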
GEM1: First-year modeling and IT activities for the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Anderson, G.; Giardini, D.; Wiemer, S.
2009-04-01
GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global
Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination
NASA Astrophysics Data System (ADS)
Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.
2008-12-01
Empirically, explosions have been discriminated from natural earthquakes using regional amplitude-ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
Toward a Global Model for Predicting Earthquake-Induced Landslides in Near-Real Time
NASA Astrophysics Data System (ADS)
Nowicki, M. A.; Wald, D. J.; Hamburger, M. W.; Hearne, M.; Thompson, E.
2013-12-01
We present a newly developed statistical model for estimating the distribution of earthquake-triggered landslides in near-real time, which is designed for use in the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) and ShakeCast systems. We use standardized estimates of ground shaking from the USGS ShakeMap Atlas 2.0 to develop an empirical landslide probability model by combining shaking estimates with broadly available landslide susceptibility proxies, including topographic slope, surface geology, and climatic parameters. While the initial model was based on four earthquakes for which digitally mapped landslide inventories and well-constrained ShakeMaps are available (the Guatemala (1976), Northridge, California (1994), Chi-Chi, Taiwan (1999), and Wenchuan, China (2008) earthquakes), our improved model includes observations from approximately ten other events from a variety of tectonic and geomorphic settings for which we have obtained landslide inventories. Using logistic regression, this database is used to build a predictive model of the probability of landslide occurrence. We assess the performance of the regression model using statistical goodness-of-fit metrics to determine which combination of the tested landslide proxies provides the optimum prediction of observed landslides while minimizing 'false alarms' in non-landslide zones. Our initial results indicate strong correlations with peak ground acceleration and maximum slope, and weaker correlations with surface geological and soil wetness proxies. In terms of the original four events included, the global model predicts landslides most accurately when applied to the Wenchuan and Chi-Chi events, and less accurately when applied to the Northridge and Guatemala datasets. Combined with near-real-time ShakeMaps, the model can be used to make generalized predictions of whether or not landslides are likely to occur (and if so, where) for future earthquakes around the globe, and these estimates
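The functional form described, logistic regression of landslide occurrence on shaking and susceptibility proxies, can be sketched as follows. The coefficient values below are hypothetical placeholders for illustration, not the fitted values of the USGS model:

```python
import math

def landslide_probability(pga_g, slope_deg, wetness,
                          coeffs=(-4.0, 2.5, 0.08, 0.5)):
    """Logistic model of grid-cell landslide probability from peak ground
    acceleration (g), topographic slope (degrees) and a soil wetness proxy
    (0-1).  Coefficients are hypothetical placeholders, not fitted values;
    only the functional form follows the abstract."""
    b0, b_pga, b_slope, b_wet = coeffs
    z = b0 + b_pga * pga_g + b_slope * slope_deg + b_wet * wetness
    return 1.0 / (1.0 + math.exp(-z))
```

In practice each grid cell of a ShakeMap would be scored this way, and the coefficients would come from a logistic regression fitted to the digitally mapped landslide inventories.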
Bakun, W.H.; Scotti, O.
2006-01-01
Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance (Δ) most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.
The Negative Binomial Distribution as a Renewal Model for the Recurrence of Large Earthquakes
NASA Astrophysics Data System (ADS)
Tejedor, Alejandro; Gómez, Javier B.; Pacheco, Amalio F.
2015-01-01
The negative binomial distribution is presented as the waiting time distribution of a cyclic Markov model. This cycle simulates the seismic cycle in a fault. As an example, this model, which can describe recurrences with aperiodicities between 0 and 0.5, is used to fit the Parkfield, California earthquake series on the San Andreas Fault. The forecasting performance of the model is expressed in terms of error diagrams and compared with that of other recurrence models from the literature.
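If the cycle is pictured as r stress-accumulation steps, each completed with probability p per time unit, the waiting time follows a negative binomial ("number of trials" parameterisation) whose aperiodicity, the coefficient of variation σ/μ, equals sqrt((1-p)/r). Under this assumption (the paper's exact convention may differ), r ≥ 4 keeps the aperiodicity within the 0-0.5 range cited in the abstract:

```python
import math

def nb_waiting_time_stats(r, p):
    """Mean, variance and aperiodicity (coefficient of variation) of a
    negative binomial waiting time counting the number of trials until
    the r-th success, each trial succeeding with probability p.
    This parameterisation is an assumption for illustration."""
    mean = r / p
    var = r * (1.0 - p) / (p * p)
    cv = math.sqrt(var) / mean        # = sqrt((1 - p) / r)
    return mean, var, cv
```

With r = 4, the aperiodicity ranges from near 0 (p close to 1, quasi-periodic recurrence) up to at most 0.5 (p close to 0), matching the 0-0.5 interval the model is said to cover.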
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We performed modelling of wave propagation using the NAMI-DANCE code and tested different fault sources using synthetic tide gauge records. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.
Joint earthquake source inversions using seismo-geodesy and 3-D earth models
NASA Astrophysics Data System (ADS)
Weston, J.; Ferreira, A. M. G.; Funning, G. J.
2014-08-01
A joint earthquake source inversion technique is presented that uses InSAR and long-period teleseismic data and, for the first time, takes 3-D Earth structure into account when modelling seismic surface and body waves. Ten average source parameters (moment, latitude, longitude, depth, strike, dip, rake, length, width and slip) are estimated; hence, the technique is potentially useful for rapid source inversions of moderate-magnitude earthquakes using multiple data sets. Unwrapped interferograms and long-period seismic data are jointly inverted for the location, fault geometry and seismic moment, using a hybrid downhill Powell-Monte Carlo algorithm. While the InSAR data are modelled assuming a rectangular dislocation in a homogeneous half-space, seismic data are modelled using the spectral element method for a 3-D earth model. The effect of noise and lateral heterogeneity on the inversions is investigated by carrying out realistic synthetic tests for various earthquakes with different faulting mechanisms and magnitudes (Mw 6.0-6.6). Synthetic tests highlight the improvement in the constraint of fault geometry (strike, dip and rake) and moment when InSAR and seismic data are combined. Tests comparing the effect of using a 1-D or 3-D earth model show that long-period surface waves are more sensitive than long-period body waves to the change in earth model. Incorrect source parameters, particularly incorrect fault dip angles, can compensate for systematic errors in the assumed Earth structure, leading to an acceptable data fit despite large discrepancies in source parameters. Three real earthquakes are also investigated: Eureka Valley, California (1993 May 17, Mw 6.0), Aiquile, Bolivia (1998 February 22, Mw 6.6) and Zarand, Iran (2005 May 22, Mw 6.5). These events are located in different tectonic environments and show large discrepancies between InSAR and seismically determined source models. Despite the 40-50 km discrepancies in location between previous geodetic and
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the collaboratory for the study of earthquake predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significantly better performance for
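The weighted combination of a smoothed-seismicity density and a fault-based density can be illustrated with a minimal Gaussian kernel sketch. All coordinates, bandwidths and the weight w below are hypothetical; the real model optimizes them through retrospective likelihood experiments.

```python
import math

def gaussian_kde(points, x, y, bandwidth):
    """2-D Gaussian kernel density estimate (arbitrary units)."""
    norm = 1.0 / (2 * math.pi * bandwidth ** 2 * len(points))
    return norm * sum(
        math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * bandwidth ** 2))
        for px, py in points)

# Hypothetical inputs: past epicentres, and points sampled along a mapped
# fault trace (in practice weighted by moment rate from slip rates).
epicentres = [(0.0, 0.0), (1.0, 0.2), (0.5, -0.3)]
fault_trace = [(2.0, 2.0), (2.2, 2.1), (2.4, 2.2)]

def forecast_density(x, y, w=0.6, h_eq=0.5, h_f=0.5):
    """Weighted sum of the two spatial densities; w and the bandwidths
    h_eq, h_f would be tuned by likelihood-based forecast experiments."""
    return (w * gaussian_kde(epicentres, x, y, h_eq)
            + (1 - w) * gaussian_kde(fault_trace, x, y, h_f))
```

The magnitude dependence described in the abstract corresponds to letting w vary with the magnitude range being forecast.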
NASA Astrophysics Data System (ADS)
Ferreira, A. M.; Weston, J. M.; Funning, G. J.
2011-12-01
Earthquake source models are routinely determined using seismic data and are reported in many seismic catalogues, such as the Global Centroid Moment Tensor (GCMT) catalogue. Recent advances in space geodesy, such as InSAR, have enabled the estimation of earthquake source parameters from the measurement of deformation of the Earth's surface, independently of seismic information. The absence of an earthquake catalogue based on geodetic data prompted the compilation of a large InSAR database of CMT parameters from the literature (Weston et al., 2011, hereafter referred to as the ICMT database). Information provided in published InSAR studies of earthquakes is used to obtain earthquake source parameters, and equivalent CMT parameters. Multiple studies of the same earthquake are included in the database, as they are valuable to assess uncertainties in source models. Here, source parameters for 70 earthquakes in an updated version of the ICMT database are compared with those reported in global and regional seismic catalogues. There is overall good agreement between parameters, particularly in fault strike, dip and rake. However, InSAR centroid depths are systematically shallower (5-10 km) than those in the EHB catalogue, but this is reduced for depths from inversions of InSAR data that use a layered half-space. Estimates of the seismic moment generally agree well between the two datasets, but for thrust earthquakes there is a slight tendency for the InSAR-determined seismic moment to be larger. Centroid locations from the ICMT database are in good agreement with those from regional seismic catalogues with a median distance of ~6 km between them, which is smaller than for comparisons with global catalogues (17.0 km and 9.2 km for the GCMT and ISC catalogues, respectively). Systematic tests of GCMT-like inversions have shown that similar mislocations occur for several different degree 20 Earth models (Ferreira et al., 2011), suggesting that higher resolution Earth models
NASA Astrophysics Data System (ADS)
D'Onza, F.; Viti, M.; Mantovani, E.; Albarello, D.
2003-04-01
Significant evidence suggests that major earthquakes in the peri-Adriatic Balkan zones may influence the seismicity pattern in the Italian area. In particular, a seismic correlation has been recognized between major earthquakes in the southern Dinaric belt and those in southern Italy. It is widely recognized that such regularities may be an effect of postseismic relaxation triggered by strong earthquakes. In this note, we describe an attempt to investigate quantitatively, by numerical modelling, the reliability of the above interpretation. In particular, we have explored the possibility of explaining the most recent example of the presumed correlation (triggering event: April 1979 Montenegro earthquake, MS=6.7; induced event: November 1980 Irpinia event, MS=6.9) as an effect of postseismic relaxation through the Adriatic plate. The triggering event is modelled by imposing a sudden dislocation on the Montenegro seismic fault, taking into account the fault parameters (length and average slip) recognized from seismological observations. The perturbation induced by the seismic source in the neighbouring lithosphere is obtained from the Elsasser diffusion equation for an elastic lithosphere coupled with a viscous asthenosphere. The results of the numerical experiments indicate that the strain regime induced by the Montenegro event in southern Italy is compatible with the tensional strain field observed in that zone, that the amplitude of the induced strain is significantly higher than that induced by Earth tides, and that this amplitude is comparable with the strain perturbation recognized as responsible for earthquake triggering. The time delay between the triggering and the induced
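Stress diffusion of the Elsasser type can be sketched with a 1-D explicit finite-difference scheme for d(sigma)/dt = D * d2(sigma)/dx2. The grid, diffusivity and initial coseismic stress step below are illustrative and non-dimensional, not the values of the Adriatic model.

```python
def diffuse_stress(sigma, D, dx, dt, steps):
    """Explicit finite-difference solution of the 1-D diffusion equation
    d(sigma)/dt = D * d2(sigma)/dx2 with fixed (zero-change) boundaries.
    The explicit scheme is stable while D*dt/dx**2 <= 0.5."""
    s = list(sigma)
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    for _ in range(steps):
        s = ([s[0]] +
             [s[i] + r * (s[i + 1] - 2 * s[i] + s[i - 1])
              for i in range(1, len(s) - 1)] +
             [s[-1]])
    return s

# Hypothetical coseismic stress step localized near the source region;
# diffusion spreads it toward distant faults with a time delay.
n = 101
sigma0 = [1.0 if 45 <= i <= 55 else 0.0 for i in range(n)]
sigma_later = diffuse_stress(sigma0, D=1.0, dx=1.0, dt=0.4, steps=200)
```

The delay between triggering and induced events in this picture is the diffusion time over the source-to-target distance, roughly L**2 / D.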
NASA Astrophysics Data System (ADS)
Gao, Yongxin; Harris, Jerry M.; Wen, Jian; Huang, Yihe; Twardzik, Cedric; Chen, Xiaofei; Hu, Hengshan
2016-01-01
The coseismic electromagnetic signals observed during the 2004 Mw 6 Parkfield earthquake are simulated using electrokinetic theory. Using a finite fault source model obtained via kinematic inversion, we calculate the electric and magnetic responses to the earthquake rupture. The result shows that the synthetic electric signals agree with the observed data in both amplitude and wave shape, especially for the early portions of the records (first 9 s) after the earthquake, supporting the electrokinetic effect as a plausible mechanism for the generation of the coseismic electric fields. More work is needed to explain the magnetic fields and the later portions of the electric fields. Analysis shows that the coseismic electromagnetic (EM) signals are sensitive to both the material properties at the location of the EM sensors and the electrochemical heterogeneity in the vicinity of the EM sensors, and can be used to characterize the underground electrochemical properties.
Hasumi, Tomohiro
2008-11-13
We studied the statistical properties of interoccurrence times, i.e., time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and have found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the state of a fault system in nature changes from subcritical to supercritical prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can be another precursory measure associated with large earthquakes.
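The survivor-function fit can be sketched as follows. Synthetic interoccurrence times are drawn from an assumed Pareto-type law, and a Zipf-Mandelbrot-type curve is fitted by a coarse grid search; this is a stand-in for whatever estimation procedure the paper actually uses.

```python
import random

def empirical_survivor(times):
    """Empirical survivor function S(t) = P(T > t) at each sorted sample."""
    ts = sorted(times)
    n = len(ts)
    return [(t, 1.0 - (i + 1) / n) for i, t in enumerate(ts)]

def zipf_mandelbrot(t, a, c):
    """Zipf-Mandelbrot-type power law S(t) = (1 + t/c) ** (-a)."""
    return (1.0 + t / c) ** (-a)

def fit(survivor):
    """Coarse least-squares grid search for (a, c); illustrative only."""
    best = None
    for a10 in range(5, 31):
        for c10 in range(1, 31):
            a, c = a10 / 10, c10 / 10
            err = sum((s - zipf_mandelbrot(t, a, c)) ** 2 for t, s in survivor)
            if best is None or err < best[0]:
                best = (err, a, c)
    return best

rng = random.Random(0)
# Synthetic interoccurrence times from S(t) = (1 + t)**(-2) by inverse
# transform sampling (a "subcritical-like" power-law stand-in).
sample = [((1 - rng.random()) ** -0.5) - 1 for _ in range(500)]
err, a_hat, c_hat = fit(empirical_survivor(sample))
```

The residual err is the kind of fitting-accuracy measure the abstract proposes as a precursor: it would grow as the underlying state drifts away from the fitted power-law family.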
Nonconservative current-driven dynamics: beyond the nanoscale.
Cunningham, Brian; Todorov, Tchavdar N; Dundas, Daniel
2015-01-01
Long metallic nanowires combine crucial factors for nonconservative current-driven atomic motion. These systems have degenerate vibrational frequencies, clustered about a Kohn anomaly in the dispersion relation, that can couple under current to form nonequilibrium modes of motion growing exponentially in time. Such motion is made possible by nonconservative current-induced forces on atoms, and we refer to it generically as the waterwheel effect. Here the connection between the waterwheel effect and the stimulated directional emission of phonons propagating along the electron flow is discussed in an intuitive manner. Nonadiabatic molecular dynamics simulations show that waterwheel modes self-regulate by reducing the current and by populating modes at nearby frequencies, leading to a dynamical steady state in which nonconservative forces are counter-balanced by the electronic friction. The waterwheel effect can be described by an appropriate effective nonequilibrium dynamical response matrix. We show that the current-induced parts of this matrix in metallic systems are long-ranged, especially at low bias. This nonlocality is essential for the characterisation of nonconservative atomic dynamics under current beyond the nanoscale. PMID:26665086
A model for earthquakes near Palisades Reservoir, southeast Idaho
Schleicher, David
1975-01-01
The Palisades Reservoir seems to be triggering earthquakes: epicenters are concentrated near the reservoir, and quakes are concentrated in spring, when the reservoir level is highest or is rising most rapidly, and in fall, when the level is lowest. Both spring and fall quakes appear to be triggered by minor local stresses superposed on regional tectonic stresses; faulting is postulated to occur when the effective normal stress across a fault is decreased by a local increase in pore-fluid pressure. The spring quakes tend to occur when the reservoir level suddenly rises: increased pore pressure pushes apart the walls of the graben flooded by the reservoir, thus decreasing the effective normal stress across faults in the graben. The fall quakes tend to occur when the reservoir level is lowest: water that gradually infiltrated poorly permeable (fault-gouge?) zones during high reservoir stands is then under anomalously high pressure, which decreases the effective normal stress across faults in the poorly permeable zones.
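The triggering mechanism invoked here, a drop in effective normal stress caused by increased pore-fluid pressure, reduces to the Mohr-Coulomb failure criterion. A minimal sketch, with illustrative stress values (MPa) and a hypothetical friction coefficient:

```python
def fault_slips(shear, normal, pore_pressure, mu=0.6, cohesion=0.0):
    """Mohr-Coulomb failure with effective normal stress (MPa):
    slip occurs when shear >= cohesion + mu * (normal - pore_pressure).
    mu and cohesion are illustrative, not site-specific, values."""
    effective_normal = normal - pore_pressure
    return shear >= cohesion + mu * effective_normal

# A rise in pore pressure (rapid reservoir filling in spring, or slow
# infiltration into poorly permeable fault-gouge zones released at low
# reservoir stands in fall) can trigger slip with no change at all in
# the tectonic shear stress acting on the fault.
stable = fault_slips(shear=10.0, normal=20.0, pore_pressure=2.0)
triggered = fault_slips(shear=10.0, normal=20.0, pore_pressure=5.0)
```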
NASA Astrophysics Data System (ADS)
Rotondi, Renata; Varini, Elisa
2016-04-01
The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modelled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
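A bathtub-shaped hazard can be illustrated with the additive Weibull distribution, one member of the generalized Weibull family. The parameter values below are illustrative, not fitted to the Italian catalogue.

```python
def bathtub_hazard(t, a=0.5, b=0.5, c=0.05, d=3.0):
    """Additive Weibull hazard h(t) = a*b*t**(b-1) + c*d*t**(d-1).
    With b < 1 and d > 1 the hazard is bathtub-shaped: high just after a
    leader event (aftershocks), low in between, and rising again toward
    the next leader event (foreshocks). Parameters are illustrative."""
    return a * b * t ** (b - 1) + c * d * t ** (d - 1)

times = [0.1 + 0.1 * k for k in range(100)]
hazards = [bathtub_hazard(t) for t in times]
trough = hazards.index(min(hazards))  # quiet middle of the cycle
```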
Forecast model for great earthquakes at the Nankai Trough subduction zone
Stuart, W.D.
1988-01-01
An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.
Steady-state statistical mechanics of model and real earthquakes (Invited)
NASA Astrophysics Data System (ADS)
Main, I. G.; Naylor, M.
2010-12-01
We derive an analytical expression for entropy production in earthquake populations based on Dewar’s formulation, including flux (tectonic forcing) and source (earthquake population) terms, and apply it to the Olami-Feder-Christensen (OFC) numerical model for earthquake dynamics. Assuming the commonly-observed power-law rheology between driving stress and remote strain rate, we test the hypothesis that maximum entropy production (MEP) is a thermodynamic driver for self-organized ‘criticality’ (SOC) in the model. MEP occurs when the global elastic strain is near, but strictly sub-critical, with small relative fluctuations in macroscopic strain energy expressed by a low seismic efficiency, and broad-bandwidth power-law scaling of frequency and rupture area. These phenomena, all as observed in natural earthquake populations, are hallmarks of the broad conceptual definition of SOC, which to date has often in practice included self-organizing systems in a near but strictly sub-critical state. In contrast the precise critical point represents a state of minimum entropy production in the model. In the MEP state the strain field retains some memory of past events, expressed as coherent ‘domains’, implying a degree of predictability, albeit strongly limited in practice by the proximity to criticality, our inability to map the stress field at an equivalent resolution to the numerical model, and finite temporal sampling effects in real data.
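The OFC model itself is easy to sketch. Below is a 1-D ring version (the study uses the 2-D model, which works the same way) with an illustrative dissipation parameter alpha; values of alpha below the conservative limit make the dynamics nonconservative, which is the regime relevant to entropy production.

```python
import random

def ofc_step(stress, alpha=0.2, threshold=1.0):
    """One drive-plus-avalanche step of an Olami-Feder-Christensen model
    on a 1-D ring. Each toppling site passes a fraction alpha of its
    stress to each of its two neighbours, so only 2*alpha survives and
    the rest is dissipated. Returns the avalanche size."""
    n = len(stress)
    # Uniform drive: raise all sites until the most loaded one fails.
    imax = max(range(n), key=lambda i: stress[i])
    bump = threshold - stress[imax]
    stress[:] = [s + bump for s in stress]
    stress[imax] = threshold  # guard against floating-point round-off
    size = 0
    unstable = [imax]
    while unstable:
        i = unstable.pop()
        if stress[i] < threshold:
            continue
        size += 1
        s = stress[i]
        stress[i] = 0.0
        for j in ((i - 1) % n, (i + 1) % n):
            stress[j] += alpha * s
            if stress[j] >= threshold:
                unstable.append(j)
    return size

rng = random.Random(42)
stress = [rng.random() * 0.5 for _ in range(200)]
sizes = [ofc_step(stress) for _ in range(1000)]
```

Statistics such as the avalanche-size distribution and the macroscopic strain-energy fluctuations discussed in the abstract are computed from long runs of exactly this kind of loop.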
Neural network models for earthquake magnitude prediction using multiple seismicity indicators.
Panakkat, Ashif; Adeli, Hojjat
2007-02-01
Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco bay region. Overall the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region. PMID:17393560
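The four evaluation measures named above are standard contingency-table scores. A minimal sketch (the counts are hypothetical):

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Contingency-table measures used to score categorical forecasts:
    probability of detection (POD), false alarm ratio (FAR), frequency
    bias, and the true skill score (TSS, also called the R score or
    Hanssen-Kuipers discriminant)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    tss = pod - false_alarms / (false_alarms + correct_negatives)
    return {"POD": pod, "FAR": far, "bias": bias, "TSS": tss}

# Hypothetical monthly forecast outcomes for one magnitude class.
scores = skill_scores(hits=30, false_alarms=10, misses=20,
                      correct_negatives=40)
```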
Quasi-hidden Markov model and its applications in cluster analysis of earthquake catalogs
NASA Astrophysics Data System (ADS)
Wu, Zhengxiao
2011-12-01
We identify a broad class of models, quasi-hidden Markov models (QHMMs), which include hidden Markov models (HMMs) as special cases. Applying the QHMM framework, this paper studies how an earthquake cluster propagates statistically. Two QHMMs are used to describe two different propagating patterns. The "mother-and-kids" model regards the first shock in an earthquake cluster as "mother" and the aftershocks as "kids," which occur in a neighborhood centered by the mother. In the "domino" model, however, the next aftershock strikes in a neighborhood centered by the most recent previous earthquake in the cluster, and therefore aftershocks act like dominoes. As the likelihood of QHMMs can be efficiently computed via the forward algorithm, likelihood-based model selection criteria can be calculated to compare these two models. We demonstrate this procedure using data from the central New Zealand region. For this data set, the mother-and-kids model yields a higher likelihood as well as smaller AIC and BIC. In other words, in the aforementioned area the next aftershock is more likely to occur near the first shock than near the latest aftershock in the cluster. This provides an answer, though not entirely satisfactorily, to the question "where will the next aftershock be?". The asymptotic consistency of the model selection procedure in the paper is duly established, namely that, when the number of the observations goes to infinity, with probability one the procedure picks out the model with the smaller deviation from the true model (in terms of relative entropy rate).
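Likelihoods of HMMs, and by extension QHMMs, are computed with the forward algorithm mentioned above. A minimal scaled-forward sketch for a discrete two-state chain; all probabilities are hypothetical and the AIC line is purely illustrative.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM with initial distribution pi, transition
    matrix A and emission matrix B."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [B[i][o] * sum(alpha[j] * A[j][i] for j in range(n))
                 for i in range(n)]
        c = sum(alpha)          # rescale to avoid numerical underflow
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

# Two hypothetical hidden regimes ("clustered" = 0, "quiet" = 1);
# observations: 0 = aftershock nearby, 1 = no nearby event.
pi = [0.5, 0.5]
A = [[0.8, 0.2], [0.3, 0.7]]
B = [[0.9, 0.1], [0.2, 0.8]]
ll = forward_loglik([0, 0, 1, 0, 1, 1], pi, A, B)
aic = 2 * 6 - 2 * ll  # 6 free parameters here, illustrative only
```

Comparing the "mother-and-kids" and "domino" models amounts to computing this likelihood (and hence AIC/BIC) for each model's state space and picking the smaller criterion value.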
Modelling of Strong Ground Motions from 1991 Uttarkashi, India, Earthquake Using a Hybrid Technique
NASA Astrophysics Data System (ADS)
Kumar, Dinesh; Teotia, S. S.; Sriram, V.
2011-10-01
We present a simple and efficient hybrid technique for simulating earthquake strong ground motion. This procedure is the combination of the techniques of envelope function (Midorikawa et al., Tectonophysics 218:287-295, 1993) and composite source model (Zeng et al., Geophys Res Lett 21:725-728, 1994). The first step of the technique is based on the construction of the envelope function of the large earthquake by superposition of envelope functions for smaller earthquakes. The smaller earthquakes (sub-events) of varying sizes are distributed randomly, instead of a uniform distribution of same-size sub-events, on the fault plane. The accelerogram of the large event is then obtained by combining the envelope function with a band-limited white noise. The low-cut frequency of the band-limited white noise is chosen to correspond to the corner frequency for the target earthquake magnitude and the high-cut to Boore's fmax or a desired frequency for the simulation. Below the low-cut frequency, the fall-off slope is 2 in accordance with the ω² earthquake source model. The technique requires parameters such as fault area, orientation of the fault, hypocenter, size of the sub-events, stress drop, rupture velocity, duration, source-site distance and attenuation parameter. The fidelity of the technique has been demonstrated by successful modeling of the 1991 Uttarkashi, Himalaya earthquake (Ms 7). The acceptable locations of the sub-events on the fault plane have been determined using a genetic algorithm. The main characteristics of the simulated accelerograms, comprised of the duration of strong ground shaking, peak ground acceleration and Fourier and response spectra, are, in general, in good agreement with those observed at most of the sites. At some of the sites the simulated accelerograms differ from observed ones by a factor of 2-3. The local site geology and topography may cause such a difference, as these effects have not been considered in the present technique. The
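The second step, combining an envelope function with band-limited white noise, can be sketched as follows. The envelope shape, band edges and all parameter values are illustrative stand-ins, not those of the Uttarkashi simulation.

```python
import math
import random

def envelope(t, duration):
    """Hypothetical strong-motion envelope: quick linear rise followed by
    exponential decay, standing in for the superposed sub-event envelopes."""
    rise = min(t / (0.1 * duration), 1.0)
    return rise * math.exp(-2.0 * t / duration)

def simulate_accelerogram(duration=20.0, dt=0.01,
                          f_low=0.2, f_high=10.0, seed=7):
    """Band-limited white noise shaped by the envelope. The band runs from
    f_low (corner frequency of the target event) to f_high (Boore's fmax),
    built here by summing random-phase sinusoids."""
    rng = random.Random(seed)
    n_freqs = int((f_high - f_low) / 0.2) + 1
    freqs = [f_low + k * 0.2 for k in range(n_freqs)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    trace = []
    for i in range(int(duration / dt)):
        t = i * dt
        noise = sum(math.sin(2.0 * math.pi * f * t + p)
                    for f, p in zip(freqs, phases))
        trace.append(envelope(t, duration) * noise)
    return trace

acc = simulate_accelerogram()
```

A real implementation would additionally impose the ω² spectral fall-off below the corner frequency and scale the noise to the target spectral level.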
Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand
NASA Astrophysics Data System (ADS)
Francois-Holden, C.; Zhao, J.
2012-12-01
The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike slip faults, and is underlain by the currently locked west-dipping subduction interface between the down-going Pacific Plate and the over-riding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations from historical large world subduction events translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan, earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island which has been developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground
Simulation of the Burridge-Knopoff Model of Earthquakes with Variable Range Stress Transfer
Xia Junchao; Gould, Harvey; Klein, W.; Rundle, J.B.
2005-12-09
Simple models of earthquake faults are important for understanding the mechanisms for their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods. Although cellular automaton models have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff model, which includes more realistic friction forces and inertia. We find that the latter model with long-range stress transfer exhibits qualitatively different behavior than both the long-range cellular automaton models and the usual Burridge-Knopoff model with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of earthquakes and other driven dissipative systems.
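The contrast between nearest-neighbour and long-range stress transfer in the Burridge-Knopoff model comes down to the spring-coupling term in the equations of motion. A minimal sketch of the elastic forces on a 1-D chain of blocks, with mean-field coupling as a simple long-range limit; the velocity-weakening friction force, which the full model adds, is omitted here, and all constants are illustrative.

```python
def bk_forces(x, t, k_c=1.0, k_p=0.5, v_drive=0.1, long_range=False):
    """Elastic forces on blocks in a 1-D Burridge-Knopoff chain:
    inter-block coupling springs (nearest-neighbour, or mean-field in the
    long-range stress-transfer limit) plus a loader-plate spring whose
    attachment point moves at v_drive."""
    n = len(x)
    forces = []
    for i in range(n):
        if long_range:
            # Every block coupled to every other with strength k_c/(n-1).
            coupling = k_c * (sum(x) - n * x[i]) / (n - 1)
        else:
            left = x[i - 1] if i > 0 else x[i]
            right = x[i + 1] if i < n - 1 else x[i]
            coupling = k_c * (left + right - 2 * x[i])
        forces.append(coupling + k_p * (v_drive * t - x[i]))
    return forces

uniform = bk_forces([0.0, 0.0, 0.0], t=1.0)
perturbed = bk_forces([0.0, 1.0, 0.0], t=0.0, long_range=True)
```

In the long-range limit one displaced block redistributes stress over the whole chain rather than onto its two neighbours, which is the mechanism behind the qualitatively different avalanche behaviour reported in the abstract.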
Assessing the benefit of 3D a priori models for earthquake location
NASA Astrophysics Data System (ADS)
Tilmann, F. J.; Manzanares, A.; Peters, K.; Kahle, R. L.; Lange, D.; Saul, J.; Nooshiri, N.
2014-12-01
Earthquake location in 1D Earth models is a routine procedure. Particularly in environments such as subduction zones, where the network geometry is biased and lateral velocity variations are large, the use of a 1D model can lead to strongly biased solutions. This is well known, and it is therefore usually preferable to use three-dimensional models, e.g. from local earthquake tomography. Efficient codes for earthquake location in 3D models, for example NonLinLoc, are available for routine use. However, tomographic studies are time-consuming to carry out, and a sufficient number of data might not always be available. In many cases, though, information about the three-dimensional velocity structure is available in the form of refraction surveys or other constraints such as gravity or receiver-function based models. Failing that, global or regional scale crustal models could be employed. However, it is not obvious that models derived using different types of data lead to better location results than an optimised 1D velocity model. On the other hand, correct interpretation of seismicity patterns often requires comparison and exact positioning within pre-existing velocity models. In this presentation we draw on examples from the Chilean and Sumatran margins as well as a mid-ocean ridge environment, using both data and synthetic examples to investigate under what conditions the use of a priori 3D models is expected to improve location results and modify interpretation. Furthermore, we introduce MATLAB tools that facilitate the creation of three-dimensional models suitable for earthquake location from refraction profiles, CRUST1.0, SLAB1.0 and other model types.
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
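The ETAS conditional intensity used for this kind of rate modelling has a standard form (Ogata, 1988). A minimal sketch with illustrative, unfitted parameters:

```python
import math

def etas_intensity(t, history, mu=0.5, K=0.02, alpha=1.0,
                   c=0.01, p=1.2, m0=3.0):
    """ETAS conditional intensity:
    lambda(t) = mu + sum over past events (t_i, m_i) of
                K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p,
    where mu is the background rate and the sum is the aftershock
    triggering from earlier events. All parameter values here are
    illustrative, not fitted to any catalogue."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Hypothetical catalogue: (time in days, magnitude).
history = [(0.0, 5.5), (0.5, 4.0), (2.0, 4.5)]
rate_just_after = etas_intensity(2.001, history)
rate_later = etas_intensity(10.0, history)
```

Separating the background rate mu, which injection can drive up or down, from the triggering sum is exactly the decomposition that lets the change-point analysis described above act on the transformed (background) process.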
Fault Interaction and Earthquake Migration in Mid-Continents: Insights from Numerical Modeling
NASA Astrophysics Data System (ADS)
Liu, M.; Lu, Y.; Chen, L.; Luo, G.; Wang, H.
2011-12-01
Historic records in North China and other mid-continents show large earthquakes migrating among widespread fault systems. Mechanical coupling of these faults is indicated by complementary seismic moment release on these faults. In a conceptual model (Liu et al., 2011, Lithosphere), the long-distance fault interaction and earthquake migration are explained as consequences of regional stress readjustment among a system of intraplate faults that collectively accommodates tectonic loading at the plate boundaries. In such a system, failure of one fault (a large earthquake) can cause stress shifting on all other faults. Here we report preliminary results of numerical investigations of such long-distance fault interaction in mid-continents. In a set of elastic models, we have a model crust with internal faults loaded from the boundaries, and calculate the stress distribution on the faults when the system reaches equilibrium. We compare the results with those of a new model that has one or more of the faults weakened (ruptured). The results show that failure of one fault can cause up to a few MPa of stress change on other faults over a large distance; the magnitude of the stress change and the radius of the impacted area are much greater than those of the static Coulomb stress changes associated with dislocation on the fault plane. In time-dependent viscoelasto-plastic models, we found that variations of seismicity on one fault can significantly affect the loading rates on other faults that share the same tectonic loading. Similar fault interactions are also found in complex plate boundary fault systems, such as between the San Andreas Fault and the San Jacinto Fault in southern California. The spatially migrating earthquakes resulting from the long-distance fault interactions in mid-continents can cause different spatial patterns of seismicity when observed through different time-windows. These results have important implications for assessing earthquake hazards in
Family number non-conservation induced by the supersymmetric mixing of scalar leptons
Levine, M.J.S.
1987-08-01
The most egregious aspect of (N = 1) supersymmetric theories is that each particle state is accompanied by a 'super-partner', a state with identical quantum numbers save that it differs in spin by one half unit. For the leptons these are scalars and are called "sleptons", or scalar leptons. These consist of the charged sleptons (selectron, smuon, stau) and the scalar neutrinos ('sneutrinos'). We examine a model of supersymmetry with soft breaking terms in the electroweak sector. Explicit mixing among the scalar leptons results in a number of effects, principally non-conservation of lepton family number. Comparison with experiment permits us to place constraints upon the model. 49 refs., 12 figs.
Physics-Based Predictive Simulation Models for Earthquake Generation at Plate Boundaries
NASA Astrophysics Data System (ADS)
Matsu'Ura, M.
2002-12-01
In the last decade there has been great progress in the physics of earthquake generation; that is, the introduction of laboratory-based fault constitutive laws as a basic equation governing earthquake rupture and the quantitative description of tectonic loading driven by plate motion. Incorporating a fault constitutive law into continuum mechanics, we can develop a physics-based simulation model for the entire earthquake generation process. For realistic simulation of earthquake generation, however, we need a very large, high-speed computer system. In Japan, fortunately, the Earth Simulator, a high-performance, massively parallel-processing computer system with 10 TB of memory and a 40 TFLOPS peak speed, has been completed. The completion of the Earth Simulator and advances in numerical simulation methodology are bringing our vision within reach. In general, the earthquake generation cycle consists of tectonic loading due to relative plate motion, quasi-static rupture nucleation, dynamic rupture propagation and arrest, and restoration of fault strength. The basic equations governing the entire earthquake generation cycle consist of an elastic/viscoelastic slip-response function that relates fault slip to shear stress change and a fault constitutive law that prescribes the change in shear strength with fault slip and contact time. The shear stress and the shear strength are related to each other through the boundary conditions on the fault. The driving force of this system is the observed relative plate motion. The system describing the earthquake generation cycle is conceptually quite simple; the complexity in practical modelling mainly comes from the complexity in structure of the real earth. Since 1998 our group has conducted the Crustal Activity Modelling Program (CAMP), one of the three main programs composing the Solid Earth Simulator project. The aim of CAMP is to develop a physics-based predictive simulation model for the entire earthquake generation
Statistical Analysis of the Surface Slip Profiles and Slip Models for the 2008 Wenchuan Earthquake
NASA Astrophysics Data System (ADS)
Lavallee, D.; Shao, G.; Ji, C.
2009-12-01
The 2008 Wenchuan earthquake provides a remarkable opportunity to study the statistical properties of slip profiles recorded at the surface. During the M 8 Wenchuan earthquake, the surface ruptured over 300 km along the Longmenshan fault system. The surface slip profiles have been measured along the fault for a distance of the order of 270 km without any significant change in the strike direction. Field investigations suggest that the earthquake generated a 240 km surface rupture along the Beichuan segment and a 72 km surface rupture along the Guanxian segment. Maximum vertical and horizontal slips of 10 m and 4.9 m have been observed along the Beichuan fault. Measurements include the displacement parallel and perpendicular to the fault as well as the width of the rupture zone. However, the recorded earthquake slip profiles are irregularly sampled, while traditional algorithms used to compute the discrete Fourier transform are developed for data sampled at regularly spaced intervals. Interpolating the slip profile over a regular grid is not appropriate when investigating the functional behavior of the spectrum or when computing the discrete Fourier transform: interpolation introduces bias in the estimation of the Fourier transform that adds artificial correlation to the original data. To avoid this problem, we developed an algorithm to compute the Fourier transform of irregularly sampled data. It consists essentially of determining the coefficients that best fit the data to the sine and cosine functions at a given wave number. We compute the power spectrum of the slip profiles of the Wenchuan earthquake, as well as the power spectra of the slip inversions computed for the event. To model the functional behavior of the spectrum curves, we consider two functions: the power law function and the von Karman function. For all the slip models, we compute the parameters of the power law function and the von Karman function that
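The least-squares approach to the Fourier transform of irregularly sampled data described above can be sketched in a few lines; the profile, sampling, and wavenumber grid below are synthetic stand-ins, not the Wenchuan measurements.

```python
import numpy as np

def ls_fourier_coeffs(x, y, k):
    # Least-squares fit of y(x) ~ a*cos(kx) + b*sin(kx) at wavenumber k,
    # valid for irregularly spaced x where an FFT does not apply.
    A = np.column_stack([np.cos(k * x), np.sin(k * x)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # array [a, b]

def ls_power_spectrum(x, y, ks):
    # Power at each trial wavenumber: P(k) = a^2 + b^2.
    return np.array([np.sum(ls_fourier_coeffs(x, y, k) ** 2) for k in ks])

# Synthetic, irregularly sampled "slip profile" with one wavenumber k0 = 2.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 50.0, 400))
y = 3.0 * np.cos(2.0 * x) + 1.5 * np.sin(2.0 * x)

ks = np.linspace(0.5, 4.0, 36)    # trial wavenumbers, step 0.1
spectrum = ls_power_spectrum(x, y, ks)
k_peak = ks[np.argmax(spectrum)]  # recovers k0 without interpolation
```

Because no regridding is done, the recovered coefficients carry none of the artificial correlation that interpolation would introduce.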
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Yeh, Te-Yang; Huang, Hsin-Hua; Lin, Cheng-Horng
2015-11-01
On 27 March and 2 June 2013, two large earthquakes with magnitudes of ML 6.2 and ML 6.5, named the Nantou earthquake series, struck central Taiwan. These two events were located at depths of 15-20 km, which implied that the mid-crust of central Taiwan is an active seismogenic area even though the subsurface structures have not been well established. To determine the origins of the Nantou earthquake series, we investigated both the rupture processes and seismic wave propagations by employing inverse and forward numerical simulation techniques. Source inversion results indicated that one event ruptured from middle to shallow crust in the northwest direction, while the other ruptured towards the southwest. Simulations of 3-D wave propagation showed that the rupture characteristics of the two events result in distinct directivity effects with different amplified shaking patterns. From the results of numerical earthquake modeling, we deduced that the occurrence of the Nantou earthquake series may be related to stress release from the easternmost edge of a preexistent strong basement in central Taiwan.
NASA Astrophysics Data System (ADS)
Ke, M. C.
2015-12-01
Large earthquakes often cause serious economic losses and many deaths. Because the magnitude, time, and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operations are essential for reducing earthquake damage. To understand earthquake disaster risk, earthquake simulation is commonly used to build scenarios, with point sources, fault line sources, and fault plane sources serving as the seismic sources. Assessments built from these models perform adequately for risk assessment and emergency operations, but their accuracy can still be improved. This program invites experts and scholars from National Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records together with geological and geophysical data to build three-dimensional subsurface fault planes for active faults. The purpose is to replace projected fault planes with subsurface fault planes that are closer to reality, thereby upgrading the accuracy of earthquake prevention analyses. These three-dimensional data will then be applied to different stages of disaster prevention: before a disaster, risk analyses based on the three-dimensional fault plane data come closer to real damage; during a disaster, the data help to estimate the distribution of aftershocks and the areas of serious damage. In 2015, the program used 14 geological profiles to build the three-dimensional data of the Hsinchu and Hsincheng faults. Other active faults will be completed in 2018 and applied to earthquake disaster prevention.
NASA Astrophysics Data System (ADS)
Kawada, Y.; Nagahama, H.; Omori, Y.; Yasuoka, Y.; Shinogi, M.
2006-12-01
Accelerated moment release, defined by a rate of cumulative Benioff strain that follows a power-law time-to-failure relation, often precedes large earthquakes. This temporal seismicity pattern is investigated here in terms of an irreversible thermodynamics model. The model is regulated by the Helmholtz free energy, defined by the macroscopic stress-strain relation and internal state variables (generalized coordinates); damage and damage evolution are represented by the internal state variables. Each of the many internal state variables has its own specific relaxation time, and collectively their time evolution shows a temporal power-law behavior. The irreversible thermodynamic model reduces to a fiber-bundle model and to an experimentally based constitutive law of rocks, and predicts the form of accelerated moment release. Based on the model, we can also discuss the increase in atmospheric radon concentration prior to the 1995 Kobe earthquake.
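The power-law time-to-failure form of accelerated moment release can be illustrated with a short numerical sketch; the constants A, B, tf and the exponent m below are arbitrary illustrative values, not those fitted to any real sequence.

```python
import numpy as np

def benioff_strain(magnitudes):
    # Benioff strain of each event: square root of radiated energy, using
    # the standard relation log10 E = 1.5*M + 4.8 (E in joules).
    return np.sqrt(10.0 ** (1.5 * np.asarray(magnitudes) + 4.8))

def amr_model(t, A, B, tf, m):
    # Power-law time-to-failure form of cumulative Benioff strain:
    # Omega(t) = A - B*(tf - t)**m; for 0 < m < 1 the release rate
    # accelerates as t approaches the failure time tf.
    return A - B * (tf - t) ** m

# Illustrative constants only.
t = np.linspace(0.0, 9.0, 10)
omega = amr_model(t, A=100.0, B=30.0, tf=10.0, m=0.3)
```

With m < 1 both the cumulative strain and its rate grow toward tf, which is the acceleration the model is meant to predict.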
Under the hood of the earthquake machine: toward predictive modeling of the seismic cycle.
Barbot, Sylvain; Lapusta, Nadia; Avouac, Jean-Philippe
2012-05-11
Advances in observational, laboratory, and modeling techniques open the way to the development of physical models of the seismic cycle with potentially predictive power. To explore that possibility, we developed an integrative and fully dynamic model of the Parkfield segment of the San Andreas Fault. The model succeeds in reproducing a realistic earthquake sequence of irregular moment magnitude (Mw) 6.0 main shocks--including events similar to the ones in 1966 and 2004--and provides an excellent match for the detailed interseismic, coseismic, and postseismic observations collected along this fault during the most recent earthquake cycle. Such calibrated physical models provide new ways to assess seismic hazards and forecast seismicity response to perturbations of natural or anthropogenic origins. PMID:22582259
Visualizing the 2009 Samoan and Sumatran Earthquakes using Google Earth-based COLLADA models
NASA Astrophysics Data System (ADS)
de Paor, D. G.; Brooks, W. D.; Dordevic, M.; Ranasinghe, N. R.; Wild, S. C.
2009-12-01
Earthquake hazards are generally analyzed by a combination of graphical focal mechanism or centroid moment tensor solutions (aka geophysical beach balls), contoured fault plane maps, and shake maps or tsunami damage surveys. In regions of complex micro-plate tectonics, it can be difficult to visualize spatial and temporal relations among earthquakes, aftershocks, and associated tectonic and volcanic structures using two-dimensional maps and cross sections alone. Developing the techniques originally described by D.G. De Paor & N.R. Williams (EOS Trans. AGU S53E-05, 2006), we can view the plate tectonic setting, geophysical parameters, and societal consequences of the 2009 Samoan and Sumatran earthquakes on the Google Earth virtual globe. We use XML-based COLLADA models to represent the subsurface structure and standard KML to overlay map data on the digital terrain model. Unlike traditional geophysical beach ball figures, our models are three dimensional and located at correct depth, and they optionally show nodal planes which are useful in relating the orientation of one earthquake to the hypocenters of its neighbors. With the aid of the new Google Earth application program interface (GE API), we can use web page-based JavaScript controls to lift structural models from the subsurface in Google Earth and generate serial sections along strike. Finally, we use the built-in features of the Google Earth web browser plug-in to create a virtual tour of damage sites with hyperlinks to web-based field reports. These virtual globe visualizations may help complement existing KML and HTML resources of the USGS Earthquake Hazards Program and The Global CMT Project.
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
NASA Astrophysics Data System (ADS)
Dempsey, David; Suckale, Jenny
2016-05-01
Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
NASA Astrophysics Data System (ADS)
Takano, Kazutomo; Kimata, Fumiaki
2013-09-01
The ground deformation and fault slip model for the 1891 M 8.0 Nobi earthquake, central Japan, have been reexamined. The Nobi earthquake appears to have occurred mainly due to the rupture of three faults: Nukumi, Neodani, and Umehara. Since triangulation and leveling had been performed around the Umehara fault, the two geodetic datasets from 1885-1890 and 1894-1908 have been reevaluated. Maximum coseismic horizontal displacements of 1.7 m were detected to the south of the Neodani fault. A fault model of the Nobi earthquake was estimated from the geodetic datasets, taking into account the geometry of the fault planes based on the known surface ruptures. The best fit to the data was obtained from three and four divided fault segments running along the Nukumi, Neodani, and Umehara faults; although, in past studies, the Gifu-Ichinomiya line has been suggested as a buried fault to explain the ground deformation. The detected ground deformation can be well reproduced using a slip model for the Umehara fault, dipping at 61° toward the southwest, with a maximum slip of 3.8 m in the deeper northwestern segment. As this model suitably explains the coseismic deformation, the earthquake source fault does not appear to extend to the Gifu-Ichinomiya line.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Yeh, Te-Yang
2015-04-01
On 27 March and 2 June 2013, two moderately large earthquakes with magnitudes ML 6.2 and ML 6.5, named the Nantou earthquake series, struck central Taiwan. These two events were located in the middle-to-deep crust at about 15-20 km depth, and their epicenters were very close to the historic Nantou earthquake series that caused destructive damage in 1916-1917. These events indicate that the deep crust of central Taiwan is an active seismogenic area, even though no evidence links a subsurface structure directly to any fault at the surface. To determine the origins of the Nantou earthquake series and their influence on strong ground shaking, we investigated the rupture processes and seismic wave propagation using inverse and forward numerical simulation techniques. First, joint source inversions were performed using teleseismic body waves, GPS coseismic displacements, and near-field ground motion data. A 3-D seismic wave propagation simulation was then carried out based on the inverted source model. The source inversion results indicate that the rupture characteristics of the two events differ: one ruptured from deep to shallow crust in the northwest direction, while the other ruptured towards the southwest. Three-dimensional wave propagation simulations show that the thrust movement on the east-dipping fault planes of the two events results in distinct rupture directivity effects with different amplified shaking patterns in western Taiwan. From the results of the numerical earthquake models, we deduce that the occurrence of the Nantou earthquake series might be related to stress release from the easternmost edge of a preexistent strong basement beneath the middle-to-deep crust of central Taiwan.
Source models of great earthquakes from ultra low-frequency normal mode data
NASA Astrophysics Data System (ADS)
Lentas, K.; Ferreira, A. M. G.; Clévédé, E.; Roch, J.
2014-08-01
We present a new earthquake source inversion technique based on normal mode data for the simultaneous determination of the rupture duration, length and moment tensor of large earthquakes with unilateral rupture. We use ultra low-frequency (f < 1 mHz) mode singlets and multiplets which are modelled using Higher Order Perturbation Theory (HOPT), taking into account the Earth's rotation, ellipticity and lateral heterogeneities. A Monte Carlo exploration of the model space is carried out, enabling the assessment of source parameter tradeoffs and uncertainties. We carry out synthetic tests to investigate errors in the source inversions due to: (i) unmodelled 3-D Earth structure; (ii) noise in the data; (iii) uncertainties in spatio-temporal earthquake location; and (iv) neglecting the source finiteness in point source inversions. We find that unmodelled 3-D structure is the most serious source of errors for rupture duration and length determinations, especially for the lowest magnitude events. The errors in moment magnitude and fault mechanism are generally small, with the rake angle showing systematically larger errors (up to 20°). We then investigate five great thrust earthquakes (Mw ≥ 8.5): (i) Sumatra-Andaman (26 December 2004); (ii) Nias, Sumatra (28 March 2005); (iii) Bengkulu (12 September 2007); (iv) Tohoku, Japan (11 March 2011); and (v) Maule, Chile (27 February 2010); as well as (vi) the 24 May 2013 Mw 8.3 Okhotsk Sea, Russia, deep (607 km) event. While finite source inversions for rupture length, duration, magnitude and fault mechanism are possible for the Sumatra-Andaman and Tohoku events, for all the other events their lower magnitudes only allow stable point source inversions of mode multiplets. We obtain the first normal mode finite source model for the 2011 Tohoku earthquake, which yields a fault length of 461 km, a rupture duration of 151 s, and hence an average rupture velocity of 3.05 km/s, giving an independent confirmation of the compact nature of
Viscoelastic-coupling model for the earthquake cycle driven from below
Savage, J.C.
2000-01-01
In a linear system the earthquake cycle can be represented as the sum of a solution which reproduces the earthquake cycle itself (viscoelastic-coupling model) and a solution that provides the driving force. We consider two cases, one in which the earthquake cycle is driven by stresses transmitted along the schizosphere and a second in which the cycle is driven from below by stresses transmitted along the upper mantle (i.e., the schizosphere and upper mantle, respectively, act as stress guides in the lithosphere). In both cases the driving stress is attributed to steady motion of the stress guide, and the upper crust is assumed to be elastic. The surface deformation that accumulates during the interseismic interval depends solely upon the earthquake-cycle solution (viscoelastic-coupling model) not upon the driving source solution. Thus geodetic observations of interseismic deformation are insensitive to the source of the driving forces in a linear system. In particular, the suggestion of Bourne et al. [1998] that the deformation that accumulates across a transform fault system in the interseismic interval is a replica of the deformation that accumulates in the upper mantle during the same interval does not appear to be correct for linear systems.
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
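For a single causal earthquake and a single GMPM, the baseline conditional spectrum computation reduces to two closed-form expressions; this sketch shows only that base case (the paper's contribution is the exact generalization to multiple scenarios and GMPMs), and the numbers below are illustrative.

```python
import numpy as np

def conditional_spectrum(mu_ln, sigma_ln, rho, eps_star):
    # Conditional mean and standard deviation of ln Sa at each period,
    # given the epsilon of the ground motion at the conditioning period:
    #   mu_cond    = mu_ln + rho * eps_star * sigma_ln
    #   sigma_cond = sigma_ln * sqrt(1 - rho^2)
    # rho is the correlation of ln Sa at each period with the
    # conditioning period (rho = 1 at the conditioning period itself).
    mu_ln, sigma_ln, rho = map(np.asarray, (mu_ln, sigma_ln, rho))
    mu_cond = mu_ln + rho * eps_star * sigma_ln
    sigma_cond = sigma_ln * np.sqrt(1.0 - rho ** 2)
    return mu_cond, sigma_cond

# Two periods; the second is the conditioning period (rho = 1 there).
# All numbers are illustrative, not from a real GMPM.
mu_cond, sigma_cond = conditional_spectrum(
    mu_ln=[-1.0, -0.5], sigma_ln=[0.6, 0.5], rho=[0.7, 1.0], eps_star=1.5)
```

Note that the conditional standard deviation vanishes at the conditioning period, which is what makes the CS a target for ground-motion selection there.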
Impact of Three-Parameter Weibull Models in Probabilistic Assessment of Earthquake Hazards
NASA Astrophysics Data System (ADS)
Pasari, Sumanta; Dikshit, Onkar
2014-07-01
This paper investigates the suitability of a three-parameter (scale, shape, and location) Weibull distribution in probabilistic assessment of earthquake hazards. The performance is also compared with two other popular models from the same Weibull family, namely the two-parameter Weibull model and the inverse Weibull model. A complete and homogeneous earthquake catalog (Yadav et al. in Pure Appl Geophys 167:1331-1342, 2010) of 20 events (M ≥ 7.0), spanning the period 1846 to 1995, from north-east India and its surrounding region (20°-32°N and 87°-100°E) is used to perform this study. The model parameters are initially estimated from graphical plots and later confirmed by statistical estimations such as maximum likelihood estimation (MLE) and the method of moments (MoM). The asymptotic variance-covariance matrix for the MLE-estimated parameters is further calculated on the basis of the Fisher information matrix (FIM). The model suitability is appraised using different statistical goodness-of-fit tests. For the study area, the estimated conditional probability of an earthquake within a decade comes out to be very high (≥0.90) for an elapsed time of 18 years (i.e., 2013). The study also reveals that the location parameter gives the three-parameter Weibull model more flexibility than the two-parameter Weibull model. It is therefore suggested that the three-parameter Weibull model is well suited to empirical modeling of earthquake recurrence and seismic hazard assessment.
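The conditional probability quoted above follows from the three-parameter Weibull CDF in a few lines; the shape, scale, and location values below are hypothetical stand-ins, not the MLE estimates of the paper.

```python
import math

def weibull_cdf(t, shape, scale, loc=0.0):
    # Three-parameter Weibull CDF: F(t) = 1 - exp(-((t - loc)/scale)^shape)
    # for t > loc, and 0 otherwise (loc is the location parameter).
    if t <= loc:
        return 0.0
    return 1.0 - math.exp(-(((t - loc) / scale) ** shape))

def conditional_probability(elapsed, window, shape, scale, loc=0.0):
    # P(event within `window` years | quiescence for `elapsed` years)
    #   = [F(elapsed + window) - F(elapsed)] / [1 - F(elapsed)]
    F_e = weibull_cdf(elapsed, shape, scale, loc)
    F_w = weibull_cdf(elapsed + window, shape, scale, loc)
    return (F_w - F_e) / (1.0 - F_e)

# Hypothetical parameters: for shape > 1 the hazard rate increases with
# time, so after 18 years of quiescence the decadal probability is high.
p = conditional_probability(elapsed=18.0, window=10.0,
                            shape=2.0, scale=12.0, loc=3.0)
```

The location parameter shifts the entire distribution, which is the extra flexibility the paper attributes to the three-parameter form.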
Source Model from ALOS-2 ScanSAR of the 2015 Nepal Earthquakes
NASA Astrophysics Data System (ADS)
Liu, Youtian; Ge, Linlin; Ng, Alex Hay-Man
2016-06-01
The 2015 Gorkha Nepal Earthquake sequence started with a magnitude Mw 7.8 main shock and continued with several large aftershocks, particularly the second major shock of Mw 7.3. Both earthquake events were captured using ALOS-2 ScanSAR images to determine the coseismic surface deformation and the source models. In this paper, the displacement maps were produced and the corresponding modelling results were discussed. The single fault model of the main shock suggests that there was nearly 6 m of right-lateral oblique slip motion with fault struck of 292° and dipped gently Northeast at 7°, indicating that the main shock was on a thrust fault. Moreover, a single fault model for the Mw 7.3 quake with striking of 312° and dipping of 11° was derived from observed result. Both results showed the fault planes struck generally to South and dipped northeast, which depicted the risks since the main shock occurred.
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Sokolov, V. Y.
2013-12-01
Ground shaking due to recent catastrophic earthquakes is estimated to be significantly higher than that predicted by probabilistic seismic hazard analysis (PSHA). One reason is that extreme (large-magnitude and rare) seismic events are in most cases not accounted for in PSHA, due to the lack of information and the unknown recurrence times of the extremes. We present a new approach to the assessment of regional seismic hazard, which incorporates observed (recorded and historic) seismicity and modeled extreme events. We apply this approach to PSHA of the Tibet-Himalayan region. The large-magnitude events, simulated over several thousand years in models of lithospheric block-and-fault dynamics and consistent with the regional geophysical and geodetic data, are employed together with the observed earthquakes in a Monte-Carlo PSHA. Earthquake scenarios are generated stochastically to sample the magnitude and spatial distribution of seismicity (observed and modeled) as well as the distribution of ground motion for each seismic event. The peak ground acceleration (PGA) values (that is, ground shaking at a site) expected to be exceeded at least once in 50 years with a probability of 10% are mapped and compared to PGA values observed and predicted earlier. The results show that the PGA values predicted by our assessment fit the observed ground shaking due to the 2008 Wenchuan earthquake much better than those predicted by conventional PSHA. Our approach to seismic hazard assessment provides a better understanding of ground shaking due to possible large-magnitude events and could be useful for risk assessment, earthquake engineering purposes, and emergency planning.
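The Monte-Carlo PSHA logic (stochastic event sampling, ground-motion sampling, and the 10%-in-50-years PGA) can be sketched as follows; the Gutenberg-Richter sampler and the attenuation relation are toy placeholders, not the models used in the study.

```python
import numpy as np

def gr_magnitudes(n, mag_min, b_value, rng):
    # Gutenberg-Richter magnitudes above mag_min: exponential with
    # rate beta = b * ln(10).
    return mag_min + rng.exponential(1.0 / (b_value * np.log(10.0)), size=n)

def toy_pga(mag, dist_km):
    # Toy attenuation relation (illustration only, not a published GMPM):
    # ln PGA grows with magnitude and decays with distance.
    return np.exp(0.8 * mag - 1.2 * np.log(dist_km + 10.0) - 3.0)

def hazard_pga(n_years, annual_rate, mag_min, b_value, poe, horizon, rng):
    # Monte-Carlo hazard: simulate the annual maximum PGA many times and
    # return the level exceeded with probability `poe` over `horizon` years.
    annual_max = np.zeros(n_years)
    for i in range(n_years):
        n_ev = rng.poisson(annual_rate)
        if n_ev:
            mags = gr_magnitudes(n_ev, mag_min, b_value, rng)
            dists = rng.uniform(10.0, 150.0, size=n_ev)
            annual_max[i] = toy_pga(mags, dists).max()
    # 10% in 50 years corresponds to ~1/475 annual exceedance probability.
    p_annual = 1.0 - (1.0 - poe) ** (1.0 / horizon)
    return np.quantile(annual_max, 1.0 - p_annual)

rng = np.random.default_rng(42)
pga_475 = hazard_pga(50000, 0.5, 5.0, 1.0, 0.10, 50.0, rng)
```

In the study's approach, the stochastic catalog would additionally include the modeled extreme events, which raises the tail of the annual-maximum distribution and hence the mapped PGA.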
REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION
Walter, W R; Pasyanos, M E; Matzel, E; Gok, R; Sweeney, J; Ford, S R; Rodgers, A J
2008-07-08
We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g., Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling (e.g., Ford et al., 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing seismic
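A minimal P/S amplitude-ratio measurement of the sort described can be sketched with an FFT band mask standing in for a proper band-pass filter; the traces, windows, and band below are synthetic illustrations, not a real discrimination pipeline.

```python
import numpy as np

def band_amplitude(trace, dt, window, band):
    # RMS amplitude of `trace` inside a time window, band-limited with a
    # plain FFT mask (a stand-in for a proper band-pass filter).
    i0, i1 = int(window[0] / dt), int(window[1] / dt)
    seg = trace[i0:i1]
    freqs = np.fft.rfftfreq(len(seg), dt)
    spec = np.fft.rfft(seg)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
    filt = np.fft.irfft(spec, n=len(seg))
    return np.sqrt(np.mean(filt ** 2))

def p_over_s(trace, dt, p_window, s_window, band):
    # P/S amplitude ratio in one band; larger values are explosion-like.
    return (band_amplitude(trace, dt, p_window, band) /
            band_amplitude(trace, dt, s_window, band))

# Synthetic 6 Hz arrivals: earthquakes radiate strong S, explosions weak S.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
def arrival(t0, amp):
    return amp * np.exp(-((t - t0) / 0.5) ** 2) * np.sin(2 * np.pi * 6.0 * t)
quake = arrival(3.0, 1.0) + arrival(12.0, 3.0)
blast = arrival(3.0, 3.0) + arrival(12.0, 1.0)
ratio_eq = p_over_s(quake, dt, (2.0, 4.0), (10.0, 14.0), (4.0, 8.0))
ratio_ex = p_over_s(blast, dt, (2.0, 4.0), (10.0, 14.0), (4.0, 8.0))
```

Path corrections such as MDAC would be applied to the measured amplitudes before forming the ratio; the sketch measures raw amplitudes only.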
Vortex motion of dust particles due to non-conservative ion drag force in a plasma
NASA Astrophysics Data System (ADS)
Chai, Kil-Byoung; Bellan, Paul M.
2016-02-01
Vortex motion of the dust in a dusty plasma is shown to result because non-parallelism of the ion density gradient and the gradient of the magnitude of the ion ambipolar velocity cause the ion drag force on dust grains to be non-conservative. Dust grain poloidal vortices consistent with the model predictions are experimentally observed, and the vortices change character with imposed changes in the ion temperature profile as predicted. For a certain ion temperature profile, two adjacent co-rotating poloidal vortices have a well-defined X-point analogous to the X-point in magnetic reconnection.
Models of recurrent strike-slip earthquake cycles and the state of crustal stress
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.
1991-01-01
Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.
Evaluation of CAMEL - comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
A new comprehensive areal model of earthquake-induced landslides (CAMEL) has been developed to assist in planning decisions related to disaster risk reduction. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using fuzzy logic systems and geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and of knowledge about the effect of these conditions on the likely areal concentration of each landslide type. CAMEL has been empirically evaluated with respect to disrupted landslides (Category I) using a case study of the 1989 M = 6.9 Loma Prieta, CA earthquake. In this case, CAMEL performs best for disrupted slides and falls in soil. For disrupted rock falls and slides, CAMEL's performance was slightly poorer. The model predicted a low occurrence of rock avalanches, when none in fact occurred. A similar comparison with the Loma Prieta case study was also conducted using a simplified Newmark displacement model. The area-under-the-curve method of evaluation was used in order to draw comparisons between the two models, revealing improved performance with CAMEL. CAMEL should not, however, be viewed as a strict alternative to Newmark displacement models; rather, CAMEL can be used to integrate Newmark displacements with other, previously incompatible, types of knowledge.
Characterization and stochastic modeling of earthquake faulting in California. Final Report
Kiremidjian, A.S.; Lutz, K.A.; Thrainsson, H.
1995-06-01
The objective of this report is to develop a time- and space-dependent probabilistic earthquake occurrence model for seismic hazard analysis. In order to study the space and time behavior of earthquakes along major faults, project investigators first evaluated slip rate and event interarrival time data for the San Andreas fault. These data were considered in the context of a model of the fault comprised of a series of segments that can rupture either independently or together with other segments. In Part One of this report, a slip-predictable model with random slip was used to generate probabilities of occurrences for all segments of the fault. In Part Two, a generalized semi-Markov model was developed that describes the temporal and spatial dependence of seismic events. Using the first model, investigators found large probabilities of occurrence of magnitude 6.5 or greater earthquakes for many segments of the fault. Using the second model, investigators found that the North Coast and South Santa Cruz Mountains segments of the fault typically generate quakes of magnitude 7.8 to 8.2 and 6.9 to 7.2 respectively.
Source model and ground shaking of the 2015 Gorkha, Nepal Mw7.8 earthquake
NASA Astrophysics Data System (ADS)
Wei, S.; Wang, T.; Lindsey, E. O.; Avouac, J. P.; Graves, R. W.; Hubbard, J.; Hill, E.; Barbot, S.; Tapponnier, P.; Karakas, C.; Helmberger, D. V.
2015-12-01
The 2015 Mw 7.8 Gorkha, Nepal earthquake ruptured a previously locked portion of the Main Himalayan Thrust fault (MHT) that had not slipped in a large event since 1833 (Mw 7.6). The earthquake was well recorded by geodetic (SAR, InSAR and GPS) and seismic instruments. In particular, high-rate (5 Hz) GPS channels provide waveform records at local distances, with three stations located directly above the major asperities of the earthquake. Here we derive a kinematic rupture model of the earthquake by jointly inverting the seismic and geodetic data, using a Kostrov source time function with variable rise times. Our inversion result indicates that the earthquake had a weak initiation and ruptured unilaterally along strike towards the ESE, with an average rupture speed of 3.0 km/s and a total duration of ~60 s. In the preferred model, slip in the beginning portion of the rupture had a longer rise time than in the strongest ruptures, which took place ~22 s and ~35 s after the origin, located 30 km to the northwest and 20 km to the east of the Kathmandu valley, respectively. The horizontal vibration and amplification of ground shaking in the valley were well recorded by one of the GPS stations (NAST) and the accelerometric station (KANTP), with a dominant frequency of 0.25 Hz. A simplified basin model with a top shear wave speed of 250 m/s and geometry constrained by a previous micro-tremor study can largely explain the amplification and vibration, as shown by 3D staggered-grid finite-difference simulations. This study shows that ground shaking can be strongly affected by complexities of the rupture and velocity structure.
Fault modeling of the 2012 Wutai, Taiwan earthquake and its tectonic implications
NASA Astrophysics Data System (ADS)
Chiang, Pan-Hsin; Hsu, Ya-Ju; Chang, Wu-Lung
2016-01-01
The Mw 5.9 Wutai earthquake of 26 February 2012 occurred at a depth of 26 km in southern Taiwan, where the rupture is not related to any known geologic structures. To characterize the rupture source of the mainshock, we employ an elastic half-space model and GPS coseismic displacements to invert for the optimal fault geometry and coseismic slip distribution. With observed coseismic horizontal and vertical displacements both less than 10 mm, our preferred fault model strikes 312° and dips 30° to the northeast, and exhibits reverse slip of 28-112 mm and left-lateral slip of 9-45 mm. The estimated geodetic moment of the Wutai earthquake is 1.3 × 10^18 N·m, equivalent to an Mw 6.0 earthquake. The Wutai epicentral area is characterized by NE-SW compression, as evidenced by slaty cleavage orientations and the interpretation of stress tensor inversions of earthquake focal mechanisms. Using the stress drops of the Wutai and the nearby 2010 Mw 6.4 Jiashian earthquakes, we obtain a lower bound of ~0.002 for the coefficient of friction on the fault. On the other hand, studying the crustal thickness contrast in southern Taiwan provides an upper bound of 1.67 × 10^12 N/m on the average horizontal compressive force transmitted through the Taiwan mountain belt and gives an estimate of the maximum friction coefficient of 0.03. The order-of-magnitude difference between the upper and lower bounds for the coefficient of friction suggests that other fault systems may support substantial differential stress in the lithosphere as well.
NASA Astrophysics Data System (ADS)
Cavers, M. S.; Vasudevan, K.
2015-10-01
Directed graph representation of a Markov chain model to study global earthquake sequencing leads to a time series of state-to-state transition probabilities that includes the spatio-temporally linked recurrent events in the record-breaking sense. A state refers to a configuration comprised of zones with either the occurrence or non-occurrence of an earthquake in each zone in a pre-determined time interval. Since the time series is derived from non-linear and non-stationary earthquake sequencing, we use known analysis methods to glean new information. We apply decomposition procedures such as ensemble empirical mode decomposition (EEMD) to study the state-to-state fluctuations in each of the intrinsic mode functions. We subject the intrinsic mode functions, derived from the time series using the EEMD, to a detailed analysis to extract the information content of the time series. Also, we investigate the influence of random noise on the data-driven state-to-state transition probabilities. We consider a second aspect of earthquake sequencing that is closely tied to its time-correlative behaviour. Here, we extend the Fano factor and Allan factor analysis to the time series of state-to-state transition frequencies of a Markov chain. Our results support not only the usefulness of the intrinsic mode functions in understanding the time series but also the presence of power-law behaviour exemplified by the Fano factor and the Allan factor.
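The windowed count statistics named above have standard definitions; as an illustrative sketch (the function names and the synthetic Poisson check below are mine, not the authors'), the Fano and Allan factors of an event-time series can be computed as:

```python
import numpy as np

def window_counts(event_times, T):
    """Count events in consecutive windows of length T."""
    edges = np.arange(0.0, event_times.max() + T, T)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts

def fano_factor(event_times, T):
    """Fano factor: variance-to-mean ratio of the window counts."""
    counts = window_counts(event_times, T)
    return counts.var() / counts.mean()

def allan_factor(event_times, T):
    """Allan factor: mean squared difference of adjacent window counts,
    normalized by twice the mean count."""
    counts = window_counts(event_times, T)
    return np.mean(np.diff(counts) ** 2) / (2.0 * counts.mean())

# Both factors are ~1 for a homogeneous Poisson process; systematic growth
# with T (power-law behaviour) indicates clustering/time correlation.
rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(1.0, size=20000))
```

In the abstract's setting, `event_times` would be replaced by the state-to-state transition occurrence times, and the factors examined as functions of the window length T.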
NASA Astrophysics Data System (ADS)
Castaldo, Raffaele; Tizzani, Pietro
2016-04-01
Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of the deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquake phenomena. We exploit structural-geological data in a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy) and the Mw 7.8 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume linear elastic behavior of the involved media under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero). The performed finite element procedure consists of two stages: (i) a gravity-loading stage, in which the model compacts under the weight of the rock successions and reaches a stable equilibrium; (ii) a co-seismic stage, which simulates the released stresses through a distributed slip along the active fault. To constrain the model solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old- and new-generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL 1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models and then compare these with the recorded ground deformation fields in order to select the best boundary settings and parameters. Finally
NASA Astrophysics Data System (ADS)
Liu, Y.; McGuire, J. J.; Behn, M. D.
2013-12-01
We use a three-dimensional strike-slip fault model in the framework of rate- and state-dependent friction to investigate earthquake behavior and scaling relations on oceanic transform faults (OTFs). Gabbro friction data under hydrothermal conditions are mapped onto OTFs using temperatures from (1) a half-space cooling model, and (2) a thermal model that incorporates a visco-plastic rheology, non-Newtonian viscous flow and the effects of shear heating and hydrothermal circulation. Without introducing small-scale frictional heterogeneities on the fault, our model predicts that an OTF segment can transition between seismic and aseismic slip over many earthquake cycles, consistent with the multimode hypothesis for OTF ruptures. The average seismic coupling coefficient χ is strongly dependent on the ratio of seismogenic zone width W to earthquake nucleation size h*; χ increases by four orders of magnitude as W/h* increases from ~ 1 to 2. Specifically, the average χ = 0.15 +/- 0.05 derived from global OTF earthquake catalogs can be reached at W/h* ≈ 1.2-1.7. The modeled largest earthquake rupture area is less than the total seismogenic area, and we predict a deficiency of large earthquakes on long transforms, which is also consistent with observations. Earthquake magnitude and distribution on the Gofar (East Pacific Rise) and Romanche (equatorial Mid-Atlantic) transforms are better predicted using the visco-plastic model than the half-space cooling model. We will also investigate how fault gouge porosity variation during an OTF earthquake nucleation phase may affect the seismic wave velocity structure, for which a drop of up to 3% was observed prior to the 2008 Mw 6.0 Gofar earthquake.
The 2015, Mw 6.5, Leucas (Ionian Sea, Greece) earthquake: Seismological and Geodetic Modelling
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Taymaz, Tuncay; Yolsal-Çevikbilen, Seda; Eken, Tuna; Moschas, Fanis; Stiros, Stathis
2016-04-01
A cluster of earthquakes (6
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which itself is a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of a subevent can be taken into account because the model includes the subevent corner frequency, which is inversely proportional to the subevent length. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an
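The spectral recipe described in the abstract (omega-square source spectrum × path effect × site amplification, combined with the Fourier phase of a smaller event) can be sketched as follows; the function names and the treatment of the path and site terms as precomputed amplitude factors are illustrative assumptions, not Nozu's actual implementation:

```python
import numpy as np

def omega_square_spectrum(f, m0, fc):
    """Omega-square source displacement spectrum: flat at the seismic
    moment m0 below the corner frequency fc, decaying as f^-2 above it."""
    return m0 / (1.0 + (f / fc) ** 2)

def subevent_waveform(small_event, dt, m0, fc, path_amp, site_amp):
    """Impose the target Fourier amplitude (source x path x site) on the
    Fourier phase of a smaller event's record, then invert back to a
    time history for one subevent."""
    n = len(small_event)
    spec = np.fft.rfft(small_event)
    f = np.fft.rfftfreq(n, d=dt)
    target = omega_square_spectrum(f, m0, fc) * path_amp * site_amp
    phase = np.exp(1j * np.angle(spec))
    return np.fft.irfft(target * phase, n=n)
```

Summing such subevent waveforms with their rupture-time delays would give the motion from the entire rupture; the corner frequency fc is what carries the finite subevent size, as the abstract notes.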
Equivalent Body Force Finite Elements Method and 3-D Earth Model Applied In 2004 Sumatra Earthquake
NASA Astrophysics Data System (ADS)
Qu, W.; Cheng, H.; Shi, Y.
2015-12-01
The 26 December 2004 Sumatra-Andaman earthquake, with a moment magnitude (Mw) of 9.1 to 9.3, was the first great earthquake recorded by digital broadband, high-dynamic-range seismometers and global positioning system (GPS) equipment, which recorded many high-quality geophysical data sets. The spherical curvature of the Earth is not negligible in the far field, especially for a large event; moreover, the real Earth is laterally inhomogeneous, so analytical results have difficulty explaining the geodetic measurements. We use the equivalent body force finite element method of Zhang et al. (2015), meshing the whole Earth, to compute global co-seismic displacements for four fault slip models of the 2004 Sumatra earthquake provided by different authors. Comparisons of the calculated co-seismic displacements with GPS observations show good agreement in the near field for all four models, with the level of agreement varying among models. Of the four models, the Chlieh model (Chlieh et al., 2007) is the best, as it accords well not only with near-field data but also with far-field data. We then use this best slip model to explore the influence of three-dimensional lateral Earth structure, comparing a layered spherically symmetric model (PREM) with a real 3-D heterogeneous Earth model (CRUST 1.0 and GyPSuM). Results show that the effects of the 3-D heterogeneous Earth model are not negligible and decrease with increasing distance from the epicenter. The relative effects of the 3-D crust model are 23% and 40% for horizontal and vertical displacements, respectively. The effects of the 3-D mantle model are much smaller than those of the 3-D crust model but extend over a wider area.
Stability of massless non-conservative elastic systems
NASA Astrophysics Data System (ADS)
Ingerle, Kurt
2013-09-01
The critical forces of massless conservative systems can be calculated using equilibrium conditions on the deformed systems. However, if non-trivial equilibrium conditions are absent (e.g., Beck's and Ziegler's column), then the equilibrium method is not applicable. Therefore, these systems were supplied with mass and calculations were performed dynamically. This manuscript demonstrates that the static stability of massless non-conservative systems can be calculated using the energetic stability criterion.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
NASA Astrophysics Data System (ADS)
Ferreira, Ana Mg; Vallee, Martin; Charlety, Jean
2010-05-01
Accurate earthquake point source parameters (e.g. seismic moment, depth and focal mechanism) provide key first-order information for detailed studies of the earthquake source process and for improved seismic and tsunami hazard evaluation. In order to objectively assess the quality of seismic source models, it is important to go beyond classical resolution checks. In particular, it is desirable to apply sophisticated modelling techniques to quantify inaccuracies due to simplified theoretical formulations and/or Earth structure employed to build the source models. Moreover, it is important to verify how well the models explain data not used in their construction. In this study we assess the quality of the SCARDEC method (see joint abstracts), which is a new automated technique that retrieves simultaneously the seismic moment, focal mechanism, depth and source time functions of large earthquakes. Because the SCARDEC method is based on body-wave deconvolution using ray methods in a 1D Earth model, we test how well SCARDEC source parameters explain long-period seismic data (surface waves and normal modes). We calculate theoretical seismograms using two forward modelling techniques (full ray theory and spectral element method) to simulate the long-period seismic wavefield for the 3D Earth model S20RTS combined with the crust model CRUST2.0, and for two point source models: (i) the SCARDEC model; and (ii) the Global CMT model. We compare the synthetic seismograms with real broadband data from the FDSN for the major subduction earthquakes of the last 20 years. We show that SCARDEC source parameters explain long-period surface waves as well as Global CMT solutions. This can be explained by the fact that most of the differences between SCARDEC and Global CMT solutions are linked to correlated variations of the seismic moment and dip of the earthquakes, and it is theoretically known that for shallow earthquakes it is difficult to accurately resolve these two parameters using
A detailed seismic zonation model for shallow earthquakes in the broader Aegean area
NASA Astrophysics Data System (ADS)
Vamvakaris, D. A.; Papazachos, C. B.; Papaioannou, Ch. A.; Scordilis, E. M.; Karakaisis, G. F.
2016-01-01
In the present work we propose a new seismic zonation model of area-type sources for the broader Aegean area, which can be readily used for seismic hazard assessment. The definition of this model is based not only on seismicity information but incorporates all available seismotectonic and neotectonic information for the study area, in an attempt to define zones which show not only rather homogeneous seismicity release but also similar active faulting characteristics. For this reason, all available seismological information, such as fault plane solutions and the corresponding kinematic axes, has been incorporated in the analysis, as well as information about active tectonics, such as seismic and active faults. Moreover, various morphotectonic features (e.g. relief, coastline) were also considered. Finally, a revised seismic catalogue of earthquake epicentres since historical times (550 BC-2008) is employed in order to define areas of common seismotectonic characteristics that could constitute a discrete seismic zone. A new revised model of 113 seismic zones for shallow earthquakes in the broader Aegean area is finally proposed. Using the proposed zonation model, a detailed study is performed of the catalogue completeness for the recent instrumental period. Using the defined completeness information, seismicity parameters (such as G-R values) for the 113 new seismic zones have been calculated, and their spatial distribution was examined. The spatial variation of the obtained b values shows an excellent correlation with the geotectonic setting of the area, in good agreement with previous studies. Moreover, a quantitative estimation of seismicity is performed in terms of the mean return period, Tm, of large (M ≥ 6.0) earthquakes, as well as the most frequent maximum magnitude, Mt, for a typical time period (T = 50 yr), revealing significant spatial variations of seismicity levels within the study area. The new proposed
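For context, once a completeness magnitude Mc is fixed for a zone, the Gutenberg-Richter b value mentioned above is commonly estimated with the Aki/Utsu maximum-likelihood formula; the sketch below is the standard textbook form (function name and bin-width correction are generic, not taken from this study):

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood Gutenberg-Richter b value for magnitudes
    at or above the completeness threshold mc; dm is the magnitude bin
    width (use dm=0 for unbinned magnitudes)."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

With b and the completeness-corrected annual rate of M ≥ Mc events in a zone, the mean return period Tm of M ≥ 6.0 earthquakes follows directly from the G-R relation.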
Triggering processes of earthquake bursts in Japan: evidence from statistical modeling
NASA Astrophysics Data System (ADS)
Chen, X.; Kato, A.
2013-12-01
We search for spatially and temporally isolated earthquake bursts across Japan using the JMA catalog from 2000 to 2013. For each identified burst, we obtain a set of parameters, which include: Δσquasi (ratio between total moment release and volume of the burst), tmax, duration, radius, planarity and dip. A total of 290 bursts are identified, and 90 bursts exhibit a 'repeating-like' feature: they tend to occur within 2 km of at least one other burst. Bursts with tmax ≥ 0.05 exhibit significantly longer duration and lower Δσquasi. To understand the temporal evolution of possible external stressing-rate changes, we select 18 areas through examination of 'repeating' bursts, and apply the ETAS model to all earthquakes in each area with magnitude ≥ Mc (local). We compare models with a constant background seismicity rate μ0 and a time-varying μ(t); the latter generally produces a higher likelihood (better fit to observations). All 18 areas feature a high background seismicity fraction, ranging from 37% to 91%. Variations in background seismicity rate span one to four orders of magnitude. Increased aftershock productivity α (ranging from 0.9 to 1.5) is generally observed for models with μ(t). For earthquakes within the Izu-Tobu volcanic area and during the 2000 Miyakejima eruption, extremely fast Omori-law aftershock decay (p > 3) and a high background fraction (≥ 90%) are observed. Seismicity in these two areas is almost entirely related to dike intrusion processes with very little earthquake interaction, and the high p-value may relate to strong stress heterogeneity or temperature. The background seismicity rates in the 18 areas are usually a superposition of a smooth slow transient process and pulse-like sudden onsets with exponential decay. For comparison, we obtain ETAS parameters for six shallow crustal mainshock-aftershock sequences with Mw ≥ 6.5, including earthquakes prior to the mainshocks in the modeling. These sequences all feature higher aftershock productivity (α>2
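The ETAS fits referred to above are built on Ogata's conditional intensity; a minimal sketch of the constant-background (μ0) variant follows, with parameter names in the standard temporal ETAS convention (the authors' time-varying μ(t) model would simply replace the constant mu):

```python
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, mc):
    """Temporal ETAS conditional intensity (Ogata, 1988): a constant
    background rate mu plus Omori-Utsu aftershock contributions from all
    events before time t, each scaled exponentially by its magnitude
    above the completeness threshold mc."""
    past = times < t
    trig = K * np.exp(alpha * (mags[past] - mc)) / (t - times[past] + c) ** p
    return mu + trig.sum()
```

Model comparison between constant μ0 and time-varying μ(t) then proceeds through the maximized log-likelihood of the observed catalog, which is what "higher likelihood" refers to in the abstract.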
Using GPS to Rapidly Detect and Model Earthquakes and Transient Deformation Events
NASA Astrophysics Data System (ADS)
Crowell, Brendan W.
The rapid modeling and detection of earthquakes and transient deformation is a problem of extreme societal importance for earthquake early warning and rapid hazard response. To date, GPS data are not used in earthquake early warning or rapid source modeling, even in Japan or California, where the most extensive geophysical networks exist. This dissertation focuses on creating algorithms for automated modeling of earthquakes and transient slip events using GPS data in the western United States and Japan. First, I focus on the creation and use of high-rate GPS and combined seismogeodetic data for applications in earthquake early warning and rapid slip inversions. Leveraging data from earthquakes in Japan and southern California, I demonstrate that an accurate magnitude estimate can be made within seconds using P-wave displacement scaling, and that a heterogeneous static slip model can be generated within 2-3 minutes. The preliminary source characterization is sufficiently robust to independently confirm the extent of fault slip used for rapid assessment of strong ground motions and improved tsunami warning in subduction zone environments. Secondly, I investigate the automated detection of transient slow slip events in Cascadia using daily positional estimates from GPS. Proper geodetic characterization of transient deformation is necessary for studies of regional interseismic, coseismic and postseismic tectonics, and miscalculations can affect our understanding of the regional stress field. I utilize the relative strength index (RSI) from financial forecasting to create a complete record of slow slip from continuous GPS stations in the Cascadia subduction zone between 1996 and 2012, fully characterizing the timing, progression, and magnitude of events. Finally, using a combination of continuous and campaign GPS measurements, I characterize the amount of extension, shear and subsidence in the
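As a hedged illustration of the RSI idea borrowed from financial forecasting, the simple-average (non-smoothed) textbook variant applied to a daily position series looks like the sketch below; the window length, smoothing, and detection thresholds actually used in the dissertation are not given in the abstract, so everything here is a generic assumption:

```python
import numpy as np

def rsi(series, n=14):
    """Relative strength index (simple-average variant) of a time series.
    Values near 100 flag sustained one-sided motion (e.g. a slow slip
    transient in a GPS position component); values near 50 indicate
    direction-neutral noise. A flat window maps to 100 in this sketch."""
    delta = np.diff(series)
    out = np.full(len(series), np.nan)   # undefined until n samples exist
    for i in range(n, len(series)):
        window = delta[i - n:i]
        gain = window[window > 0].sum()
        loss = -window[window < 0].sum()
        if loss == 0:
            out[i] = 100.0
        elif gain == 0:
            out[i] = 0.0
        else:
            out[i] = 100.0 - 100.0 / (1.0 + gain / loss)
    return out
```

A detector would then flag epochs where the RSI of a detrended east-component series stays beyond some threshold for several days; those choices are tuning decisions, not fixed by the method.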
Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models
Geist, Eric L.
2015-01-01
Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least-variable slip examined involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad include subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed prior to the tsunami coda being completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.
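For reference, the resonance conditions invoked above can be checked directly against an edge-wave dispersion relation. The sketch below assumes Ursell's plane-beach dispersion, ω² = gk sin((2n+1)β), and a sum-type triad (k3 = k1 + k2, ω3 = ω1 + ω2); this is one convention among several, and the subharmonic triads described in the abstract correspond to the difference-type counterpart:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def edge_wave_omega(k, n, beta):
    """Ursell dispersion for a mode-n edge wave on a plane beach of slope
    beta (radians): omega^2 = g * k * sin((2n+1)*beta), valid while
    (2n+1)*beta <= pi/2."""
    return np.sqrt(G * k * np.sin((2 * n + 1) * beta))

def triad_mismatch(k1, n1, k2, n2, n3, beta):
    """Frequency mismatch of a sum-type triad whose third wave has
    wavenumber k3 = k1 + k2 and mode n3; resonant coupling requires a
    mismatch near zero."""
    k3 = k1 + k2
    return (edge_wave_omega(k1, n1, beta) + edge_wave_omega(k2, n2, beta)
            - edge_wave_omega(k3, n3, beta))
```

For three waves of the same low mode the mismatch is strictly positive (√k1 + √k2 > √(k1 + k2)), so closing a triad forces mode mixing, consistent with the mixed-mode triads reported in the abstract.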
NASA Astrophysics Data System (ADS)
Reitman, N. G.; Briggs, R.; Gold, R. D.; DuRoss, C. B.
2015-12-01
Post-earthquake, field-based assessments of surface displacement commonly underestimate offsets observed with remote sensing techniques (e.g., InSAR, image cross-correlation) because they fail to capture the total deformation field. Modern earthquakes are readily characterized by comparing pre- and post-event remote sensing data, but historical earthquakes often lack pre-event data. To overcome this challenge, we use historical aerial photographs to derive pre-event digital surface models (DSMs), which we compare to modern, post-event DSMs. Our case study focuses on resolving on- and off-fault deformation along the Lost River fault that accompanied the 1983 M6.9 Borah Peak, Idaho, normal-faulting earthquake. We use 343 aerial images from 1952-1966 and vertical control points selected from National Geodetic Survey benchmarks measured prior to 1983 to construct a pre-event point cloud (average ~0.25 pts/m²) and corresponding DSM. The post-event point cloud (average ~1 pt/m²) and corresponding DSM are derived from WorldView 1 and 2 scenes processed with NASA's Ames Stereo Pipeline. The point clouds and DSMs are coregistered using vertical control points, an iterative closest point algorithm, and a DSM coregistration algorithm. Preliminary results of differencing the coregistered DSMs reveal a signal spanning the surface rupture that is consistent with tectonic displacement. Ongoing work is focused on quantifying the significance of this signal and on error analysis. We expect this technique to yield a more complete understanding of on- and off-fault deformation patterns associated with the Borah Peak earthquake along the Lost River fault and to help improve assessments of surface deformation for other historical ruptures.
A post-seismic deformation model after the 2010 earthquakes in Latin America
NASA Astrophysics Data System (ADS)
Sánchez, Laura; Drewes, Hermann; Schmidt, Michael
2015-04-01
The Maule 2010 earthquake in Chile generated the largest displacements of geodetic observation stations ever observed in terrestrial reference systems. Coordinate changes reached up to 4 meters, and deformations were measurable at distances of more than 1000 km from the epicentre. The station velocities in the regions adjacent to the epicentre changed dramatically after the seismic event; while they were oriented eastward at approximately 2 cm/year before the event, they are now directed westward at about 1 cm/year. The 2010 Baja California earthquake in Mexico produced displacements at the decimetre level, also followed by anomalous velocity changes. The main problem in geodetic applications is that there is no reliable reference system to be used practically in the region. For geophysical applications we have to redefine the tectonic structure in South America. The area south of 35°S to 40°S was considered a stable part of the South American plate. Now we see that there are large and extended crustal deformations. The paper presents a new multi-year velocity model computed from the Geocentric Reference System of the Americas (SIRGAS) including only the four years after the seismic events (mid-2010 to mid-2014). These velocities are used to derive a continuous deformation model of the entire Latin American region from Mexico to Tierra del Fuego. The model is compared with the pre-earthquake SIRGAS velocity model (VEMOS2009).
NASA Astrophysics Data System (ADS)
Borrero, Jose C.; Kalligeris, Nikos; Lynett, Patrick J.; Fritz, Hermann M.; Newman, Andrew V.; Convers, Jaime A.
2014-12-01
On 27 August 2012 (04:37 UTC, 26 August 10:37 p.m. local time) a magnitude Mw = 7.3 earthquake occurred off the coast of El Salvador and generated a surprisingly large local tsunami. Following the event, local and international tsunami teams surveyed the tsunami effects in El Salvador and northern Nicaragua. The tsunami reached a maximum height of ~6 m with inundation of up to 340 m inland along a 25 km section of coastline in eastern El Salvador. Less severe inundation was reported in northern Nicaragua. In the far field, the tsunami was recorded by a DART buoy and tide gauges at several locations in the eastern Pacific Ocean but did not cause any damage. The field measurements and recordings are compared to numerical modeling results using initial conditions of tsunami generation based on finite-fault earthquake and tsunami inversions and a uniform slip model.
NASA Astrophysics Data System (ADS)
Toledo-Redondo, Sergio; Salinas, Alfonso; Fornieles, Jesús; Portí, Jorge
2013-04-01
Schumann resonances (SR) are global phenomena that occur within the Earth-ionosphere cavity. They are the result of waves propagating several times around the Earth. Due to the dimensions of the cavity, SR belong to the ELF spectrum. The main source of excitation is lightning, and several natural processes, such as seismo-electromagnetic activity, atmospheric aerosols and solar radiation, modify the geometry of the cavity and its parameters. SR are therefore a promising tool for monitoring (and even forecasting) these natural events. Although several measurements seem to confirm the link between electromagnetic activity and earthquake precursors, the physical mechanisms that produce them are still not clear, and several possibilities have been proposed, such as piezoelectric effects in rocks of the lithosphere, the emanation of ionizing gases like radon, or acoustic gravity waves modifying the properties of the ionosphere in the earthquake preparation zone. However, further measurements combined with analytical models and/or numerical simulations are required in order to better understand seismo-electromagnetic activity. In this work, the whole Earth-ionosphere electromagnetic cavity has been modeled with 10 km accuracy by means of the Transmission-Line Modeling (TLM) method. Since Schumann resonance parameters depend primarily on the geometry of the cavity, electromagnetic changes produced by earthquake precursors can modify the properties of SR. There is little quantitative information available about the changes produced by the precursors, whether in the lithosphere, atmosphere, or ionosphere. Therefore, different models of the precursors are proposed and their consequences for the SR are evaluated. The Chi-Chi earthquake is employed as a case study.
NASA Astrophysics Data System (ADS)
Gelfenbaum, G. R.; La Selle, S.; Witter, R. C.; Sugawara, D.; Jaffe, B. E.
2015-12-01
Inferring the relative magnitude of tsunamis generated during earthquakes based on the characteristics of sandy coastal deposits is a challenging problem. Using a hydrodynamic and sediment transport model, we explore whether the volume of sandy tsunami deposits can be used to infer tsunami magnitude and seafloor deformation. For large subduction zone earthquakes specifically, we are testing the hypothesis that onshore tsunami deposit volume is correlated with nearshore tsunami wave height and coseismic slip. First, we test this hypothesis using onshore tsunami deposit volume data and offshore slip for the 2011 Tohoku earthquake and tsunami. This test considers tsunami deposit volume and offshore slip as they vary alongshore across a wide range of sediment sources, offshore and onshore slopes, and boundary roughness conditions. Preliminary analysis suggests that a strong correlation exists between onshore tsunami deposit volume and adjacent offshore coseismic slip, so long as ample sediment was available along the coast to be eroded. Second, we apply a Delft3D tsunami inundation and sediment transport model to Stardust Bay in the U.S. Aleutian Islands, where 6 tsunamis in the last ~1700 years deposited marine sand across a coastal plain as much as 800 m inland and up to ~15 m above mean sea level. The youngest sand sheet, probably deposited by a tsunami generated during the 1957 Andreanof Islands earthquake (Mw 8.6), has the smallest sediment volume. Several older deposits have larger volumes. Models show that ≥10 m of slip on the Aleutian subduction megathrust offshore of Stardust Bay could produce the onshore sediment volume measured for the 1957 deposit. Older tsunami deposits of greater volume require up to 14 m of megathrust slip. Model sensitivity studies show that onshore sediment volume is most sensitive to megathrust slip and less sensitive to other unknowns such as the width of fault rupture and the roughness of inundated terrain.
Numerical model of the glacially-induced intraplate earthquakes and faults formation
NASA Astrophysics Data System (ADS)
Petrunin, Alexey; Schmeling, Harro
2016-04-01
According to plate tectonics, most earthquakes are caused by moving lithospheric plates and are located mainly at plate boundaries. However, some significant seismic events may be located far away from these active areas. The nature of such intraplate earthquakes remains unclear. It is assumed that the triggering of seismicity in eastern Canada and northern Europe might be a result of glacier retreat during a glacial-interglacial cycle (GIC). Previous numerical models show that the impact of glacial loading and the following isostatic adjustment is able to trigger seismicity on pre-existing faults, especially during the deglaciation stage. However, these models do not explain strong glaciation-induced historical earthquakes (M5-M7). Moreover, numerous studies link the location and age of major faults in regions glaciated during the last glacial maximum to the glacier dynamics. This probably implies that the GIC might be a reason for fault-system formation. Our numerical model provides an analysis of the stress-strain evolution during the GIC using the finite volume approach realised in the numerical code Lapex 2.5D, which is able to operate with large strains and visco-elasto-plastic rheology. To simulate self-organizing faults, a damage rheology model is implemented within the code, which makes it possible not only to visualize faulting but also to estimate the energy release during the seismic cycle. The modeling domain includes a two-layered crust, the lithospheric mantle and the asthenosphere, which makes it possible to simulate the elasto-plastic response of the lithosphere to glaciation-induced loading (unloading) and viscous isostatic adjustment. We have considered three scenarios for the model: horizontal extension, compression and fixed boundary conditions. Modeling results generally confirm suppressed seismic activity during glaciation phases, whereas the retreat of a glacier triggers earthquakes for several thousand years. Tip of the glacier
NASA Astrophysics Data System (ADS)
Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.
2010-12-01
HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal-fault intraslab events and scenario large-magnitude Seattle reverse-fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships, developed from source parameters of both regional and global historical earthquakes, to estimate strong ground motion. Ground motion and resulting ground failure due to earthquakes are then used to calculate direct physical damage for general building stock, essential facilities, and lifelines, including transportation systems and utility systems. Earthquake losses are expressed in structural, economic and social terms. Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region
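The attenuation step in the methodology above can be illustrated with a toy ground-motion relation. The functional form ln(PGA) = c0 + c1·M − c2·ln(R + c3) is a common generic shape for empirical attenuation relationships; the coefficients below are placeholders chosen purely for illustration and are not the relationships actually used in HAZUS-MH.

```python
import math

def toy_pga(magnitude: float, distance_km: float,
            c0: float = -3.5, c1: float = 1.0,
            c2: float = 1.5, c3: float = 10.0) -> float:
    """Median peak ground acceleration (in g) from a generic attenuation
    form ln(PGA) = c0 + c1*M - c2*ln(R + c3).  Coefficients are
    illustrative placeholders, not calibrated HAZUS-MH values."""
    return math.exp(c0 + c1 * magnitude - c2 * math.log(distance_km + c3))
```

Even with placeholder coefficients, the form reproduces the expected qualitative behavior: predicted motion grows with magnitude and decays with distance, which is what drives the damage and loss calculations downstream.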
A bilinear source-scaling model for M-log a observations of continental earthquakes
Hanks, T.C.; Bakun, W.H.
2002-01-01
The Wells and Coppersmith (1994) M-log A data set for continental earthquakes (where M is moment magnitude and A is fault area) and the regression lines derived from it are widely used in seismic hazard analysis for estimating M, given A. Their relations are well determined, whether for the full data set of all mechanism types or for the subset of strike-slip earthquakes. Because the coefficient of the log A term is essentially 1 in both their relations, they are equivalent to constant stress-drop scaling, at least for M ≤ 7, where most of the data lie. For M > 7, however, both relations increasingly underestimate the observations with increasing M. This feature, at least for strike-slip earthquakes, is strongly suggestive of L-model scaling at large M. Using constant stress-drop scaling (Δσ = 26.7 bars) for M ≤ 6.63 and L-model scaling (average fault slip ū = αL, where L is fault length and α = 2.19 × 10⁻⁵) at larger M, we obtain the relations M = log A + 3.98 ± 0.03 for A ≤ 537 km² and M = 4/3 log A + 3.07 ± 0.04 for A > 537 km². These prediction equations of our bilinear model fit the Wells and Coppersmith (1994) data set well in their respective ranges of validity, the transition magnitude corresponding to A = 537 km² being M = 6.71.
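The two prediction equations quoted above translate directly into code; the sketch below simply evaluates the stated bilinear relations, with the branch point at A = 537 km².

```python
import math

def bilinear_magnitude(area_km2: float) -> float:
    """Bilinear M-log A model of Hanks and Bakun (2002):
    constant stress-drop scaling (M = log A + 3.98) up to the
    transition area of 537 km^2, and L-model scaling
    (M = 4/3 log A + 3.07) above it."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07
```

Both branches meet at the transition magnitude M = 6.71 for A = 537 km², consistent with the value quoted in the abstract.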
Testing Physical Models of Episodic Tremor and Slip with Earthquake and Creep Data
NASA Astrophysics Data System (ADS)
Gomberg, J.; Pratt, T.; Bodin, P.
2006-12-01
We propose that the existence or lack of temporal and spatial correlations between characteristics of earthquake and creep activity may provide constraints on frictional models developed to explain episodic tremor and aseismic slip (ETS). Frictional models that predict aseismic episodic slip and involve fluids and elevated pore pressures are qualitatively consistent with suggestions that fluids also play a role in episodic tremor. Published models of ETS invoke variations in frictional parameters that are static spatial features and/or result from temporal changes in pore pressures that affect the frictional properties (e.g., see Liu and Rice, 2005). Frictional models predict that elevated pore pressures cause higher aftershock rates (Beeler et al., 2001), and when pore pressures are sufficiently elevated earthquakes will more easily occur on unfavorably oriented fault planes (Sibson, 1990). Different frictional regimes also predict different ambient seismicity rate characteristics (Boatwright and Cocco, 1996). If episodic tremor is associated with transient increases in fluid pressure, and both are inferred to occur throughout a volume tens of kilometers in width above the related episodic aseismic slip event (Kao et al., 2005), these changes may be observable as temporal variations in seismic velocities in tomographic images derived from earthquake travel times. ETS has been observed mostly in subduction zones, and models focus on explaining episodic aseismic slip that occurs down-dip of the locked portion of subduction interface faults. Similar frictional models have been invoked to explain shallow fault creep on crustal strike-slip faults, both as steady creep and as episodic slip events (Marone et al., 1988). Thus, we also examine observations of such creep, and the potential for contemporary tremor, for lessons about the conditions leading to ETS. Beeler, N. M., Gomberg, J., Blanpied, M. L., Marone, C., and Richardson, E. (2001), Compaction-induced pore
NASA Astrophysics Data System (ADS)
Sri Lakshmi, S.; Tiwari, R. K.
2009-02-01
This study utilizes two non-linear approaches to characterize the model behavior of earthquake dynamics in the crucial tectonic regions of Northeast India (NEI). In particular, we have applied (i) a non-linear forecasting technique to assess the dimensionality of the earthquake-generating mechanism using the monthly frequency earthquake time series (magnitude ⩾4) obtained from NOAA and USGS catalogues for the period 1960-2003 and (ii) artificial neural network (ANN) methods, based on the back-propagation algorithm (BPA), to construct a neural network model of the same data set for comparison. We have constructed a multilayered feed-forward ANN model with an optimum input set configuration, specially designed to take more complete advantage of the intrinsic relationships among the input and retrieved variables and to arrive at a feasible model for earthquake prediction. The comparative analyses show that the results obtained by the two methods are stable and in good agreement, and signify that the optimal embedding dimension obtained from the non-linear forecasting analysis compares well with the optimal number of inputs used for the neural networks. The constructed model suggests that the earthquake dynamics in the NEI region can be characterized by a high-dimensional chaotic plane. Evidence of high-dimensional chaos appears to be associated with "stochastic seasonal" bias in these regions and would provide some useful constraints for testing the model and criteria to assess earthquake hazards on a more rigorous and quantitative basis.
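The dimensionality assessment in approach (i) relies on reconstructing the dynamics from a scalar series via delay-coordinate embedding. The helper below is a generic sketch of that standard construction; the dimension and lag values shown are arbitrary, since the abstract does not give the actual embedding parameters.

```python
def delay_embed(series, dim, lag=1):
    """Build delay-coordinate vectors
    x_i = (s_i, s_{i+lag}, ..., s_{i+(dim-1)*lag})
    from a scalar time series -- the standard construction used in
    non-linear forecasting to probe the embedding dimension."""
    n = len(series) - (dim - 1) * lag
    return [[series[i + j * lag] for j in range(dim)] for i in range(n)]
```

Forecast skill computed as a function of dim then indicates the optimal embedding dimension, the quantity the abstract compares against the optimal number of ANN inputs.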
NASA Astrophysics Data System (ADS)
Glasscoe, Margaret T.; Wang, Jun; Pierce, Marlon E.; Yoder, Mark R.; Parker, Jay W.; Burl, Michael C.; Stough, Timothy M.; Granat, Robert A.; Donnellan, Andrea; Rundle, John B.; Ma, Yu; Bawden, Gerald W.; Yuen, Karen
2015-08-01
Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision making utilizing remote sensing data and modeling software to provide decision support for earthquake disaster management and response. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, allow us to provide both long-term planning information for disaster management decision makers and short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage have occurred and where emergency services may need to be focused). This in turn is delivered through standards-compliant web services for desktop and hand-held devices.
NASA Astrophysics Data System (ADS)
Moore, J. C.; Chester, F. M.; Plank, T. A.; Polissar, P. J.; Savage, H. M.
2013-12-01
At its deformation front, the Tohoku Earthquake's displacement was about 50 m. Slip was localized along a decollement consisting of pelagic brown clay. The brown clay of the Tohoku decollement at Site C0019 correlates lithologically with a lower portion of the pelagic brown clay section of Site 436, the 'reference' drill site closest to Site C0019 on the subducting Pacific Plate. The brown clay at Site 436 is of Eocene age, is dominated by smectite and illite, is extremely fine-grained, and unconformably overlies Cretaceous cherty mudstone. Similar pelagic clay layers occur at drill sites to the east and northeast of the Japan Trench. Pelagic clay may have been the slip surface of the 1896 Sanriku tsunami earthquake, located just north of the Tohoku earthquake along the Japan Trench. Both the Tohoku and Sanriku earthquakes may also have been facilitated by the lack of large seamounts on the incoming Pacific Plate east of the Japan Trench. Moreover, the relatively thin sedimentary section overlying the pelagic clays in the Japan Trench may be offscraped and enable decollement development in the weak pelagic clay; this may not occur where a thick incoming terrigenous section offers options for decollement development at depth above the pelagic clays. Backtracking the plate motions of Sites 436 and C0019 shows that they initiated in the southern Pacific Ocean, accumulated concentrations of siliceous sediments crossing beneath the equatorial upwelling zone, entered the central North Pacific desert of pelagic clay deposition, and finally approached the margin of Japan, where Neogene terrigenous-ashy-siliceous sediments dominate the upper section. Pelagic clay deposits are common in the central, deep portions of the oceans that are shielded from terrigenous input. Thus, subduction of relatively smooth oceanic crust overlain by pelagic clay with a modest overburden of younger sediments may potentially foster a tsunami earthquake.
Nonconservation of momentum in classical mechanics
NASA Astrophysics Data System (ADS)
Lee, Chunghyoung
Pérez Laraudogoitia (1996) presented an isolated system of infinitely many particles with infinite total mass whose total classical energy and momentum are not necessarily conserved in some particular inertial frame of reference. With a more generalized model Atkinson (2007) proved that a system of infinitely many balls with finite total mass may evolve so that its total classical energy and total relativistic energy and momentum are not conserved in any inertial frame of reference, and yet concluded that its total classical momentum is necessarily conserved. Contrary to this conclusion of Atkinson, I show that Atkinson's model has a solution in which the total momentum fails to be conserved in every inertial frame of reference. This result, combined with Atkinson's, demonstrates that both classical and relativistic mechanics allow the energy and momentum of a system of infinitely many components to fail to be conserved in every inertial frame of reference.
NASA Astrophysics Data System (ADS)
Rollins, Christopher; Barbot, Sylvain; Avouac, Jean-Philippe
2015-05-01
Due to its location on a transtensional section of the Pacific-North American plate boundary, the Salton Trough is a region featuring large strike-slip earthquakes within a regime of shallow asthenosphere, high heat flow, and complex faulting, and so postseismic deformation there may feature enhanced viscoelastic relaxation and afterslip that is particularly detectable at the surface. The 2010 El Mayor-Cucapah earthquake was the largest shock in the Salton Trough since 1892 and occurred close to the US-Mexico border, and so the postseismic deformation recorded by the continuous GPS network of southern California provides an opportunity to study the rheology of this region. Three-year postseismic transients extracted from GPS displacement time-series show four key features: (1) 1-2 cm of cumulative uplift in the Imperial Valley and 1 cm of subsidence in the Peninsular Ranges, (2) relatively large cumulative horizontal displacements 150 km from the rupture in the Peninsular Ranges, (3) rapidly decaying horizontal displacement rates in the first few months after the earthquake in the Imperial Valley, and (4) sustained horizontal velocities, following the rapid early motions, that were still visibly ongoing 3 years after the earthquake. Kinematic inversions show that the cumulative 3-year postseismic displacement field can be well fit by afterslip on and below the coseismic rupture, though these solutions require afterslip with a total moment equivalent to at least an earthquake and higher slip magnitudes than those predicted by coseismic stress changes. Forward modeling shows that stress-driven afterslip and viscoelastic relaxation in various configurations within the lithosphere can reproduce the early and later horizontal velocities in the Imperial Valley, while Newtonian viscoelastic relaxation in the asthenosphere can reproduce the uplift in the Imperial Valley and the subsidence and large westward displacements in the Peninsular Ranges. We present two forward
Evolution of wealth in a non-conservative economy driven by local Nash equilibria
Degond, Pierre; Liu, Jian-Guo; Ringhofer, Christian
2014-01-01
We develop a model for the evolution of wealth in a non-conservative economic environment, extending a theory developed in Degond et al. (2014 J. Stat. Phys. 154, 751–780 (doi:10.1007/s10955-013-0888-4)). The model considers a system of rational agents interacting in a game-theoretical framework. This evolution drives the dynamics of the agents in both wealth and economic configuration variables. The cost function is chosen to represent a risk-averse strategy of each agent. That is, the agent is more likely to interact with the market, the more predictable the market, and therefore the smaller its individual risk. This yields a kinetic equation for an effective single particle agent density with a Nash equilibrium serving as the local thermodynamic equilibrium. We consider a regime of scale separation where the large-scale dynamics is given by a hydrodynamic closure with this local equilibrium. A class of generalized collision invariants is developed to overcome the difficulty of the non-conservative property in the hydrodynamic closure derivation of the large-scale dynamics for the evolution of wealth distribution. The result is a system of gas dynamics-type equations for the density and average wealth of the agents on large scales. We recover the inverse Gamma distribution, which has been previously considered in the literature, as a local equilibrium for particular choices of the cost function. PMID:25288808
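The inverse Gamma equilibrium mentioned at the end of the abstract can be written out explicitly. The sketch below uses the standard parameterization f(w) = bᵃ/Γ(a) · w^(−a−1) · e^(−b/w); the shape and scale values used in the checks are illustrative only, since the actual equilibrium parameters depend on the chosen cost function.

```python
import math

def inverse_gamma_pdf(w: float, shape: float, scale: float) -> float:
    """Standard inverse-Gamma density
    f(w) = scale**shape / Gamma(shape) * w**(-shape - 1) * exp(-scale / w),
    defined for w > 0; used here as the local wealth equilibrium."""
    if w <= 0.0:
        return 0.0
    return (scale ** shape / math.gamma(shape)
            * w ** (-shape - 1.0) * math.exp(-scale / w))
```

Its heavy right tail (power-law decay ~ w^(−shape−1)) is what makes the inverse Gamma a natural candidate for empirical wealth distributions.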
Stability of earthquake clustering models: Criticality and branching ratios
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Werner, Maximilian J.; Harte, David S.
2013-12-01
We study the stability conditions of a class of branching processes prominent in the analysis and modeling of seismicity. This class includes the epidemic-type aftershock sequence (ETAS) model as a special case, but more generally comprises models in which the magnitude distribution of direct offspring depends on the magnitude of the progenitor, such as the branching aftershock sequence (BASS) model and another recently proposed branching model based on a dynamic scaling hypothesis. These stability conditions are closely related to the concepts of the criticality parameter and the branching ratio. The criticality parameter summarizes the asymptotic behavior of the population after sufficiently many generations, determined by the maximum eigenvalue of the transition equations. The branching ratio is defined as the proportion of triggered events among all events. Based on the results for the generalized case, we show that the branching ratio of the ETAS model is identical to its criticality parameter because its magnitude density is separable from the full intensity. More generally, however, these two values differ and thus place separate conditions on model stability. As an illustration of the difference and of the importance of the stability conditions, we employ a version of the BASS model, reformulated to ensure the possibility of stationarity. In addition, we analyze the magnitude distributions of successive generations of the BASS model via analytical and numerical methods, and find that the compound density differs substantially from a Gutenberg-Richter distribution, unless the process is essentially subcritical (branching ratio less than 1) or the magnitude dependence between the parent event and the direct offspring is weak.
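The role of the branching ratio as a stability threshold can be illustrated with a toy Galton-Watson cascade, a deliberate simplification of ETAS-type models (no magnitudes, no temporal kernel) in which each event independently triggers a Poisson number of direct offspring. For a branching ratio n < 1 the expected total cascade size is 1/(1 − n); at n ≥ 1 cascades can grow without bound.

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw from Poisson(lam) using Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cascade_size(branching_ratio: float, rng: random.Random,
                 max_events: int = 1_000_000) -> int:
    """Total number of events (spontaneous parent included) in one
    cascade where every event triggers Poisson(branching_ratio)
    direct offspring.  Subcritical cascades (< 1) die out almost
    surely; max_events guards against the (super)critical case."""
    total = pending = 1
    while pending and total < max_events:
        pending -= 1
        children = poisson_sample(branching_ratio, rng)
        pending += children
        total += children
    return total
```

Averaging many simulated cascades at branching ratio 0.5 gives a mean size close to the theoretical 1/(1 − 0.5) = 2, while pushing the ratio toward 1 makes cascades (and their variance) blow up.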
NASA Astrophysics Data System (ADS)
Juanes, R.; Jha, B.; Hager, B. H.; Shaw, J. H.; Plesch, A.; Astiz, L.; Dieterich, J. H.; Frohlich, C.
2016-07-01
Seismicity induced by fluid injection and withdrawal has emerged as a central element of the scientific discussion around subsurface technologies that tap into water and energy resources. Here we present the application of coupled flow-geomechanics simulation technology to the post mortem analysis of a sequence of damaging earthquakes (Mw = 6.0 and 5.8) in May 2012 near the Cavone oil field, in northern Italy. This sequence raised the question of whether these earthquakes might have been triggered by activities due to oil and gas production. Our analysis strongly suggests that the combined effects of fluid production and injection from the Cavone field were not a driver for the observed seismicity. More generally, our study illustrates that computational modeling of coupled flow and geomechanics permits the integration of geologic, seismotectonic, well log, fluid pressure and flow rate, and geodetic data and provides a promising approach for assessing and managing hazards associated with induced seismicity.
Modeling the pre-earthquake electrostatic effect on the F region ionosphere
NASA Astrophysics Data System (ADS)
Kim, V. P.; Liu, J. Y.; Hegai, V. V.
2012-12-01
This paper presents the results of modeling the ionospheric effect of the seismogenic electrostatic field (SEF) seen at the Earth's surface as a perturbation of the vertical atmospheric electrostatic field in the earthquake preparation zone. The SEF distribution at ionospheric altitudes is obtained as an analytical solution of the continuity equation for the electric current density. It is shown that at night, a horizontally large-scale SEF can efficiently penetrate into the ionosphere and produce noticeable changes in the horizontal distribution of the F region electron density. The results suggest that the seismogenic electrostatic field could be a possible source for the ionospheric variations observed over Taiwan before the strong Chi-Chi earthquake of September 21, 1999.
Finite element modeling of stress in the Nazca plate - Driving forces and plate boundary earthquakes
NASA Technical Reports Server (NTRS)
Richardson, R. M.
1978-01-01
The state of stress within the Nazca plate due to plate driving forces and large plate-boundary earthquakes has been analyzed by applying a finite element method, using the wave-front solution technique, to refined-grid models of the intraplate stress field in a single plate. Although only static elastic models have been explicitly calculated, certain limiting cases of an elastic plate over a viscous asthenosphere were also treated. A state of nearly east-west compression, inferred from the source mechanisms of thrust earthquakes in the interior of the plate, requires ridge pushing forces. The net pulling force on the oceanic plate by the subducted slab has a maximum value comparable to the pushing forces. The estimated horizontal deviatoric stress in intraplate regions, based on potential forces associated with the ridge, is on the order of a few hundred bars. The intraplate stress field in the region of the 1960 earthquake may change by a few tens of bars at most once the asthenosphere has relaxed, with changes on the order of one bar occurring at greater distances into the plate. The changes in the intraplate stress field are probably not noticeable unless the lithosphere is near failure.
NASA Technical Reports Server (NTRS)
Rundle, John B.
1988-01-01
The idea that earthquakes represent a fluctuation about the long-term motion of plates is expressed mathematically through the fluctuation hypothesis, under which all physical quantities that pertain to the occurrence of earthquakes are required to depend on the difference between the present state of slip on the fault and its long-term average. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state, and that the latter states are stable for long intervals of time. For long enough faults, the dynamics are evidently chaotic. The methods developed are then used to construct a detailed model for earthquake dynamics in southern California. The result is a set of slip-time histories for all the major faults, which are similar to data obtained by geological trenching studies. Although there is an element of periodicity to the events, the patterns shift, change and evolve with time. Time scales for pattern evolution seem to be of the order of a thousand years for average recurrence intervals of about a hundred years.
Packaged Fault Model for Geometric Segmentation of Active Faults Into Earthquake Source Faults
NASA Astrophysics Data System (ADS)
Nakata, T.; Kumamoto, T.
2004-12-01
In Japan, the empirical formula proposed by Matsuda (1975), based mainly on the length of historical surface fault ruptures and magnitude, is generally applied to estimate the size of future earthquakes from the extent of existing active faults for seismic hazard assessment. Therefore, the validity of the active fault length and the definition of individual segment boundaries, where propagating ruptures terminate, are essential and crucial to the reliability of the assessments. It is, however, rarely possible to identify behavioral earthquake segments clearly from observations of surface faulting during the historical period, because most active faults in Japan have recurrence intervals longer than 1000 years. Moreover, the uncertainties of the datasets obtained mainly from fault-trenching studies are quite large for fault grouping/segmentation. This is why new methods or criteria should be applied for active fault grouping/segmentation, and one candidate is a geometric criterion for active faults. Matsuda (1990) used "five kilometers" as a critical distance for the grouping and separation of neighboring active faults. On the other hand, Nakata and Goto (1998) proposed geometric criteria such as (1) branching features of active fault traces and (2) characteristic patterns of vertical-slip distribution along the fault traces as tools to predict the rupture length of future earthquakes. Branching during fault rupture propagation is regarded as an effective energy dissipation process and could result in final rupture termination. With respect to the characteristic pattern of vertical-slip distribution, especially where there are strike-slip components, the up-thrown sides along the faults are, in general, located on the fault blocks in the direction of relative strike-slip. By applying these new geometric criteria to high-resolution active fault distribution maps, fault grouping/segmentation can be conducted more practically. We tested this model
M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model
Parsons, Thomas E.
2006-01-01
Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
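The conversion from recovery times to expected event frequencies rests on the Gutenberg-Richter relation log10 N = a - b·M. A minimal sketch of that bookkeeping follows; the a and b values are illustrative placeholders, not values fitted in the study:

```python
import math

def gr_annual_rate(m, a=4.0, b=1.0):
    """Cumulative annual rate N(>= m) from a Gutenberg-Richter law
    log10 N = a - b*m. The a and b defaults are illustrative only."""
    return 10 ** (a - b * m)

def mean_recurrence_years(m, a=4.0, b=1.0):
    """Mean recurrence interval is the reciprocal of the cumulative rate."""
    return 1.0 / gr_annual_rate(m, a, b)
```

With these placeholder parameters, M ≥ 7.0 events occur at 10⁻³ per year, i.e. a mean recurrence of 1000 years; in the study the rates come instead from stress recovery times, which this sketch does not model.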
NASA Astrophysics Data System (ADS)
Ferreira, A. M.; Vallée, M.; Lentas, K.
2010-12-01
Accurate earthquake point source parameters (e.g. seismic moment, depth and focal mechanism) provide key first-order information for detailed studies of the earthquake source process and for improved seismic and tsunami hazard evaluation. In order to objectively assess the quality of seismic source models, it is important to go beyond classical resolution/misfit checks. In particular, it is desirable to apply sophisticated modeling techniques to quantify uncertainties due to simplified theoretical formulations and/or Earth structure employed to build the source models. Moreover, it is important to verify how well the models explain data not used in their construction for a complete, quantitative assessment of the earthquake source models. In this study we compare the quality of the surface-wave Centroid Moment Tensor (CMT) method with that of the SCARDEC method, which is a new automated body-wave technique for the fast simultaneous determination of the seismic moment, focal mechanism, depth and source time functions of large earthquakes. We focus on the major shallow subduction earthquakes of the last 20 years, for which there are some systematic differences between SCARDEC and CMT source parameters, notably in fault dip angle and moment magnitude. Because the SCARDEC method is based on body-wave deconvolution using ray methods in a 1D Earth model, we test how well SCARDEC source parameters explain long-period seismic data (surface waves and normal modes) compared to the CMT method. We calculate theoretical seismograms using two forward modelling techniques (full ray theory and spectral element method) to simulate the long-period seismic wavefield for the 3D Earth model S20RTS combined with the crust model CRUST2.0, and for two point source models: (i) the SCARDEC model; and (ii) the Global CMT model. We compare the synthetic seismograms with real broadband data from the FDSN for the major subduction earthquakes of the last 20 years. We show that SCARDEC source
Macgregor-Scott, N.; Walter, A.
1988-01-01
Crustal velocity structure for the region near Coalinga, California, has been derived from both earthquake and explosion seismic phase data recorded along a NW-SE seismic-refraction profile on the western flank of the Great Valley east of the Diablo Range. Comparison of the two data sets reveals P-wave phases in common which can be correlated with changes in the velocity structure below the earthquake hypocenters. In addition, the earthquake records reveal secondary phases at station ranges of less than 20 km that could be the result of S- to P-wave conversions at velocity interfaces above the earthquake hypocenters. Two-dimensional ray-trace modeling of the P-wave travel times resulted in a P-wave velocity model for the western flank of the Great Valley comprising: 1) a 7- to 9-km-thick section of sedimentary strata with velocities similar to those found elsewhere in the Great Valley (1.6 to 5.2 km s⁻¹); 2) a middle crust extending to about 14 km depth with velocities comparable to those reported for the Franciscan assemblage in the Diablo Range (5.6 to 5.9 km s⁻¹); and 3) a 13- to 14-km-thick lower crust with velocities similar to those reported beneath the Diablo Range and the Great Valley (6.5 to 7.3 km s⁻¹). This lower crust may have been derived from subducted oceanic crust that was thickened by accretionary underplating or crustal shortening. -Authors
NASA Astrophysics Data System (ADS)
Maurer, J.; Segall, P.
2015-12-01
Understanding and predicting earthquake magnitudes from injection-induced seismicity is critically important for estimating hazard due to injection operations. A particular problem has been that the largest event often occurs post shut-in. A rigorous analysis would require modeling all stages of earthquake nucleation, propagation, and arrest, and not just initiation. We present a simple conceptual model for predicting the distribution of earthquake magnitudes during and following injection, building on the analysis of Segall & Lu (2015). The analysis requires several assumptions: (1) the distribution of source dimensions follows a Gutenberg-Richter distribution; (2) in environments where the background ratio of shear to effective normal stress is low, the size of induced events is limited by the volume perturbed by injection (e.g., Shapiro et al., 2013; McGarr, 2014), and (3) the perturbed volume can be approximated by diffusion in a homogeneous medium. Evidence for the second assumption comes from numerical studies that indicate the background ratio of shear to normal stress controls how far an earthquake rupture, once initiated, can grow (Dunham et al., 2011; Schmitt et al., submitted). We derive analytical expressions that give the rate of events of a given magnitude as the product of three terms: the time-dependent rate of nucleations, the probability of nucleating on a source of given size (from the Gutenberg-Richter distribution), and a time-dependent geometrical factor. We verify our results using simulations and demonstrate characteristics observed in real induced sequences, such as time-dependent b-values and the occurrence of the largest event post injection. We compare results to Segall & Lu (2015) as well as example datasets. Future work includes using 2D numerical simulations to test our results and assumptions; in particular, investigating how background shear stress and fault roughness control rupture extent.
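Assumption (2), that event size is limited by the injection-perturbed volume, is often expressed through McGarr's (2014) bound, which caps cumulative seismic moment at the shear modulus times the injected volume. A hedged sketch, with an illustrative injected volume and a generic crustal shear modulus rather than values from this study:

```python
import math

def mcgarr_max_moment(delta_v_m3, shear_modulus_pa=3.0e10):
    """Upper bound on cumulative seismic moment from net injected volume
    (McGarr, 2014): M0 <= G * dV. The modulus default is a generic value."""
    return shear_modulus_pa * delta_v_m3

def moment_to_mw(m0_nm):
    """Moment magnitude from seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)
```

For example, a net injected volume of 10⁵ m³ bounds the moment at 3 × 10¹⁵ N·m, roughly Mw 4.3 under these assumptions; the study's own analysis layers a nucleation rate and a Gutenberg-Richter source-size distribution on top of such a volume constraint.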
Redefining Earthquakes and the Earthquake Machine
ERIC Educational Resources Information Center
Hubenthal, Michael; Braile, Larry; Taber, John
2008-01-01
The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…
Comparing earthquake models for the Corinth rift for Mw>=5.5/6/6.5 (Greece)
NASA Astrophysics Data System (ADS)
Boiselet, Aurélien; Scotti, Oona; Lyon-Caen, Hélène; Ford, Mary; Meyer, Nicolas; Bernard, Pascal
2013-04-01
The Corinth rift (Greece) is identified as a site of major importance for earthquake studies in Europe, exhibiting some of the highest seismic activity and strain in the Euro-Mediterranean region. It is characterized by an asymmetrical structure, with the most active normal faults dipping north, and a north-south extension rate measured by GPS that increases from 0.6 mm/year in the eastern part of the rift to 15 mm/year in the western part. Frequent seismic swarms and destructive earthquakes are observed in this area. The Corinth Rift Laboratory (CRL, http://crlab.eu) European project investigates fault mechanics and its relationship with earthquakes, fluid flow and the related hazards in the western part of the rift, covering an area of about 50 km by 40 km between the city of Patras to the west and the city of Aigion to the east. As part of this project, within the CRL-SISCOR group, we construct earthquake forecast models (EFM) for M>=5.5/6/6.5 events of the Corinth rift area based on the in-depth seismotectonic studies available for this region. We first present the methodology used to construct the earthquake and fault databases and to quantify the associated uncertainties. We then propose EFM following two approaches: one based on the definition of seismotectonic areas with similar geologic or strain characteristics, the other based on the definition of fault sources mapped at the surface as well as blind ones. In order to compute the probability of occurrence of M>=5.5/6/6.5 events for seismotectonic areas, we analyse two earthquake catalogues available for Greece (National Observatory of Athens, Thessaloniki), apply two declustering methods (Reasenberg and Gardner) to construct a Poissonian earthquake catalogue, and test the influence of the minimum magnitude (3.5; 4.0). We compare the impact of maximum-magnitude and corner-magnitude (Kagan 1997, 2002) estimates. We then apply the Weichert method to estimate the probability of occurrence of M>=5.5/6/6.5 based on
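Once a declustered catalogue yields a Poissonian annual rate for M ≥ 5.5/6/6.5 events, the probability of occurrence over a forecast window follows directly from the Poisson model. A minimal sketch; the rate and window below are illustrative, not values from the study:

```python
import math

def poisson_prob(rate_per_yr, window_yr):
    """Probability of at least one event in the window under a Poisson model:
    P = 1 - exp(-lambda * T). Inputs here are illustrative placeholders."""
    return 1.0 - math.exp(-rate_per_yr * window_yr)
```

For instance, an annual rate of 0.02 events/yr gives a ~63% chance of at least one event in 50 years.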
Analogue models of subduction megathrust earthquakes: improving rheology and monitoring technique
NASA Astrophysics Data System (ADS)
Brizzi, Silvia; Corbi, Fabio; Funiciello, Francesca; Moroni, Monica
2015-04-01
Most of the world's great earthquakes (Mw > 8.5, usually known as mega-earthquakes) occur at shallow depths along the subduction thrust fault (STF), i.e., the frictional interface between the subducting and overriding plates. The spatiotemporal occurrence of mega-earthquakes and their governing physics remain ambiguous, as tragically demonstrated by the underestimation of recent megathrust events (e.g., 2011 Tohoku). To help unravel the seismic cycle at the STF, analogue modelling has become a key tool. The first properly scaled analogue models with realistic geometries (i.e., wedge-shaped) suitable for studying interplate seismicity were realized using granular elasto-plastic [e.g., Rosenau et al., 2009] and viscoelastic materials [i.e., Corbi et al., 2013]. In particular, viscoelastic laboratory experiments realized with type A gelatin at 2.5 wt% simulate, in a simplified yet robust way, the basic physics governing the subduction seismic cycle and the related rupture process. Despite the strength of this approach, analogue earthquakes are not perfectly comparable to their natural prototype. In this work, we try to improve subduction seismic cycle analogue models by modifying the rheological properties of the analogue material and adopting a new image analysis technique (i.e., PEP - ParticlE and Prediction velocity). We test the influence of lithosphere elasticity by using type A gelatin at a greater concentration (i.e., 6 wt%). Results show that gelatin elasticity plays an important role in controlling the seismogenic behaviour of the STF, tuning the mean and the maximum magnitude of analogue earthquakes. In particular, by increasing gelatin elasticity we observe a decreasing mean magnitude, while the maximum magnitude remains the same. Experimental results therefore suggest that lithosphere elasticity could be one of the parameters that tunes the seismogenic behaviour of the STF. Increasing gelatin elasticity also implies improving the similarity to the natural prototype in terms of coseismic
NASA Astrophysics Data System (ADS)
Higgins, M.; Weber, J. C.; Robertson, R. E. A.
2014-12-01
We are undertaking a study to better determine the locking characteristics of the Lesser Antilles subduction zone using new cGPS data from 20 stations in the Lesser Antilles volcanic islands and outboard islands in the forearc sliver. The new data come from the SRC, IGS and IPGP cGPS networks. Each site we use has a minimum of 3 years of data, and raw site velocities have average horizontal uncertainties on the order of ≤ 2 mm/yr. Eventually, we also hope to incorporate vertical velocities, which have slightly larger uncertainties, into the model, along with earthquake slip vectors from the Harvard CMT catalog. We model the cGPS data using Defnode, which performs inverse modeling using the Okada (1985; 1992) method to determine the elastic slip distribution along block (CA plate, NA plate, SA plate, forearc sliver) boundaries. Free parameters are the distribution of locking ratios on the fault representing the subduction zone and the deformation in each block. A weakness of our model is that it attempts to cover a large area while being constrained by data from islands with a limited spatial distribution. Using the SRC earthquake catalogue, we gridded the subduction zone and summed the total seismic moment for each grid cell, using only M>3.5 earthquakes with low RMS values. The gridded seismic moments were then averaged over the time span of the complete catalog. Several models were then produced with Defnode and a solution optimization technique that estimated the locking-ratio distribution. The models' resulting seismic moment rates were then compared against those gleaned from the SRC catalogue. Previous work has suggested that subducting ridges on the NA plate may indeed be locked; this work also tries to identify the locking ratio of these ridges.
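The gridded moment-rate bookkeeping described above can be sketched as follows. The Hanks-Kanamori moment conversion is standard, but the cell size, catalogue span and event list here are illustrative stand-ins, not the SRC data:

```python
from collections import defaultdict

def mw_to_moment(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def gridded_moment_rate(events, cell_km=50.0, span_yr=50.0):
    """Sum seismic moment per grid cell and average over the catalogue span.
    events: iterable of (x_km, y_km, mw). Geometry and span are illustrative;
    the M > 3.5 cutoff mirrors the threshold used in the study."""
    cells = defaultdict(float)
    for x, y, mw in events:
        if mw > 3.5:
            key = (int(x // cell_km), int(y // cell_km))
            cells[key] += mw_to_moment(mw)
    return {k: m0 / span_yr for k, m0 in cells.items()}
```

The resulting per-cell moment rates are what one would compare against the rates implied by a Defnode locking-ratio model.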
The mass balance of earthquakes and earthquake sequences
NASA Astrophysics Data System (ADS)
Marc, O.; Hovius, N.; Meunier, P.
2016-04-01
Large, compressional earthquakes cause surface uplift as well as widespread mass wasting. Knowledge of their trade-off is fragmentary. Combining a seismologically consistent model of earthquake-triggered landsliding and an analytical solution of coseismic surface displacement, we assess how the mass balance of single earthquakes and earthquake sequences depends on fault size and other geophysical parameters. We find that intermediate-size earthquakes (Mw 6-7.3) may cause more erosion than uplift, controlled primarily by seismic source depth and landscape steepness, and less so by fault dip and rake. Such earthquakes can limit topographic growth, but our model indicates that both smaller and larger earthquakes (Mw < 6, Mw > 7.3) systematically cause mountain building. Earthquake sequences with a Gutenberg-Richter distribution have a greater tendency to lead to predominant erosion than repeating earthquakes of the same magnitude, unless a fault can produce earthquakes of Mw > 8.
Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.
2008-01-01
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor
2016-01-01
Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of population at risk were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a decrease in the expected number of casualties. PMID:26959647
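Entering common effect measures into an existing algorithm via logistic-regression equations amounts to scaling a baseline casualty probability on the odds scale. A minimal sketch of that adjustment, using hypothetical odds ratios rather than the study's fitted values:

```python
def adjust_probability(p_base, odds_ratios):
    """Scale a baseline casualty probability by a set of effect measures
    (odds ratios) on the odds scale, then convert back to a probability.
    The odds ratios passed in are hypothetical, not the study's estimates."""
    odds = p_base / (1.0 - p_base)
    for or_ in odds_ratios:
        odds *= or_  # multiplying odds ratios assumes independent effects
    return odds / (1.0 + odds)
```

For example, a baseline injury probability of 0.10 combined with a single odds ratio of 2.0 yields an adjusted probability of about 0.18; stacking several factors greater than 1 raises it further, which is consistent with the reported increase in total casualties.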
NASA Astrophysics Data System (ADS)
Suzuki, A.; Ogawa, Y.; Saito, Z.; Ushioda, M.; Ichihara, H.; Ichiki, M.; Mishina, M.
2015-12-01
The 2008 Iwate-Miyagi Nairiku Earthquake (M 7.2) was an unusually large earthquake that occurred near volcanic regions. To understand the mechanism of inland earthquakes, it is important to study the structure around the area. Okada et al. (2012) observed aftershocks precisely and estimated the seismic velocity structure. Iinuma et al. (2009) detected coseismic and aseismic slip with GPS observations. Mishina (2009) and Ichihara et al. (2014) conducted 2-D and 3-D MT surveys, respectively. However, the station distributions of the previous MT surveys were sparse. We carried out denser surveys and obtained more precise resistivity structures for the area. We conducted MT surveys at 66 stations in the area (59 stations from October to November 2012 and 7 stations from October to November 2014) and estimated 3-D resistivity structures using the inversion code of Siripunvaraporn and Egbert (2009) with the full impedance tensor as the response function. Our final resistivity structure is similar to that of Ichihara et al. (2014), but more complex. We found a low-resistivity zone to the northeast of Mt. Kurikoma below 3 km depth. This anomaly is connected with a low-resistivity zone located under Mt. Kurikoma below 10 km depth. The locations of the aseismic and coseismic slip in Iinuma et al. (2009) correspond to the locations of the low-resistivity and high-resistivity zones in our model, respectively. This may indicate that the low-resistivity zones are ductile and the high-resistivity zones are brittle.
Mw 6.9 Sikkim Earthquake and Modeling of Ground Motions to Determine Causative Fault
NASA Astrophysics Data System (ADS)
Chopra, Sumer; Sharma, Jyoti; Sutar, Anup; Bansal, B. K.
2014-07-01
In this study, source parameters of the September 18, 2011 Mw 6.9 Sikkim earthquake were determined using acceleration records. These parameters were then used to generate strong motion at a number of sites using the stochastic finite-fault modeling technique to constrain the causative fault plane for this earthquake. The average values of corner frequency, seismic moment, stress drop and source radius were 0.12 Hz, 3.07 × 10²⁶ dyne-cm, 115 bars and 9.68 km, respectively. The fault plane solution showed strike-slip movement with two nodal planes oriented along two prominent lineaments in the region, the NE-oriented Kanchendzonga and NW-oriented Tista lineaments. The ground motions were estimated considering both nodal planes as causative faults, and the results in terms of peak ground accelerations (PGA) and Fourier spectra were then compared with the actual recordings. We found that the NW-SE striking nodal plane along the Tista lineament may have been the causative fault for the Sikkim earthquake, as its PGA estimates are comparable with the observed recordings. We also observed that the Fourier spectrum is not a good parameter for deciding the causative fault plane.
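The reported source parameters are linked by the standard Brune (1970) source-radius and Eshelby circular-crack stress-drop relations, so they can be cross-checked. A sketch of that check; the shear-wave velocity is an assumed generic value, not one stated in the abstract:

```python
import math

def brune_source_radius_m(fc_hz, beta_m_s=3500.0):
    """Brune (1970) source radius r = 2.34 * beta / (2 * pi * fc).
    beta (shear-wave velocity) is an assumed value here."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop_pa(m0_nm, radius_m):
    """Eshelby circular-crack stress drop: d_sigma = 7 * M0 / (16 * r^3)."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)
```

With fc = 0.12 Hz and M0 = 3.07 × 10¹⁹ N·m (3.07 × 10²⁶ dyne-cm), these relations give a radius near 10-11 km and a stress drop of roughly 100-150 bars, consistent in order of magnitude with the reported 9.68 km and 115 bars; the exact values depend on the assumed β and rupture model.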
Dell’Acqua, F.; Gamba, P.; Jaiswal, K.
2012-01-01
This paper discusses spatial aspects of the global exposure dataset and mapping needs for earthquake risk assessment. We discuss this in the context of development of a Global Exposure Database for the Global Earthquake Model (GED4GEM), which requires compilation of a multi-scale inventory of assets at risk, for example, buildings, populations, and economic exposure. After defining the relevant spatial and geographic scales of interest, different procedures are proposed to disaggregate coarse-resolution data, to map them, and if necessary to infer missing data by using proxies. We discuss the advantages and limitations of these methodologies and detail the potentials of utilizing remote-sensing data. The latter is used especially to homogenize an existing coarser dataset and, where possible, replace it with detailed information extracted from remote sensing using the built-up indicators for different environments. Present research shows that the spatial aspects of earthquake risk computation are tightly connected with the availability of datasets of the resolution necessary for producing sufficiently detailed exposure. The global exposure database designed by the GED4GEM project is able to manage datasets and queries of multiple spatial scales.
NASA Technical Reports Server (NTRS)
Pulinets, S.; Ouzounov, D.
2010-01-01
The paper presents a concept for a complex multidisciplinary approach to clarifying the nature of short-term earthquake precursors observed in the atmosphere, in atmospheric electricity, and in the ionosphere and magnetosphere. Our approach is based on the most fundamental principles of tectonics, namely that an earthquake is the ultimate result of the relative movement of tectonic plates and blocks of different sizes. Various gases leaking from the crust, including methane, helium, hydrogen and carbon dioxide, can serve as carrier gases for radon, including along underwater seismically active faults. The action of radon on atmospheric gases is similar to the effect of cosmic rays in the upper layers of the atmosphere: it ionizes the air, and the ions form nuclei for water condensation. Condensation of water vapor is accompanied by the release of latent heat, which is the main cause of the observed atmospheric thermal anomalies. The formation of large ion clusters changes the conductivity of the atmospheric boundary layer and the parameters of the global electric circuit over active tectonic faults. Variations in atmospheric electricity are the main source of ionospheric anomalies over seismically active areas. The Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model can explain most of these events as a synergy between different ground-surface, atmospheric and ionospheric processes and anomalous variations that are usually termed short-term earthquake precursors. A newly developed Interdisciplinary Space-Terrestrial Framework (ISTF) can also provide verification of these precursory processes in seismically active regions. The main outcome of this paper is a unified concept for the systematic validation of different types of earthquake precursors, united on a common physical basis in one theory.
Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben
NASA Astrophysics Data System (ADS)
Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.
2016-05-01
Earthquake absolute location errors which can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can have an impact on the field development and economic consequences. The approach, using state-of-the-art techniques, covers both the location uncertainty and the location inaccuracy (or bias) problems. It consists, first, in creating a 3D synthetic seismic cloud of events in the reservoir and calculating the seismic travel times to a monitoring network assuming certain propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by the seismic onset time picking uncertainties and inaccuracies are quantified in 3D. Effects induced by erroneous assumptions associated with the velocity model are also modelled. In particular, 1D velocity model uncertainties, a local 3D perturbation of the velocity and a 3D geo-structural model are considered. The present approach is applied to the site of Rittershoffen (Alsace, France), which is one of the deep geothermal fields existing in the Upper Rhine Graben. This example allows setting realistic scenarios based on the knowledge of the site. In that case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well log data provided a reference 1D velocity model used for the synthetic earthquake relocation. The 3D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a cumulative
Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben
NASA Astrophysics Data System (ADS)
Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.
2016-08-01
Earthquake absolute location errors which can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can have an impact on the field development and economic consequences. The approach, using state-of-the-art techniques, covers both the location uncertainty and the location inaccuracy (or bias) problems. It consists, first, in creating a 3-D synthetic seismic cloud of events in the reservoir and calculating the seismic traveltimes to a monitoring network assuming certain propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by the seismic onset time picking uncertainties and inaccuracies are quantified in 3-D. Effects induced by erroneous assumptions associated with the velocity model are also modelled. In particular, 1-D velocity model uncertainties, a local 3-D perturbation of the velocity and a 3-D geostructural model are considered. The present approach is applied to the site of Rittershoffen (Alsace, France), which is one of the deep geothermal fields existing in the Upper Rhine Graben. This example allows setting realistic scenarios based on the knowledge of the site. In that case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well log data provided a reference 1-D velocity model used for the synthetic earthquake relocation. The 3-D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a
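The synthetic-relocation exercise described above can be reduced to its core: forward-model travel times under reference conditions, then relocate under perturbed ones and compare. A toy sketch with a homogeneous velocity and grid-search relocation; all geometry, velocities and station positions are illustrative, not the Rittershoffen configuration:

```python
import math

def travel_time(src, sta, v):
    """Straight-ray travel time in a homogeneous medium (km and km/s give s)."""
    return math.dist(src, sta) / v

def locate(picks, stations, v, grid):
    """Grid-search location minimizing the origin-time-free RMS residual.
    A homogeneous velocity stands in for a reference 1D model."""
    best, best_rms = None, float("inf")
    for src in grid:
        res = [t - travel_time(src, s, v) for t, s in zip(picks, stations)]
        mean = sum(res) / len(res)  # demeaning absorbs the unknown origin time
        rms = math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))
        if rms < best_rms:
            best, best_rms = src, rms
    return best

# synthetic event recorded by four stations (hypothetical layout, km)
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_src = (4.0, 6.0)
picks = [travel_time(true_src, s, 3.5) for s in stations]
grid = [(float(x), float(y)) for x in range(11) for y in range(11)]
recovered = locate(picks, stations, 3.5, grid)
```

Relocating with the correct velocity recovers the source exactly; rerunning `locate` with, say, v = 3.8 instead of 3.5 illustrates how an erroneous velocity model biases the hypocentre, which is the inaccuracy the study quantifies in 3-D.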
Scaling and critical phenomena in a cellular automaton slider-block model for earthquakes
Rundle, J.B.; Klein, W.
1993-07-01
The dynamics of a general class of two-dimensional cellular automaton slider-block models of earthquake faults is studied as a function of the failure rules that determine slip and the nature of the failure threshold. Scaling properties of clusters of failed sites imply the existence of a mean-field spinodal line in systems with spatially random failure thresholds, whereas spatially uniform failure thresholds produce behavior reminiscent of self-organized critical behavior. This model can describe several classes of faults, ranging from those that only exhibit creep to those that produce large events.
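The class of models studied can be illustrated with an Olami-Feder-Christensen-style cellular automaton, a close relative of the slider-block models discussed. This sketch uses a spatially uniform threshold, open boundaries and illustrative parameters; it is not the authors' model:

```python
def ofc_step(stress, f_th=1.0, alpha=0.2):
    """One drive/avalanche cycle of an OFC-style 2-D cellular automaton.
    stress: n x n list of lists, modified in place; returns the avalanche size.
    Threshold, coupling alpha and open boundaries are illustrative choices."""
    n = len(stress)
    eps = 1e-9  # tolerance for floating-point threshold comparisons
    # drive: load all sites uniformly until the most-stressed site reaches threshold
    gap = f_th - max(max(row) for row in stress)
    for i in range(n):
        for j in range(n):
            stress[i][j] += gap
    # avalanche: a failing site drops to zero and passes alpha * stress to neighbours
    size = 0
    unstable = [(i, j) for i in range(n) for j in range(n)
                if stress[i][j] >= f_th - eps]
    while unstable:
        i, j = unstable.pop()
        if stress[i][j] < f_th - eps:
            continue  # already relaxed by an earlier topple
        s, stress[i][j] = stress[i][j], 0.0
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:  # open (dissipative) boundaries
                stress[ni][nj] += alpha * s
                if stress[ni][nj] >= f_th - eps:
                    unstable.append((ni, nj))
    return size
```

Replacing the uniform `f_th` with a per-site random threshold is exactly the spatially-random-threshold variant the abstract contrasts with the self-organized-critical uniform case.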
Analysis of 2012 M8.6 Indian Ocean earthquake coseismic slip model based on GPS data
NASA Astrophysics Data System (ADS)
Maulida, Putra; Meilano, Irwan; Gunawan, Endra; Efendi, Joni
2016-05-01
The continuous GPS (CGPS) data of the Sumatran GPS Array and the Indonesian Geospatial Agency (BIG) in Sumatra are processed to estimate the best-fit coseismic model of the 2012 M8.6 Indian Ocean earthquake. For GPS data processing, we used the GPS Analysis at Massachusetts Institute of Technology (GAMIT) 10.5 software and the Global Kalman Filter (GLOBK) to generate position time series for each GPS station and to estimate the coseismic offset due to the earthquake. The results from the GPS processing indicate that the earthquake caused northeastward displacement of up to 25 cm in northern Sumatra. The results also show subsidence in northern Sumatra, while the central part of Sumatra displays northwestward displacement; however, the quality of the vertical data does not allow us to determine whether subsidence or uplift is associated with the earthquake. Based on the GPS coseismic data, we evaluate the coseismic slip models of the Indian Ocean earthquake produced by previous studies [1], [2], [3]. We calculated the coseismic displacement using an elastic half-space model with each earthquake slip model as input and compared it with the displacement derived from the GPS data.
Dynamic Models of Potential Earthquakes in the San Gorgonio Pass, CA
NASA Astrophysics Data System (ADS)
Tarnowski, J. M.; Oglesby, D. D.
2012-12-01
We use numerical modeling to investigate the likelihood of a through-going earthquake along the San Andreas fault system in the San Gorgonio Pass (SGP). The SGP is a structurally complex area of Southern California often referred to as a "pinch-point" along the fault system, with several non-vertical and non-coplanar segments. It may or may not be a geometrical barrier that can slow or stop earthquake rupture propagation. The likelihood of through-going rupture in the SGP affects the maximum earthquake size in Southern California as well as the intensity and distribution of ground motion, with implications for seismic hazard. We use the finite element code FaultMod (Barall, 2009) to observe differences in rupture propagation and ground motion based on different input parameters in a simplified fault geometry of the SGP region. This region includes the nearly perpendicular intersection of the right-lateral San Bernardino strand and the San Gorgonio Pass Thrust near Millard Canyon. Models that include the San Bernardino, Garnet Hill, and Coachella Valley fault strands show that near-fault ground motion patterns are heterogeneous with pronounced asymmetry across the fault strands. Ground motion distribution farther from the strands varies with the hypocenter location. In models with faults close to failure, the presence of well-defined Mach cones suggests that non-planar fault strands may neither inhibit supershear rupture nor render Mach cones incoherent. The models presented here are the early stages of a series of test cases to develop more realistic models of rupture in the SGP that incorporate the complexity of fault geometry and stress distribution in the region.
NASA Astrophysics Data System (ADS)
Field, E. H.; Arrowsmith, R.; Biasi, G. P.; Bird, P.; Dawson, T. E.; Felzer, K. R.; Jackson, D. D.; Johnson, K. M.; Jordan, T. H.; Madugo, C. M.; Michael, A. J.; Milner, K. R.; Page, M. T.; Parsons, T.; Powers, P.; Shaw, B. E.; Thatcher, W. R.; Weldon, R. J.; Zeng, Y.
2013-12-01
We present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), where the primary achievements have been to relax fault segmentation and include multi-fault ruptures, both limitations of UCERF2. The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level 'grand inversion' that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (e.g., magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded due to lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (e.g., constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 over-prediction of M6.5-7 earthquake rates, and also includes types of multi-fault ruptures seen in nature. While UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site
NASA Astrophysics Data System (ADS)
Dabaghi, Mayssa Nabil
A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as the natural variability of motions for a given earthquake and site scenario. By fitting the model to a database of recorded near-fault ground motions with known earthquake source and site characteristics, empirical "observations" of the model parameters are obtained. These observations are used to develop predictive equations for the model parameters in terms of a small number of earthquake source and site characteristics. Functional forms for the predictive equations that are consistent with seismological theory are employed. A site-based simulation procedure that employs the proposed stochastic model and predictive equations is developed to generate synthetic near-fault ground motions at a site. The procedure is formulated in terms of information about the earthquake design scenario that is normally available to a design engineer. Not all near-fault ground motions contain a forward directivity pulse, even when the conditions for such a pulse are favorable. The proposed procedure produces pulselike and non-pulselike motions in the same proportions as they naturally occur among recorded near-fault ground motions for a given design scenario. The proposed models and simulation procedure are validated by several means. Synthetic ground motion time series with fitted parameter values are compared with the corresponding recorded motions. The proposed empirical predictive relations are compared to similar relations available in the literature. The overall simulation procedure is
Generalization of Faraday's Law to include nonconservative spin forces.
Barnes, S E; Maekawa, S
2007-06-15
The usual Faraday's Law E=-dPhi/dt determines an electromotive force E which accounts only for forces resulting from the charge of electrons. In ferromagnetic materials, in general, there exist nonconservative spin forces which also contribute to E. These might be included in Faraday's Law if the magnetic flux Phi is replaced by [Planck's constant/(-e)]gamma, where gamma is a Berry phase suitably averaged over the electron spin direction. These contributions to E represent the requirements of energy conservation in itinerant ferromagnets with time dependent order parameters. PMID:17677979
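As a compact restatement (transcribing the abstract's plain-text notation into LaTeX; no new physics is added), the proposed generalization replaces the magnetic flux by a spin Berry-phase term:

```latex
\mathcal{E} = -\frac{d\Phi}{dt}
\quad\longrightarrow\quad
\mathcal{E} = -\frac{d}{dt}\!\left[\frac{\hbar}{(-e)}\,\bar{\gamma}\right],
```

where \(\bar{\gamma}\) is the Berry phase suitably averaged over the electron spin direction, so that the nonconservative spin-force contribution to the electromotive force enters on the same footing as the ordinary flux term.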
Electrodynamic model of atmospheric and ionospheric processes on the eve of an earthquake
NASA Astrophysics Data System (ADS)
Sorokin, V. M.; Ruzhin, Yu. Ya.
2015-09-01
Electric field generation and its accompanying phenomena in the atmosphere-ionosphere system have been intensively studied in recent years. This paper considers the results of these studies, which have served as the physical basis for the model of lithosphere-ionosphere coupling. According to our model, the intensive processes in the lower atmosphere and lithosphere have an electrodynamic effect on the ionospheric plasma. The model was used to conduct theoretical studies of plasma and electromagnetic effects accompanying the generation of conduction current in the global circuit. It has been shown that the electrodynamic model of the influence of seismic and meteorological processes on cosmic plasma can serve as a physical basis for a satellite system to monitor earthquake precursors and the catastrophic phase of typhoon development. The model makes it possible to couple the satellite data of electromagnetic and plasma measurements with electrophysical and meteorological characteristics of the lower atmosphere at the stage of earthquake preparation and typhoon initiation. The model suggests that the numerous effects in the cosmic plasma have a single source: a change in the conduction current flowing in the atmosphere-ionosphere circuit.
NASA Astrophysics Data System (ADS)
Alexandrakis, C.; Löberich, E.; Kieslich, A.; Calo, M.; Vavrycuk, V.; Buske, S.
2015-12-01
Earthquake swarms, fluid migration and gas springs are indications of the ongoing geodynamic processes within the West Bohemia seismic zone located at the Czech-German border. The possible relationship between the fluids, gas and seismicity is of particular interest and has motivated numerous past, ongoing and future studies, including a multidisciplinary monitoring proposal through the International Continental Scientific Drilling Program (ICDP). The most seismically active area within the West Bohemia seismic zone is located near the Czech town of Nový Kostel. The Nový Kostel zone experiences frequent swarms of several hundreds to thousands of earthquakes over a period of weeks to several months. The seismicity is always located in the same area and depth range (~5-15 km); however, the activated fault segments and planes differ. For example, the 2008 swarm activated faults along the southern end of the seismic zone, the 2011 swarm activated the northern segment, and the recent 2014 swarm activated the middle of the seismic zone. This indicates changes to the local stress field, and may relate to fluid migration and/or the complicated tectonic situation. The West Bohemia Seismic Network (WEBNET) is ideally located for studying the Nový Kostel swarm area and provides good azimuthal coverage. Here, we use the high quality P- and S-wave arrival picks recorded by WEBNET to calculate swarm-dependent velocity models for the 2008 and 2011 swarms, and an averaged (swarm-independent) model using earthquakes recorded between 1991 and 2011. To this end, we use double-difference tomography to calculate P- and S-wave velocity models. The models are compared and examined in terms of swarm-dependent velocities and structures. Since the P-to-S velocity ratio is particularly sensitive to the presence of pore fluids, we derive ratio models directly from the inverted P- and S-wave models in order to investigate the potential influence of fluids on the seismicity. Finally, clustering
Physical modeling of volcanic tremor as repeating stick-slip earthquakes
NASA Astrophysics Data System (ADS)
Dmitrieva, K.; Dunham, E. M.
2011-12-01
One proposed explanation for volcanic tremor is the occurrence of repeating earthquakes, leading to a quasi-periodic signal on seismograms. A constant time interval between events leads, through the Dirac comb effect, to spectral peaks in the frequency domain, with the fundamental frequency given by the reciprocal of the interevent time. Gliding harmonic tremor, in which the frequencies of the spectral peaks vary with time, was observed before the 2009 eruption of Redoubt Volcano in Alaska [A. Hotovec, S. Prejean, J. Vidale and J. Gomberg, J. Volc. Geotherm. Res., submitted]. The fundamental frequency grew from 1 to over 20 Hz over the few minutes prior to the explosions, with seismicity then ceasing for 10 s before each explosion. We investigate the viability of the repeating earthquakes theory, using well-established physical models of earthquake cycles on frictional faults. Hotovec et al. locate the repeating earthquakes near the conduit at about 1 km depth below the vent. They estimate a source dimension of 10-100 m, assuming typical earthquake magnitude scaling laws. We analyze the fault mechanics with a spring-slider model with stiffness κ ∝ μ/R, where μ is the shear modulus and R is the fault dimension. The fault obeys a rate-and-state friction law. In response to a constant shear stressing rate α, the fault can either slide at constant velocity V=α/κ or undergo stick-slip oscillations. We perform a stability analysis on this system to determine the critical values of the parameters governing stick-slip and stable-sliding regimes. At high stressing rates it is necessary to consider inertial effects, captured here through the radiation damping approximation. Radiation damping stabilizes the system at sufficiently high α, namely α > α_cr = κ²Lq/η, where q = σ(b-a)/(κL) - 1, η = μ/c is the radiation damping parameter, c is the shear wave velocity, L is the state evolution distance, σ is the normal stress, and a and b are the usual rate and state friction
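The stability condition quoted in the abstract can be evaluated directly. The sketch below plugs illustrative (not published) parameter values into α_cr = κ²Lq/η with q = σ(b-a)/(κL) - 1 and η = μ/c; the proportionality κ ∝ μ/R is taken as an equality for simplicity.

```python
# Sketch of the spring-slider stability condition from the abstract:
# alpha_cr = kappa^2 * L * q / eta, with q = sigma*(b - a)/(kappa*L) - 1
# and radiation-damping parameter eta = mu / c.
# All numerical values below are illustrative assumptions, not from the study.

def critical_stressing_rate(mu, R, L, sigma, a, b, c):
    """Return (kappa, q, eta, alpha_cr) for a spring-slider fault patch."""
    kappa = mu / R                      # stiffness taken as mu / fault dimension
    q = sigma * (b - a) / (kappa * L) - 1.0
    eta = mu / c                        # radiation damping parameter
    alpha_cr = kappa**2 * L * q / eta
    return kappa, q, eta, alpha_cr

# Illustrative numbers: mu = 10 GPa, R = 50 m patch, L = 10 microns,
# sigma = 10 MPa, (b - a) = 0.004, shear wave speed c = 3000 m/s.
kappa, q, eta, alpha_cr = critical_stressing_rate(
    mu=10e9, R=50.0, L=1e-5, sigma=10e6, a=0.006, b=0.010, c=3000.0)
print(f"stiffness kappa = {kappa:.3g} Pa/m")
print(f"q = {q:.3g}  (stick-slip possible only if q > 0)")
print(f"critical stressing rate alpha_cr = {alpha_cr:.3g} Pa/s")
```

With these numbers q > 0, so stick-slip is possible below the critical stressing rate and the system stabilizes (stable sliding) above it.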
Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model
Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J., II; Zeng, Yuehua; Working Group on CA Earthquake Probabilities
2013-01-01
In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of
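The "grand inversion" idea (solve for all rupture rates at once by minimizing data misfit with simulated annealing, in an underdetermined system) can be illustrated with a toy problem. The tiny constraint matrix and data below are invented for illustration and have nothing to do with the actual UCERF3 equation set.

```python
import math, random

# Toy sketch of a "grand inversion": nonnegative rate variables are solved
# for simultaneously by minimizing data misfit with simulated annealing,
# which also samples the underdetermined solution space.
random.seed(1)
A = [[1.0, 1.0, 0.0],   # e.g., a slip-rate constraint shared by two ruptures
     [0.0, 1.0, 1.0]]   # e.g., a paleoseismic event-rate constraint
d = [2.0, 3.0]          # target data (3 unknowns, 2 data: underdetermined)

def misfit(x):
    return sum((sum(a * xi for a, xi in zip(row, x)) - di) ** 2
               for row, di in zip(A, d))

x = [1.0, 1.0, 1.0]
E = misfit(x)
T = 1.0
for step in range(20000):
    i = random.randrange(len(x))
    trial = list(x)
    trial[i] = max(0.0, trial[i] + random.uniform(-0.1, 0.1))  # rates >= 0
    E_trial = misfit(trial)
    if E_trial < E or random.random() < math.exp((E - E_trial) / T):
        x, E = trial, E_trial
    T *= 0.9995                      # geometric cooling schedule
print("solution:", [round(v, 2) for v in x], "misfit:", round(E, 4))
```

Rerunning with different seeds returns different near-zero-misfit solutions, which is the sense in which the annealer "samples a range of models" from an underdetermined problem.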
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Donnellan, A.; Parker, J. W.; Stough, T. M.; Burl, M. C.; Pierce, M.; Wang, J.; Ma, Y.; Rundle, J. B.; yoder, M. R.; Bawden, G. W.
2012-12-01
Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision-making utilizing remote sensing data and modeling software to provide decision support for earthquake disaster management and response. Geodetic imaging data, including interferometric synthetic aperture radar (InSAR) and GPS, have a rich scientific heritage for use in earthquake research. Survey-grade GPS was developed in the 1980s and the first InSAR image of an earthquake was produced for the 1992 Landers event. As more of these types of data have become increasingly available, they have also shown great utility for providing key information for disaster response. Work has been done to translate these data into useful and actionable information for decision makers in the event of an earthquake disaster. In addition to observed data, modeling tools provide essential preliminary estimates while data are still being collected and/or processed, which can be refined as data products become available. Now, with more data and better models, we are able to apply these for responders who need easy tools and routinely produced data products. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, allow us to provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). E-DECIDER has taken advantage of the legacy of Earth science data, including MODIS, Landsat, SCIGN, PBO, UAVSAR, and modeling tools such as the ones developed by QuakeSim, in order to deliver successful decision support products for earthquake disaster response. The project has
NASA Astrophysics Data System (ADS)
Iwaki, Asako; Maeda, Takahiro; Morikawa, Nobuyuki; Aoi, Shin; Fujiwara, Hiroyuki
2016-06-01
In this study, a method for simulating the ground motion of megathrust earthquakes at periods of approximately 2 s and longer was validated by using the characterized source model combined with multi-scale spatial heterogeneity. Source models for the Mw 8.3 2003 Tokachi-oki earthquake were constructed, and ground motion simulations were conducted to test their performance. First, a characterized source model was generated based on a source model obtained from waveform inversion analysis. Then, multi-scale heterogeneity was added to the spatial distribution of several source parameters to yield a heterogeneous source model. An investigation of the Fourier spectra and 5% damped velocity response spectra of the simulated and observed ground motions demonstrated that adding multi-scale heterogeneity to the spatial distributions of the slip, rupture velocity, and rake angle of the characterized source model is an effective method for constructing a source model that explains the ground motion at periods of 2-20 s. It was also revealed how the complexity of the parameters affects the resulting ground motion. The complexity of the rupture velocity had the largest influence among the three parameters.
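One common way to "add multi-scale heterogeneity" to a smooth source parameter is to superpose random noise whose amplitude spectrum decays with wavenumber. The sketch below does this for a 1-D slip profile along strike; the spectral exponent, amplitudes, and profile are all illustrative assumptions, not the method of the paper.

```python
import numpy as np

# Hedged sketch: perturb a smooth "characterized" slip profile with random
# noise whose amplitude spectrum falls off as ~ k^-1, so the perturbation
# contains structure at all scales. All choices here are illustrative.
rng = np.random.default_rng(42)
n = 256
x = np.linspace(0.0, 1.0, n)
smooth_slip = np.exp(-((x - 0.5) / 0.15) ** 2)       # one smooth asperity

k = np.fft.rfftfreq(n)                                # wavenumbers
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0                               # multi-scale falloff
phase = rng.uniform(0, 2 * np.pi, k.size)             # random phases
noise = np.fft.irfft(amp * np.exp(1j * phase), n)
noise *= 0.3 / np.abs(noise).max()                    # +/- 30% perturbation

hetero_slip = np.maximum(smooth_slip * (1.0 + noise), 0.0)  # keep slip >= 0
print("peak slip, smooth vs heterogeneous:",
      round(float(smooth_slip.max()), 3), round(float(hetero_slip.max()), 3))
```

The same construction extends to 2-D fault planes and to other parameters (rupture velocity, rake) by filtering a 2-D random field instead of a 1-D one.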
NASA Astrophysics Data System (ADS)
Chiu, J.; Chen, H.; Kim, K.; Pujol, J.; Chiu, S.; Withers, M.
2003-12-01
Traditional local earthquake location using a horizontally layered homogeneous velocity model is limited in resolution and reliability because the frequently overlooked 3-dimensional complexity of the real earth is ignored. Simultaneous earthquake relocation within traditional 3-D seismic tomography has been applied only to a limited set of selected earthquakes, so that more than 50% of the earthquakes in a catalog are effectively ignored. A new earthquake location program has been developed to locate every local earthquake using the best available 3-D Vp and Vs models for a region. Many modern seismic networks provide excellent spatial coverage of seismic stations, recording high-resolution earthquake data that allow determination of high-resolution 3-D Vp and Vs velocity models for the region. Once Vp and Vs values are available for all 3-D grid points, travel times from each grid point to all seismic stations can be calculated using any available 3-D ray-tracing technique and stored in computer files for later use. Travel times from a trial hypocenter to the recording stations can then be interpolated from those of the 8 adjacent grid points in the stored files, without the very time-consuming 3-D ray tracing. Iterations continue until the hypocenter adjustments are less than the given criteria and the travel-time residual, the difference between the observed and calculated travel times, reaches a minimum. Therefore, any earthquake, no matter how small or how big, can be efficiently and reliably located using the 3-D velocity model. This new location program has been applied to the New Madrid seismic zone of the central USA and to various seismic zones in the Taiwan region. Preliminary results in these two regions indicate that earthquake hypocenters can be reliably relocated in spite of very significant lateral structural variations. This location program can also be applied in routine earthquake location for any seismic network
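The key efficiency trick described above, interpolating a precomputed travel-time table from the 8 grid nodes surrounding a trial hypocenter, is standard trilinear interpolation. A minimal sketch (grid, field, and query point all invented for illustration):

```python
import numpy as np

# Hypothetical sketch of the lookup-table idea: travel times from every 3-D
# grid node to a station are precomputed and stored; the travel time from an
# arbitrary trial hypocenter is then trilinearly interpolated from the 8
# surrounding nodes, avoiding 3-D ray tracing inside location iterations.

def trilinear(tt_grid, origin, spacing, xyz):
    """Interpolate a gridded travel-time field tt_grid[ix, iy, iz] at xyz."""
    f = (np.asarray(xyz) - origin) / spacing           # fractional grid index
    i = np.floor(f).astype(int)
    t = f - i                                          # weights in [0, 1)
    ix, iy, iz = i
    tx, ty, tz = t
    c = tt_grid[ix:ix + 2, iy:iy + 2, iz:iz + 2]       # 2x2x2 corner cube
    wx = np.array([1 - tx, tx])
    wy = np.array([1 - ty, ty])
    wz = np.array([1 - tz, tz])
    return np.einsum('i,j,k,ijk->', wx, wy, wz, c)

# Toy check: a linear travel-time field t = 0.1*x + 0.2*y + 0.3*z is
# reproduced exactly by trilinear interpolation.
origin = np.zeros(3)
spacing = np.array([1.0, 1.0, 1.0])
X, Y, Z = np.meshgrid(np.arange(5), np.arange(5), np.arange(5), indexing='ij')
tt = 0.1 * X + 0.2 * Y + 0.3 * Z
print(trilinear(tt, origin, spacing, [1.5, 2.25, 0.5]))  # 0.15+0.45+0.15
```

In a real locator this lookup sits inside the iteration loop: residuals between observed picks and interpolated travel times drive the hypocenter adjustment at each step.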
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and the locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
Inverse and Forward Modeling of The 2014 Iquique Earthquake with Run-up Data
NASA Astrophysics Data System (ADS)
Fuentes, M.
2015-12-01
The April 1, 2014 Mw 8.2 Iquique earthquake excited a moderate tsunami which triggered the national tsunami alert. This earthquake was located in the well-known seismic gap in northern Chile, which had a high seismic potential (~ Mw 9.0) after the two main large historic events of 1868 and 1877. Nonetheless, studies of the seismic source performed with seismic data inversions suggest that the event exhibited a main patch located around 19.8° S at 40 km depth, with a seismic moment equivalent to Mw = 8.2. Thus, a large seismic deficit remains in the gap, capable of releasing an event of Mw = 8.8-8.9. To understand the importance of the tsunami threat in this zone, a seismic source modeling of the Iquique earthquake is performed. A new approach based on stochastic k² seismic sources is presented. A set of such sources is generated and, for each one, a full numerical tsunami model is run in order to obtain the run-up heights along the coastline. The results are compared with the available field run-up measurements and with the tide gauges that registered the signal. The comparison is not uniform; it penalizes discrepancies more heavily close to the peak run-up location. This criterion allows identification of the seismic source, from the set of scenarios, that best explains the observations from a statistical point of view. On the other hand, an L2-norm minimization is used to invert the seismic source by comparing the peak nearshore tsunami amplitude (PNTA) with the run-up observations. This method searches a space of solutions for the best seismic configuration by retrieving the Green's function coefficients that explain the field measurements. The results obtained confirm that a slip patch concentrated down-dip adequately models the run-up data.
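The L2-norm inversion step, retrieving Green's function coefficients that best explain the observations, is ordinary linear least squares. A minimal sketch with a synthetic stand-in system (the matrix, "true" slip coefficients, and noise level are invented, not the Iquique Green's functions):

```python
import numpy as np

# Sketch of the L2-norm inversion idea: recover slip coefficients m for
# fault subsources from run-up observations d via Green's functions G,
# i.e. minimize ||G m - d||_2. Everything below is synthetic.
rng = np.random.default_rng(0)
n_obs, n_src = 40, 6
G = rng.random((n_obs, n_src))            # run-up response of each subsource
m_true = np.array([0.2, 1.5, 3.0, 0.8, 0.1, 0.0])   # concentrated patch
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)  # noisy "observations"

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("recovered coefficients:", np.round(m_est, 2))
```

In practice one would add regularization and a positivity constraint on the coefficients; the plain `lstsq` call is only the skeleton of the misfit minimization.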
Helicity non-conserving form factor of the proton
Voutier, E.; Furget, C.; Knox, S.
1994-04-01
The study of the hadron structure in the high Q² range contributes to the understanding of the mechanisms responsible for the confinement of quarks and gluons. Among the numerous experimental candidates sensitive to these mechanisms, the helicity non-conserving form factor of the proton is a privileged observable, since it is controlled by non-perturbative effects. The authors investigate here the feasibility of high-Q² measurements of this form factor by means of the recoil polarization method in the context of the CEBAF 8 GeV facility. For that purpose, they discuss the development of a high-energy proton polarimeter, based on H(p⃗,pp) elastic scattering, to be placed at the focal plane of a new hadron spectrometer. It is shown that this experimental method significantly improves the knowledge of the helicity non-conserving form factor of the proton up to 10 GeV²/c².
Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment
NASA Technical Reports Server (NTRS)
Rebbapragada, Umaa; Oommen, Thomas
2011-01-01
On January 12th, 2010, a catastrophic M7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
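One simple instance of "learning from multiple, imperfect experts" is to estimate each volunteer's reliability against a consensus and re-weight their votes accordingly. The sketch below simulates this with made-up volunteer accuracies; it is a generic illustration, not the algorithm used in the paper.

```python
import numpy as np

# Toy sketch: volunteer labels (damaged=1 / undamaged=0) are combined by
# iteratively estimating each volunteer's agreement with the current
# consensus and then re-weighting their votes. All data are simulated.
rng = np.random.default_rng(7)
n_items, n_vols = 200, 5
truth = rng.integers(0, 2, n_items)
accuracy = np.array([0.95, 0.9, 0.85, 0.6, 0.55])    # assumed quality
labels = np.where(rng.random((n_items, n_vols)) < accuracy,
                  truth[:, None], 1 - truth[:, None])

weights = np.ones(n_vols)
for _ in range(10):
    scores = labels @ weights / weights.sum()        # weighted vote in [0, 1]
    consensus = (scores > 0.5).astype(int)
    agree = (labels == consensus[:, None]).mean(axis=0)
    weights = np.maximum(agree - 0.5, 1e-3)          # reliability weights
print("estimated reliabilities:", np.round(agree, 2))
print("consensus accuracy:", (consensus == truth).mean())
```

More principled versions (e.g., EM-style estimation of per-volunteer confusion matrices) follow the same pattern: alternate between inferring the consensus and inferring each annotator's reliability.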
Stephenson, William J.
2007-01-01
INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about 122°W to 129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.
Focal Depth of the WenChuan Earthquake Aftershocks from modeling of Seismic Depth Phases
NASA Astrophysics Data System (ADS)
Luo, Y.; Zeng, X.; Chong, J.; Ni, S.; Chen, Y.
2008-12-01
After the 05/12/2008 great WenChuan earthquake in Sichuan Province of China, tens of thousands of earthquakes occurred, with hundreds of them stronger than M4. Those aftershocks provide valuable information about seismotectonics and rupture processes for the mainshock; in particular, an accurate spatial distribution of aftershocks is very informative for determining rupture fault planes. However, focal depth cannot be well resolved with only the first arrivals recorded by the relatively sparse network in Sichuan Province, so the 3D seismicity distribution is difficult to obtain even though horizontal locations can be determined with an accuracy of 5 km. In contrast, local/regional depth phases such as sPmP, sPn and sPL, and the teleseismic depth phases pP and sP, are very sensitive to depth and can be readily modeled to determine depth with an accuracy of 2 km. With a reference 1D velocity structure resolved from receiver functions and seismic refraction studies, local/regional depth phases such as sPmP, sPn and sPL are identified by comparing observed waveforms with synthetic seismograms computed by generalized ray theory and reflectivity methods. For the teleseismic depth phases, well observed for M5.5 and stronger events, we developed an algorithm that inverts for both depth and focal mechanism from P and SH waveforms. We also employed the Cut and Paste (CAP) method developed by Zhao and Helmberger to model mechanism and depth with local waveforms, which constrains depth by fitting Pnl waveforms and the relative weight between surface waves and Pnl. After modeling all the depth phases for hundreds of events, we find that most of the M4 earthquakes occur between 2-18 km depth, with aftershock depths ranging 4-12 km in the southern half of the Longmenshan fault, while aftershocks in the northern half span a larger depth range, up to 18 km. The seismogenic zone in the northern segment is therefore deeper than in the southern segment. All the aftershocks occur in the upper crust, given that the Moho is deeper than 40km, or even 60km west of the
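The sensitivity of depth phases to focal depth has a simple geometric core: for a teleseismic ray, the pP - P delay is approximately 2h·cos(i)/v, where h is depth, i the takeoff angle, and v the near-source P velocity. The back-of-the-envelope sketch below uses assumed values for v and i, not those of the study.

```python
import math

# Why depth phases constrain focal depth: for a near-vertical takeoff,
# the pP - P delay is roughly 2*h*cos(i)/v, so depth follows directly
# from the measured delay. v_p and takeoff angle are assumed values.

def depth_from_pP(delay_s, v_p=6.0, takeoff_deg=25.0):
    """Approximate focal depth (km) from a pP - P differential time (s)."""
    return delay_s * v_p / (2.0 * math.cos(math.radians(takeoff_deg)))

# A 3 s pP - P delay maps to roughly 10 km depth under these assumptions.
print(round(depth_from_pP(3.0), 1))
```

A 1 s picking error maps to only ~3 km of depth under these assumptions, which is why differential phase times resolve depth far better than absolute first arrivals on a sparse network.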
ERIC Educational Resources Information Center
Svec, Michael
1996-01-01
Describes methods to access current earthquake information from the National Earthquake Information Center. Enables students to build genuine learning experiences using real data from earthquakes that have recently occurred. (JRH)
McCrory, Patricia A.; Blair, J. Luke; Oppenheimer, David H.; Walter, Stephen R.
2004-01-01
We present an updated model of the Juan de Fuca slab beneath southern British Columbia, Washington, Oregon, and northern California, and use this model to separate earthquakes occurring above and below the slab surface. The model is based on depth contours previously published by Fluck and others (1997). Our model attempts to rectify a number of shortcomings in the original model and update it with new work. The most significant improvements include (1) a gridded slab surface in geo-referenced (ArcGIS) format, (2) continuation of the slab surface to its full northern and southern edges, (3) extension of the slab surface from 50-km depth down to 110-km beneath the Cascade arc volcanoes, and (4) revision of the slab shape based on new seismic-reflection and seismic-refraction studies. We have used this surface to sort earthquakes and present some general observations and interpretations of seismicity patterns revealed by our analysis. For example, deep earthquakes within the Juan de Fuca Plate beneath western Washington define a linear trend that may mark a tear within the subducting plate. Also, earthquakes associated with the northern strands of the San Andreas Fault abruptly terminate at the inferred southern boundary of the Juan de Fuca slab. In addition, we provide files of earthquakes above and below the slab surface and a 3-D animation or fly-through showing a shaded-relief map with plate boundaries, the slab surface, and hypocenters for use as a visualization tool.
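The earthquake-sorting step reduces to interpolating the gridded slab surface at each epicenter and comparing the result with the hypocenter depth. A sketch with a made-up mini-grid and events (not the published slab model):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical slab-surface grid: depth (km) indexed by [lat, lon].
lons = np.array([235.0, 236.0, 237.0])
lats = np.array([46.0, 47.0, 48.0])
slab_depth_km = np.array([[20., 30., 45.],
                          [22., 33., 48.],
                          [25., 35., 50.]])

slab = RegularGridInterpolator((lats, lons), slab_depth_km)

events = [  # (lat, lon, hypocenter depth km), illustrative only
    (46.5, 235.5, 15.0),   # shallower than the slab surface -> above
    (47.5, 236.5, 60.0),   # deeper than the slab surface    -> below
]
labels = ["above" if z < slab([(lat, lon)])[0] else "below"
          for lat, lon, z in events]
print(labels)
```

The same bilinear comparison, applied catalog-wide against the real gridded surface, yields the "above" and "below" earthquake files described in the abstract.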
Geist, Eric L.; Titov, Vasily V.; Arcas, Diego; Pollitz, Fred F.; Bilek, Susan L.
2007-01-01
Results from different tsunami forecasting and hazard assessment models are compared with observed tsunami wave heights from the 26 December 2004 Indian Ocean tsunami. Forecast models are based on initial earthquake information and are used to estimate tsunami wave heights during propagation. An empirical forecast relationship based only on seismic moment provides a close estimate to the observed mean regional and maximum local tsunami runup heights for the 2004 Indian Ocean tsunami but underestimates mean regional tsunami heights at azimuths in line with the tsunami beaming pattern (e.g., Sri Lanka, Thailand). Standard forecast models developed from subfault discretization of earthquake rupture, in which deep-ocean sea level observations are used to constrain slip, are also tested. Forecast models of this type use tsunami time-series measurements at points in the deep ocean. As a proxy for the 2004 Indian Ocean tsunami, a transect of deep-ocean tsunami amplitudes recorded by satellite altimetry is used to constrain slip along four subfaults of the M >9 Sumatra–Andaman earthquake. This proxy model performs well in comparison to observed tsunami wave heights, travel times, and inundation patterns at Banda Aceh. Hypothetical tsunami hazard assessment models based on end-member estimates for average slip and rupture length (Mw 9.0–9.3) are compared with tsunami observations. Using average slip (low end member) and rupture length (high end member) (Mw 9.14) consistent with many seismic, geodetic, and tsunami inversions adequately estimates tsunami runup in most regions, except the extreme runup in the western Aceh province. The high slip that occurred in the southern part of the rupture zone linked to runup in this location is a larger fluctuation than expected from standard stochastic slip models. In addition, excess moment release (~9%) deduced from geodetic studies in comparison to seismic moment estimates may generate additional tsunami energy, if the
NASA Astrophysics Data System (ADS)
Munnangi, P.
2015-12-01
The Bay Area is one of the world's most vulnerable places to earthquakes, and being ready is vital to survival. The purpose of this study was to determine the distribution of places affected in a magnitude 7.0 Hayward earthquake and the effectiveness of earthquake early warning (EEW) in this scenario. We manipulated three variables: the location of the epicenter, the station placement, and the algorithm used for early warning. To compute the blind zone and warning times, we calculated the P and S wave velocities using data from the Northern California Earthquake Catalog and the radius of the blind zone using appropriate statistical models. We came up with a linear regression model directly relating warning time to distance from the epicenter. We used Google Earth to plot three hypothetical epicenters on the Hayward Fault and determine which establishments would be affected. By varying the locations, the blind zones and warning times changed. As the radius from the epicenter increased, the warning times also increased. The intensity decreased as the distance from the epicenter grew. We determined which cities were most vulnerable and came up with a list of cities and their predicted warning times in this hypothetical scenario. For example, for the epicenter in northern Hayward, the cities at most risk were San Pablo, Richmond, and surrounding cities, while the cities at least risk were Gilroy, Modesto, Lincoln, and other cities within that radius. To find optimal station placement, we chose two cities with stations placed variable distances apart from each other. There was more variability in scattered stations than in dense stations, suggesting that stations placed closer together are more effective since they provide more precise warnings. We compared the algorithms ElarmS, which is currently used in the California Integrated Seismic Network (CISN), and Onsite, which is a single-sensor approach that uses one to two stations, by calculating the blind zone and warning times for each
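The blind-zone and warning-time geometry described above can be sketched directly: the warning time at a site is the S-wave arrival time minus the alert time, and the blind-zone radius is the distance at which the two are equal. The velocity and alert delay below are illustrative assumptions, not the study's fitted values:

```python
def warning_time_s(dist_km, vs=3.5, alert_delay_s=5.0):
    """Seconds of warning at a site: S-wave arrival time minus alert time."""
    return dist_km / vs - alert_delay_s

def blind_zone_radius_km(vs=3.5, alert_delay_s=5.0):
    """Distance inside which the S wave arrives before the alert (no warning)."""
    return vs * alert_delay_s

print(blind_zone_radius_km())   # 17.5 km with these assumptions
print(warning_time_s(70.0))     # 15.0 s of warning at 70 km
```

Because warning time grows linearly with distance while shaking intensity decays, the sites with the longest warnings are exactly those least at risk, which is the trade-off the study's city rankings illustrate.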
NASA Astrophysics Data System (ADS)
Sirakoulis, G. Ch.
2009-04-01
Greece is regarded as the most seismically active region of Europe and one of the most active areas in the world. However, the complexity of the available seismicity information calls for the development of ever more powerful and reliable computational tools to tackle complex problems associated with proper interpretation of the obtained geophysical information. Cellular Automata (CAs) have been shown to be a promising model for earthquake modelling, because certain aspects of earthquake dynamics, function and evolution can be simulated using the mathematical tools that CAs provide. In this study, a three-dimensional (3-d) CA dynamic system, constituted of cell charges and taking into account the recorded focal depth, that is able to simulate real earthquake activity is presented. The whole simulation process of the earthquake activity evolves with an LC analogue CA model in correspondence to well-known earthquake models. The parameterisation of the CA model in terms of potential threshold and geophysical area characteristics is achieved by applying a standard genetic algorithm (GA), which extends the model's ability to study various hypotheses concerning the seismicity of the region under consideration. As a result, the proposed model optimizes the simulation results, which are compared with the Gutenberg-Richter (GR) scaling relations derived from real data, and expands its validity to broader and different regions of increased hazard. Finally, the hardware implementation of the proposed model is also examined. The FPGA realisation of the proposed 3-d CA based earthquake simulation model exhibits distinct features that facilitate its utilisation: low cost, high speed, compactness and portability. The development and manufacture of the dedicated processor aims at its effective incorporation into an efficient seismographic system. As a result, the dedicated processor could realize the first stage of a
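The comparison against Gutenberg-Richter scaling, log10 N(>=M) = a - b*M, can be reproduced on any catalog, real or simulated, with Aki's maximum-likelihood b-value estimator. A self-contained sketch that draws a synthetic catalog with b = 1 and recovers it (illustrative only, not the paper's CA output):

```python
import math
import random

random.seed(1)
m_c = 2.0        # completeness magnitude
b_true = 1.0
beta = b_true * math.log(10.0)

# Above m_c, GR magnitudes are exponentially distributed with rate beta.
mags = [m_c + random.expovariate(beta) for _ in range(20000)]

# Aki (1965) maximum-likelihood estimator of the b-value.
mean_m = sum(mags) / len(mags)
b_est = math.log10(math.e) / (mean_m - m_c)
print(round(b_est, 2))   # close to b_true = 1.0
```

Running the same estimator on a simulated CA catalog and on the regional catalog gives a one-number comparison of how well the automaton reproduces the observed scaling.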
Modeling the Dynamic Rupture Process of the 1987 Superstition Hills Earthquake
NASA Astrophysics Data System (ADS)
Liu, Q.; Archuleta, R. J.
2012-12-01
The Mw 6.6 Superstition Hills (SH) earthquake (Nov. 24, 1987) in southern California intrigues us in terms of its rupture dynamics. Kinematic source inversion results imply a complex event consisting of three subevents (Frankel & Wennerberg, 1989; Wald et al., 1990). All of the subevents seem to have initiated at the northwestern end of the SH fault, where the SH fault intersects the conjugate Elmore Ranch (ER) fault. Moreover, a Mw 6.2 earthquake and its aftershock sequence occurred on the Elmore Ranch fault in the 12 hours before the SH earthquake. Existing studies show that the distribution of seismicity correlates with the subsurface geology (Fuis et al., 1984). Based on a comparison of the extent of each subevent with the fault trace, the fault geometry might make a significant contribution to the dynamic process. The San Jacinto (SJ) fault system certainly complicates the local stress field near the SH fault by introducing several pairs of conjugate faults at various length scales in the region. A possible scenario for the Mw 6.6 event is that a sequence of seismic events on the conjugate ER fault perturbed the already complicated initial stress field on the non-planar SH fault and triggered the SH event. Using a finite element approach (Ma & Liu, 2006), we try to create a reasonable initial stress condition for the Mw 6.6 event and model its rupture process, incorporating complex fault geometry, 3D velocity structure and frictional properties that vary (velocity weakening/strengthening) both along strike and dip. Recent results on the SH fault's creeping behavior (e.g., Wei et al., 2009) also impose constraints on its frictional and rheological material properties, which are essential in the dynamic rupture modeling, as is the nucleation phase.
NASA Astrophysics Data System (ADS)
Alvarado, Patricia; Ramos, Victor A.
2011-04-01
We investigate the seismic properties of modern crustal seismicity in the northwestern Sierras Pampeanas of the Andean retroarc region of Argentina. We modelled the complete regional broadband seismic waveforms of two crustal earthquakes that occurred in the Sierra de Velasco on 28 May 2002 and in the Sierra de Ambato on 7 September 2004. For each earthquake we obtained the seismic moment tensor inversion (SMTI) and tested for its focal depth. Our results indicate mainly thrust focal mechanism solutions of magnitudes Mw 5.8 and 6.2 and focal depths of 10 and 8 km, respectively. These results represent the largest seismicity and shallowest focal depths in the last 100 years in this region. The 2002 and 2004 SMTI solutions are consistent with previous determinations for crustal seismicity in this region that also used seismic waveform modelling. Taken together, the results for crustal seismicity of magnitudes ≥5.0 in the last 30 years are consistent with an average P-axis oriented horizontally at an azimuth of 125° and a T-axis with an azimuth of 241° and a plunge of 58°. This modern crustal seismicity and the historical earthquakes are associated with two active reverse faulting systems of opposite vergences bounding the eastern margin of the Sierra de Velasco in the south and the southwestern margin of the Sierra de Ambato in the north. Strain recorded by focal mechanisms of the larger seismicity is very consistent over this region and is in good agreement with neotectonic activity during the last 11,000 years documented by Costa (2008) and Casa et al. (in press); this shows that the dominant deformation in this part of the Sierras Pampeanas is mainly controlled by contraction. Seismic deformation related to the propagation of thrusts and long-lived shear zones in this area allows us to disregard previous proposals that suggested an extensional or sinistral regime for the geomorphic evolution since the Pleistocene.
NASA Astrophysics Data System (ADS)
Gonzalez, P. J.; Tiampo, K. F.; Palano, M.; Cannavò, F.; Fernandez, J.
2011-12-01
The Alhama de Murcia Fault (AMF) is a compound, multisegmented, right-lateral to reverse fault system and one of the longest faults in the Eastern Betics Shear Zone (southeastern Spain). In recent decades its seismogenic potential has been evaluated and earthquake maximum magnitudes have been forecast based on paleoseismic and dating data. On May 11th, 2011, a moderate (Mw 5.1) earthquake shook the region, causing 9 casualties and severe damage in the city of Lorca (Murcia region). The reported locations of the aftershock sequence did not follow any particular trend; furthermore, in-situ geological surveys did not identify any fault-slip-related ground deformation. To contribute to a better seismic hazard assessment, we need to locate and, if possible, characterize the fault-slip distribution that generated the earthquake. In this work, we detected small but significant ground deformation in the epicentral area using geodetic (GPS and satellite radar interferometry) data. The geodetic data comprised a stack of differential radar interferograms (corrected for a known subsidence contribution and with their error budget estimated), daily GPS coordinate estimates, and high-rate 1-Hz GPS data. We jointly inverted the detected static coseismic displacements (a GPS station and two ENVISAT interferograms from different tracks) for the fault plane geometry parameters using a rectangular dislocation model embedded in a homogeneous elastic half-space. The best-fitting fault plane closely follows the geologically derived AMF geometry (NE-SW strike trend, dipping ~60-70° to the NW). The obtained model geometry was then extended and divided into patches to allow a detailed analysis of the fault slip distribution pattern. The slip distribution indicates that slip occurred in a single patch with reverse and right-lateral motion (with a peak fault slip magnitude of ~9 cm). However, the modelling results also indicate that the fault slip was shallower along the centre and southwest
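Once the fault geometry is fixed and divided into patches, the slip-distribution step reduces to a linear least-squares problem: surface displacements d relate to patch slips s through a Green's-function matrix G, so d = G s. A minimal sketch with a made-up G (a real study would compute G from the rectangular-dislocation model, which is omitted here):

```python
import numpy as np

# Hypothetical Green's functions: 3 displacement observations x 2 fault patches.
G = np.array([[0.8, 0.1],
              [0.3, 0.5],
              [0.1, 0.7]])
s_true = np.array([0.09, 0.02])   # metres of slip per patch (illustrative)
d = G @ s_true                    # noise-free synthetic displacements

# Least-squares recovery of the slip on each patch.
s_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(s_hat, 3))
```

With noisy data and many patches, the same system is usually solved with smoothing or positivity constraints, but the linear structure is identical.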
NASA Astrophysics Data System (ADS)
Duputel, Z.; Jiang, J.; Jolivet, R.; Simons, M.; Rivera, L.; Ampuero, J.-P.; Riel, B.; Owen, S. E.; Moore, A. W.; Samsonov, S. V.; Ortega Culaciati, F.; Minson, S. E.
2015-10-01
The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and together they left most of the seismic gap unbroken, leaving the possibility of a future large earthquake in the region.
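The idea of accounting for Green's-function uncertainty can be illustrated in the linear-Gaussian limit: adding a forward-model covariance Cp to the data covariance Cd broadens the posterior on slip instead of letting modeling error masquerade as signal. A toy two-patch sketch, with all matrices made up (the actual study uses a far richer Bayesian formulation):

```python
import numpy as np

G = np.array([[1.0, 0.3],
              [0.2, 1.1],
              [0.7, 0.6]])          # hypothetical Green's functions
d = np.array([4.1, 5.2, 5.9])       # hypothetical observations
Cd = 0.1 * np.eye(3)                # observational noise covariance
Cp = 0.2 * np.eye(3)                # forward-model (Green's fn) uncertainty

def gaussian_posterior(G, d, C):
    """MAP slip estimate and posterior covariance under a flat prior."""
    Cm = np.linalg.inv(G.T @ np.linalg.inv(C) @ G)
    m = Cm @ G.T @ np.linalg.inv(C) @ d
    return m, Cm

m_naive, Cm_naive = gaussian_posterior(G, d, Cd)
m_full, Cm_full = gaussian_posterior(G, d, Cd + Cp)
# Including Cp inflates the posterior variance on every patch:
print(bool(np.all(np.diag(Cm_full) > np.diag(Cm_naive))))
```

The wider posterior is what allows conclusions such as "more compact than previously thought" to be stated with honest uncertainty rather than as artifacts of regularization.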
Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.
Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles
2016-02-01
In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and the relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region that are within one or two ignitions of the observed total. Prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but it nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates. PMID:26249655
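As a sketch of the simplest model class discussed here, a Poisson GLM with a log link can be fit by Newton's method in a few lines. The intensity covariate and ignition counts below are synthetic, not the Tōhoku data, and the single-covariate setup is a deliberate simplification:

```python
import numpy as np

rng = np.random.default_rng(0)
intensity = rng.uniform(4.0, 9.0, size=500)          # e.g. instrumental intensity
X = np.column_stack([np.ones_like(intensity), intensity])
beta_true = np.array([-4.0, 0.6])
y = rng.poisson(np.exp(X @ beta_true))               # synthetic ignition counts

# Newton-Raphson for the Poisson log-likelihood with canonical (log) link.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)                            # expected counts
    grad = X.T @ (y - mu)                            # score
    hess = X.T @ (X * mu[:, None])                   # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

print(np.round(beta, 1))   # near beta_true
```

A GAM replaces the linear term with a smooth function of intensity, which is what lets it absorb the nonlinearity the abstract notes the GLM must approximate via the choice of covariate.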
NASA Astrophysics Data System (ADS)
Miller, S. A.
2002-12-01
Evidence from a range of geological and geophysical observations in a variety of tectonic settings shows that: (a) earthquakes induce large-scale changes in permeability over short time scales, (b) parts of the lower crust contain large regions of interconnected, lithostatically pressured fluids, (c) earthquakes trigger large-scale fluid flow, and (d) fluid flow is episodic. This evidence strongly supports the fault-valve model for earthquakes introduced by Sibson, which describes the earthquake process as a coupled system between the development of shear stress from tectonic loading and the weakening effect of increased pore pressure in a mostly sealed fault zone. Fracturing during an earthquake through this sealed zone creates highly permeable pathways that switch the fault from a seal to a channel, allowing connectivity throughout the fault plane to trapped fluids below, and connectivity to the hydrostatically pressured country rock. Numerical models of these processes show that pore pressure changes can be very much larger than the shear stress changes associated with Coulomb stress transfer. Models also show that pore pressure variations allow slip complexity to develop easily during single rupture events and an overall evolution of complex slip along faults. Quantifying the interseismic destruction of permeable pathways and the co-seismic creation of new pathways should be an important focus of future research, because friction is understood for the most part, whereas the hydraulics of fault zones are not. This talk summarizes the evidence for, and modeling results of, the complex dynamics resulting from the fault-valve hypothesis.
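The weakening effect described here is the effective-stress form of the Coulomb failure criterion: frictional strength is mu_f*(sigma_n - p), so raising pore pressure p brings the fault to failure at constant shear stress. A minimal numerical illustration (all stress values in MPa are made up):

```python
def coulomb_stress_excess(tau, sigma_n, p, mu_f=0.6, cohesion=0.0):
    """Shear stress minus frictional strength; >= 0 means failure."""
    return tau - (cohesion + mu_f * (sigma_n - p))

# Same tectonic shear stress, different pore pressures in the sealed zone:
sealed = coulomb_stress_excess(tau=30.0, sigma_n=100.0, p=40.0)    # stable
pressured = coulomb_stress_excess(tau=30.0, sigma_n=100.0, p=85.0) # fails
print(sealed, pressured)
```

In this toy case the 45 MPa pore-pressure rise moves the fault from 6 MPa below failure to 21 MPa above it, an effect far larger than typical sub-MPa Coulomb stress-transfer changes, which is the point the numerical models make.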
Steady state, near-source models of the Parkfield, Imperial Valley, and Mexicali Valley Earthquakes
NASA Astrophysics Data System (ADS)
Mendez, A. J.; Luco, J. E.
1990-01-01
Some of the gross characteristics of the rupture processes for the 1966 Parkfield, 1979 Imperial Valley, and 1980 Mexicali Valley earthquakes are determined by waveform inversion of near-source data employing a steady state dislocation model in a layered half-space. The forward model involves a piecewise-linear rupture front moving with a constant horizontal rupture velocity on a fault of infinite length and finite width. The inferred shapes of the rupture front and of the distribution of slip as a function of depth are consistent with previous results obtained with more general models. The results show that a strong velocity pulse observed in the near-source region can be modeled as the passage of the rupture front phase, and that supershear propagation of the rupture front in the sedimentary layers of the medium provides a mechanism for the generation of the observed large amplitudes.
NASA Astrophysics Data System (ADS)
Ioki, Kei; Tanioka, Yuichiro
2016-01-01
Paleotsunami research revealed that a great earthquake occurred off eastern Hokkaido, Japan, and generated a large tsunami in the 17th century. Tsunami deposits from this event have been found far inland from the Pacific coast in eastern Hokkaido. A previous study estimated the fault model of the 17th century great earthquake by comparing the locations of lowland tsunami deposits with computed tsunami inundation areas. Tsunami deposits were also traced at a high cliff near the coast, as high as 18 m above sea level, and recent paleotsunami work has traced tsunami deposits at other high cliffs along the Pacific coast. The fault model from the previous study cannot explain the tsunami deposit data at these high cliffs. In this study, we estimated a fault model of the 17th century great earthquake that explains both the widespread lowland tsunami deposit areas and the tsunami deposit data at high cliffs near the coast. We found that the distributions of lowland tsunami deposits are mainly explained by a wide rupture area at the plate interface in the Tokachi-Oki and Nemuro-Oki segments. The tsunami deposits at high cliffs near the coast are mainly explained by a very large slip of 25 m at the shallow part of the plate interface near the trench in those segments. The total seismic moment of the 17th century great earthquake is calculated to be 1.7 × 10^22 Nm (Mw 8.8). The 2011 great Tohoku earthquake ruptured a large area off Tohoku, and a very large slip was found at the shallow part of the plate interface near the trench. The 17th century great earthquake thus had the same characteristics as the 2011 great Tohoku earthquake.
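The quoted magnitude can be cross-checked against the quoted moment with the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m:

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude from seismic moment (Hanks-Kanamori form, M0 in N*m)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# M0 = 1.7e22 N*m, the total moment quoted in the abstract:
print(round(moment_magnitude(1.7e22), 1))  # 8.8, matching the quoted Mw
```

The round-trip confirms the abstract's internal consistency: a moment of 1.7 × 10^22 N·m corresponds to Mw 8.8.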
NASA Technical Reports Server (NTRS)
Reches, Ze'ev; Schubert, Gerald; Anderson, Charles
1994-01-01
We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models are for antiplane strain in a 25-km-thick crustal layer having a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle, which lasts 160 years, a constant plate velocity v(sub p)/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models which are 10-100 times less viscous than a quartzite-like rheology. Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude
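A back-of-envelope way to see why rheology controls the postseismic transient is the Maxwell relaxation time, tau = eta/mu. With an assumed crustal shear modulus, a range of effective viscosities spanning two orders of magnitude (illustrative values, not the paper's stress- and temperature-dependent rheology) gives relaxation times from decades to a millennium:

```python
mu_pa = 3.0e10      # assumed crustal shear modulus, Pa
year_s = 3.156e7    # seconds per year

relax_yr = {eta: eta / mu_pa / year_s for eta in (1e19, 1e20, 1e21)}
for eta, tau in relax_yr.items():
    print(f"eta = {eta:.0e} Pa s -> Maxwell relaxation time ~ {tau:.0f} yr")
```

Viscosities whose relaxation time is short compared with the 160-year cycle produce the pronounced transients seen in the quartzite-like models, while stiffer rheologies relax too slowly to matter and behave elastically.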
Modeling Recent Large Earthquakes Using the 3-D Global Wave Field
NASA Astrophysics Data System (ADS)
Hjörleifsdóttir, V.; Kanamori, H.; Tromp, J.
2003-04-01
We use the spectral-element method (SEM) to accurately compute waveforms at periods of 40 s and longer for three recent large earthquakes using 3D Earth models and finite source models. The Mw 7.6, Jan 26, 2001, Bhuj, India event had a small rupture area and is well modeled at long periods with a point source. We use this event as a calibration event to investigate the effects of 3D Earth models on the waveforms. The Mw 7.9, Nov 11, 2001, Kunlun, China event exhibits large directivity (an asymmetry in the radiation pattern) even at periods longer than 200 s. We used the source time function determined by Kikuchi and Yamanaka (2001) and the overall pattern of slip distribution determined by Lin et al. to guide the waveform modeling. The large directivity is consistent with a long fault, at least 300 km, and an average rupture speed of 3±0.3 km/s. The directivity at long periods is not sensitive to variations in rupture speed along strike as long as the average rupture speed is constant; thus, local variations in rupture speed cannot be ruled out. The rupture speed is a key parameter for estimating the fracture energy of earthquakes. The Mw 8.1, March 25, 1998 event near the Balleny Islands on the Antarctic Plate exhibits large directivity in long-period surface waves, similar to the Kunlun event. Many slip models have been obtained from body waves for this earthquake (Kuge et al., 1999; Nettles et al., 1999; Antolik et al., 2000; Henry et al., 2000; Tsuboi et al., 2000). We used the slip model from Henry et al. to compute SEM waveforms for this event. The synthetic waveforms show a good fit to the data at periods from 40-200 s, but the amplitude and directivity at longer periods are significantly smaller than observed. Henry et al. suggest that this event comprised two subevents, with one triggering the other at a distance of 100 km. To explain the observed directivity, however, a significant amount of slip is required between the two subevents
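The directivity argument can be made concrete with the standard apparent-duration formula for a unilateral rupture, t(theta) = L*(1/v_r - cos(theta)/c): the pulse shortens toward the rupture direction. A sketch using the abstract's L = 300 km and v_r = 3 km/s, with an assumed surface-wave phase speed c = 4 km/s (our assumption, not a value from the study):

```python
import math

def apparent_duration_s(theta_deg, L_km=300.0, vr_km_s=3.0, c_km_s=4.0):
    """Apparent source duration at azimuth theta from the rupture direction."""
    return L_km * (1.0 / vr_km_s - math.cos(math.radians(theta_deg)) / c_km_s)

print(round(apparent_duration_s(0.0), 1))    # forward direction: short pulse
print(round(apparent_duration_s(180.0), 1))  # backward direction: long pulse
```

The sevenfold forward/backward contrast in apparent duration is why the asymmetry survives even at periods of several hundred seconds, as the abstract notes for the Kunlun event.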
Improving Earthquake-Explosion Discrimination using Attenuation Models of the Crust and Upper Mantle
Pasyanos, M E; Walter, W R; Matzel, E M; Rodgers, A J; Ford, S R; Gok, R; Sweeney, J J
2009-07-06
In the past year, we have made significant progress on developing and calibrating methodologies to improve earthquake-explosion discrimination using high-frequency regional P/S amplitude ratios. Closely-spaced earthquakes and explosions generally discriminate easily using this method, as demonstrated by recordings of explosions from test sites around the world. In relatively simple geophysical regions such as the continental parts of the Yellow Sea and Korean Peninsula (YSKP) we have successfully used a 1-D Magnitude and Distance Amplitude Correction methodology (1-D MDAC) to extend the regional P/S technique over large areas. However in tectonically complex regions such as the Middle East, or the mixed oceanic-continental paths for the YSKP the lateral variations in amplitudes are not well predicted by 1-D corrections and 1-D MDAC P/S discrimination over broad areas can perform poorly. We have developed a new technique to map 2-D attenuation structure in the crust and upper mantle. We retain the MDAC source model and geometrical spreading formulation and use the amplitudes of the four primary regional phases (Pn, Pg, Sn, Lg), to develop a simultaneous multi-phase approach to determine the P-wave and S-wave attenuation of the lithosphere. The methodology allows solving for attenuation structure in different depth layers. Here we show results for the P and S-wave attenuation in crust and upper mantle layers. When applied to the Middle East, we find variations in the attenuation quality factor Q that are consistent with the complex tectonics of the region. For example, provinces along the tectonically-active Tethys collision zone (e.g. Turkish Plateau, Zagros) have high attenuation in both the crust and upper mantle, while the stable outlying regions like the Indian Shield generally have low attenuation. In the Arabian Shield, however, we find that the low attenuation in this Precambrian crust is underlain by a high-attenuation upper mantle similar to the nearby Red
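Once amplitudes are corrected for magnitude and distance (e.g., by MDAC, or by the 2-D attenuation model described here), the P/S discriminant itself is simple: explosions are relatively P-rich, so a large log10(Pn/Lg) ratio suggests an explosion. A toy sketch with made-up amplitudes and a made-up threshold:

```python
import math

def classify(pn_amp, lg_amp, threshold=0.5):
    """Label a corrected Pn/Lg amplitude ratio; threshold is illustrative."""
    ratio = math.log10(pn_amp / lg_amp)
    return "explosion-like" if ratio > threshold else "earthquake-like"

print(classify(pn_amp=8.0, lg_amp=1.0))   # log10(8) ~ 0.9 -> explosion-like
print(classify(pn_amp=2.0, lg_amp=4.0))   # log10(0.5) < 0 -> earthquake-like
```

The point of the 2-D attenuation maps is precisely to make this corrected ratio comparable across paths: without path-specific Q corrections, a high-attenuation Lg path can mimic the P-rich signature of an explosion.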
NASA Astrophysics Data System (ADS)
Belmont, Patrick; Stout, Justin
2013-04-01
Fine sediment is routed through landscapes and channel networks in a highly unsteady and non-uniform manner, potentially experiencing deposition and re-suspension many times during transport from source to sink. Developing a better understanding of sediment routing at the landscape scale is an intriguing challenge from a modeling perspective because it requires consideration of a multitude of processes that interact and vary in space and time. From an applied perspective, an improved understanding of sediment routing is essential for predicting how conservation and restoration practices within a watershed will influence water quality, to support land and water management decisions. Two key uncertainties in predicting sediment routing at the landscape scale are 1) determining the proportion of suspended sediment that is derived from terrestrial (soil) erosion versus channel (bank) erosion, and 2) constraining the proportion of sediment that is temporarily stored and re-suspended within the channel-floodplain complex. Sediment fingerprinting that utilizes a suite of conservative and non-conservative geochemical tracers associated with suspended sediment can provide insight regarding both of these key uncertainties. Here we present a model that tracks suspended sediment with associated conservative and non-conservative geochemical tracers. The model assumes that particle residence times are described by a bimodal distribution wherein some fraction of sediment is transported through the system in a relatively short time (< 1 year) and the remainder experiences temporary storage (of variable duration) within the channel-floodplain complex. We use the model to explore the downstream evolution of non-conservative tracers under equilibrium conditions (i.e., exchange between the channel and floodplain is allowed, but no net change in channel-floodplain storage can occur) to illustrate how the process of channel-floodplain storage and re-suspension can potentially bias
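The bimodal residence-time idea can be sketched as a two-component mixture in which a non-conservative tracer decays during storage, so the surviving tracer fraction at the outlet records the storage history. All parameters below (fast fraction, transit and storage times, half-life) are illustrative assumptions, not values from the study:

```python
import math

def mean_tracer_fraction(f_fast=0.6, t_fast_yr=0.5, mean_store_yr=20.0,
                         half_life_yr=22.3):   # a Pb-210-like half-life
    """Expected surviving fraction of a decaying tracer at the outlet."""
    lam = math.log(2.0) / half_life_yr
    fast = f_fast * math.exp(-lam * t_fast_yr)
    # For exponentially distributed storage times T with mean m:
    # E[exp(-lam*T)] = 1 / (1 + lam*m)
    stored = (1.0 - f_fast) / (1.0 + lam * mean_store_yr)
    return fast + stored

print(round(mean_tracer_fraction(), 3))
```

Comparing this predicted surviving fraction with measured tracer activities is what lets the fingerprinting approach constrain the stored fraction and its residence time, the second key uncertainty named above.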
Crustal deformation, the earthquake cycle, and models of viscoelastic flow in the asthenosphere
NASA Technical Reports Server (NTRS)
Cohen, S. C.; Kramer, M. J.
1983-01-01
The crustal deformation patterns associated with the earthquake cycle can depend strongly on the rheological properties of subcrustal material. Substantial deviations from the simple patterns for a uniformly elastic earth are expected when viscoelastic flow of subcrustal material is considered. The detailed description of the deformation pattern, and in particular the surface displacements, displacement rates, strains, and strain rates, depends on the structure and geometry of the material near the seismogenic zone. The origins of some of these differences are resolved by analyzing several different linear viscoelastic models with a common finite element computational technique. The models involve strike-slip faulting and include a thin channel asthenosphere model, a model with a varying thickness lithosphere, and a model with a viscoelastic inclusion below the brittle slip plane. The calculations reveal that the surface deformation pattern is most sensitive to the rheology of the material that lies below the slip plane in a volume whose extent is a few times the fault depth. If this material is viscoelastic, the surface deformation pattern resembles that of an elastic layer lying over a viscoelastic half-space. When the thickness or breadth of the viscoelastic material is less than a few times the fault depth, the surface deformation pattern is altered and geodetic measurements are potentially useful for studying the details of subsurface geometry and structure. Distinguishing among the various models is best accomplished by making geodetic measurements not only near the fault but out to distances equal to several times the fault depth. This is where the model differences are greatest; these differences will be most readily detected shortly after an earthquake, when viscoelastic effects are most pronounced.
Geodetic measurements and kinematic modeling of the 2014 Iquique-Pisagua earthquake
NASA Astrophysics Data System (ADS)
Moreno, Marcos; Bedford, Jonathan; Li, Shaoyang; Bartsch, Mitja; Schurr, Bernd; Oncken, Onno; Klotz, Jürgen; Baez, Juan Carlos; Gonzalez, Gabriel
2015-04-01
The northern portion of the Chilean margin is considered to be a large and longstanding seismic gap based on the magnitude and time of the last great earthquake (Mw=8.8 in 1877). The central part of the gap was affected by the 2014 Iquique-Pisagua earthquake (Mw=8.1), which was preceded by an unusual series of foreshocks and transient deformation. The Integrated Plate Boundary Observatory Chile (IPOC) has extensively monitored the seismic gap with various geophysical and geodetic techniques, providing excellent temporal and spatial data coverage to analyze the kinematics of the plate interface leading up to the mainshock with unprecedented resolution. We use a viscoelastic finite-element model to investigate the subduction zone mechanisms that are responsible for the observed GPS deformation field during the interseismic, coseismic and early postseismic periods. Furthermore, we separate the relative contributions of aseismic and seismic slip to the transient deformation leading up to and following the mainshock. Our analyses of the foreshocks and continuous-GPS transient signals indicate that seismic slip dominated over aseismic slip, and that slow slip was not a factor in the build-up to the Mw=8.1 mainshock. Hence, the observed transient signals before the Iquique-Pisagua event can be explained by deformation due to foreshock seismicity, which was triggered after a Mw=6.7 event on a splay fault. High coseismic slip concentrated on a previously highly locked area that exhibited low levels of seismicity before the event. Foreshocks gradually occupied the center of the locked patch, decreasing the mechanical strength of the plate contact. The first two months of aseismic postseismic deformation show cumulative displacements of up to 10 cm around the rupture area. The early postseismic afterslip accounts for only about 20% of the coseismic seismic moment. We conclude that the foreshock activity may have decreased the effective friction on the locked patch.
Geodynamic background of the 2008 Wenchuan earthquake based on 3D visco-elastic numerical modelling
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhu, Bojing; Yang, Xiaolin; Shi, Yaolin
2016-03-01
The 2008 Wenchuan earthquake (Mw7.9) occurred in the Longmen Shan fault zone. The stress change and crustal deformation during the accumulation period are computed using 3D finite element modelling assuming visco-elastic rheology. Our results support the view that the eastward movement of the Tibetan Plateau resulting from the India-Eurasia collision is obstructed at the Longmen Shan fault zone by the strong Yangtze craton. In response, the Tibetan ductile crust thickens and accumulates at the contact between the Tibetan Plateau and the Sichuan Basin. This process implies a strong uplift of the upper crust at a rate of about 1.8 mm/a and induces a stress concentration near the bottom of the Longmen Shan fault zone. We believe that this stress concentration provides a very important geodynamic background for the 2008 Wenchuan earthquake. Using numerical experiments we find that the key factor controlling this stress concentration process is the large viscosity contrast in the middle and lower crust between the Tibetan Plateau and the Sichuan Basin: a large viscosity contrast accelerates the stress concentration in the Longmen Shan fault zone, and fast-moving lower crustal flow accelerates the stress accumulation process. During the inter-seismic period, the maximum stress accumulation rate of the eastern margin of the Tibetan Plateau is located near the bottom of the brittle upper crust of the Longmen Shan fault zone. Along the strike of the Longmen Shan fault zone, the normal stress decreases while the shear stress increases from southwest to northeast. This stress distribution explains the thrust motion in the SW and strike-slip motion in the NE during the 2008 Wenchuan earthquake.
NASA Astrophysics Data System (ADS)
Matsuzawa, T.; Shibazaki, B.; Obara, K.; Hirose, H.
2014-12-01
We numerically simulate slow slip events (SSEs) along the Eurasian-Philippine Sea plate boundary in southwestern Japan to examine the behavior of SSEs within the seismic cycles of megathrust earthquakes in a single model. In our previous study (Matsuzawa et al., 2013), long- and short-term SSEs were reproduced in the Shikoku region, considering the distribution of tremor and the configuration of the subducting plate. However, variation within a seismic cycle was not discussed, because the calculated duration was short and the modeled region was not sufficiently large to simulate seismic cycles. In this study, we model SSEs and megathrust earthquakes along the subduction zone from the Shikoku to the Tokai region in southwestern Japan. In our numerical model, the rate- and state-dependent friction law (RS-law) with cut-off velocities is adopted. We assume low effective normal stress and a negative (a-b) value in the RS-law in the long- and short-term SSE regions. We model the configuration of the plate interface with triangular elements based on Baba et al. (2006), Shiomi et al. (2008), and Ide et al. (2010). Our numerical model reproduces recurrences of long- and short-term SSEs, and the segmentation of short-term SSEs. The recurrence intervals of short-term SSEs slightly decrease at the late stage of a seismic cycle, reflecting the increase in the long-term averaged slip rate in the short-term SSE region, as found in a flat-plate model (Matsuzawa et al., 2010). This decrease is more clearly seen in the Kii region than in the Shikoku region. This difference may be attributed to the width between the short-term SSE region and the locked region of megathrust earthquakes, as the stress disturbance from transient SSEs, which occur between the locked region and the short-term SSE region (e.g. Matsuzawa et al., 2010, 2013), seems to be relatively small and infrequent due to the narrow width in the Kii region. In addition, as the plate configuration is relatively flat in the Kii region
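The rate- and state-dependent friction law at the core of such simulations can be illustrated with a minimal sketch. The version below uses the plain Dieterich form with the aging law, omitting the cut-off-velocity modification the abstract mentions; all parameter values are illustrative assumptions, not the authors' choices. It integrates the state variable through an imposed velocity step and checks that friction re-equilibrates at the steady-state value mu0 + (a - b) ln(V2/V0):

```python
# Sketch: rate-and-state friction with aging-law state evolution through
# a velocity step V1 -> V2.  Plain Dieterich form (no cut-off velocities);
# all parameters are illustrative assumptions.
import math

mu0, V0 = 0.6, 1.0e-6            # reference friction and slip rate (m/s)
a, b, dc = 0.005, 0.009, 1.0e-4  # a - b < 0: velocity weakening

def friction(V, theta):
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / dc)

def run_velocity_step(V1, V2, t_step=50.0, t_end=200.0, dt=1.0e-3):
    """Integrate d(theta)/dt = 1 - V*theta/dc across a step V1 -> V2."""
    theta = dc / V1                      # start at steady state for V1
    t, mu_end = 0.0, friction(V1, theta)
    while t < t_end:
        V = V1 if t < t_step else V2
        theta += dt * (1.0 - V * theta / dc)
        t += dt
        mu_end = friction(V, theta)
    return mu_end

mu_final = run_velocity_step(1.0e-6, 1.0e-5)
mu_ss = mu0 + (a - b) * math.log(1.0e-5 / V0)   # expected steady state at V2
print(mu_final, mu_ss)
```

Because (a - b) is negative here, the steady-state friction at the faster rate is lower than mu0, which is the velocity-weakening condition that makes SSE-like behavior possible in such models.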
Equivalent strike-slip earthquake cycles in half-space and lithosphere-asthenosphere earth models
Savage, J.C.
1990-01-01
By virtue of the images used in the dislocation solution, the deformation at the free surface produced throughout the earthquake cycle by slippage on a long strike-slip fault in an Earth model consisting of an elastic plate (lithosphere) overlying a viscoelastic half-space (asthenosphere) can be duplicated by prescribed slip on a vertical fault embedded in an elastic half-space. Inversion of 1973-1988 geodetic measurements of deformation across the segment of the San Andreas fault in the Transverse Ranges north of Los Angeles for the half-space equivalent slip distribution suggests no significant slip on the fault above 30 km and a uniform slip rate of 36 mm/yr below 30 km. One equivalent lithosphere-asthenosphere model would have a 30-km thick lithosphere and an asthenosphere relaxation time greater than 33 years, but other models are possible.
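The half-space end-member of the model family discussed above has a well-known closed form: for a long strike-slip fault locked above depth D and slipping at rate s below it, the interseismic fault-parallel surface velocity is v(x) = (s/pi) arctan(x/D) (Savage and Burford, 1973). The sketch below evaluates this expression with numbers loosely echoing the abstract's 36 mm/yr deep slip rate and 30 km locking depth; it is an illustration, not a fit to the geodetic data:

```python
# Sketch: elastic half-space interseismic velocity profile across a
# strike-slip fault, v(x) = (s/pi) * arctan(x/D).  Values echo the
# abstract's numbers for illustration only.
import math

def interseismic_velocity(x_km, slip_rate_mm_yr=36.0, locking_depth_km=30.0):
    """Fault-parallel surface velocity (mm/yr) at distance x from the fault."""
    return (slip_rate_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

profile = [interseismic_velocity(x) for x in (-150, -30, 0, 30, 150)]
print([round(v, 1) for v in profile])
```

The profile is antisymmetric about the fault and approaches +/- s/2 far from it, which is why the abstract emphasizes measurements out to several fault depths.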
NASA Astrophysics Data System (ADS)
Yaghmaei-Sabegh, Saman
2015-10-01
This paper presents new, simple empirical models for predicting the frequency content of ground-motion records, addressing the limitations on the usable magnitude range of previous studies. Three period values are used in the analysis to describe the frequency content of earthquake ground motions: the average spectral period (Tavg), the mean period (Tm), and the smoothed spectral predominant period (T0). The proposed models predict these scalar indicators as functions of magnitude, closest site-to-source distance and local site conditions. Three site classes (rock, stiff soil, and soft soil) have been considered in the analysis. The results of the proposed relationships have been compared with those of other published models. It has been found that the resulting regression equations can be used to predict scalar frequency-content estimators over a wide range of magnitudes, including magnitudes below 5.5.
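One of the indicators named above, the mean period Tm, has a standard definition (Rathje et al., 1998): Tm = sum(Ci^2/fi) / sum(Ci^2), taken over Fourier amplitudes Ci at frequencies 0.25 Hz <= fi <= 20 Hz. A sketch of the computation is below; the synthetic two-tone record is an assumption used purely to exercise the function, not data from the paper:

```python
# Sketch: mean period T_m of an acceleration record,
# T_m = sum(C_i^2 / f_i) / sum(C_i^2) over 0.25-20 Hz (Rathje et al., 1998).
import numpy as np

def mean_period(acc, dt):
    """Mean period T_m (s) of a time series sampled at interval dt."""
    amps = np.abs(np.fft.rfft(acc))
    freqs = np.fft.rfftfreq(len(acc), d=dt)
    band = (freqs >= 0.25) & (freqs <= 20.0)
    c2 = amps[band] ** 2
    return float(np.sum(c2 / freqs[band]) / np.sum(c2))

# Synthetic record: equal-amplitude tones at 1 Hz and 4 Hz (assumed input).
dt = 0.01
t = np.arange(0.0, 40.0, dt)                 # 4000 samples, exact FFT bins
acc = np.sin(2 * np.pi * 1.0 * t) + np.sin(2 * np.pi * 4.0 * t)
print(round(mean_period(acc, dt), 3))
```

For this two-tone input the expected value is (1/1 + 1/4)/2 = 0.625 s, which the function reproduces.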
Strong ground motions of the 2009 L'Aquila earthquake: modeling and scenario simulations
NASA Astrophysics Data System (ADS)
Gallovič, F.; Ameri, G.; Pacor, F.
2012-04-01
On April 6, 2009 a Mw 6.3 earthquake struck the city of L'Aquila, one of the largest urban centers in the Abruzzo region (Central Italy), causing a large number of casualties and damage in the town and surrounding villages. The earthquake was recorded by several digital stations of the Italian Strong-Motion Network. The collected records represent a unique dataset in Italy in terms of number and quality of records, azimuthal coverage and presence of near-fault recordings. Soon after the earthquake the damage in the epicentral area was also assessed, providing macroseismic intensity estimates, in MCS scale, for 314 localities (I ≥ 5). Despite the moderate magnitude of the L'Aquila earthquake, the strong-motion and macroseismic data in the vicinity of the fault depict a large variability of the observed shaking and damage. In this study we present broadband (0.1-10 Hz) ground motion simulations of the 2009 L'Aquila earthquake to be used for engineering purposes in the region. We utilize the Hybrid Integral-Composite (HIC; Gallovič and Brokešová, 2007) approach, based on a k-squared kinematic rupture model, combining low-frequency coherent and high-frequency incoherent source radiation and providing omega-squared source spectral decay. We first model the recorded seismograms in order to calibrate source parameters and to assess the capabilities of the broadband simulation model. To this end, the position and slip amount of the two main asperities, the time delay of the largest asperity and the rupture velocity distribution on the fault are constrained based on the low-frequency slip inversion result. Synthetic Green's functions are calculated in a 1D-layered crustal model including 1D soil profiles to account for site-specific response (where available). The goodness-of-fit is evaluated in the time (peak values and duration) and frequency domains (elastic and inelastic response spectra) and shows a remarkable agreement between observed and simulated data at most of the stations.
Isospin nonconserving interaction in the T =1 analogue states of the mass-70 region
NASA Astrophysics Data System (ADS)
Kaneko, K.; Sun, Y.; Mizusaki, T.; Tazaki, S.
2014-03-01
Mirror energy differences (MEDs) and triplet energy differences (TEDs) in the T =1 analogue states are important probes of isospin-symmetry breaking. Inspired by the recent spectroscopic data of 66Se, we investigate these quantities for A =66-78 nuclei with large-scale shell-model calculations. For the first time, we find clear evidence suggesting that the isospin nonconserving (INC) nuclear force has a significant effect for the upper fp shell region. Detailed analysis shows that, in addition to the INC force, the electromagnetic spin-orbit interaction plays an important role for the large, negative MED in A =66 and 70 and the multipole Coulomb term contributes to the negative TED in all the T =1 triplet nuclei. The INC force and its strength needed to reproduce the experimental data are compared with those from the G-matrix calculation using the modern charge-dependent nucleon-nucleon forces.
The FrPNC experiment at TRIUMF: Atomic parity non-conservation in francium
NASA Astrophysics Data System (ADS)
Aubin, S.; Gomez, E.; Behr, J. A.; Pearson, M. R.; Sheng, D.; Zhang, J.; Collister, R.; Melconian, D.; Flambaum, V. V.; Sprouse, G. D.; Orozco, L. A.; Gwinner, G.
2012-09-01
The FrPNC collaboration has begun the construction of an on-line laser cooling and trapping apparatus at TRIUMF to measure atomic parity non-conservation (PNC) and the nuclear anapole moment in a string of artificially produced francium isotopes. Atomic PNC experiments provide unique high precision tests of the electroweak sector of the Standard Model at very low energies. Furthermore, precision measurements of spin-dependent atomic PNC can determine nuclear anapole moments and probe the weak force within the nucleus. Francium is an excellent candidate for precision measurements of atomic PNC due to its simple electronic structure and enhanced parity violation: both the optical PNC and anapole moment signals are expected to be over an order of magnitude larger than in cesium.
NASA Astrophysics Data System (ADS)
Kevrekidis, P. G.
2014-01-01
In a recent publication [Galley, Phys. Rev. Lett. 110, 174301 (2013), 10.1103/PhysRevLett.110.174301], Galley proposed an initial value problem formulation of Hamilton's principle that enables consideration of dissipative systems. Here we explore this formulation at the level of field theories with infinite degrees of freedom. In particular, we illustrate that it affords a previously unavailable and appealing, as well as broadly relevant, possibility: namely, to generalize the popular collective coordinate or variational method to open systems, i.e., nonconservative ones. To showcase the relevance and validity of the method we explore two case examples from the timely area of PT-symmetric variants of field theories, in this case for a sine-Gordon and for a ϕ4 model.
A measurement of parity non-conserving optical rotation in atomic lead
NASA Astrophysics Data System (ADS)
Phipp, S. J.; Edwards, N. H.; Baird, P. E. G.; Nakayama, S.
1996-05-01
We report a measurement of parity non-conserving (PNC) optical rotation in the vicinity of the 1.279 μm magnetic dipole transition in atomic lead. We obtain a value for the conventional PNC optical-rotation parameter, with limits on the nuclear spin-dependent contribution set by the anapole constant. The experimental results, when combined with the relevant atomic calculations, lead to a value for the mass of the Z boson or, alternatively, place a limit on physics beyond the standard model through the isospin-conserving parameter, S.
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata; Basili, Roberto; Barba, Salvatore
2016-07-01
This study presents a series of self-correcting models that are obtained by integrating information about seismicity and fault sources in Italy. Four versions of the stress release model are analyzed, in which the evolution of the system over time is represented by the level of strain, moment, seismic energy, or energy scaled by the moment. We carry out the analysis on a regional basis by subdividing the study area into eight tectonically coherent regions. In each region, we reconstruct the seismic history and statistically evaluate the completeness of the resulting seismic catalog. Following the Bayesian paradigm, we apply Markov chain Monte Carlo methods to obtain parameter estimates and a measure of their uncertainty expressed by the simulated posterior distribution. The comparison of the four models through the Bayes factor and an information criterion provides evidence (to different degrees depending on the region) in favor of the stress release model based on the energy and the scaled energy. Therefore, among the quantities considered, this turns out to be the measure of the size of an earthquake to use in stress release models. At any instant, the time to the next event turns out to follow a Gompertz distribution, with a shape parameter that depends on time through the value of the conditional intensity at that instant. In light of this result, the issue of forecasting is tackled through both retrospective and prospective approaches. Retrospectively, the forecasting procedure is carried out on the occurrence times of the events recorded in each region, to determine whether the stress release model reproduces the observations used in the estimation procedure. Prospectively, the estimates of the time to the next event are compared with the dates of the earthquakes that occurred after the end of the learning catalog, in the 2003-2012 decade.
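The stress release model described above has a conditional intensity that grows with tectonic loading and drops when an event releases accumulated "stress" (strain, moment, or energy). A minimal simulation sketch using Ogata's thinning method is below; the parameter values and the exponential event-size distribution are illustrative assumptions, not the estimates obtained by Varini et al.:

```python
# Sketch: simulate a stress release process with conditional intensity
# lambda(t) = exp(alpha + beta*(rho*t - S(t))), S(t) = released stress,
# by Ogata's thinning method.  All parameters are illustrative.
import math
import random

def simulate_stress_release(alpha, beta, rho, t_end, seed=1):
    """Return event times of a simulated stress release process."""
    rng = random.Random(seed)
    t, released, events = 0.0, 0.0, []
    while t < t_end:
        # Between events the intensity only grows with time, so over a
        # short lookahead window it is bounded by its value at the end.
        dt_max = 1.0
        lam_bound = math.exp(alpha + beta * (rho * (t + dt_max) - released))
        w = rng.expovariate(lam_bound)
        if w > dt_max:
            t += dt_max              # no candidate event in this window
            continue
        t += w
        if t >= t_end:
            break
        lam = math.exp(alpha + beta * (rho * t - released))
        if rng.random() <= lam / lam_bound:    # accept candidate
            events.append(t)
            released += rng.expovariate(1.0)   # event size ~ Exp(1), assumed
    return events

events = simulate_stress_release(alpha=-2.0, beta=1.0, rho=1.0, t_end=200.0)
print(len(events))
```

In the long run the mean event rate balances the loading rate divided by the mean event size, so with these parameters roughly one event per unit time is expected.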
On Chinese earthquake history - An attempt to model an incomplete data set by point process analysis
Lee, W.H.K.; Brillinger, D.R.
1979-01-01
Since the 1950s, the Academia Sinica in Peking, People's Republic of China, has carried out extensive research on Chinese earthquake history. With a historical record dating back some 3000 years, a wealth of information on Chinese earthquakes exists. Despite this monumental undertaking by the Academia Sinica, much work is still necessary to correct the existing earthquake data for historical changes in population, customs, modes of communication, and dynasties. In this paper we report on the status of our investigation of Chinese earthquake history and present some preliminary results. By applying point process analysis to earthquakes in 'Central China', we found suggestions of (1) lower earthquake activity at intervals of about 175 years and 375 years, and (2) higher earthquake activity at an interval of about 300 years. © 1979 Birkhäuser Verlag.
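The kind of point-process spectral analysis used to detect such periodicities can be sketched simply: bin the event times into counts, compute the periodogram of the de-meaned counts, and pick the dominant period. The synthetic catalog below, with a built-in ~300-unit cycle, is an assumption used only to demonstrate the procedure; it is not the Chinese catalog:

```python
# Sketch: detect a periodicity in a point process via the periodogram
# of binned counts.  The synthetic catalog is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def dominant_period(event_times, t_end, bin_width):
    """Period (in time units) of the strongest non-zero-frequency peak."""
    counts = np.histogram(event_times, bins=int(t_end / bin_width),
                          range=(0.0, t_end))[0].astype(float)
    counts -= counts.mean()                  # remove the mean rate
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(len(counts), d=bin_width)
    k = 1 + np.argmax(power[1:])             # skip the zero-frequency term
    return 1.0 / freqs[k]

# Inhomogeneous Poisson catalog: rate modulated with a 300-year period.
t_end, dt = 3000.0, 1.0
t_grid = np.arange(0.0, t_end, dt)
rate = 0.2 * (1.0 + 0.8 * np.sin(2 * np.pi * t_grid / 300.0))
events = t_grid[rng.random(t_grid.size) < rate * dt]

print(dominant_period(events, t_end, bin_width=10.0))
```

With the modulation built in at 300 years, the periodogram peak recovers that cycle, illustrating how the intervals quoted in the abstract can be read off a point-process spectrum.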
NASA Astrophysics Data System (ADS)
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has become a very active research direction, especially with the availability of very high resolution (VHR) satellite images: quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more challenging when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify earthquake damage. Textural as well as spectral information was used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
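The Haralick features mentioned above are derived from the grey-level co-occurrence matrix (GLCM). A plain-NumPy sketch of the GLCM and its contrast feature is below; the 4x4 test image and the (0, 1) pixel offset are assumptions used only to illustrate the computation, not the paper's data or settings:

```python
# Sketch: grey-level co-occurrence matrix (GLCM) and the Haralick
# "contrast" feature, in plain NumPy.  Test image and offset are assumed.
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Symmetric, normalised co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            i, j = image[r, c], image[r + dr, c + dc]
            m[i, j] += 1.0
            m[j, i] += 1.0          # count each pair in both directions
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i,j) of p_ij * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)

p = glcm(img, levels=4)
print(round(contrast(p), 4))
```

Repeating this over sliding windows and several offsets (directions) yields the per-pixel texture features the abstract describes; in practice a library such as scikit-image would replace the explicit loops.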
Non-conservative perturbations of homoclinic snaking scenarios
NASA Astrophysics Data System (ADS)
Knobloch, Jürgen; Vielitz, Martin
2016-01-01
Homoclinic snaking refers to the continuation of homoclinic orbits to an equilibrium E near a heteroclinic cycle connecting E and a periodic orbit P. Typically homoclinic snaking appears in one-parameter families of reversible, conservative systems. Here we discuss perturbations of this scenario which are both non-reversible and non-conservative. We treat this problem analytically in the spirit of the work [3]. The continuation of homoclinic orbits happens with respect to both the original continuation parameter μ and the perturbation parameter λ. The continuation curves are parametrised by the dwelling time L of the homoclinic orbit near P. It turns out that λ(L) tends to zero while the μ vs. L diagram displays isolas or criss-cross snaking curves in a neighbourhood of the original snakes-and-ladders structure. In the course of our studies we adapt both Fenichel coordinates near P and the analysis of Shilnikov problems near P to the present situation.
Rare nonconservative LRP6 mutations are associated with metabolic syndrome.
Singh, Rajvir; Smith, Emily; Fathzadeh, Mohsen; Liu, Wenzhong; Go, Gwang-Woong; Subrahmanyan, Lakshman; Faramarzi, Saeed; McKenna, William; Mani, Arya
2013-09-01
A rare mutation in LRP6 has been shown to underlie autosomal dominant coronary artery disease (CAD) and metabolic syndrome in an Iranian kindred. The prevalence and spectrum of LRP6 mutations in the disease population of the United States is not known. Two hundred white Americans with early onset familial CAD and metabolic syndrome and 2,000 healthy Northern European controls were screened for nonconservative mutations in LRP6. Three novel mutations were identified, which cosegregated with the metabolic traits in the kindreds of the affected subjects and none in the controls. All three mutations reside in the second propeller domain, which plays a critical role in ligand binding. Two of the mutations substituted highly conserved arginines in the second YWTD domain and the third substituted a conserved glycosylation site. The functional characterization of one of the variants showed that it impairs Wnt signaling and acts as a loss of function mutation. PMID:23703864
Generalized Helmholtz Conditions for Non-Conservative Lagrangian Systems
NASA Astrophysics Data System (ADS)
Bucataru, Ioan; Constantinescu, Oana
2015-12-01
In this paper we provide generalized Helmholtz conditions, in terms of a semi-basic 1-form, which characterize when a given system of second order ordinary differential equations is equivalent to the Lagrange equations, for some given arbitrary non-conservative forces. For the particular cases of dissipative or gyroscopic forces, these conditions, when expressed in terms of a multiplier matrix, reduce to those obtained in Mestdag et al. (Differential Geom. Appl. 29(1), 55-72, 2011). When the involved geometric structures are homogeneous with respect to the fibre coordinates, we show how one can further simplify the generalized Helmholtz conditions. We provide examples where the proposed generalized Helmholtz conditions, expressed in terms of a semi-basic 1-form, can be integrated and the corresponding Lagrangian and Lagrange equations can be found.
Foxall, W.
1992-11-01
Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the Central Creeping Section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.
NASA Astrophysics Data System (ADS)
Croissant, Thomas; Lague, Dimitri; Davy, Philippe; Steer, Philippe
2016-04-01
In active mountain ranges, large earthquakes (Mw > 5-6) trigger numerous landslides that impact river dynamics. These landslides deliver sudden, localized piles of sediment that are then eroded and transported along the river network, causing downstream changes in river geometry, transport capacity and erosion efficiency. The progressive removal of landslide material has implications for downstream hazard management and for understanding landscape dynamics at the timescale of the seismic cycle. The export time of landslide-derived sediments after large-magnitude earthquakes has been studied from suspended load measurements, but a full understanding of the total process, including the coupling between sediment transfer and channel geometry change, is still lacking. The transport of small sediment pulses has been studied in the context of river restoration, but the magnitude of sediment pulses generated by landslides may make the problem different. Here, we study the export of large volumes (>10^6 m^3) of sediment with the 2D hydro-morphodynamic model Eros. This model uses a new hydrodynamic module that resolves a reduced form of the Saint-Venant equations with a particle method. It is coupled with a sediment transport model and a lateral and vertical erosion model. Eros accounts for the complex feedbacks between sediment transport and fluvial geometry, with a stochastic description of the floods experienced by the river. Moreover, it is able to reproduce several features deemed necessary to study the evacuation of large sediment pulses, such as river regime modification (single-thread to multi-thread), river avulsion and aggradation, floods and bank erosion. Using a synthetic and simple topography we first present how granulometry, landslide volume and geometry, channel slope and flood frequency influence 1) the dominance of pulse advection vs. diffusion during its evacuation, 2) the pulse export time and 3) the remaining volume of sediment in the catchment
Modelling coseismic displacements during the 1997 Umbria-Marche earthquake (central Italy)
NASA Astrophysics Data System (ADS)
Hunstad, Ingrid; Anzidei, Marco; Cocco, Massimo; Baldi, Paolo; Galvani, Alessandro; Pesci, Arianna
1999-11-01
We propose a dislocation model for the two normal faulting earthquakes that struck the central Apennines (Umbria-Marche, Italy) on 1997 September 26 at 00:33 (Mw 5.7) and 09:40 GMT (Mw 6.0). We fit coseismic horizontal and vertical displacements resulting from GPS measurements at several monuments of the IGMI (Istituto Geografico Militare Italiano) by means of a dislocation model in an elastic, homogeneous, isotropic half-space. Our best-fitting model consists of two normal faults whose mechanisms and seismic moments have been taken from CMT solutions; it is consistent with other seismological and geophysical observations. The first fault, which is 6 km long and 7 km wide, ruptured during the 00:33 event with a unilateral rupture towards the SE and an average slip of 27 cm. The second fault is 12 km long and 10 km wide, and ruptured during the 09:40 event with a nearly unilateral rupture towards the NW. Slip distribution on this second fault is non-uniform and is concentrated in its SE portion (maximum slip is 65 cm), where rupture initiated. The 00:33 fault is deeper than the 09:40 one: the top of the first rupture is deeper than 1.7 km, while the top of the second is 0.6 km deep. In order to interpret the observed epicentral subsidence we have also considered the contributions of two further moderate-magnitude earthquakes that occurred on 1997 October 3 (Mw 5.2) and 6 (Mw 5.4), immediately before the GPS survey, and were located very close to the 09:40 event of September 26. We compare the pattern of vertical displacements resulting from our forward modelling of GPS data with that derived from SAR interferograms: the fit to SAR data is very good, confirming the reliability of the proposed dislocation model.
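Once the fault geometry is fixed, the slip-estimation step inside dislocation modelling of geodetic data is linear: predicted displacements are d = G m, with G a matrix of elastic Green's functions and m the slip on each fault patch, so m is found by least squares. The sketch below illustrates only that linear step; the Green's function matrix is synthetic random numbers standing in for values an elastic half-space dislocation code would supply, and the slip vector is illustrative:

```python
# Sketch: least-squares slip estimation d = G m, the linear step inside
# dislocation modelling of GPS displacements.  G is a synthetic stand-in
# for half-space Green's functions; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

n_obs, n_patches = 30, 6
G = rng.normal(size=(n_obs, n_patches))               # stand-in Green's fns
m_true = np.array([0.27, 0.27, 0.65, 0.4, 0.2, 0.1])  # slip (m), assumed
d = G @ m_true + rng.normal(scale=0.001, size=n_obs)  # "observed" data + noise

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)         # least-squares slip
rms = float(np.sqrt(np.mean((d - G @ m_est) ** 2)))
print(np.round(m_est, 3), rms)
```

In a real application the nonlinear parameters (fault position, dip, dimensions) are searched separately, with this linear solve nested inside, and non-negativity or smoothing constraints are usually added.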
Solomon Islands 2007 Tsunami Near-Field Modeling and Source Earthquake Deformation
NASA Astrophysics Data System (ADS)
Uslu, B.; Wei, Y.; Fritz, H.; Titov, V.; Chamberlin, C.
2008-12-01
The earthquake of 1 April 2007 left behind momentous footage of crustal rupture and tsunami impact along the coastline of the Solomon Islands (Fritz and Kalligeris, 2008; Taylor et al., 2008; McAdoo et al., 2008; PARI, 2008), while undisturbed tsunami signals were recorded at nearby deep-ocean tsunameters and coastal tide stations. These multi-dimensional measurements provide valuable datasets with which to tackle the challenging aspects of the tsunami source directly by inversion from tsunameter records in real time (available in a time frame of minutes), and its relationship with the seismic source derived either from seismometer records (available in a time frame of hours or days) or from crust rupture measurements (available in a time frame of months or years). The tsunami measurements in the near field, including the complex vertical crust motion and tsunami runup, are particularly critical for interpreting the tsunami source. This study develops high-resolution inundation models for the Solomon Islands to compute the near-field tsunami impact. Using these models, this research compares the tsunameter-derived tsunami source with the seismic-derived earthquake sources from multiple perspectives, including vertical uplift and subsidence, tsunami runup heights and their distributional pattern among the islands, deep-ocean tsunameter measurements, and near- and far-field tide gauge records. The present study stresses the significance of the tsunami magnitude, source location, bathymetry and topography in accurately modeling the generation, propagation and inundation of the tsunami waves. It highlights the accuracy and efficiency of the tsunameter-derived tsunami source in modeling the near-field tsunami impact. As the high-resolution models developed in this study will become part of NOAA's tsunami forecast system, these results also suggest expanding the system for potential applications in tsunami hazard assessment, search and rescue operations
NASA Astrophysics Data System (ADS)
Powell, C. A.; Vlahovic, G.; Bodin, P.; Horton, S.
2001-12-01
A three-dimensional P wave velocity model has been constructed for the crust in the vicinity of the Mw = 7.7 January 26 Bhuj, India, earthquake using aftershock data obtained by CERI field teams. Aftershocks were recorded by 8 portable, digital K2 seismographs (the MAEC/ISTAR network) and by a continuously recording Guralp CMG40TD broad-band seismometer. Station spacing is roughly 30 km. The network was in place for 18 days and recorded ground motions from about 2000 aftershocks located within about 100 km of all stations. The 3-D velocity model is based upon an initial subset of 461 earthquakes with 2848 P wave arrivals. The initial 1-D velocity model was determined using VELEST and the 3-D model was determined using the nonlinear travel time tomography method of Benz et al. [1996]. Block size was set at 2 by 2 by 2 km. A 45% reduction in RMS travel time residuals was obtained after 10 iterations holding hypocenters fixed. We imaged velocity anomalies in the range -2 to 4%. Low velocities were found in the upper 6 km and the anomalies follow surface features such as the Rann of Kutch. High velocity features were imaged at depth and are associated with the aftershock hypocenters. High crustal velocities are present at depths exceeding 20 km with the exception of the crust below the Rann of Kutch. The imaged velocity anomaly pattern does not change when different starting models are used and when hypocenters are relocated using P wave arrivals only. The analysis will be extended to an expanded data set of 941 aftershocks.
Inverse kinematic and forward dynamic models of the 2002 Denali fault earthquake, Alaska
Oglesby, D.D.; Dreger, Douglas S.; Harris, R.A.; Ratchkovski, N.; Hansen, R.
2004-01-01
We perform inverse kinematic and forward dynamic models of the M 7.9 2002 Denali fault, Alaska, earthquake to shed light on the rupture process and dynamics of this event, which took place on a geometrically complex fault system in central Alaska. We use a combination of local seismic and Global Positioning System (GPS) data for our kinematic inversion and find that the slip distribution of this event is characterized by three major asperities on the Denali fault. The rupture nucleated on the Susitna Glacier thrust fault, and after a pause, propagated onto the strike-slip Denali fault. Approximately 216 km to the east, the rupture abandoned the Denali fault in favor of the more southwesterly directed Totschunda fault. Three-dimensional dynamic models of this event indicate that the abandonment of the Denali fault for the Totschunda fault can be explained by the Totschunda fault's more favorable orientation with respect to the local stress field. However, a uniform tectonic stress field cannot explain the complex slip pattern in this event. We also find that our dynamic models predict discontinuous rupture from the Denali to Totschunda fault segments. Such discontinuous rupture helps to qualitatively improve our kinematic inverse models. Two principal implications of our study are (1) a combination of inverse and forward modeling can bring insight into earthquake processes that are not possible with either technique alone, and (2) the stress field on geometrically complex fault systems is most likely not due to a uniform tectonic stress field that is resolved onto fault segments of different orientations; rather, other forms of stress heterogeneity must be invoked to explain the observed slip patterns.
Shallow low-velocity zone of the San Jacinto fault from local earthquake waveform modelling
NASA Astrophysics Data System (ADS)
Yang, Hongfeng; Zhu, Lupei
2010-10-01
We developed a method to determine the depth extent of the low-velocity zone (LVZ) associated with a fault zone (FZ) using S-wave precursors from local earthquakes. The precursors are diffracted S waves around the edges of the LVZ, and their amplitudes relative to the direct S waves are sensitive to the LVZ depth. We applied the method to data recorded by three temporary arrays across three branches of the San Jacinto FZ. The FZ dip was constrained by differential traveltimes of P waves between stations on the two sides of the FZ. Other FZ parameters (width and velocity contrast) were determined by modelling waveforms of direct and FZ-reflected P and S waves. We found that the LVZ of the Buck Ridge fault branch has a width of ~150 m with a 30-40 per cent reduction in Vp and a 50-60 per cent reduction in Vs. The fault dips 70 +/- 5° to the southwest and its LVZ extends only to 2 +/- 1 km in depth. The LVZ of the Clark Valley fault branch has a width of ~200 m with a 40 per cent reduction in Vp and a 50 per cent reduction in Vs. The Coyote Creek branch is nearly vertical and has an LVZ ~150 m in width with a 25 per cent reduction in Vp and a 50 per cent reduction in Vs. The LVZs of these three branches are not centred at the surface fault trace but are located to their northeast, indicating asymmetric damage during earthquakes.
Block model of western US kinematics from inversion of geodetic, fault slip, and earthquake data
NASA Astrophysics Data System (ADS)
McCaffrey, R.
2003-12-01
The active deformation of the southwestern US (30° to 41° N) is represented by a finite number of rotating, elastic spherical caps. Horizontal GPS velocities (1583), fault slip rates (94), and earthquake slip vectors (116) are inverted for block angular velocities, locking on block-bounding faults, and the rotation of individual GPS velocity fields relative to North America. GPS velocities are modeled as a combination of rigid block rotations and elastic strain rates resulting from interactions of adjacent blocks across bounding faults. The resulting Pacific - North America pole is indistinguishable from that of Beavan et al. (2001) and satisfies spreading in the Gulf of California and earthquake slip vectors in addition to GPS. The largest blocks, the Sierra Nevada - Great Valley and the eastern Basin and Range, show internal strain rates, after removing the elastic component, of only a few nanostrain/a, demonstrating long term approximately rigid behavior. Most fault slip data are satisfied except that the San Jacinto fault appears to be significantly faster than inferred from geology while the Coachella and San Bernardino segments of the San Andreas fault are slower, suggesting the San Andreas system is straightening out in Southern California. Vertical axis rotation rates for most blocks are clockwise and in magnitude more like the Pacific than North America. One exception is the eastern Basin and Range (242° E to 248° E) which rotates slowly anticlockwise about a pole offshore Baja.
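The rigid-rotation component of such block models is simply v = Ω × r for an angular velocity vector through an Euler pole. A minimal sketch of that kinematic building block, using a purely hypothetical pole, rotation rate, and site (not the paper's estimated Pacific-North America pole):

```python
import numpy as np

EARTH_RADIUS_M = 6.371e6

def sph_to_cart(lat_deg, lon_deg):
    """Unit position vector for a point on the sphere (ECEF-style axes)."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def block_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Velocity (m/yr, Cartesian frame) of a site rotating rigidly about an
    Euler pole: v = omega x r. Elastic strain near faults is ignored here."""
    omega = (np.radians(rate_deg_per_myr) / 1e6) * sph_to_cart(pole_lat, pole_lon)
    r = EARTH_RADIUS_M * sph_to_cart(site_lat, site_lon)
    return np.cross(omega, r)

# Hypothetical pole at 50N, 75W rotating 0.75 deg/Myr; site near Los Angeles.
v = block_velocity(50.0, -75.0, 0.75, 34.0, -118.0)
speed_mm_per_yr = np.linalg.norm(v) * 1000.0
```

The velocity is always tangent to the sphere, and its magnitude is bounded by ωR (about 83 mm/yr for these illustrative values).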
Thrust-type subduction-zone earthquakes and seamount asperities: A physical model for seismic rupture
Cloos, M.
1992-07-01
A thrust-type subduction-zone earthquake of Mw 7.6 ruptures an area of ~6,000 km², has a seismic slip of ~1 m, and is nucleated by the rupture of an asperity ~25 km across. A model for thrust-type subduction-zone seismicity is proposed in which basaltic seamounts jammed against the base of the overriding plate act as strong asperities that rupture by stick-slip faulting. A Mw 7.6 event would correspond to the near-basal rupture of a ~2-km-tall seamount. The base of the seamount is surrounded by a low shear-strength layer composed of subducting sediment that also deforms between seismic events by distributed strain (viscous flow). Planar faults form in this layer as the seismic rupture propagates out of the seamount at speeds of kilometers per second. The faults in the shear zone are disrupted after the event by aseismic, slow viscous flow of the subducting sediment layer. Consequently, the extent of fault rupture varies for different earthquakes nucleated at the same seamount asperity because new fault surfaces form in the surrounding subducting sediment layer during each fast seismic rupture.
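The quoted magnitude, rupture area, and slip are tied together by the standard seismic moment relation M0 = μAd and Mw = (2/3)(log10 M0 − 9.1). A quick check of the abstract's numbers, where the crustal rigidity μ is our assumption (the abstract does not state one):

```python
import math

def moment_magnitude(area_km2, slip_m, rigidity_pa=5e10):
    """Moment magnitude from seismic moment M0 = mu * A * d (M0 in N*m):
    Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = rigidity_pa * (area_km2 * 1e6) * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# The abstract's ~6,000 km^2 area and ~1 m slip; mu = 50 GPa is assumed.
mw = moment_magnitude(6000.0, 1.0)
```

With μ in the typical crustal range of 30-50 GPa this yields Mw ≈ 7.4-7.6, consistent with the Mw 7.6 stated above.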
Rodgers, A
2000-12-28
This is an informal report on preliminary efforts to investigate earthquake focal mechanisms and earth structure in the Anatolian (Turkish) Plateau. Seismic velocity structure of the crust and upper mantle and earthquake focal parameters for events in the Anatolian Plateau are estimated from complete regional waveforms. Focal mechanisms, depths and seismic moments of moderately large crustal events are inferred from long-period (40-100 seconds) waveforms and compared with focal parameters derived from global teleseismic data. Using shorter periods (10-100 seconds) we estimate the shear and compressional velocity structure of the crust and uppermost mantle. Results are broadly consistent with previous studies and imply relatively little crustal thickening beneath the central Anatolian Plateau. Crustal thickness is about 35 km in western Anatolia and greater than 40 km in eastern Anatolia; however, the long regional paths require considerable averaging and limit resolution. Crustal velocities are lower than typical continental averages, and even lower than those of typical active orogens. The mantle P-wave velocity was fixed to 7.9 km/s, in accord with tomographic models. A high sub-Moho Poisson's ratio of 0.29 was required to fit the Sn-Pn differential times. This is suggestive of high sub-Moho temperatures, high shear wave attenuation and possibly partial melt. The combination of relatively thin crust in a region of high topography and high mantle temperatures suggests that the mantle plays a substantial role in maintaining the elevation.
Model for episodic flow of high-pressure water in fault zones before earthquakes
Byerlee, J.
1993-01-01
In this model for the evolution of large crustal faults, water originally from the country rock saturates the porous and permeable fault zone. During shearing, the fault zone compacts and water flows back into the country rock, but the flow is arrested by silicate deposition that forms low permeability seals. The fluid will be confined to seal-bounded fluid compartments of various sizes and porosity that are not hydraulically connected with each other. When the seal between two compartments is ruptured, an electrical streaming potential will be generated by the sudden movement of fluid from the high-pressure compartment to the low-pressure compartment. During an earthquake the width of the fault zone will increase by failure of the geometric irregularities on the fault. This newly created, porous and permeable, wider fault zone will fill with water, and the process described above will be repeated. Thus, the process is episodic with the water moving in and out of the fault zone, and each large earthquake should be preceded by an electrical and/or magnetic signal.
A multilayer model of time dependent deformation following an earthquake on a strike-slip fault
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1981-01-01
A multilayer finite element model of the Earth for calculating time dependent deformation and stress following an earthquake on a strike slip fault is discussed. The model comprises an elastic upper lithosphere, a standard linear viscoelastic solid lower lithosphere, a Maxwell viscoelastic asthenosphere and an elastic mesosphere. Systematic variations of fault and layer depths and comparisons with simpler elastic-lithosphere-over-viscoelastic-asthenosphere calculations are analyzed. Both the creep of the lower lithosphere and that of the asthenosphere contribute to the postseismic deformation. The magnitude of the deformation is enhanced by a short distance between the bottom of the fault (slip zone) and the top of the creep region but is less sensitive to the thickness of the creeping layer. Postseismic restressing is increased as the lower lithosphere becomes more viscoelastic, but the tendency for the width of the restressed zone to grow with time is retarded.
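For the Maxwell asthenosphere in such models, the time scale of postseismic creep is set by the relaxation time τ = η/μ, over which a coseismic stress step decays as exp(−t/τ). A small sketch with illustrative values (not the paper's layer parameters):

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def maxwell_relaxation_time_yr(viscosity_pa_s, shear_modulus_pa):
    """Maxwell relaxation time tau = eta / mu, converted to years."""
    return viscosity_pa_s / shear_modulus_pa / SECONDS_PER_YEAR

def stress_fraction_remaining(t_yr, tau_yr):
    """Fraction of a coseismic deviatoric stress step remaining after t."""
    return math.exp(-t_yr / tau_yr)

# Illustrative values: eta = 1e19 Pa*s, mu = 3e10 Pa gives tau ~ 10.6 yr,
# i.e. postseismic deformation persisting for decades after the event.
tau = maxwell_relaxation_time_yr(1e19, 3e10)
```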
NASA Astrophysics Data System (ADS)
Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène
2016-04-01
The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history permits estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted from both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).
NASA Astrophysics Data System (ADS)
Piatanesi, Alessio; Tinti, Elisa; Cocco, Massimo; Fukuyama, Eiichi
2004-02-01
We compute the temporal evolution of traction by solving the elasto-dynamic equation and by using the slip velocity history as a boundary condition on the fault plane. We use different source time functions to derive a suite of kinematic source models to image the spatial distribution of dynamic and breakdown stress drop, strength excess and critical slip weakening distance (Dc). Our results show that the source time functions, adopted in kinematic source models, affect the inferred dynamic parameters. The critical slip weakening distance, characterizing the constitutive relation, ranges between 30% and 80% of the total slip. The ratio between Dc and total slip depends on the adopted source time functions and, in these applications, is nearly constant over the fault. We propose that source time functions compatible with earthquake dynamics should be used to infer the traction time history.
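The dynamic parameters named above (strength excess, breakdown stress drop, critical slip weakening distance Dc) are commonly cast in a linear slip-weakening law, in which traction falls from the static to the dynamic level over the distance Dc. A minimal sketch with hypothetical stress values in MPa (the paper infers these quantities from kinematic models rather than prescribing them):

```python
def slip_weakening_traction(slip, tau_s, tau_d, dc):
    """Linear slip-weakening law: traction drops linearly from the static
    level tau_s to the dynamic level tau_d over the critical distance dc,
    and stays at tau_d for slip >= dc."""
    if slip >= dc:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / dc

# Hypothetical values: tau_s = 20 MPa, tau_d = 10 MPa, Dc = 0.5 m.
# Breakdown stress drop is tau_s - tau_d = 10 MPa; strength excess is
# tau_s minus the initial shear stress (not modelled in this sketch).
mid_traction = slip_weakening_traction(0.25, 20.0, 10.0, 0.5)
```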
Tomographic velocity model for the aftershock region of the 2001 Gujarat, India earthquake
NASA Astrophysics Data System (ADS)
Negishi, H.; Kumar, S.; Mori, J. J.; Sato, T.; Bodin, P.; Rastogi, B.
2002-12-01
A tomographic inversion was applied to the aftershock data collected after the January 26, 2001 Bhuj earthquake (Ms 7.9, Mw 7.7), which occurred on a south dipping (~50 degrees) reverse fault in the state of Gujarat in western India. We used high quality arrivals from 8,374 P and 7,994 S waves of 1404 aftershocks recorded on 27 digital stations from temporary seismic arrays set up by the India-Japan team; NGRI, India; and CERI, Memphis Univ., USA, following the Bhuj main shock. First, we used the joint hypocenter determination method for obtaining relocated hypocenters and a one-dimensional Vp and Vs velocity model, and then the resultant hypocenters and 1-D velocity model were used as the initial parameters for a 3-D tomographic inversion. The tomography technique is based on the grid-modeling method of Zhao et al. Vp, Vs and hypocenters are determined simultaneously. We tried to use the cross-validation technique for determining an optimum model in the seismic tomography. This approach has been applied in other tomographic studies to investigate the quantitative fluctuation range of velocity perturbations. Significant variations in the velocity (up to 6%) and Poisson's ratio (up to 8%) are revealed in the aftershock area. It seems that the aftershock distribution corresponds to the boundary between high and low velocity heterogeneities. Small values of Vp/Vs are generally found at depths of 10 to 35 km, i.e. the depth range of the aftershock distribution. However, the deeper region below the hypocenter of the mainshock, at depths of 35 to 45 km, is characterized by relatively high values of Vp/Vs and low values of Vs. This anomaly may be due to a weak, fractured and fluid-filled rock matrix, which might have contributed to triggering this earthquake. This earthquake occurred on a relatively deep and steeply dipping fault with a large stress drop. Theoretically it is difficult for slip to occur on steep faults, especially in the lower crust. Our tomographic investigation provides
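At its core, grid-based travel-time tomography of this kind solves a large damped linear system relating slowness perturbations in the grid cells to travel-time residuals along rays. A toy damped least-squares sketch with a synthetic two-cell example (not the Zhao et al. method itself, which also relocates hypocenters jointly):

```python
import numpy as np

def damped_lsq_tomography(G, residuals, damping=0.1):
    """Solve min ||G m - d||^2 + damping^2 ||m||^2 for slowness
    perturbations m (s/km) given travel-time residuals d (s);
    G[i, j] is the length (km) of ray i inside grid cell j."""
    n = G.shape[1]
    normal = G.T @ G + (damping ** 2) * np.eye(n)
    return np.linalg.solve(normal, G.T @ residuals)

# Tiny synthetic example: three rays crossing two cells.
G = np.array([[10.0, 0.0],
              [0.0, 10.0],
              [5.0, 5.0]])            # ray path lengths, km
true_m = np.array([0.01, -0.005])    # true slowness perturbations, s/km
d = G @ true_m                       # synthetic residuals, s
m = damped_lsq_tomography(G, d, damping=1e-6)  # recovers true_m closely
```

Real problems are far larger and ill-posed, which is why damping (and choices like cross-validation to pick its value) matters.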
NASA Astrophysics Data System (ADS)
Harding, D. J.; Miuller, J. R.
2005-12-01
Modeling the kinematics of the 2004 Great Sumatra-Andaman earthquake is limited in the northern two-thirds of the rupture zone by a scarcity of near-rupture geodetic deformation measurements. Precisely repeated Ice, Cloud, and Land Elevation Satellite (ICESat) profiles across the Andaman and Nicobar Islands provide a means to more fully document the spatial pattern of surface vertical displacements and thus better constrain geomechanical modeling of the slip distribution. ICESat profiles that total ~45 km in length cross Car Nicobar, Kamorta, and Katchall in the Nicobar chain. Within the Andamans, the coverage includes ~350 km on North, Central, and South Andaman Islands along two NNE and NNW-trending profiles that provide elevations on both the east and west coasts of the island chain. Two profiles totaling ~80 km in length cross South Sentinel Island, and one profile ~10 km long crosses North Sentinel Island. With an average laser footprint spacing of 175 m, the total coverage provides over 2700 georeferenced surface elevation measurements for each operations period. Laser backscatter waveforms recorded for each footprint enable detection of forest canopy top and underlying ground elevations with decimeter vertical precision. Surface elevation change is determined from elevation profiles, acquired before and after the earthquake, that are repeated with a cross-track separation of less than 100 m by precision pointing of the ICESat spacecraft. Apparent elevation changes associated with cross-track offsets are corrected according to local slopes calculated from multiple post-earthquake repeated profiles. The surface deformation measurements recorded by ICESat are generally consistent with the spatial distribution of uplift predicted by a preliminary slip distribution model. To predict co-seismic surface deformation, we apply a slip distribution, derived from the released energy distribution computed by Ishii et al. (2005), as the displacement discontinuity
NASA Astrophysics Data System (ADS)
Ngo, D.; Huang, Y.; Rosakis, A.; Griffith, W. A.; Pollard, D. D.
2009-12-01
Motivated by the occurrence of high-angle pseudotachylite injection veins along exhumed faults, we use optical experiments and high-speed photography to interpret the origins of tensile fractures that form during dynamic shear rupture in laboratory experiments. Sub-Rayleigh (slower than the Rayleigh wave speed) shear ruptures in Homalite-100 produce damage zones consisting of a periodic array of tensile cracks. These cracks nucleate and grow within cohesive zones behind the tips of shear ruptures that propagate dynamically along interfaces with frictional and cohesive strength. The tensile cracks are produced only along one side of the interface where transient, fault-parallel, tensile stress perturbations are associated with the growing shear rupture tip. We use an analytical, linear velocity weakening, rupture model to examine the local nature of the dynamic stress field in the vicinity of the tip of the main shear rupture which grows along a weak plane (fault) with sub-Rayleigh speed. It is this stress field which is responsible for driving the off-fault mode-I microcracks that grow during the experiments. We show that (1) the orientation of the cracks can be explained by this analytical model; and (2) the cracks can be used to simultaneously constrain the constitutive behavior of the shear rupture tip. In addition, we propose an extension of this model to explain damage structures observed along exhumed faults. Results of this study represent an important bridge between geological observations of structures preserved along exhumed faults, laboratory experiments and theoretical models of earthquake propagation, potentially leading to diagnostic criteria for interpreting velocity, directivity, and static pre-stress state associated with past earthquakes on exhumed faults.
NASA Astrophysics Data System (ADS)
Stein, R. S.
2012-12-01
The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the 2010 M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by
NASA Astrophysics Data System (ADS)
Trofimenko, S. V.; Bykov, V. G.; Merkulova, T. V.
2016-07-01
In this paper, we aimed to investigate the statistical distributions of shallow earthquakes with 2 ≤ M ≤ 4, located in 13 rectangular areas (clusters) bounded by 120°E and 144°E along the northern boundary of the Amurian microplate. As a result of our study, the displacement of seismicity maxima has been determined and three recurrent spatial cycles have been observed. Clusters with similar distributions of earthquakes are suggested to alternate, equally spaced at 7.26° (360-420 km). A comparison of results on the structure of seismicity in various segments of the Amurian microplate reveals the identity between the alternation pattern observed for meridional zones of large earthquakes and the distinguished spatial period. The displacement vector for seismicity in the annual cycles is determined, and the correspondence between its E-W direction and the displacement of the fronts of large earthquakes is established. A model of seismic and deformation processes is elaborated, in which the successive activation of clusters of weak earthquakes (2 ≤ M ≤ 4), extending from the Japanese-Sakhalin island arc to the eastern closure of the Baikal rift zone, is initiated by the displacement of the strain wave front.
Landes, François P; Lippiello, E
2016-05-01
The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics. PMID:27300821
Analysis of self-organized criticality in the Olami-Feder-Christensen model and in real earthquakes
Caruso, F.; Vinciguerra, S.
2007-05-15
We perform an analysis on the dissipative Olami-Feder-Christensen model on a small world topology considering avalanche size differences. We show that when criticality appears, the probability density functions (PDFs) for the avalanche size differences at different times have fat tails with a q-Gaussian shape. This behavior does not depend on the time interval adopted and is found also when considering energy differences between real earthquakes. Such a result can be analytically understood if the sizes (released energies) of the avalanches (earthquakes) have no correlations. Our findings support the hypothesis that a self-organized criticality mechanism with long-range interactions is at the origin of seismic events and indicate that it is not possible to predict the magnitude of the next earthquake knowing those of the previous ones.
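For reference, the dissipative Olami-Feder-Christensen dynamics can be sketched in a few lines. This minimal version runs on an open square lattice rather than the small-world topology studied above, with a toppled site's stress redistributed in fraction alpha to each neighbor (alpha < 0.25 makes the model dissipative); parameters are illustrative:

```python
import numpy as np

def ofc_avalanche_sizes(n=20, alpha=0.2, steps=2000, seed=0):
    """Dissipative Olami-Feder-Christensen model on an n x n open grid.
    Returns the avalanche size triggered at each uniform driving step."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 1.0, size=(n, n))
    sizes = []
    for _ in range(steps):
        # Uniform drive: raise all sites until the maximum hits threshold 1.
        stress += 1.0 - stress.max()
        size = 0
        unstable = np.argwhere(stress >= 1.0)
        while unstable.size:
            for i, j in unstable:
                s = stress[i, j]
                stress[i, j] = 0.0
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        stress[ni, nj] += alpha * s  # open edges dissipate
            unstable = np.argwhere(stress >= 1.0)
        sizes.append(size)
    return sizes
```

The avalanche-size differences analyzed in the paper would be computed from consecutive entries of the returned series.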
NASA Astrophysics Data System (ADS)
Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.
2009-12-01
The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits of using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students who are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.
Noether theorem for nonholonomic nonconservative mechanical systems in phase space on time scales
NASA Astrophysics Data System (ADS)
Zu, Qi-hang; Zhu, Jian-qing
2016-08-01
The paper focuses on studying the Noether theorem for nonholonomic nonconservative mechanical systems in phase space on time scales. First, the Hamilton equations of nonholonomic nonconservative systems on time scales are established, based on the Lagrange equations for nonholonomic systems on time scales. Then, based upon the quasi-invariance of the Hamilton action under infinitesimal transformations of time and the generalized coordinates on time scales, the Noether identity and the conserved quantity of nonholonomic nonconservative systems on time scales are obtained. Finally, an example is presented to illustrate the application of the results.
NASA Astrophysics Data System (ADS)
Moradi, M.; Delavar, M. R.; Moradi, A.
2015-12-01
As a natural disaster, an earthquake can seriously damage buildings and urban facilities and cause road blockage. Post-earthquake route planning is a problem that has been addressed in numerous studies. The main aim of this research is to present a route planning model for post-earthquake conditions. It is assumed in this research that no damage data are available. The presented model tries to find the optimum route based on a number of contributing factors which mainly indicate the length, width and safety of the road. The safety of the road is represented by a number of criteria such as distance to faults, percentage of non-standard buildings and percentage of high buildings around the route. An integration of a genetic algorithm and the ordered weighted averaging operator is employed in the model. The former searches the problem space among all alternatives, while the latter aggregates the scores of road segments to compute an overall score for each alternative. The ordered weighted averaging operator enables the users of the system to evaluate the alternative routes based on their decision strategy. Based on the proposed model, an optimistic user tries to find the shortest path between the two points, whereas a pessimistic user tends to pay more attention to safety parameters even if this enforces a longer route. The results show that the decision strategy can considerably alter the optimum route. Moreover, post-earthquake route planning is a function of not only the length of the route but also the probability of road blockage.
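The ordered weighted averaging (OWA) operator at the heart of such a model sorts the criterion scores and weights them by rank, so the weight vector itself encodes the decision strategy: weight on the top ranks is optimistic, weight on the bottom ranks is pessimistic. A minimal sketch with hypothetical scores and weights (not the paper's actual criteria values):

```python
def owa(scores, weights):
    """Ordered weighted averaging: sort scores descending, then apply
    positional weights (which must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Hypothetical normalized scores for one route alternative, e.g.
# [length, width, distance to faults, building safety] in [0, 1].
scores = [0.9, 0.4, 0.7, 0.2]
optimistic = owa(scores, [0.6, 0.2, 0.1, 0.1])   # emphasizes best criteria
pessimistic = owa(scores, [0.1, 0.1, 0.2, 0.6])  # emphasizes worst criteria
```

Here the same alternative scores 0.74 under the optimistic strategy but only 0.36 under the pessimistic one, illustrating how the strategy can reorder routes.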
Numerical model for the evaluation of earthquake effects on a magmatic system
NASA Astrophysics Data System (ADS)
Garg, Deepak; Longo, Antonella; Papale, Paolo
2016-04-01
A finite element numerical model is presented to compute the effect of an earthquake on the dynamics of magma in reservoirs with deformable walls. The magmatic system is hit by a Mw 7.2 earthquake (Petrolia/Cape Mendocino 1992) with hypocenter at 15 km diagonal distance. At subsequent times the seismic wave reaches the nearest side of the magmatic system boundary, travels through the magmatic fluid and arrives at the other side of the boundary. The modelled physical system consists of the magmatic reservoir with a thin surrounding layer of rocks. Magma is considered a homogeneous multicomponent multiphase Newtonian mixture with exsolution and dissolution of volatiles (H2O+CO2). The magmatic reservoir is made of a small shallow magma chamber filled with degassed phonolite, connected by a vertical dike to a larger, deeper chamber filled with gas-rich shoshonite, in a condition of gravitational instability. The coupling between the earthquake and the magmatic system is computed by solving the elastostatic equation for the deformation of the magmatic reservoir walls, along with the conservation equations for the mass of components and the momentum of the magmatic mixture. The characteristic elastic parameters of rocks are assigned to the computational domain at the boundary of the magmatic system. Physically consistent Dirichlet and Neumann boundary conditions are assigned according to the evolution of the seismic signal. Seismic forced displacements and velocities are set on the part of the boundary which is hit by the wave. On the other part of the boundary, motion is governed by the action of fluid pressure and deviatoric stress forces due to fluid dynamics. The constitutive equations for the magma are solved in a monolithic way by a space-time discontinuous-in-time finite element method. To attain additional stability, least-squares and discontinuity-capturing operators are included in the formulation. A partitioned algorithm is used to couple the magma and thin layer of rocks. The
The 1999 Izmit, Turkey, earthquake: A 3D dynamic stress transfer model of intraearthquake triggering
Harris, R.A.; Dolan, J.F.; Hartleb, R.; Day, S.M.
2002-01-01
Before the August 1999 Izmit (Kocaeli), Turkey, earthquake, theoretical studies of earthquake ruptures and geological observations had provided estimates of how far an earthquake might jump to get to a neighboring fault. Both numerical simulations and geological observations suggested that 5 km might be the upper limit if there were no transfer faults. The Izmit earthquake appears to have followed these expectations. It did not jump across any step-over wider than 5 km and was instead stopped by a narrower step-over at its eastern end and possibly by a stress shadow caused by a historic large earthquake at its western end. Our 3D spontaneous rupture simulations of the 1999 Izmit earthquake provide two new insights: (1) the west- to east-striking fault segments of this part of the North Anatolian fault are oriented so as to be low-stress faults and (2) the easternmost segment involved in the August 1999 rupture may be dipping. An interesting feature of the Izmit earthquake is that a 5-km-long gap in surface rupture and an adjacent 25° restraining bend in the fault zone did not stop the earthquake. The latter observation is a warning that significant fault bends in strike-slip faults may not arrest future earthquakes.
NASA Astrophysics Data System (ADS)
Grzemba, B.; Popov, V. L.; Starcevic, J.; Popov, M.
2012-04-01
Shallow earthquakes can be considered a result of tribological instabilities, so-called stick-slip behaviour [1,2], meaning that sudden slip occurs at already existing rupture zones. From a contact mechanics point of view it is clear that no motion can arise completely suddenly; the material will always creep in an existing contact in the load direction before breaking loose. If there is a measurable creep before the instability, this could serve as a precursor. To examine this theory in detail, we built an elementary laboratory model with pronounced stick-slip behaviour. Different material pairings, such as steel-steel, steel-glass and marble-granite, were analysed at different driving force rates. The displacement was measured with a resolution of 8 nm. We were able to show that a measurable accelerated creep precedes the instability. Near the instability, this creep is sufficiently regular to serve as a basis for a highly accurate prediction of the onset of macroscopic slip [3]. In our model a prediction is possible within the last few percent of the preceding stick time. We are hopeful of extending this period. Furthermore, we showed that the slow creep as well as the fast slip can be described very well by the Dieterich-Ruina friction law if we include the contribution of local contact rigidity. The simulation matches the experimental curves over five orders of magnitude. This friction law was originally formulated for rocks [4,5] and takes into account the dependence of the coefficient of friction on the sliding velocity and on the contact history. The simulations using the Dieterich-Ruina friction law back up the observation of a universal behaviour of the creep's acceleration. We are working on several extensions of our model to more dimensions in order to move closer towards representing a full three-dimensional continuum. The first step will be an extension to two degrees of freedom to analyse the interdependencies of the instabilities. We also plan
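The rate- and state-dependent (Dieterich-Ruina) friction law invoked above can be illustrated with a classic velocity-step computation. The sketch below is a minimal illustration, not the authors' code, and all parameter values are assumed for demonstration: a step in sliding velocity is imposed and the aging law for the state variable is integrated, reproducing the instantaneous direct effect followed by state evolution toward the new steady state.

```python
import numpy as np

def velocity_step(mu0=0.6, a=0.010, b=0.015, Dc=1e-5, v0=1e-6,
                  v1=1e-6, v2=1e-5, t_end=60.0, n=20000):
    """Friction response to a step from sliding velocity v1 to v2.

    Aging law: dtheta/dt = 1 - v*theta/Dc
    Friction:  mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc)
    """
    t = np.linspace(0.0, t_end, n)
    dt = t[1] - t[0]
    theta = Dc / v1                     # start at steady state for v1
    mu = np.empty(n)
    for i in range(n):
        mu[i] = mu0 + a * np.log(v2 / v0) + b * np.log(v0 * theta / Dc)
        theta += (1.0 - v2 * theta / Dc) * dt   # forward-Euler state update
    return t, mu

t, mu = velocity_step()
# Direct effect: instantaneous friction jump of a*ln(v2/v1), followed by
# decay over a slip distance ~Dc toward the new steady state
# mu_ss(v2) = mu0 + (a - b)*ln(v2/v0), which lies BELOW the old steady
# state because b > a (velocity weakening).
```

With b > a, the steady-state friction decreases with velocity; combined with a sufficiently compliant loading spring, this is the ingredient that produces the stick-slip instabilities described in the abstract.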
Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region
NASA Astrophysics Data System (ADS)
Gülen, L.; Wp2 Team
2010-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project under the umbrella of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey becomes a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use a PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from previous ones will be its dynamic character. This important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the PDF files of the relevant papers and reports is also being prepared. Another task of WP-2 of the EMME project is to prepare
NASA Astrophysics Data System (ADS)
Akinci, Aybige; Antonioli, Andrea
2013-03-01
The 2011 October 23 Van earthquake occurred at 13:41 local time in eastern Turkey, with an epicentre at 43.36°E, 38.76°N (Kandilli Observatory and Earthquake Research Institute (KOERI)), 16 km north-northeast of the city of Van, killing around 604 people and leaving thousands homeless. This work presents an overview of the main features of the seismic ground shaking during the Van earthquake. We analyse the ground motion characteristics of the mainshock in terms of peak ground acceleration (PGA), peak ground velocity (PGV) and spectral accelerations (SA, 5 per cent of critical damping). In order to understand the characteristics of the ground motions induced by the mainshock, we also study the site response of the strong motion stations that recorded the seismic sequence. The lack of seismic recordings in this area imposes major constraints on the computation of reliable seismic hazard estimates for sites in this part of the country. Towards this aim, we have used a stochastic method to generate high-frequency ground motion synthetics for the Mw 7.1 2011 Van earthquake. The source mechanism of the Van event and regional wave propagation parameters are constrained from available and previous studies. The selected model parameters are then validated against recordings. We also computed the residuals for the ground motion parameters in terms of PGA and PGV at each station, and the model parameter bias by averaging the residuals over all the stations. The attenuation of the simulated ground motion parameters is compared with recent global and regional ground motion prediction equations. Finally, since it has been debated whether the earthquake of November 9 was an aftershock of the October 23 earthquake, we examine whether the static variation of Coulomb stress could have contributed to the observed aftershock triggering during the 2011 Van Lake sequence. Comparison with empirical ground motion predictions illustrates that the observed PGA data decay faster than the global
NASA Astrophysics Data System (ADS)
Hestetune, B.; Lowry, A. R.
2014-12-01
The 2004 Mw 9.2 Sumatra-Andaman great earthquake has been studied extensively. Most studies have inferred a combination of afterslip and viscoelastic mechanisms to be responsible for the postseismic deformation. Farther off the coast of the Andaman islands, the northern terminus of the Ninety-East ridge has historically been a hotspot for seismic activity. On April 11, 2012, the largest intraplate strike-slip earthquakes ever recorded, Mw = 8.6 and Mw = 8.2, occurred separated by two hours and around 100 km. Previous studies have shown that these events are difficult to constrain geodetically due to the region's complexities and lack of data density, but there are hints that they excited transient slip rate changes on the Andaman portion of the megathrust more than 1000 km away. Despite the attention these large events have deservedly received, there are additional constraints that can be brought to bear in more rigorous dynamical modeling than has been done thus far. In this presentation we use models developed by Sylvain Barbot (RELAX), by E.A. Hetland, and more basic finite element methods to examine the slip and viscous flow dynamics excited by these large events. We will process data from both the Andaman-Nicobar postseismic GPS array and the Sumatran GPS array (SuGAr) to provide additional constraints on postseismic deformation processes, and examine the consistency of inferred slip processes excited by the 2004 and 2012 events. We will use CHAMP magnetic data to infer geometry and depths to the base of the seismogenic zone, assuming that it is found immediately above the high-susceptibility serpentinite body, as in Cascadia. Gravity data from both GRACE and CHAMP will also be used to characterize coseismic slip and postseismic viscoelastic flow along the Sunda-Andaman arc.
Tsunami Modeling to Validate Slip Models of the 2007 M w 8.0 Pisco Earthquake, Central Peru
NASA Astrophysics Data System (ADS)
Ioualalen, M.; Perfettini, H.; Condo, S. Yauri; Jimenez, C.; Tavera, H.
2013-03-01
Following the 2007, August 15th, Mw 8.0 Pisco earthquake in central Peru, Sladen et al. (J Geophys Res 115: B02405, 2010) derived several slip models of this event. They inverted teleseismic data together with geodetic (InSAR) measurements to look for the co-seismic slip distribution on the fault plane, considering those data sets separately or jointly. But how close to the real slip distribution are those inverted slip models? To answer this crucial question, the authors generated tsunami records based on their slip models and compared them to DART buoy tsunami records and available runup data. Such an approach requires a robust and accurate tsunami model (non-linear, dispersive, with accurate bathymetry and topography, etc.); otherwise the differences between the data and the model may be attributed to the slip models themselves even though they arise from an incomplete tsunami simulation. The accuracy of a numerical tsunami simulation strongly depends, among other factors, on two important constraints: (i) a fine computational grid (and thus the bathymetry and topography data sets used), which is unfortunately not always available, and (ii) a realistic tsunami propagation model including dispersion. Here, we extend Sladen's work using newly available data, namely a tide gauge record at Callao (Lima harbor) and the Chilean DART buoy record, while considering a complete set of runup data along with a more realistic tsunami numerical model that accounts for dispersion, and a fine-resolution computational grid, which is essential. Through these accurate numerical simulations we infer that the InSAR-based model is in better agreement with the tsunami data; the case of the Pisco earthquake thus indicates that geodetic data seem essential to recover the final co-seismic slip distribution on the rupture plane. Slip models based on teleseismic data are unable to describe the observed tsunami, suggesting that a significant amount of co-seismic slip may have
Slow earthquakes triggered by typhoons.
Liu, ChiChing; Linde, Alan T; Sacks, I Selwyn
2009-06-11
The first reports on a slow earthquake were for an event in the Izu peninsula, Japan, on an intraplate, seismically active fault. Since then, many slow earthquakes have been detected. It has been suggested that the slow events may trigger ordinary earthquakes (in a context supported by numerical modelling), but their broader significance in terms of earthquake occurrence remains unclear. Triggering of earthquakes has received much attention: strain diffusion from large regional earthquakes has been shown to influence large earthquake activity, and earthquakes may be triggered during the passage of teleseismic waves, a phenomenon now recognized as being common. Here we show that, in eastern Taiwan, slow earthquakes can be triggered by typhoons. We model the largest of these earthquakes as repeated episodes of slow slip on a reverse fault just under land and dipping to the west; the characteristics of all events are sufficiently similar that they can be modelled with minor variations of the model parameters. The lower atmospheric pressure during a typhoon results in a very small unclamping of a fault that must be close to the failure condition for the typhoon to act as a trigger. This area experiences very high compressional deformation but has a paucity of large earthquakes; the repeating slow events may be segmenting the stressed area and thus inhibiting large earthquakes, which require a long, continuous seismic rupture. PMID:19516339
Ma, Z.; Fu, Z.; Zhang, Y.; Wang, C.; Zhang, G.; Liu, D.
1989-01-01
Mainland China is situated at the eastern edge of the Eurasian seismic system and is the largest intra-continental region of shallow strong earthquakes in the world. Based on nine earthquakes with magnitudes ranging between 7.0 and 7.9, the book provides observational data and discusses successes and failures of earthquake prediction. Observations of various phenomena and of seismic activity occurring before and after these individual earthquakes led to the establishment of some general characteristics valid for earthquake prediction.
NASA Astrophysics Data System (ADS)
Kedar, S.; Bock, Y.; Moore, A. W.; Argus, D. F.; Fang, P.; Liu, Z.; Haase, J. S.; Su, L.; Owen, S. E.; Goldberg, D.; Squibb, M. B.; Geng, J.
2015-12-01
Postseismic deformation indicates a viscoelastic response of the lithosphere. It is critical, then, to identify and estimate the extent of postseismic deformation in both space and time, not only for its inherent information on crustal rheology and earthquake physics, but also because it must be considered in plate motion models that are derived geodetically from "steady-state" interseismic velocities, in models of the earthquake cycle that provide interseismic strain accumulation and earthquake probability forecasts, and in the terrestrial reference frame definition that is the basis for space geodetic positioning. As part of the Solid Earth Science ESDR System (SESES) project under a NASA MEaSUREs grant, JPL and SIO estimate combined daily position time series for over 1800 GNSS stations, both globally and at plate boundaries, independently using the GIPSY and GAMIT software packages, but with a consistent set of a priori epoch-date coordinates and metadata. The longest time series began in 1992, and many of them contain postseismic signals. For example, about 90 of the more than 400 global GNSS stations that define the ITRF have experienced one or more major earthquakes, and 36 have had multiple earthquakes; as expected, most plate boundary stations have as well. We quantify the spatial (distance from rupture) and temporal (decay time) extent of postseismic deformation. We examine parametric models (logarithmic, exponential) and a physical model (rate- and state-dependent friction) to fit the time series. Using PCA, we determine whether or not a particular earthquake can be uniformly fit by a single underlying postseismic process; otherwise we fit individual stations. Then we investigate whether the estimated time series velocities can be directly used as input to plate motion models, rather than arbitrarily removing the apparent postseismic portion of a time series and/or eliminating stations closest to earthquake epicenters.
Numerical Simulation of the Transport and Speciation of Nonconservative Chemical Reactants in Rivers
NASA Astrophysics Data System (ADS)
Chapman, Bernard M.
1982-02-01
A computer model, previously used to simulate the transport of conservative chemical components in streams, has been used as the basis for a more complex model that includes the effects of processes, such as precipitation, sedimentation, and adsorption onto stationary reactive surfaces, which render the reactants nonconservative with respect to the flowing waters. The model uses, as before, the program MINEQL as the basis for the chemical equilibrium submodel. The physical transport submodel employs a convolution integral procedure, with an approximate form of the impulse function, to solve a one-dimensional convective-diffusion equation. Although the model essentially assumes chemical equilibrium, a pseudokinetic treatment is necessary to deal with redissolution of precipitates and dissociation of surface species. Simple hypothetical examples are given to illustrate the operation of the model. The model is then applied to an experiment in which the base NaOH is injected into a creek draining an abandoned base-metal mine. Concentrations of the metals Zn, Al, Cu, Fe, and Na in the flowing waters, expressed in terms of total metals and as suspended solids, are followed as a function of time and distance downstream. Significant sedimentation of the precipitates formed is evident, and the existence of substantial quantities of protons and/or metal ions adsorbed on the streambed is implied by the model calculations. The model was able to simulate successfully the major features observed. This simulation involved the simultaneous formation of five distinct precipitates and one surface species.
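The convolution-integral transport scheme described above can be sketched for a conservative solute. The snippet below is an illustrative reconstruction, not the paper's code: the inverse-Gaussian impulse response of the one-dimensional advection-dispersion equation and all parameter values are assumptions, chosen only to show how an upstream concentration history is routed downstream by convolution.

```python
import numpy as np

def impulse_response(t, x, u, D):
    """Unit impulse response of the 1-D advection-dispersion equation
    at distance x (m), for velocity u (m/s) and dispersion D (m^2/s)."""
    t = np.asarray(t, dtype=float)
    h = np.zeros_like(t)
    pos = t > 0
    h[pos] = (x / np.sqrt(4.0 * np.pi * D * t[pos] ** 3)
              * np.exp(-(x - u * t[pos]) ** 2 / (4.0 * D * t[pos])))
    return h

def route_downstream(c_up, dt, x, u, D):
    """Convolve an upstream concentration time series with the impulse
    response to obtain the downstream time series."""
    t = np.arange(len(c_up)) * dt
    h = impulse_response(t, x, u, D)
    return np.convolve(c_up, h)[: len(c_up)] * dt

# Square pulse injected upstream, routed 500 m at 0.2 m/s with D = 1 m^2/s
dt = 10.0                      # time step (s)
c_up = np.zeros(1000)
c_up[10:40] = 1.0              # 300 s pulse of 1 mg/L
c_down = route_downstream(c_up, dt, x=500.0, u=0.2, D=1.0)
# The downstream pulse arrives near the advective travel time x/u = 2500 s,
# attenuated and spread by dispersion, with total mass conserved.
```
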
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Gomba, Giorgio; Eineder, Michael
2016-04-01
The use of L-band InSAR data for observing the surface displacements caused by earthquakes can be very beneficial. The retrieved signal is generally more stable against temporal phase decorrelation than C-band and X-band InSAR data, such that fault movements can be observed even in vegetated areas. Also, due to the longer wavelength, larger displacement gradients that occur close to the ruptures can be measured. A serious drawback of L-band data, on the other hand, is that it reacts more strongly to heterogeneities in the ionosphere. The spatial variability of the electron content causes spatially long-wavelength trends in the interferometric phase, distorts the surface deformation signal, and therefore impacts the earthquake source analysis. A well-known example of the long-wavelength distortions are the ALOS-1 InSAR observations of the 2008 Wenchuan earthquake. To mitigate the effect of ionospheric phase in the geodetic modelling of earthquake sources, a common procedure is to remove any obvious linear or quadratic trend in the surface displacement data that may have been caused by ionospheric phase delays. Additionally, remaining trends may be accounted for by including so-called ambiguity (or nuisance) parameters in the modelling. The introduced ionospheric distortion, however, is only arbitrarily approximated by such simple ramp functions, and the true ionospheric phase screen remains unknown. As a consequence, either a remaining ionospheric signal may be mistaken for surface displacement or, the other way around, long-wavelength surface displacement may be attributed to ionospheric distortion and removed. The bias introduced into the source modelling results by the assumption of linear or quadratic ionospheric effects is therefore unknown as well. We present a more informed and physics-based correction of the surface displacement data in earthquake source modelling by using a split-spectrum method to estimate the ionospheric phase screen superimposed to the
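The split-spectrum method mentioned above exploits the fact that the dispersive ionospheric phase scales as 1/f while the nondispersive part (deformation plus troposphere) scales as f. Given interferograms formed from a low and a high range sub-band, the two contributions can be separated in closed form. The sketch below is a minimal illustration of that algebra on synthetic numbers, not the actual processing chain; the frequencies and phase coefficients are made up.

```python
import numpy as np

def split_spectrum_iono(phi_low, phi_high, f0, f_low, f_high):
    """Estimate the ionospheric phase at the carrier f0 from sub-band
    interferometric phases phi_low (at f_low) and phi_high (at f_high).

    Model: phi(f) = a*f + b/f, where a*f is the nondispersive part
    (deformation, troposphere) and b/f is the dispersive ionosphere.
    Solving the two linear equations for b gives phi_iono = b/f0.
    """
    num = f_low * f_high * (phi_low * f_high - phi_high * f_low)
    return num / (f0 * (f_high ** 2 - f_low ** 2))

# Synthetic check: build sub-band phases from known components
f0 = 1.27e9                        # L-band carrier (Hz), assumed
f_low, f_high = f0 - 14e6, f0 + 14e6
a = 2.0e-9                         # nondispersive coefficient (rad/Hz)
b = 3.0e9                          # dispersive coefficient (rad*Hz)
phi_low = a * f_low + b / f_low
phi_high = a * f_high + b / f_high
phi_iono = split_spectrum_iono(phi_low, phi_high, f0, f_low, f_high)
# phi_iono recovers the dispersive term b/f0 at the carrier frequency
```
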
NASA Astrophysics Data System (ADS)
Donnellan, A.; Rundle, J. B.; Grant Ludwig, L.; McLeod, D.; Pierce, M.; Fox, G.; Al-Ghanmi, R. A.; Parker, J. W.; Granat, R. A.; Lyzenga, G. A.; Ma, Y.; Glasscoe, M. T.; Ji, J.; Wang, J.; Gao, X.; Quakesim Team
2010-12-01
QuakeSim is a computational infrastructure for studying, modeling, and forecasting earthquakes from a system perspective. QuakeSim takes into account the entire earthquake cycle of strain accumulation and release, requiring crustal deformation data as a key data source. Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data provide current crustal deformation rates, while paleoseismic data provide long-term fault slip rates and earthquake history. The QuakeTables federated multimedia database contains spaceborne and UAVSAR InSAR data for the California region as well as paleoseismic fault data from a number of self-consistent datasets, such as the Uniform California Earthquake Rupture Forecast (UCERF), California Geological Survey (CGS), and Virtual California. Access to QuakeTables is provided through a web interface and a Web Services based application program interface (API) for data delivery. Data are categorized into self-consistent datasets that can be queried in their original form or as a derivation therefrom. QuakeTables provides access to mapping features through a web interface that gives users direct access to the QuakeTables federated data. Users can browse, map, and navigate the available datasets. QuakeSim applications include crustal deformation modeling and pattern analysis. The crustal deformation tools include forward elastic dislocation models (DISLOC), 3D viscoelastic finite element models (GeoFEST), and elastic inversions of crustal deformation data (SIMPLEX). The tools support mapping and applications for visualizing results in vector or interferometric form. Virtual California simulates interacting fault systems. Pattern analysis tools include RDAHMM for identifying state changes in time series data, and RIPI for identifying hotspot locations of increased probabilities for magnitude 5 and above earthquakes. The QuakeSim infrastructure automatically posts UAVSAR data to QuakeTables for storage and
NASA Astrophysics Data System (ADS)
Wang, Lifeng; Hainzl, Sebastian; Mai, P. Martin
2015-12-01
The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter timescales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
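The linear link between coseismic slip and interseismic slip deficit assumed in this inversion can be illustrated with a toy least-squares estimate. The sketch below shows the general idea only, not the authors' inversion: the fault discretization, slip-deficit rates, and noise level are all invented, and the accumulation time T is estimated from synthetic coseismic slip s ≈ T · ḋ, where ḋ is the slip-deficit rate on each patch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic slip-deficit rates on 50 fault patches (m/yr), assumed values
deficit_rate = rng.uniform(0.02, 0.09, size=50)

# Synthetic coseismic slip: 25 years of accumulation, fully released,
# plus observation noise
T_true = 25.0
coseismic_slip = T_true * deficit_rate + rng.normal(0.0, 0.05, size=50)

# Least-squares estimate of the accumulation time T in s = T * rate:
# T_hat = (rate . slip) / (rate . rate)
T_hat = deficit_rate @ coseismic_slip / (deficit_rate @ deficit_rate)
# T_hat recovers a value close to the 25-yr accumulation time, analogous
# to the ~25 yr inferred for Parkfield in the abstract.
```
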
Dynamics of earthquake nucleation process represented by the Burridge-Knopoff model
NASA Astrophysics Data System (ADS)
Ueda, Yushi; Morimoto, Shouji; Kakui, Shingo; Yamamoto, Takumi; Kawamura, Hikaru
2015-09-01
Dynamics of the earthquake nucleation process is studied on the basis of the one-dimensional Burridge-Knopoff (BK) model obeying the rate- and state-dependent friction (RSF) law. We investigate the properties of the model at each stage of the nucleation process, including the quasi-static initial phase, the unstable acceleration phase and the high-speed rupture phase, or mainshock. Two kinds of nucleation lengths, L_sc and L_c, are identified and investigated. The nucleation length L_sc and the initial phase exist only for a weak frictional instability regime, while the nucleation length L_c and the acceleration phase exist for both weak and strong instability regimes. Both L_sc and L_c are found to be determined by the model parameters, the frictional weakening parameter and the elastic stiffness parameter, and to be hardly dependent on the size of the ensuing mainshock. The sliding velocity is extremely slow in the initial phase up to L_sc, of the order of the pulling speed of the plate, while it reaches a detectable level at a certain stage of the acceleration phase. The continuum limits of the results are discussed. The continuum limit of the BK model lies in the weak frictional instability regime, so that a mature homogeneous fault under the RSF law always accompanies the quasi-static nucleation process. Duration times of each stage of the nucleation process are examined. The relation to the elastic continuum model and implications for real seismicity are discussed.
NASA Astrophysics Data System (ADS)
Sato, Toshinori; Higuchi, Harutaka; Miyauchi, Takahiro; Endo, Kaori; Tsumura, Noriko; Ito, Tanio; Noda, Akemi; Matsu'ura, Mitsuhiro
2016-02-01
In the southern Kanto region of Japan, where the Philippine Sea plate is descending at the Sagami trough, two different types of large interplate earthquakes have occurred repeatedly. The 1923 (Taisho) and 1703 (Genroku) Kanto earthquakes characterize the first and second types, respectively. A reliable source model has been obtained for the 1923 event from seismological and geodetic data, but not for the 1703 event, for which only historical records and paleo-shoreline data are available. We developed an inversion method to estimate the fault slip distribution of interplate repeating earthquakes from paleo-shoreline data, based on the idea of crustal deformation cycles associated with subduction-zone earthquakes. By applying the inversion method to the present heights of the Genroku and Holocene marine terraces developed along the coasts of the southern Boso and Miura peninsulas, we estimated the fault slip distribution of the 1703 Genroku earthquake as follows. The source region extends along the Sagami trough from the Miura peninsula to the offing of the southern Boso peninsula, covering the southern two-thirds of the source region of the 1923 Kanto earthquake. The coseismic slip reaches a maximum of 20 m at the southern tip of the Boso peninsula, and the moment magnitude (Mw) is calculated as 8.2. From the interseismic slip-deficit rates at the plate interface obtained by GPS data inversion, and assuming that the total slip deficit is compensated by coseismic slip, we can roughly estimate the average recurrence interval as 350 years for large interplate events of any type and 1400 years for Genroku-type events.
Modeling Injection Induced Seismicity with Poro-Elasticity and Time-Dependent Earthquake Nucleation
NASA Astrophysics Data System (ADS)
Lu, S.; Segall, P.
2014-12-01
The standard approach to modeling injection-induced seismicity (IIS) considers Coulomb failure stress changes accounting only for pore-pressure changes, which are solved with the diffusion equation. However, this "diffusion" triggering mechanism is not comprehensive. Lab experiments indicate that earthquake nucleation also depends on stress history. Here we add two effects in modeling IIS: 1) poro-elastic coupling between solid stresses and pore pressure, and 2) time-dependent earthquake nucleation under applied stresses. In this model, we compute stress and pore-pressure changes due to a point source injecting into a homogeneous, poro-elastic full space (Rudnicki, 1986). The Coulomb stress history is used to compute seismicity rate changes based on the time-dependent nucleation model of Dieterich (1994). Our new model reveals: 1) poro-elastic coupling breaks the radial symmetry in seismicity, 2) nucleation introduces a characteristic nucleation time t_a, which affects the temporal evolution of seismicity rates, and 3) for some fault geometries, the seismicity rate may increase following shut-in. For constant injection flux, the log of the seismicity rate scales with the change in Coulomb stress at short times, consistent with diffusion profiles. At longer times, the model predicts seismicity rates decaying with time, consistent with some observations. The contour shape and decay time are characterized by t_a. For finite injection with a box-car flux history, seismicity rates plummet near the injector but may continue for some time at greater distance. Depending on fault orientations, seismicity rates may increase after shut-in due to coupling effects. It has been observed in some cases that the maximum magnitude of induced earthquakes occurs after shut-in. This may be understood by the fact that the volume of perturbed crust increases with injection time, which influences the probability of triggering an event of a given magnitude. Whether coupling effects are important in post shut
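For the special case of a sudden Coulomb stress step ΔS followed by a constant background stressing rate, Dieterich's (1994) rate equation has a closed-form solution, which the sketch below evaluates. This is a textbook illustration with assumed parameter values, not the authors' poro-elastic model: the rate jumps by exp(ΔS/Aσ) immediately after the step and relaxes back to the background rate over the characteristic time t_a.

```python
import numpy as np

def seismicity_rate_ratio(t, dS, A_sigma, t_a):
    """Dieterich (1994): seismicity rate R relative to background rate r
    after a Coulomb stress step dS, with constant background stressing
    rate; t_a = A*sigma / (background stressing rate)."""
    return 1.0 / (1.0 + (np.exp(-dS / A_sigma) - 1.0) * np.exp(-t / t_a))

t_a = 5.0 * 365.25 * 86400.0        # characteristic time ~5 yr (assumed)
dS = 0.1e6                          # 0.1 MPa Coulomb stress step (assumed)
A_sigma = 0.05e6                    # A*sigma = 0.05 MPa (assumed)
t = np.logspace(4, 10, 200)         # times after the step (s)
R = seismicity_rate_ratio(t, dS, A_sigma, t_a)
# Immediately after the step R/r = exp(dS/A_sigma) ~ 7.4; the rate then
# decays Omori-like and returns to the background level (R/r -> 1) for
# t >> t_a, mirroring the decaying rates described in the abstract.
```
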
Beeler, N.M.; Lockner, D.L.; Hickman, S.H.
2001-01-01
If repeating earthquakes are represented by circular ruptures, have constant stress drops, and experience no aseismic slip, then their recurrence times should vary with seismic moment as t_r ∝ M_0^(1/3). In contrast, the observed variation for small, characteristic repeating earthquakes along a creeping segment of the San Andreas fault at Parkfield (Nadeau and Johnson, 1998) is much weaker. Also, the Parkfield repeating earthquakes have much longer recurrence intervals than expected if the static stress drop is 10 MPa and if the loading velocity V_L is assumed equal to the geodetically inferred slip rate of the fault, V_f. To resolve these discrepancies, previous studies have assumed no aseismic slip during the interseismic period, implying either high stress drop or V_L ≪ V_f. In this study, we show that a model that includes aseismic slip provides a plausible alternative explanation for the Parkfield repeating earthquakes. Our model of a repeating earthquake is a fixed-area fault patch that is allowed to continuously creep and strain harden until reaching a failure threshold stress. The strain hardening is represented by a linear coefficient C, which, when much greater than the elastic loading stiffness k, leads to relatively small interseismic slip (stick-slip). When C and k are of similar size, creep-slip occurs, in which relatively large aseismic slip accrues prior to failure. Because fault-patch stiffness varies with patch radius, if C is independent of radius, then the model predicts that the relative amount of seismic to total slip increases with increasing radius or M_0, consistent with the variations in slip required to explain the Parkfield data. The model predicts a weak variation of t_r with M_0, similar to the Parkfield data.
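The scaling quoted at the start of this abstract follows from the circular-crack relations: with constant stress drop, both the radius and the average slip grow as M_0^(1/3), and with loading at velocity V_L the recurrence time is t_r = slip / V_L. A quick numerical check is sketched below; the 10 MPa stress drop is taken from the abstract, while the shear modulus and loading velocity are illustrative assumptions.

```python
import numpy as np

MU = 30e9                      # shear modulus (Pa), assumed
DSIGMA = 10e6                  # static stress drop (Pa), from the abstract
V_L = 23e-3 / 3.15576e7        # loading velocity, assumed 23 mm/yr in m/s

def recurrence_time(M0):
    """Recurrence time of a circular repeating rupture with constant
    stress drop and no aseismic slip: t_r = slip / V_L, slip ~ M0^(1/3)."""
    r = (7.0 * M0 / (16.0 * DSIGMA)) ** (1.0 / 3.0)  # crack radius (m)
    slip = M0 / (MU * np.pi * r ** 2)                # average slip (m)
    return slip / V_L                                # seconds

# A 1000x increase in moment gives only a 10x increase in recurrence
# time, i.e. the cube-root scaling t_r ~ M0^(1/3)
t1 = recurrence_time(1e13)     # roughly a Mw 2.6 event
t2 = recurrence_time(1e16)     # roughly a Mw 4.6 event
```
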
NASA Astrophysics Data System (ADS)
Attanayake, Januka; Fonseca, João F. B. D.
2016-05-01
The February 22nd, 2006, Mw = 7 Machaze earthquake is one of the largest, if not the largest, earthquakes reported since 1900 within continental Africa. This large continental intraplate event has important implications for our understanding of tectonics and strong ground motion prediction, locally and in the global context. Thus, accurate estimates of the source parameters of this earthquake are important. In this study, we inverted the complete azimuthally distributed high-frequency (0.05-2 Hz) P waveform dataset available for a best-fitting point source model and obtained stress drop estimates, assuming different theoretical rupture models, from spectral fitting. Our best-fitting point source model confirms steep normal faulting, has strike = 173° (309°), dip = 73° (23°), rake = -72° (-132°), and shows a 4%-12% improvement in waveform fit compared to previous models, which translates into a reduction in error. We attribute this improvement to higher-order reverberations near the source region that we took into account and to the excellent azimuthal coverage of the dataset. Preferred stress drop estimates assuming a rupture velocity of 0.9× the shear wave velocity (Vs) are between 11 and 15 MPa, though even higher stress drop estimates are possible for rupture velocities lower than 0.9Vs. The estimated stress drop is significantly higher than the global average for intraplate earthquakes but is consistent with stress drops estimated for some intra-continental earthquakes elsewhere. The detection of a new active structure that appears to terminate in Machaze, its step-like geometry, and the lithospheric strength all favor a hypothesis of stress concentration in the source region, which is likely the cause of this event and of the higher-than-average stress drop.
NASA Astrophysics Data System (ADS)
Hough, S. E.; Martin, S.
2013-12-01
(2013) compilation and the Global Earthquake Model (GEM) catalog released in June 2013. The GEM catalog includes three 19th century earthquakes of M8.5 and three of M8.4, and no 19th century earthquakes larger than 8.5. Cumulative moment release rates are notoriously difficult to estimate, but using the Hough (2013) compilation the 19th century moment release rate appears to be roughly half of the rate during the instrumental era; using the GEM catalog the 19th century rate appears to be roughly one-quarter of the instrumental rate. Thus, either 1) the global moment release rate varies by a factor of two or more on century time scales, or 2) the best available historical catalogs significantly underestimate great earthquake magnitudes and overall moment release rates. One can also consider whether magnitudes of great earthquakes were systematically underestimated during the first half of the 20th century, prior to the advent of long-period seismometry. We consider whether the 19th century moment release rate can be made consistent with the rate during the instrumental era using individual event magnitudes within the uncertainties estimated by past published studies. Lastly, we consider the expected variability in the global moment release rate, assuming a linear (Gutenberg-Richter) magnitude-frequency distribution up to Mmax = 9.5 and a Poissonian rate.
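The century-scale comparison amounts to converting catalog magnitudes to moments and summing. The magnitude lists below are hypothetical placeholders, not the Hough (2013) or GEM catalogs; they only illustrate how strongly the few largest events dominate cumulative moment release:

```python
def moment(mw):
    """Seismic moment (N m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

# Illustrative, invented lists: a century capped at M8.5 versus a century
# containing M9+ events (loosely echoing the instrumental era).
capped_century = [8.5, 8.5, 8.5, 8.4, 8.4, 8.4]
m9_century = [9.5, 9.2, 9.1, 9.0, 8.6, 8.5]

rate_capped = sum(moment(m) for m in capped_century)
rate_m9 = sum(moment(m) for m in m9_century)
ratio = rate_capped / rate_m9
```

With these invented inputs a single M9.5 carries more moment than all six capped events combined, which is the crux of the catalog-dependence the abstract describes.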
NASA Astrophysics Data System (ADS)
Sagiya, T.
2013-12-01
Before the 2011 M9.0 Tohoku-oki earthquake, rapid subsidence of more than 5 mm/yr had been observed along the Pacific coast of the Tohoku area by leveling, tide gauges, and GPS (Kato, 1979; Kato and Tsumura, 1979; El-Fiky and Kato, 1999). On the other hand, Stage 5e (~125 ka) marine terraces are widely recognized along the same coast, implying that the area is uplifting over the long term. Ikeda (1999) hypothesized that these deformation signals reflect accumulation of elastic strain at the plate interface and that there is a possibility of a giant earthquake causing coastal uplift. However, the coastal area subsided by as much as 1 m during the 2011 main shock. Though we observe significant postseismic uplift, it is not certain whether the preseismic as well as coseismic subsidence will be recovered. We construct a simple model of the earthquake deformation cycle to interpret the vertical movement along the Pacific coast of northeast Japan. The model consists of a 40 km thick elastic lithosphere overlying a Maxwell viscoelastic asthenosphere with a viscosity of 10^19 Pa s. The plate boundary is modeled as two rectangular faults located in the lithosphere and connected to each other. As for the kinematic conditions on these faults, we represent the temporal evolution of fault slip as a sum of a steady term and a perturbation term, following Savage and Prescott (1978). The first, steady term corresponds to long-term plate subduction, which contributes to long-term geomorphic evolution such as the marine terraces (Hashimoto et al., 2004). The second, perturbation term represents earthquake cycle effects. We evaluate this effect under the assumptions that earthquake occurrence is perfectly periodic, the plate interface is fully coupled during interseismic periods, and the slip deficit is fully released by earthquakes. If the earthquake recurrence interval is shorter than the relaxation time of the structure, interseismic movement is in the opposite direction to the coseismic ones and changes almost linearly
Ren, Junjie; Zhang, Shimin
2013-01-01
The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy between pre- and post-seismic estimates of the recurrence interval of large earthquakes based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably experienced past events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
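A characteristic moment accumulation/release model reduces to dividing the event moment by the accumulation rate. With the paper's rate of 2.7 × 10¹⁷ N m/yr and a textbook Mw-to-moment conversion, the interval comes out around 3300 yr, the same order as the paper's preferred 3900 ± 400 yr (which rests on the paper's own geodetic moment estimate for the 2008 event):

```python
def recurrence_interval(mw, moment_rate):
    """Characteristic recurrence interval (yr): event moment / accumulation rate.

    Moment from the Hanks & Kanamori (1979) relation; moment_rate in N m/yr.
    """
    mo = 10.0 ** (1.5 * mw + 9.1)   # seismic moment in N m
    return mo / moment_rate

# Mw 7.9 event against the (2.7 +/- 0.3) x 10^17 N m/yr rate quoted above:
t = recurrence_interval(7.9, 2.7e17)
```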
Slip model of the 2015 Mw 7.8 Gorkha (Nepal) earthquake from inversions of ALOS-2 and GPS data
NASA Astrophysics Data System (ADS)
Wang, Kang; Fialko, Yuri
2015-09-01
We use surface deformation measurements including Interferometric Synthetic Aperture Radar data acquired by the ALOS-2 mission of the Japanese Aerospace Exploration Agency and Global Positioning System (GPS) data to invert for the fault geometry and coseismic slip distribution of the 2015 Mw 7.8 Gorkha earthquake in Nepal. Assuming that the ruptured fault connects to the surface trace of the Main Frontal Thrust (MFT) fault between 84.34°E and 86.19°E, the best-fitting model suggests a dip angle of 7°. The moment calculated from the slip model is 6.08 × 10^20 N m, corresponding to a moment magnitude of 7.79. The rupture of the 2015 Gorkha earthquake was dominated by thrust motion that was primarily concentrated in a 150 km long zone 50 to 100 km northward from the surface trace of the Main Frontal Thrust (MFT), with maximum slip of ~5.8 m at a depth of ~8 km. The data thus indicate that the 2015 Gorkha earthquake ruptured a deep part of the seismogenic zone, in contrast to the 1934 Bihar-Nepal earthquake, which ruptured a shallow part of the adjacent fault segment to the east.
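The moment-to-magnitude conversion quoted above (6.08 × 10^20 N m → Mw 7.79) follows the standard Hanks-Kanamori relation; a minimal check:

```python
import math

def mw_from_moment(mo):
    """Moment magnitude from seismic moment in N m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(mo) - 9.1)

# The slip-model moment quoted in the abstract:
mw = mw_from_moment(6.08e20)   # close to 7.79
```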
Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.
2006-01-01
Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2-to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports that show Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.
Tseng, W.S.; Lihanand, K.; Ostadan, F.; Tuann, S.Y.
1991-10-01
This report presents the results of post-prediction earthquake response data analyses performed to identify the test system parameters for the 1/4-scale containment model of the Large-Scale Seismic Test (LSST) in Lotung, Taiwan, and the results of post-prediction analytical earthquake parametric studies conducted to evaluate the applicability of four soil-structure interaction (SSI) analysis methods that have frequently been applied in the US nuclear industry. The four methods evaluated were: (1) the soil-spring method; (2) the CLASSI continuum halfspace substructuring method; (3) the SASSI finite element substructuring method; and (4) the FLUSH finite element direct method. Earthquake response data recorded on the containment and internal structure (steam generator and piping) for four earthquake events (LSST06, LSST07, LSST12, and LSST16) having peak ground accelerations ranging from 0.04 g to 0.21 g have been analyzed. The containment SSI system and internal structure system frequencies and associated modal damping ratios consistent with the ground shaking intensity of each event were identified. These results, along with the site soil parameters identified from separate free-field soil response data analyses, were used as the basis for refining the blind-prediction SSI analysis models for each of the four analysis methods evaluated. 12 refs., 5 figs.
Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR
Wright, Tim J.; Lu, Zhong; Wicks, Chuck
2003-01-01
The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.
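The 30–60 kPa Coulomb stress increase quoted above comes from elastic dislocation modeling; the Coulomb failure stress change itself is a simple combination of shear and normal stress changes on the receiver fault. A minimal sketch with illustrative numbers (the effective friction value and the example stresses are assumptions, not from the study):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (Pa) on a receiver fault.

    d_shear : shear stress change in the rake direction (positive promotes slip)
    d_normal: normal stress change, positive for unclamping (tension)
    mu_eff  : effective friction coefficient (0.4 is an illustrative choice)
    """
    return d_shear + mu_eff * d_normal

# e.g. 50 kPa of shear loading partially offset by 20 kPa of clamping,
# giving a net increase of roughly 42 kPa:
dcfs = coulomb_stress_change(50e3, -20e3)
```

Positive values of this quantity bring the receiver fault closer to failure, which is the sense in which the Nenana Mountain event is said to have loaded the future Denali hypocenter.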
Nontrivial static, spherically symmetric vacuum solution in a nonconservative theory of gravity
NASA Astrophysics Data System (ADS)
Oliveira, A. M.; Velten, H. E. S.; Fabris, J. C.
2016-06-01
We analyze the vacuum static spherically symmetric spacetime for a specific class of nonconservative theories of gravity based on Rastall's theory. We obtain a new vacuum solution which has the same structure as the Schwarzschild-de Sitter solution in the general relativity theory obtained with a cosmological constant playing the role of source. We further discuss the structure (in particular, the coupling to matter fields) and some cosmological aspects of the underlying nonconservative theory.
NASA Astrophysics Data System (ADS)
Shibazaki, B.; Tsutsumi, A.; Shimamoto, T.; Noda, H.
2012-12-01
Some observational studies [e.g. Hasegawa et al., 2011] suggested that the 2011 great Tohoku-oki Earthquake (Mw 9.0) released roughly all of the accumulated elastic strain on the plate interface owing to considerable weakening of the fault. Recent studies show that considerable weakening can occur at a high slip velocity because of thermal pressurization or thermal weakening processes [Noda and Lapusta, 2010; Di Toro et al., 2011]. Tsutsumi et al. [2011] examined the frictional properties of clay-rich fault materials under water-saturated conditions and found that velocity weakening or strengthening occurs at intermediate slip velocities and that dramatic weakening occurs at high slip velocities. This dramatic weakening at higher slip velocities is caused by pore-fluid pressurization via frictional heating or gouge weakening. In the present study, we investigate the generation mechanism of megathrust earthquakes along the Japan trench by performing 3D quasi-dynamic modeling with high-speed friction or thermal pressurization. We propose a rate- and state-dependent friction law with two state variables that exhibit weak velocity weakening or strengthening with a small critical displacement at low to intermediate velocities, but a strong velocity weakening with a large critical displacement at high slip velocities [Shibazaki et al., 2011]. We use this friction law for 3D quasi-dynamic modeling of a cycle of the great Tohoku-oki earthquake. We set several asperities where velocity weakening occurs at low to intermediate slip velocities. Outside of the asperities, velocity strengthening occurs at low to intermediate slip velocities. At high slip velocities, strong velocity weakening occurs both within and outside of the asperities. The rupture of asperities occurs at intervals of several tens of years, whereas megathrust events occur at much longer intervals (several hundred years). Megathrust slips occur even in regions where velocity strengthening occurs at low to
Boundary element modeling of earthquake site effects including the complete incident wavefield
NASA Astrophysics Data System (ADS)
Kim, Kyoung-Tae
Numerical modeling of earthquake site effects in realistic, three-dimensional structures, including high frequencies, low surface velocities and surface topography, has not been possible simply because the amount of computer memory constrains the number of grid points available. In principle, this problem is reduced in the Boundary Element Method (BEM) since only the surface of the velocity discontinuity is discretized; wave propagation both inside and outside this boundary is computed analytically. Equivalent body forces are determined on the boundary by solving a matrix equation containing frequency-domain displacement and stress Green's functions from every point on the boundary to every other point. This matrix problem has imposed a practical limit on the size or maximum frequency of previous BEM models. Although the matrix can be quite large, it also seems to be fairly sparse. We have used iterative matrix algorithms of the PETSc package and direct solution algorithms of the ScaLAPACK on the massively parallel supercomputers at Cornell, San Diego and Michigan. Preconditioning has been applied using blockwise ILU decomposition for the iterative approach or LU decomposition for the direct approach. The matrix equation is solved using the GMRES method for the iterative approach and a tri-diagonal solver for the direct approach. Previous BEM applications typically have assumed a single, incident plane wave. However, it is clear that for more realistic ground motion simulations, we need to consider the complete incident wavefield. If we assume that the basin or three-dimensional structure of interest is embedded in a surrounding plane-layered medium, we may use the propagator matrix method to solve for the displacements and stresses at depth on the boundary. This is done in the frequency domain with integration over wavenumber so that all P, S, mode conversions, reverberations and surface waves are included. The Boundary Element Method succeeds in modeling
Ohta-Jasnow-Kawasaki approximation for nonconserved coarsening under shear
Cavagna; Bray; Travasso
2000-10-01
We analytically study coarsening dynamics in a system with nonconserved scalar order parameter, when a uniform time-independent shear flow is present. We use an anisotropic version of the Ohta-Jasnow-Kawasaki approximation to calculate the growth exponents in two and three dimensions: for d=3 the exponents we find are the same as expected on the basis of simple scaling arguments, that is, 3/2 in the flow direction and 1/2 in all the other directions, while for d=2 we find an unusual behavior, in that the domains experience an unlimited narrowing for very large times and a nontrivial dynamical scaling appears. In addition, we consider the case where an oscillatory shear is applied to a two-dimensional system, finding in this case a standard t^(1/2) growth, modulated by periodic oscillations. We support our two-dimensional results by means of numerical simulations and we propose to test our predictions by experiments on twisted nematic liquid crystals. PMID:11089010
Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Nyst, M.
2013-12-01
We present a Risk Management perspective on earthquake recurrence on mature faults, and the ways that it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence, the fact that losses at all return periods are needed (and not at discrete values of the return period), and the set-up of financial models which sometimes require the modeling of realizations of the order in which events may occur (i.e., simulated event dates are important, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weights given by the posterior distribution. Finally we propose to the community: 1. A general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) on plausible models and model parameters, but also preserve the information on what we do not know; and 2. The creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether some trends emerge in terms of which model dominates in which conditions.
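The abstract does not name its three PDFs; the Brownian Passage Time, Weibull, and lognormal distributions are the usual choices in time-dependent recurrence modeling, so the sketch below assumes them. All parameter values and weights are illustrative; a real application would set the weights from posterior model evidence:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density; mean mu, aperiodicity alpha."""
    return (math.sqrt(mu / (2.0 * math.pi * alpha ** 2 * t ** 3))
            * math.exp(-(t - mu) ** 2 / (2.0 * mu * alpha ** 2 * t)))

def weibull_pdf(t, scale, shape):
    """Weibull density with the standard scale/shape parameterization."""
    return (shape / scale) * (t / scale) ** (shape - 1.0) * math.exp(-(t / scale) ** shape)

def lognormal_pdf(t, med, sigma):
    """Lognormal density with median `med` and log-standard-deviation sigma."""
    return (math.exp(-(math.log(t / med)) ** 2 / (2.0 * sigma ** 2))
            / (t * sigma * math.sqrt(2.0 * math.pi)))

def mixture_pdf(t, weights):
    """Posterior-weighted linear combination of the three densities.

    Parameters (means/scales around 300 yr) are illustrative placeholders.
    """
    parts = (bpt_pdf(t, 300.0, 0.5),
             weibull_pdf(t, 330.0, 2.2),
             lognormal_pdf(t, 280.0, 0.5))
    return sum(w * p for w, p in zip(weights, parts))
```

Because each component integrates to one, any weights summing to one keep the mixture a proper density, which is what makes the Bayesian combination straightforward to use in loss simulations.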
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Balsara, Dinshaw S.
2016-01-01
In this paper a new, simple and universal formulation of the HLLEM Riemann solver (RS) is proposed that works for general conservative and non-conservative systems of hyperbolic equations. For non-conservative PDE, a path-conservative formulation of the HLLEM RS is presented for the first time in this paper. The HLLEM Riemann solver is built on top of a novel and very robust path-conservative HLL method. It thus naturally inherits the positivity properties and the entropy enforcement of the underlying HLL scheme. However, with just the slight additional cost of evaluating eigenvectors and eigenvalues of intermediate characteristic fields, we can represent linearly degenerate intermediate waves with a minimum of smearing. For conservative systems, our paper provides the easiest and most seamless path for taking a pre-existing HLL RS and quickly and effortlessly converting it to a RS that provides improved results, comparable with those of an HLLC, HLLD, Osher or Roe-type RS. This is done with minimal additional computational complexity, making our variant of the HLLEM RS also a very fast RS that can accurately represent linearly degenerate discontinuities. Our present HLLEM RS also transparently extends these advantages to non-conservative systems. For shallow water-type systems, the resulting method is proven to be well-balanced. Several test problems are presented for shallow water-type equations and two-phase flow models, as well as for gas dynamics with real equation of state, magnetohydrodynamics (MHD & RMHD), and nonlinear elasticity. Since our new formulation accommodates multiple intermediate waves and has a broader applicability than the original HLLEM method, it could alternatively be called the HLLI Riemann solver, where the "I" stands for the intermediate characteristic fields that can be accounted for.
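A path-conservative HLLEM scheme is beyond a short sketch, but the HLL building block the paper starts from can be illustrated for the 1D shallow water equations. Everything below (Davis wave-speed estimates, the dam-break example) is an illustrative textbook formulation, not the paper's method:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(u):
    """Physical flux of the 1D shallow water equations, u = (h, h*v)."""
    h, hv = u
    v = hv / h
    return (hv, hv * v + 0.5 * G * h * h)

def hll_flux(ul, ur):
    """HLL numerical flux between left/right states ul, ur = (h, h*v).

    Uses simple Davis estimates for the fastest left/right signal speeds;
    the HLLEM/HLLI construction adds anti-diffusive terms for the
    intermediate (linearly degenerate) fields on top of this.
    """
    hl, hvl = ul
    hr, hvr = ur
    vl, vr = hvl / hl, hvr / hr
    cl, cr = math.sqrt(G * hl), math.sqrt(G * hr)
    sl = min(vl - cl, vr - cr)
    sr = max(vl + cl, vr + cr)
    fl, fr = swe_flux(ul), swe_flux(ur)
    if sl >= 0.0:
        return fl
    if sr <= 0.0:
        return fr
    return tuple((sr * a - sl * b + sl * sr * (d - c)) / (sr - sl)
                 for a, b, c, d in zip(fl, fr, ul, ur))

# Symmetric states reduce to the physical flux; a dam break sends mass rightward:
f_sym = hll_flux((1.0, 0.0), (1.0, 0.0))
f_dam = hll_flux((2.0, 0.0), (1.0, 0.0))
```

The single intermediate state of plain HLL smears contact-like waves; representing those fields explicitly, at the cost of evaluating their eigenvectors, is exactly the improvement the HLLEM/HLLI formulation described above provides.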
ERIC Educational Resources Information Center
Stein, Ross S.; Yeats, Robert S.
1989-01-01
Points out that large earthquakes can take place not only on faults that cut the earth's surface but also on blind faults under folded terrain. Describes four examples of fold earthquakes. Discusses the fold earthquakes using several diagrams and pictures. (YP)
An earthquake instability model based on faults containing high fluid-pressure compartments
Lockner, D.A.; Byerlee, J.D.
1995-01-01
results of a one-dimensional dynamic Burridge-Knopoff-type model to demonstrate various aspects of the fluid-assisted fault instability described above. In the numerical model, the fault is represented by a series of blocks and springs, with fault rheology expressed by static and dynamic friction. In addition, the fault surface of each block has associated with it pore pressure, porosity and permeability. All of these variables are allowed to evolve with time, resulting in a wide range of phenomena related to fluid diffusion, dilatancy, compaction and heating. These phenomena include creep events, diffusion-controlled precursors, triggered earthquakes, foreshocks, aftershocks, and multiple earthquakes. While the simulations have limitations inherent to 1-D fault models, they demonstrate that the fluid compartment model can, in principle, provide the rich assortment of phenomena that have been associated with earthquakes. © 1995 Birkhäuser Verlag.
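The paper's model couples pore pressure, porosity and permeability to each block; the sketch below shows only the bare static/dynamic-friction stick-slip cycle of a single spring-loaded block, the skeleton onto which those fluid variables are added. All parameter values are dimensionless illustrations:

```python
def stick_slip(n_steps, k=1.0, v_load=1.0, dt=0.01,
               tau_static=1.0, tau_dynamic=0.6):
    """Return event times for a single spring-loaded block.

    Stress loads at rate k * v_load; when it reaches the static strength
    tau_static the block slips instantaneously and the stress drops to the
    dynamic level tau_dynamic.
    """
    tau, t = 0.0, 0.0
    events = []
    for _ in range(n_steps):
        t += dt
        tau += k * v_load * dt
        if tau >= tau_static - 1e-9:   # tolerance for float accumulation
            events.append(t)
            tau = tau_dynamic
    return events

events = stick_slip(300)
# After the first event, intervals settle at (tau_static - tau_dynamic) / (k * v_load).
```

In the fluid-compartment model, pore-pressure evolution effectively modulates tau_static between events, which is what produces the creep events, precursors and triggered sequences listed in the abstract.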
Teamwork tools and activities within the hazard component of the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
Short-term earthquake forecasting based on an epidemic clustering model
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2016-04-01
The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or the model, characterizing the concerned anomaly or precursor, so as it can be objectively recognized in any circumstance and by any observer. This is mandatory to build up on the old-fashion approach consisting only of the retrospective anecdotic study of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total time-space volume within which the probability of occurrence of strong earthquakes is higher than the usual. The test of a similar hypothesis needs the observation of a sufficient number of past cases upon which a statistical analysis is possible. This analysis should be aimed to determine the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, or the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the parameters of the model that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose may include the definition of Probability Gain or the R-Score, as well as the application of popular plots such as the Molchan error-diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the concept of the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses. Whatever is the method chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations
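The binary table described above reduces to four counts, from which the success rate, alarm rate, R-score and probability gain all follow directly. A minimal sketch (the example counts are invented for illustration):

```python
def forecast_scores(a, b, c, d):
    """Scores from a binary forecast contingency table.

    a: alarms followed by a target event (successes)
    b: alarms not followed by an event (false alarms)
    c: events not preceded by an alarm (failures)
    d: quiet periods correctly left un-alarmed
    """
    success_rate = a / (a + b)      # fraction of alarms that succeed
    alarm_rate = a / (a + c)        # fraction of events preceded by an alarm
    r_score = a / (a + c) - b / (b + d)   # hit rate minus false-alarm rate
    base_rate = (a + c) / (a + b + c + d)
    prob_gain = success_rate / base_rate  # gain over the unconditional rate
    return success_rate, alarm_rate, r_score, prob_gain

# Invented counts: 10 alarms (8 hits, 2 false), 12 events total, 100 periods:
sr, ar, r, pg = forecast_scores(8, 2, 4, 86)
```

Points on the Molchan error diagram are built from the complementary quantities (failure fraction c/(a+c) against the alarm space-time fraction), so the same four counts feed all of the tools named above.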
Stochastic earthquake source model: the omega-square hypothesis and the directivity effect
NASA Astrophysics Data System (ADS)
Molchan, G.
2015-07-01
Recently A. Gusev suggested and numerically investigated a doubly stochastic earthquake source model. The model is supposed to demonstrate the following features in the far-field body waves: (1) the omega-square high-frequency (HF) behavior of displacement spectra; (2) lack of the directivity effect in HF radiation; and (3) a stochastic nature of the HF signal component. The model involves two stochastic elements: the local stress drop (SD) on a fault and the rupture time function (RT) with a linear dominant component. The goal of the present study is to investigate the Gusev model theoretically and to find conditions for (1) and (2) to be valid and stable relative to the receiver site. Models with smooth elements SD and RT are insufficient for these purposes. Therefore, SD and RT are treated as realizations of stochastic fields of fractal type. The local smoothness of such fields is characterized by the fractional (Hurst) exponent H, 0 < H < 1. This allows us to consider a wide class of stochastic functions without regard to their global spectral properties. We show that the omega-square behavior of the model is achieved approximately if the rupture time function is almost regular (H ~ 1) while the stress drop is a rough function of any index H. However, if the rupture front is linear, the local stress drop has to be a function of minimal smoothness (H ~ 0). The situation with the directivity effect is more complicated: for different RT models with the same fractal index, the effect may or may not occur. The nature of the phenomenon is purely analytical. The main controlling factor for the directivity is the degree of smoothness of the 2-D distributions of the RT random function. For this reason the directivity effect is unstable. This means that in practice opposite conclusions about the statistical significance of the directivity effect are possible.
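The omega-square behavior referred to throughout is commonly written as a displacement amplitude spectrum |u(f)| ∝ Mo / (1 + (f/fc)²), flat below the corner frequency fc and falling off as f⁻² above it. A minimal sketch (site and path terms omitted; parameter values illustrative):

```python
def omega_square_spectrum(f, mo, fc):
    """Far-field displacement amplitude spectrum, omega-square model.

    mo sets the low-frequency plateau (moment units); above the corner
    frequency fc the amplitude falls off as f**-2.
    """
    return mo / (1.0 + (f / fc) ** 2)

# Well above the corner, doubling the frequency quarters the amplitude:
hi1 = omega_square_spectrum(10.0, 1e19, 0.1)
hi2 = omega_square_spectrum(20.0, 1e19, 0.1)
```

The paper's question is under what smoothness conditions on the stochastic SD and RT fields the radiated spectrum actually attains this f⁻² fall-off independently of the receiver azimuth.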
Dewey, J.W.
1991-01-01
Joint epicenter determination of earthquakes that occurred in northern Algeria near Ech Cheliff (named Orleansville in 1954 and El Asnam in 1980) shows that the earthquake of 9 September 1954 (M=6.5) occurred at nearly the same location as the earthquake of 10 October 1980 (M=7.3). The 1954 main shock and earliest aftershocks were concentrated close to the boundaries of segment B (nomenclature of Deschamps et al., 1982; King and Yielding, 1984) of the 1980 fault system, which was to experience approximately 8 m of slip in the 1980 earthquake. Later aftershocks of the 1954 earthquake were spread over a broad area, notably in a region north of the 1980 fault system that also experienced many aftershocks of the 1980 earthquake. The closeness of the 1954 main shock and earliest aftershocks to the 1980 segment B implies that the 1954 earthquake involved either 1) rupture of segment B proper, or 2) rupture of a distinct fault in the hanging wall or footwall block of segment B. -from Author
Simpson, Robert W.
1994-01-01
If there is a single theme that unifies the diverse papers in this chapter, it is the attempt to understand the role of the Loma Prieta earthquake in the context of the earthquake 'machine' in northern California: as the latest event in a long history of shocks in the San Francisco Bay region, as an incremental contributor to the regional deformation pattern, and as a possible harbinger of future large earthquakes. One of the surprises generated by the earthquake was the rather large amount of uplift that occurred as a result of the reverse component of slip on the southwest-dipping fault plane. Preearthquake conventional wisdom had been that large earthquakes in the region would probably be caused by horizontal, right-lateral, strike-slip motion on vertical fault planes. In retrospect, the high topography of the Santa Cruz Mountains and the elevated marine terraces along the coast should have provided some clues. With the observed ocean retreat and the obvious uplift of the coast near Santa Cruz that accompanied the earthquake, Mother Nature was finally caught in the act. Several investigators quickly saw the connection between the earthquake uplift and the long-term evolution of the Santa Cruz Mountains and realized that important insights were to be gained by attempting to quantify the process of crustal deformation in terms of Loma Prieta-type increments of northward transport and fault-normal shortening.
Aagaard, Brad T.; Graves, Robert W.; Rodgers, Arthur; Brocher, Thomas M.; Simpson, Robert W.; Dreger, Douglas; Petersson, N. Anders; Larsen, Shawn C.; Ma, Shuo; Jachens, Robert C.
2010-01-01
We simulate long-period (T>1.0–2.0 s) and broadband (T>0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7–7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault, we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions, compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area, with about 50% of the urban area experiencing modified Mercalli intensity VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland earthquake and the 2007 Mw 5.45 Alum Rock earthquake show that the U.S. Geological Survey’s Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area for Hayward fault earthquakes, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions for the suite of scenarios exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute much of this difference to the seismic velocity structure in the San Francisco Bay area and how the NGA models account for basin amplification; the NGA relations may underpredict amplification in shallow sedimentary basins. The simulations also suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by increasing the areal extent of rupture directivity with period.
NASA Astrophysics Data System (ADS)
Hsu, H.; Tseng, T.; Jian, P.; Mumladze, T.; Chung, S.; Huang, B.; Javakishvili, Z.; Chen, W.
2012-12-01
The Caucasus mountain belts mark the northern terminus of the continental collision between Arabia and Eurasia. Plate convergence is predominantly north-south, at a rate of approximately 10-20 mm/yr across the Iranian Plateau and Caucasus region; the collision also causes the Anatolian block to extrude laterally. In the Caucasus region, earthquakes are usually of low magnitude (M < 4); however, a few historical events with magnitudes approaching 7 have been documented since the nineteenth century. Over the past 40 years, three large earthquakes have occurred: the 1970 Dagestan (Ms = 6.5) and 1991 Racha-Dzhava (Ms = 7.0) Georgian earthquakes in the foothills of the Greater Caucasus, and the 1988 Spitak, Armenia earthquake (Ms = 6.9) near the Lesser Caucasus. Because stations in this area are limited, focal mechanisms have been estimated from global waveform data primarily for large earthquakes; small earthquakes, in contrast, are less studied and poorly constrained. In this study, we use regional waveforms to constrain the focal mechanisms and depths of earthquakes in the major seismic zones of the Greater and Lesser Caucasus. Through international collaboration since 2008, we have collected data from broadband stations deployed by the Institute of Earth Sciences, Academia Sinica of Taiwan, permanent stations of the Global Seismographic Network, and Georgian local broadband stations for superior coverage. We examine earthquakes with magnitudes above 3.5 in the study region. Preliminary focal mechanisms for small earthquakes are generally consistent with the large events constrained by earlier studies and with the corresponding faults. We will improve the solutions through systematic tests of suitable parameters and by including more available stations. With well-determined focal mechanisms, we aim to better understand the stress distribution, the relation to nearby fault systems, and the detailed tectonic structure of the Caucasus.
A Directivity Model For Moderate To Large Earthquakes Based On The Direct-Point Parameter
NASA Astrophysics Data System (ADS)
Spudich, P.; Chiou, B. S.
2013-12-01
We have developed a new model to predict the directivity of pseudo-spectral acceleration in the 1-10 second band for crustal earthquakes of magnitude exceeding 5.7. The model uses a new directivity predictor, the Direct Point Parameter (DPP), which, like the Isochrone Directivity Parameter (IDP) of Spudich and Chiou (2013), is based on isochrone theory but has several advantages over the IDP. The DPP has a stronger theoretical underpinning than the IDP: it accounts for the radiation pattern of a finite line source between the hypocenter and the 'direct point', a special point located in a zone of higher isochrone velocity than the IDP's 'closest point' (the point on the fault closest to the site where ground motions are to be evaluated). The IDP model, by contrast, uses a point-source radiation pattern at the hypocenter. The DPP also has smoother spatial variation than the IDP. It does not depend on the location of the closest point, which can jump discontinuously from one segment of a geometrically complicated fault to another when the target site moves a small distance; consequently, when using the DPP it is less likely that a user's site will unknowingly lie on the high or low side of a discontinuity in the predictor. Furthermore, the DPP is easier to calculate than the IDP because 1) the radiation-pattern formulae are simpler, 2) it uses a simpler algorithm for handling multi-segment and multi-fault ruptures, and 3) a generalized coordinate transform is no longer necessary for non-planar faults. The directivity model using the DPP is 'narrowband', meaning that the strength of directivity does not rise inexorably with period but instead peaks at a period that increases with magnitude. The DPP model is the only directivity model explicitly included in one of the NGA-West2 ground-motion prediction equations, namely Chiou and Youngs (2013).
NASA Astrophysics Data System (ADS)
Kim, M. J.; Segall, P.; Johnson, K. M.
2012-12-01
Most recent models of interseismic deformation in Cascadia have been restricted to elastic half-spaces. In this study, we investigate interseismic deformation in the Cascadia subduction zone using a viscoelastic earthquake-cycle model in order to constrain the extent of plate coupling, the elastic plate thickness, and the viscoelastic relaxation time. Our model of the plate interface consists of an elastic layer overlying a Maxwell viscoelastic half-space. The fault in the elastic layer comprises a fully locked zone that slips during megathrust events with a cycle time T = 500 years, and a transition zone in which the interseismic slip rate changes from zero (fully coupled) to the plate velocity (zero coupling). Slip deficit within the transition zone is accommodated by either coseismic or rapid postseismic slip. We model the slip rate in the transition zone using the analytic solution for slip at constant resistive stress in an elastic full space. We explore ranges of the four model parameters (elastic plate thickness, relaxation time, and the upper and lower bounds of the transition zone) to find the combination that minimizes the residual between predicted surface velocities and observed GPS data. GPS position solutions were provided by PANGA; our data set consists of 29 GPS station velocities from 2002 to 2010 in the Olympic Peninsula - southern Vancouver Island region, chosen because this region is least affected by forearc rotation. Our preliminary results suggest a shallow fully locked zone (< 15 km depth) with a relaxation time (< 100 years) that is short compared with the recurrence interval (~ 500 years). For a given degree of misfit to the data, accounting for the viscoelastic effect allows a deeper locking depth than the fully elastic model does.
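The parameter search described above can be sketched as a simple grid search that minimizes the residual between modeled and observed velocities. The forward model below is a toy arctangent (Savage-Burford-style) interseismic profile over a single locking-depth parameter, not the layered viscoelastic cycle model of the study, and the "observed" velocities are synthetic; both are illustrative assumptions.

```python
import numpy as np

# Hypothetical forward model: interseismic surface velocity from deep slip
# below a locking depth (Savage-Burford arctangent profile). The real study
# uses an elastic layer over a Maxwell half-space, not this toy.
def predicted_velocity(x_km, locking_depth_km, plate_rate_mm_yr):
    return (plate_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

# Synthetic "observations": 29 stations, true locking depth 15 km, plus noise
x = np.linspace(-150.0, 150.0, 29)
rng = np.random.default_rng(0)
obs = predicted_velocity(x, 15.0, 40.0) + rng.normal(0.0, 0.5, x.size)

# Grid search over locking depth, minimizing the residual sum of squares
depths = np.linspace(5.0, 40.0, 71)
rss = [np.sum((obs - predicted_velocity(x, d, 40.0)) ** 2) for d in depths]
best = depths[int(np.argmin(rss))]
print(f"best-fit locking depth: {best:.1f} km")
```

The study's actual search is four-dimensional (plate thickness, relaxation time, and two transition-zone bounds), but the structure — forward model, residual, minimization over a parameter grid — is the same.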
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.
2014-12-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented opportunity to unveil the complex rupture processes of a megathrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-NET and KiK-net) revealed complex ground-motion patterns attributed to source effects, allowing detailed information on the rupture process to be captured. Seismic stations surrounding the Miyagi region (e.g., MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong-motion data by Lee et al. (2011), in which two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots at fault points aligned in the EW direction through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid-pressurization effects: Kanamori and Heaton (2002) argued that during faulting in large earthquakes the temperature rises high enough to cause melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip-reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden, sequential stress drops. Our model starts like an M7-8 earthquake that only weakly breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea-floor displacements are in agreement with 1 Hz GPS displacements (GEONET), and the seismograms agree roughly with seismic records along the coast of Japan. The simulated sea-floor displacement reaches 8-10 meters of
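A friction law with two sequential stress drops can be sketched as a piecewise slip-weakening curve: strength first weakens from a static to a dynamic level, then, once reactivation is triggered (standing in for thermal pressurization ~40 s later), weakens again to a lower level. The piecewise form, the boolean trigger, and all numerical values below are illustrative assumptions, not the SPECFEM3D implementation used in the study.

```python
def two_stage_strength(slip, tau_s=10.0, tau_d1=6.0, tau_d2=2.0,
                       dc1=1.0, dc2=2.0, reactivated=False):
    """Fault strength (MPa) versus slip (m) for a two-stage slip-weakening law.

    Stage 1: linear weakening from the static level tau_s to tau_d1 over dc1.
    Stage 2 (only if `reactivated`, e.g. by delayed thermal pressurization):
    further weakening from tau_d1 to tau_d2 over an additional dc2 of slip.
    All parameter values are illustrative, not those of the Tohoku model.
    """
    if slip < dc1:
        return tau_s - (tau_s - tau_d1) * slip / dc1
    if not reactivated:
        return tau_d1                      # first dynamic level; rupture can arrest here
    extra = min(slip - dc1, dc2)
    return tau_d1 - (tau_d1 - tau_d2) * extra / dc2
```

Without reactivation the fault holds at the first dynamic level (the M7-8 stage); with reactivation the second stress drop releases the additional slip that drives the rupture to the trench.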
NASA Astrophysics Data System (ADS)
Jupp, Tim E.; Pyle, David M.; Mason, Ben G.; Dade, W. Brian
2004-02-01
Evidence of nonuniformity in the rates of seismicity and volcanicity has been sought on a variety of timescales, ranging from ~12.4 hours (tidal) to 10³-10⁴ years (climatic), but the results are mixed. Here, we propose a simple conceptual model for the influence of periodic processes on the frequency of geophysical "failure events" such as earthquakes and volcanic eruptions. In our model a failure event occurs at a "failure time" tF = tI + tR, which is controlled by an "initiation event" at the "initiation time" tI and by the "response time" of the system, tR. We treat the initiation time, the response time, and the failure time each as a random variable. In physical terms, we define the initiation time as the time at which a "load function" exceeds a "strength function," and we imagine that the response time tR corresponds to a physical process such as crack propagation or the movement of magma. Assuming that the magnitude and frequency of the periodic pro
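The failure-time decomposition tF = tI + tR can be sketched with a Monte Carlo experiment: a load function (secular trend plus periodic term) is scanned until it first exceeds a constant strength, giving tI, and an exponentially distributed delay stands in for tR. The specific load, strength, and response-time distributions are illustrative assumptions; the abstract does not prescribe them.

```python
import math
import random

random.seed(1)

PERIOD = 12.4          # hours, tidal-like forcing (illustrative)
AMPLITUDE = 0.05       # amplitude of the periodic load term (arbitrary units)
LOADING_RATE = 0.01    # secular loading rate per hour (illustrative)
STRENGTH = 1.0         # constant strength function (illustrative)
MEAN_RESPONSE = 3.0    # mean response time t_R in hours (illustrative)

def initiation_time(phase):
    """First time t at which the load function exceeds the strength."""
    t = 0.0
    while True:
        load = LOADING_RATE * t + AMPLITUDE * math.sin(2 * math.pi * t / PERIOD + phase)
        if load >= STRENGTH:
            return t
        t += 0.01

# t_F = t_I + t_R: failure times for an ensemble of random forcing phases
failures = []
for _ in range(200):
    t_i = initiation_time(random.uniform(0.0, 2.0 * math.pi))
    t_r = random.expovariate(1.0 / MEAN_RESPONSE)
    failures.append(t_i + t_r)

# Phase of each failure within the forcing period; nonuniformity of this
# distribution is the signature of periodic modulation the model describes.
phases = [(t % PERIOD) / PERIOD for t in failures]
```

A broad response-time distribution (large MEAN_RESPONSE relative to PERIOD) smears the phase distribution toward uniformity, which is one way the model can explain mixed observational results.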