Self-organization through dissipation in a non-conservative cellular-automaton model of earthquakes
NASA Astrophysics Data System (ADS)
Al-Kindy, F. H.; Main, I. G.
2003-04-01
Self-organizing systems are of particular interest from a thermodynamic perspective since they show the spontaneous emergence of pattern or order, in apparent contradiction of the second law of thermodynamics. Here we investigate self-organization in a non-conservative version of the Bak and Tang (BT) cellular-automaton model, and its dependence on dissipation, for populations of synthetic earthquakes. The probability distributions of strain energy and radiated energy are used to calculate the Shannon entropy S as a measure of disorder. As the conservation parameter α is decreased, the external entropy Se decreases as the internal entropy Si increases, with the total change ΔSi + ΔSe > 0. This suggests that in the model self-organization occurs at the expense of lowering the internal energy of the system through dissipation, while increasing its entropy production globally. The results also show some similarities with recent analyses of real earthquake populations.
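To make the automaton concrete, here is a toy sketch of a non-conservative sandpile of the BT/OFC type together with a Shannon-entropy estimate over avalanche sizes. The drive rule, grid size, and entropy estimator are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def sandpile_entropy(alpha, size=20, steps=1000, seed=0):
    """Toy non-conservative cellular automaton in the BT/OFC spirit.

    alpha is the conservation parameter: a toppling site passes
    alpha * load / 4 to each of its four neighbours, so alpha < 1
    dissipates energy.  Returns the Shannon entropy of the distribution
    of avalanche ("radiated energy") sizes.  All settings are illustrative.
    """
    rng = np.random.default_rng(seed)
    grid = rng.random((size, size))
    sizes = []
    for _ in range(steps):
        # drive: raise every site until the most loaded one reaches threshold
        grid += 1.0 - grid.max()
        avalanche = 0
        unstable = grid >= 1.0 - 1e-12
        while unstable.any():
            for i, j in zip(*np.nonzero(unstable)):
                load = grid[i, j]
                grid[i, j] = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < size and 0 <= nj < size:
                        grid[ni, nj] += alpha * load / 4.0  # dissipative rule
                avalanche += 1
            unstable = grid >= 1.0 - 1e-12
        sizes.append(avalanche)
    counts = np.bincount(sizes)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))  # Shannon entropy of avalanche sizes
```

Open boundaries also leak load, so even at alpha = 1 avalanches terminate; the entropy of the avalanche-size distribution can then be tracked as alpha varies.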
B Non-Conservation, Cold Dark Matter and Subpreon Model
NASA Astrophysics Data System (ADS)
Senju, H.
1990-06-01
In our model a weakly interacting massive particle l_{s}(e) exists. Consideration of baryon-number non-conserving processes, which are assumed to originate in subpreon physics, shows that this particle exists asymmetrically in the universe. Using the formalism of Griest and Seckel, it is shown that l_{s}(e) is a viable candidate for cold dark matter.
Experimental consequences of a horizontal gauge model for CP nonconservation
Hou, W.; Soni, A.
1985-05-13
The experimental consequences of a model that links CP nonconservation with horizontal interactions and is based on the gauge group SU_L^W(2) × SU_R^H(2) × U^Y(1) are investigated. The magnitude of the observed CP nonconservation and that of the K_L-K_S mass difference constrain the horizontal gauge boson mass M_R such that 66 TeV ≳ M_R ≳ 5 TeV. The model implies an extremely small value for |ε′/ε|. The branching ratio for K_L → μe (K → πμe) could be as large as ≈10⁻¹⁰ (≈10⁻¹²). θ_QFD vanishes at the tree level. The contribution from the gauge sector, arising at two loops, is also discussed.
Precision non-conservative force modelling for Low Earth Orbiting spacecraft
NASA Astrophysics Data System (ADS)
Sibthorpe, Anthony John
Low Earth Orbiting spacecraft are used in various ways for remote observation and measurement of system Earth; some classes of measurements are only useful when modelled in a spatial reference frame. As the position of a satellite at a particular epoch is used to provide a fixed point of reference, it is vital that we know these positions both accurately and precisely. Non-conservative forces, which change the energy state of a spacecraft system, can have a dramatic effect on the estimated position of a satellite if unmodelled or, as is often the case, modelled only crudely. Downstream Earth observation data can inherit significant errors as a result. As an example, it has been recognised that apparent long-wavelength signals can be introduced into interferometric synthetic aperture radar (SAR) images by orbit error. Such images are used to monitor surface deformation, and may provide an indication of strain accumulation as a precursor to earthquake activity. It therefore makes sense to model these non-conservative forces better, thus improving the quality of the Earth observation data. This project develops precise methodologies for modelling solar radiation pressure, thermal re-radiation, eclipses, Earth radiation pressure, spacecraft internal heat distribution and on-board instrument power output, and applies these techniques to the European Space Agency's ENVISAT satellite. This complicated satellite has necessitated the development of a significant number of new algorithms for dealing with a large number of geometric primitives. A graphical display tool, developed during this research, allows rapid model development and improved error checking. Resultant models are incorporated into the GEODYN II orbit determination software, developed at NASA's Goddard Space Flight Center. Precise orbits computed using tracking data in combination with the newly developed force models are compared against precise orbits generated using nominal force
An uncertainty inclusive un-mixing model to identify tracer non-conservativeness
NASA Astrophysics Data System (ADS)
Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; hUallacháin, Daire Ó.
2015-04-01
Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in the sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain, including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers, relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data were collected in an 11 km2 river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time-integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (<40°C), sieved (125 µm) and analysed for mineral magnetic susceptibility, anhysteretic remanence and isothermal remanence, and the geochemical elements Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn (following microwave-assisted acid digestion). Discriminant analysis was used to reduce the number of tracers before un-mixing.
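The core un-mixing step can be illustrated with a minimal least-squares sketch. This is not the FR2000 algorithm itself: the soft sum-to-one weighting is an assumption of this sketch, and the non-negativity constraint, tracer weighting and uncertainty propagation used in practice are omitted:

```python
import numpy as np

def unmix(source_tracers, target, weight=1e3):
    """Least-squares sediment un-mixing sketch (not the FR2000 algorithm).

    source_tracers: (n_tracers, n_sources) mean tracer values per source.
    target: (n_tracers,) tracer values measured at the catchment outlet.
    A heavily weighted extra row softly enforces proportions summing to one;
    non-negativity, tracer weighting and uncertainty propagation are omitted.
    """
    n_sources = source_tracers.shape[1]
    A = np.vstack([source_tracers, weight * np.ones((1, n_sources))])
    b = np.concatenate([np.asarray(target, float), [weight]])
    proportions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return proportions
```

With exact (uncorrupted) target values the true source proportions are recovered; a manually corrupted target tracer, as in the synthetic experiment described above, perturbs the estimated proportions.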
NASA Astrophysics Data System (ADS)
Luthcke, S. B.; Marshall, J. A.
1992-11-01
The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10-day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed, in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived, and the ability to meet mission requirements with the 'box-wing' model is evaluated.
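The idea of summing per-plate radiative accelerations can be sketched as follows. The plate decomposition is the standard absorbed/specular/diffuse flat-plate model from the orbit-determination literature; the flux and mass defaults are generic placeholders, not TOPEX/Poseidon values:

```python
import numpy as np

def srp_accel(plates, sun_dir, flux=1361.0, c=299792458.0, mass=2500.0):
    """Sum flat-plate solar-radiation-pressure accelerations ('box-wing' sketch).

    plates: iterable of (area_m2, outward_normal, reflectivity, specularity).
    sun_dir: unit vector from the spacecraft toward the Sun.
    Uses the standard absorbed/specular/diffuse flat-plate decomposition;
    flux and mass defaults are generic, not TOPEX/Poseidon values.
    """
    sun_dir = np.asarray(sun_dir, float)
    pressure = flux / c                       # radiation pressure [N/m^2]
    accel = np.zeros(3)
    for area, normal, rho, s in plates:
        n = np.asarray(normal, float)
        cos_t = float(np.dot(n, sun_dir))
        if cos_t <= 0.0:                      # plate faces away from the Sun
            continue
        force = -pressure * area * cos_t * (
            (1.0 - rho * s) * sun_dir
            + 2.0 * (rho * s * cos_t + rho * (1.0 - s) / 3.0) * n
        )
        accel += force / mass
    return accel
```

A box-wing model is then eight such plates plus the array, with selected optical coefficients left free for adjustment against tracking data.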
NASA Astrophysics Data System (ADS)
Charpentier, Arthur; Durand, Marilou
2015-07-01
In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint-distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time since the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use these two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
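A toy version of such a conditional generator can be sketched as below. The link functions tying the Pareto tail index to the previous waiting time, and the Gamma scale to the previous magnitude, are invented for illustration; they are not the fitted relations from the study:

```python
import numpy as np

def simulate_sequence(n=1000, seed=1):
    """Toy conditional earthquake generator in the spirit of the paper.

    Magnitudes follow a Pareto law whose tail index depends on the previous
    waiting time; waiting times follow a Gamma law whose scale depends on
    the previous magnitude.  The link functions are invented for
    illustration, not the fitted ones from the study.
    """
    rng = np.random.default_rng(seed)
    mags, waits = [5.0], [10.0]
    for _ in range(n - 1):
        # heavier magnitude tail after longer quiet periods (assumed link)
        alpha = 2.5 / (1.0 + 0.01 * waits[-1])
        u = 1.0 - rng.random()                    # uniform on (0, 1]
        mags.append(4.0 * u ** (-1.0 / alpha))    # Pareto(x_min=4, alpha)
        # longer expected waits after larger events (assumed link)
        waits.append(rng.gamma(shape=1.5, scale=2.0 * (mags[-1] - 3.0)))
    return np.array(mags), np.array(waits)
```

Counting how many simulated events exceed a magnitude threshold within a fixed total waiting time then gives Monte Carlo occurrence probabilities of the kind the abstract describes.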
Gravity and Nonconservative Force Model Tuning for the GEOSAT Follow-On Spacecraft
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.; Chinn, Douglas S.; Marr, Gregory C.; Smith, David E. (Technical Monitor)
2000-01-01
The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998, and the primary objective of the mission was to map the oceans using a radar altimeter. Three radar altimeter calibration campaigns were conducted in 1999 and 2000. The spacecraft is tracked by satellite laser ranging (SLR) and Doppler beacons, and a limited amount of data has been obtained from the Global Positioning System (GPS) receiver on board the satellite. Even with EGM96, the predicted radial orbit error due to gravity field mismodelling (to 70x70) remains high at 2.61 cm (compared to 0.88 cm for TOPEX). We report on the preliminary gravity model tuning for GFO using SLR and altimeter crossover data. Preliminary solutions using SLR and GFO/GFO crossover data from CalVal campaigns I and II in June-August 1999 and January-February 2000 have reduced the predicted radial orbit error to 1.9 cm, and further reduction will be possible when additional data are added to the solutions. The gravity model tuning has improved principally the low-order m-daily terms and has significantly reduced the geographically correlated error present in this satellite orbit. In addition to gravity field mismodelling, the largest contributor to the orbit error is non-conservative force mismodelling. We report on further non-conservative force model tuning results using available data from over one cycle in beta prime.
Non-conservative GNSS satellite modeling: long-term orbit behavior
NASA Astrophysics Data System (ADS)
Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.; Sosnica, K.; Fritsche, M.
2012-04-01
Modeling of non-conservative forces is a key issue for precise orbit determination of GNSS satellites. Furthermore, mismodeling of these forces has the potential to explain orbit-related frequencies found in GPS-derived station coordinates and geocenter, as well as the observed bias in the SLR-GPS residuals. Due to the complexity of the non-conservative forces, they have usually been compensated by empirical models based on the real in-orbit behavior of the satellites. Recent studies have focused on the physical/analytical modeling of solar radiation pressure, Earth radiation pressure, thermal effects and antenna thrust, among other effects. However, it has been demonstrated that pure physical models fail to predict the real orbit behavior with sufficient accuracy. In this study we use a recently developed solar radiation pressure model based on the physical interaction between solar radiation and the satellite, but also capable of fitting the GNSS tracking data: the adjustable box-wing model. Furthermore, Earth radiation pressure and antenna thrust are included as a priori accelerations. The adjustable parameters of the box-wing model are surface optical properties, the so-called Y-bias, and a parameter capable of compensating for non-nominal orientation of the solar panels. Using the adjustable box-wing model, a multi-year GPS/GLONASS solution has been computed, using a processing scheme derived from CODE (Center for Orbit Determination in Europe). This multi-year solution allows studying the long-term behavior of satellite orbits, box-wing parameters and geodetic parameters like station coordinates and geocenter. Moreover, the accuracy of GNSS orbits is assessed by using SLR data. This evaluation also allows testing whether the current SLR-GPS bias can be further reduced.
NASA Astrophysics Data System (ADS)
Stepanek, P.; Rodriguez-Solano, C.; Filler, V.; Hugentobler, U.
2011-12-01
This study compares two different approaches to LEO satellite orbit estimation employing DORIS measurements. The first is reduced-dynamic modeling, which represents the orbit using empirical and pseudo-stochastic parameters. The second approach includes attitude models and the CNES-developed satellite macromodels, with explicit modeling of the non-conservative accelerations, i.e., solar radiation pressure, Earth radiation pressure and atmospheric drag. Both approaches are used at analysis centers providing DORIS solutions. Reduced-dynamic modeling is currently used by the GOP analysis center, which achieves accuracy of its free-network solutions similar to that of the other centers utilizing precise non-conservative force modeling. GOP works with a modified version of the Bernese GPS Software that did not include non-conservative force modeling. This limitation is now overcome by a new scientific modification of the software, which opens the unique possibility of comparing both approaches on the same software platform. We compare the external and internal precision of the estimated orbits. We also analyze the individual satellite free-network DORIS solutions and time series of derived parameters, i.e., station coordinates, TRF scale, geocenter variations and Earth rotation parameters. The studies highlight the main differences in the results, which should answer the question of whether modeling of non-conservative forces, including the CNES box-wing satellite models, actually brings a significant improvement to the DORIS solutions.
NASA Astrophysics Data System (ADS)
Wilusz, D. C.; Harman, C. J.; Ball, W. P.
2014-12-01
Modeling the dynamics of chemical transport from the landscape to streams is necessary for water quality management. Previous work has shown that estimates of the distribution of water age in streams, the transit time distribution (TTD), can improve prediction of the concentration of conservative tracers (i.e., ones that "follow the water") based on upstream watershed inputs. A major challenge, however, has been accounting for climate and transport variability when estimating TTDs at the catchment scale. In this regard, Harman (2014, in review) proposed the Omega modeling framework, capable of using watershed hydraulic fluxes to approximate the time-varying TTD. The approach was previously applied to the Plynlimon research watershed in Wales to simulate stream concentration dynamics of a conservative tracer (chloride), including 1/f attenuation of the power spectral density. In this study we explore the extent to which TTDs estimated by the Omega model vary with the concentration of non-conservative tracers (i.e., ones whose concentrations are also affected by transformations and interactions with other phases). First we test the hypothesis that the TTD calibrated in Plynlimon can explain a large part of the variation in non-conservative stream water constituents associated with storm flow (acidity, Al, DOC, Fe) and base flow (Ca, Si). While controlling for discharge, we show a correlation between the percentage of water of different ages and constituent concentration. Second, we test the hypothesis that TTDs help explain variation in stream nitrate concentration, which is of particular interest for pollution control but can be highly non-conservative. We compare simulation runs from Plynlimon and the agricultural Choptank watershed in Maryland, USA. Following a top-down approach, we estimate nitrate concentration as if it were a conservative tracer and examine the structure of residuals at different temporal resolutions. Finally, we consider model modifications to
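For a time-invariant TTD (a simplification; the Omega framework makes the distribution vary with hydrologic forcing), the conservative-tracer prediction is just a convolution of the input-concentration history with the age distribution:

```python
import numpy as np

def stream_concentration(inputs, ttd):
    """Predict conservative-tracer stream concentration by convolving the
    input-concentration history with a transit time distribution (TTD).

    inputs: tracer concentration entering the catchment at each time step.
    ttd: probability mass of water age per time step (sums to one).
    A time-invariant TTD is a simplification; the Omega framework referred
    to above makes this distribution vary with hydrologic forcing.
    """
    return np.convolve(inputs, np.asarray(ttd, float))[: len(inputs)]
```

The top-down test in the abstract amounts to applying exactly this conservative prediction to nitrate and examining the structure of the residuals.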
Nonextensive models for earthquakes.
Silva, R; França, G S; Vilar, C S; Alcaniz, J S
2006-02-01
We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of a fragment, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.
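The building block of such nonextensive EDFs is the Tsallis q-exponential; a minimal implementation follows (the paper's exact EDF expression differs in detail and is not reproduced here):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1 - q) x]^{1/(1 - q)}.

    Reduces to exp(x) as q -> 1 and is clipped to zero where the base is
    negative (the usual cutoff convention for q < 1).  EDFs in this
    literature are built from this function.
    """
    x = np.asarray(x, float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.clip(1.0 + (1.0 - q) * x, 0.0, None)
    return base ** (1.0 / (1.0 - q))
```

For q > 1 the decay is a power law rather than exponential, which is why this family can reproduce heavy-tailed earthquake energy distributions.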
Two models for earthquake forerunners
Mjachkin, V.I.; Brace, W.F.; Sobolev, G.A.; Dieterich, J.H.
1975-01-01
Similar precursory phenomena have been observed before earthquakes in the United States, the Soviet Union, Japan, and China. Two quite different physical models are used to explain these phenomena. According to a model developed by US seismologists, the so-called dilatancy diffusion model, the earthquake occurs near maximum stress, following a period of dilatant crack expansion. Diffusion of water in and out of the dilatant volume is required to explain the recovery of seismic velocity before the earthquake. According to a model developed by Soviet scientists, growth of cracks is also involved, but diffusion of water in and out of the focal region is not required. With this model, the earthquake is assumed to occur during a period of falling stress, and recovery of velocity here is due to crack closure as stress relaxes. In general, the dilatancy diffusion model gives a peaked precursor form, whereas the dry model gives a bay form, in which recovery is well under way before the earthquake. A number of field observations should help to distinguish between the two models: study of post-earthquake recovery, time variation of stress and pore pressure in the focal region, the occurrence of pre-existing faults, and any changes in direction of precursory phenomena during the anomalous period. © 1975 Birkhäuser Verlag.
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze
2016-06-01
The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, the gravity field parameters, and the accelerometer biases are estimated by means of weighted least-squares adjustment. We then compute a new time series of monthly gravity field models, complete to degree and order 60, covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21% over the ocean when using only 300 km Gaussian smoothing, and by 60% or more over deserts (Australia, Kalahari, Karakum and Thar) without Gaussian smoothing and decorrelation filtering; and (3) in all examples, the noise reductions are more significant than the signal reductions, whether or not smoothing and filtering are applied. The comparison with GLDAS data shows that the signals of Tongji-GRACE02 over the St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
Bayesian kinematic earthquake source models
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations that yield a single solution satisfying the data under a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions, such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models, with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models, presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters, due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel, with evolutionary computation in which models that fit the data poorly are preferentially eliminated in favor of models that better predict the data. We present results both for synthetic test problems and for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
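The tempering idea can be illustrated with a one-dimensional toy sampler: a particle population is reweighted and mutated as the exponent (temperature) on the posterior rises from 0 to 1. This is a generic tempered-sampler sketch, not the authors' TMCMC variant, and the tuning choices (population size, stage count, step size) are arbitrary:

```python
import numpy as np

def tmcmc(log_post, n_samples=500, n_stages=10, step=0.5, seed=0):
    """1-D tempered-MCMC toy: reweight and mutate a population while the
    temperature exponent on the posterior rises from 0 to 1.

    log_post must accept a NumPy array of parameter values.  Population
    size, stage count and proposal step are arbitrary tuning choices.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-10.0, 10.0, n_samples)        # flat "prior" draw
    betas = np.linspace(0.0, 1.0, n_stages + 1)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        lp = log_post(theta)
        w = np.exp((b1 - b0) * (lp - lp.max()))        # increment weights
        theta = rng.choice(theta, size=n_samples, p=w / w.sum())  # resample
        # one Metropolis mutation step at temperature b1
        prop = theta + rng.normal(0.0, step, n_samples)
        accept = np.log(rng.random(n_samples)) < b1 * (log_post(prop) - log_post(theta))
        theta = np.where(accept, prop, theta)
    return theta
```

In the full method each "particle" is a many-hundred-parameter slip model and the stages run as parallel MCMC chains, but the reweight-resample-mutate cycle is the same.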
Modeling, Forecasting and Mitigating Extreme Earthquakes
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events and the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).
Helmstetter, Agnès; Hergarten, Stefan; Sornette, Didier
2004-10-01
Following the work in [Phys. Rev. Lett. 88, 238501 (2002)], which discovered aftershocks and foreshocks in the Olami-Feder-Christensen (OFC) discrete block-spring earthquake model, we investigate to what degree the simple toppling mechanism of this model is sufficient to account for the clustering of real seismicity in time and space. We find that synthetic catalogs generated by the OFC model share many properties of real seismicity at a qualitative level: Omori's law (aftershocks) and inverse Omori's law (foreshocks), and an increase of the number of aftershocks and of the aftershock zone size with the mainshock magnitude. There are, however, significant quantitative differences. The number of aftershocks per mainshock in the OFC model is smaller than in real seismicity, especially for large mainshocks. We find that foreshocks in the OFC catalogs can in large part be described by a simple model of triggered seismicity, such as the epidemic-type aftershock sequence (ETAS) model. But the properties of foreshocks in the OFC model depend on the mainshock magnitude, in qualitative agreement with the critical earthquake model and in disagreement with real seismicity and with the ETAS model.
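The modified Omori law against which such synthetic catalogs are checked is one line of code; the parameter values below are purely illustrative:

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (t + c)^p at time t
    after a mainshock.  K, c and p values here are purely illustrative."""
    return K / (t + c) ** p
```

Fitting K, c and p to binned aftershock counts from OFC catalogs and from real catalogs is how the quantitative differences described above are made precise.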
GEM - The Global Earthquake Model
NASA Astrophysics Data System (ADS)
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD), which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments at the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers, improving simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
New geological perspectives on earthquake recurrence models
Schwartz, D.P.
1997-02-01
In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured once, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine--the cycle of stress build-up and release.
Parity nonconservation in ytterbium ion
Sahoo, B. K.; Das, B. P.
2011-07-15
We consider parity nonconservation (PNC) in singly ionized ytterbium (Yb+) arising from the neutral-current weak interaction. We calculate the PNC electric dipole transition amplitude (E1_PNC) and the properties associated with it using relativistic coupled-cluster theory. E1_PNC for the [4f^14]6s ^2S_1/2 → [4f^14]5d ^2D_3/2 transition in Yb+ has been evaluated to within an accuracy of 5%, and further improvement of this result is possible. This ion therefore appears to be a promising candidate for testing the standard model of particle physics.
Distributed Slip Model for Simulating Virtual Earthquakes
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, S.; Tsesarsky, M.; Gvirtzman, Z.
2014-12-01
We develop a physics-based, generic finite-fault source, which we call the Distributed Slip Model (DSM), for simulating large virtual earthquakes. This task is a necessary step towards ground motion prediction in earthquake-prone areas with limited instrumental coverage. A reliable ground motion prediction based on virtual earthquakes must account for site, path, and source effects. Assessment of the site effect mainly depends on near-surface material properties, which are relatively well constrained using geotechnical site data and borehole measurements. Assessment of the path effect depends on the deeper geological structure, which is also typically known to an acceptable resolution. Unlike these two effects, which remain constant for a given area of interest, the earthquake rupture process and geometry vary from one earthquake to the other. In this study we focus on a finite-fault source representation which is both generic and physics-based, for simulating large earthquakes where limited knowledge is available. Thirteen geometric and kinematic parameters describe the smooth "pseudo-Gaussian" slip distribution, such that slip decays from a point of peak slip within an elliptical rupture patch to zero at the borders of the patch. The radiation pattern and spectral characteristics of our DSM are compared to those of commonly used finite-fault models: the classical Haskell Model (HM), the modified HM with Radial Rupture Propagation (HM-RRP), and the Point Source Model (PSM). Ground motion prediction based on our DSM benefits from the symmetry of the PSM and the directivity of the HM while overcoming the former's inadequacy for modeling large earthquakes and the latter's non-physical uniform slip.
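The "pseudo-Gaussian" elliptical slip patch described above can be illustrated with a simple taper function; the functional form below is a hypothetical stand-in chosen only to match the qualitative description (peak slip decaying smoothly to zero at the patch boundary), not the published 13-parameter DSM:

```python
import numpy as np

def pseudo_gaussian_slip(x, y, x0=0.0, y0=0.0, a=10.0, b=5.0, peak=2.0):
    """Smooth slip peaking at (x0, y0) and tapering to zero at the edge of an
    elliptical rupture patch with semi-axes a, b (km). Hypothetical bump-style
    taper; the published DSM is parameterized by 13 quantities."""
    r2 = np.asarray(((x - x0) / a) ** 2 + ((y - y0) / b) ** 2, dtype=float)
    slip = np.zeros_like(r2)
    inside = r2 < 1.0
    # equals `peak` at the center, decays smoothly, and is exactly 0 on the rim
    slip[inside] = peak * np.exp(-r2[inside] / (1.0 - r2[inside]))
    return slip

# Evaluate on a fault-plane grid (along-strike x, down-dip y, in km)
X, Y = np.meshgrid(np.linspace(-15, 15, 61), np.linspace(-8, 8, 33))
S = pseudo_gaussian_slip(X, Y)
```

The bump-function taper guarantees the slip and its gradient vanish at the patch rim, which avoids the spectral artifacts of an abrupt slip cut-off.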
Asperity Model of an Earthquake - Dynamic Problem
Johnson, Lane R.; Nadeau, Robert M.
2003-05-02
We develop an earthquake asperity model that explains previously determined empirical scaling relationships for repeating earthquakes along the San Andreas fault in central California. The model assumes that motion on the fault is resisted primarily by a patch of small strong asperities that interact with each other to increase the amount of displacement needed to cause failure. This asperity patch is surrounded by a much weaker fault that continually creeps in response to tectonic stress. Extending outward from the asperity patch into the creeping part of the fault is a shadow region where a displacement deficit exists. Starting with these basic concepts, together with the analytical solution for the exterior crack problem, the consideration of incremental changes in the size of the asperity patch leads to differential equations that can be solved to yield a complete static model of an earthquake. Equations for scalar seismic moment, the radius of the asperity patch, and the radius of the displacement shadow are all specified as functions of the displacement deficit that has accumulated on the asperity patch. The model predicts that the repeat time for earthquakes should be proportional to the scalar moment to the 1/6 power, which is in agreement with empirical results for repeating earthquakes. The model has two free parameters, a critical slip distance dc and a scaled radius of a single asperity. Numerical values of 0.20 and 0.17 cm, respectively, for these two parameters will reproduce the empirical results, but this choice is not unique. Assuming that the asperity patches are distributed on the fault surface in a random fractal manner leads to a frequency size distribution of earthquakes that agrees with the Gutenberg Richter formula and a simple relationship between the b-value and the fractal dimension. We also show that the basic features of the theoretical model can be simulated with numerical calculations employing the boundary integral method.
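The stated scaling, repeat time proportional to scalar moment to the 1/6 power, is easy to explore numerically; the prefactor below is hypothetical (the paper calibrates its two free parameters against San Andreas repeating earthquakes):

```python
# Repeat-time scaling implied by the asperity model: T ∝ M0^(1/6).
# The prefactor c is hypothetical; the paper fixes its two free parameters
# (critical slip distance and scaled asperity radius) from San Andreas data.

def repeat_time(moment, c=1.0):
    return c * moment ** (1.0 / 6.0)

# A tenfold increase in scalar moment lengthens the repeat time by 10^(1/6)
ratio = repeat_time(1e13) / repeat_time(1e12)   # ≈ 1.47
```

The weak 1/6 exponent is why repeating-earthquake sequences of very different sizes can have broadly similar recurrence intervals.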
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practices, open tools and models for seismic hazard and risk assessment. The year 2013 saw the completion of ten global datasets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014 and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic
On the earthquake predictability of fault interaction models
Marzocchi, W; Melini, D
2014-01-01
Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems that are rooted in the physics of fault interaction through Coulomb stress changes, this kind of modeling often does not significantly increase earthquake predictability. The earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643
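The Coulomb stress change invoked above is conventionally written ΔCFS = Δτ + μ′Δσn; a minimal sketch with hypothetical values (sign convention: unclamping positive):

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n
# d_tau: shear stress change in the slip direction; d_sigma_n: normal stress
# change (positive = unclamping); mu_eff: effective friction coefficient.
# All values below are hypothetical.

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# 0.1 MPa of added shear stress with 0.05 MPa of clamping:
dcfs = coulomb_stress_change(0.1, -0.05)   # ≈ 0.08 MPa, toward failure
```

A positive ΔCFS moves the receiver fault toward failure; the paper's point is that this signal is often small relative to stress changes from other faults and intrinsic variability.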
Aftershocks in a frictional earthquake model.
Braun, O M; Tosatti, Erio
2014-09-01
Inspired by spring-block models, we elaborate a "minimal" physical model of earthquakes which reproduces two main empirical seismological laws, the Gutenberg-Richter law and the Omori aftershock law. Our point is to demonstrate that the simultaneous incorporation of aging of contacts in the sliding interface and of elasticity of the sliding plates constitutes the minimal ingredients to account for both laws within the same frictional model. PMID:25314453
Extreme Earthquake Risk Estimation by Hybrid Modeling
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.
2012-12-01
The estimation of the hazard and of the economic consequences (i.e., the risk) associated with the occurrence of extreme-magnitude earthquakes near urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, is a complex challenge, as it involves the propagation of seismic waves through large volumes of the Earth's crust, from unusually large seismic source ruptures to the infrastructure location. The large number of casualties and the huge economic losses observed for such earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, call for new paradigms and methodologies in order to generate better estimates both of the seismic hazard and of its consequences and, if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), in order to implement technological and economic policies to mitigate and reduce those consequences as much as possible. Here we propose a hybrid modeling approach which uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling in order to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite-difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green's function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of plausible extreme-earthquake scenarios corresponding to synthetic seismic sources, and those samples are enlarged by using feed-forward NNs. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application to the estimation of the hazard and the economic consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican
Modeling earthquake indexes derived from the earthquake warning system upon the planet earth
NASA Astrophysics Data System (ADS)
Li, Yong
2010-12-01
By studying the correlation between historical earthquake data and the distributional characteristics of solid-earth-tide parameters at the earthquake epicenter, we are able to design a forecasting function of earthquake probability. We put forward a design method for the Earthquake Warning System. The model could theoretically simulate and be used to predict the probability of strong earthquakes that could occur anywhere at any time. In addition, the system can conveniently produce global or regional Modeling Earthquake Indexes, combining precise point prediction with the forecasting of regional indexes. The paper uses global data, provided by NEIC, on 1544 M ⩾ 6.5 earthquakes. It also gives examples of instantaneous earthquake indexes for the whole world and the Taiwan area on 1 January 2010 (UT 0:00), and the average earthquake index near the Taiwan area. Based on a 10-year point prediction of strong earthquakes in San Francisco, it provides the average earthquake index for 24 June 2015 (± 15 days) in its neighborhood.
Modeling coupled avulsion and earthquake timescale dynamics
NASA Astrophysics Data System (ADS)
Reitz, M. D.; Steckler, M. S.; Paola, C.; Seeber, L.
2014-12-01
River avulsions and earthquakes can be hazardous events, and many researchers work to better understand and predict their timescales. Improvements in the understanding of the intrinsic processes of deposition and strain accumulation that lead to these events have resulted in better constraints on the timescales of each process individually. There are however several mechanisms by which these two systems may plausibly become linked. River deposition and avulsion can affect the stress on underlying faults through differential loading by sediment or water. Conversely, earthquakes can affect river avulsion patterns through altering the topography. These interactions may alter the event recurrence timescales, but this dynamic has not yet been explored. We present results of a simple numerical model, in which two systems have intrinsic rates of approach to failure thresholds, but the state of one system contributes to the other's approach to failure through coupling functions. The model is first explored for the simplest case of two linear approaches to failure, and linearly proportional coupling terms. Intriguing coupling dynamics emerge: the system settles into cycles of repeating earthquake and avulsion timescales, which are approached at an exponential decay rate that depends on the coupling terms. The ratio of the number of events of each type and the timescale values also depend on the coupling coefficients and the threshold values. We then adapt the model to a more complex and realistic scenario, in which a river avulses between either side of a fault, with parameters corresponding to the Brahmaputra River / Dauki fault system in Bangladesh. Here the tectonic activity alters the topography by gradually subsiding during the interseismic time, and abruptly increasing during an earthquake. The river strengthens the fault by sediment loading when in one path, and weakens it when in the other. We show this coupling can significantly affect earthquake and avulsion
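The model described above, two systems with intrinsic rates of approach to failure thresholds and linearly proportional coupling terms, can be sketched in a few lines; all rates, thresholds and coupling coefficients below are hypothetical, not the calibrated Brahmaputra/Dauki values:

```python
# Two threshold systems, each loading toward failure at an intrinsic rate,
# with the state of one linearly modulating the other's loading rate. All
# rates, thresholds and coupling coefficients are hypothetical.

def simulate(steps=10000, dt=0.01,
             rate_eq=1.0, rate_av=0.7,   # intrinsic approach-to-failure rates
             c_eq=0.05, c_av=0.08,       # linear cross-coupling coefficients
             thresh=1.0):
    s_eq = s_av = 0.0
    events = []
    for k in range(steps):
        t = k * dt
        s_eq += (rate_eq + c_eq * s_av) * dt   # river state loads the fault
        s_av += (rate_av + c_av * s_eq) * dt   # fault state steers the river
        if s_eq >= thresh:
            events.append(("earthquake", t))
            s_eq = 0.0                          # stress drop
        if s_av >= thresh:
            events.append(("avulsion", t))
            s_av = 0.0                          # channel resets
    return events

events = simulate()
n_eq = sum(1 for kind, _ in events if kind == "earthquake")
n_av = len(events) - n_eq
```

With these parameters the faster system fails more often, and the coupling shifts both recurrence timescales relative to the uncoupled case, in the spirit of the repeating-cycle behavior the abstract describes.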
Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.
2011-01-01
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355
Human casualties in earthquakes: modelling and mitigation
Spence, R.J.S.; So, E.K.M.
2011-01-01
Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.
Modeling Statistical and Dynamic Features of Earthquakes
NASA Astrophysics Data System (ADS)
Rydelek, P. A.; Suyehiro, K.; Sacks, S. I.; Smith, D. E.; Takanami, T.; Hatano, T.
2015-12-01
The cellular automaton earthquake model of Sacks and Rydelek (1995) is extended to explain spatio-temporal changes in seismicity with the regional tectonic stress buildup. Our approach is to apply a simple Coulomb failure law to our model space of discrete cells, which successfully reproduces empirical laws (e.g., the Gutenberg-Richter law) and dynamic failure characteristics (e.g., stress drop vs. magnitude and asperities) of earthquakes. Once the stress on a discrete cell exceeds the Coulomb threshold, its accumulated stress is transferred only to neighboring cells, which can cascade to further neighboring cells to create ruptures of various sizes. A fundamental point here is the cellular view of the continuous earth. We suggest the cell size varies regionally with the maturity of the faults of the region. Seismic gaps (e.g., Mogi, 1979) and changes in seismicity, such as those indicated by b-values, have been known but poorly understood. There have been reports of magnitude-dependent seismic quiescence before large events at plate boundaries and in intraplate settings (Smith et al., 2013). Recently, decreases in b-value before large earthquakes have been reported (Nanjo et al., 2012), as anticipated from lab experiments (Mogi, 1963). Our model reproduces the b-value decrease towards an eventual large earthquake (increasing tectonic stress and its heterogeneous distribution). We succeeded in reproducing the cut-off of larger events above some threshold magnitude (M3-4) by slightly increasing the Coulomb failure level for only 2% or more of the highly stressed cells. This is equivalent to reducing the pore pressure in these distributed cells. We are working on the model to introduce the recovery of pore pressure, incorporating the observation that fault zones have orders-of-magnitude higher permeability than the surrounding rock (Lockner, 2009), allowing for a large earthquake to be generated. Our interpretation requires interactions of pores and fluids. We suggest heterogeneously distributed patches hardened
Foreshock and Aftershocks in Simple Earthquake Models
NASA Astrophysics Data System (ADS)
Kazemian, J.; Tiampo, K. F.; Klein, W.; Dominguez, R.
2015-02-01
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
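A toy version of a lattice model with a fixed fraction of stronger asperity cells can be sketched as follows; this is a simplified nearest-neighbor variant (the paper uses long-range interactions), with all parameter values hypothetical:

```python
import random

# Toy Olami-Feder-Christensen-style lattice with a fixed fraction of stronger
# "asperity" cells. Simplified to nearest-neighbor stress transfer; all
# parameter values are hypothetical.

N = 32                       # lattice is N x N
ALPHA = 0.2                  # stress fraction given to each neighbor (< 0.25: dissipative)
F_WEAK, F_STRONG = 1.0, 3.0  # failure thresholds for normal and asperity cells
P_ASPERITY = 0.05            # fraction of asperity cells

random.seed(0)
thresh = [[F_STRONG if random.random() < P_ASPERITY else F_WEAK
           for _ in range(N)] for _ in range(N)]
stress = [[random.uniform(0.0, F_WEAK) for _ in range(N)] for _ in range(N)]

def relax():
    """Topple every over-threshold cell until stable; return avalanche size."""
    size = 0
    unstable = [(i, j) for i in range(N) for j in range(N)
                if stress[i][j] >= thresh[i][j]]
    while unstable:
        i, j = unstable.pop()
        if stress[i][j] < thresh[i][j]:
            continue                     # already relaxed by an earlier pop
        s, stress[i][j] = stress[i][j], 0.0
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                stress[ni][nj] += ALPHA * s   # partial (non-conservative) transfer
                if stress[ni][nj] >= thresh[ni][nj]:
                    unstable.append((ni, nj))
    return size

sizes = []
for _ in range(2000):
    # uniform tectonic loading until the most-stressed cell reaches failure
    gap = min(thresh[i][j] - stress[i][j] for i in range(N) for j in range(N))
    for i in range(N):
        for j in range(N):
            stress[i][j] += gap + 1e-9
    sizes.append(relax())
```

Asperity cells absorb stress until their higher threshold is reached and then release it all at once, which is the mechanism behind the temporal clustering the abstract describes.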
Modeling Broadband motions from the Tohoku earthquake
NASA Astrophysics Data System (ADS)
Li, D.; Chu, R.; Graves, R. W.; Helmberger, D. V.; Clayton, R. W.
2011-12-01
The 2011 M9 Tohoku earthquake produced an extraordinary dataset of over 2000 broadband regional and teleseismic records. While considerable progress has been made in modeling the longer-period (>3 s) waveforms, the shorter periods (1-3 s) prove more difficult. Since modeling high-frequency waveforms in 3D is computationally expensive, we follow the approach proposed by Helmberger and Vidale (1988), which interfaces the Cagniard-de Hoop analytical source description with a 2D numerical code to account for earthquake radiation patterns. We extend this method to a staggered-grid finite difference code, which is stable in the presence of water. The code adopts the convolutional PML boundary condition, and uses the "following the wavefront" technique and multiple GPUs, which significantly reduces computing time. We test our method against existing 1D and 3D codes, and examine the effects of slab structure, ocean bathymetry and local basins in an attempt to better explain the observed shorter-period response.
On nonstationary stochastic models for earthquakes.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
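The envelope-modulated filtered white-noise construction discussed above can be sketched directly; the filter and envelope forms below are standard choices and all parameter values are hypothetical:

```python
import math
import random

# Envelope-modulated filtered white noise: white noise passed through a damped
# oscillator (Kanai-Tajimi-style filter), scaled by a Saragoni-Hart-type
# envelope. All parameter values are hypothetical.

random.seed(1)
dt, n = 0.01, 2000                 # 20 s record sampled at 100 Hz
f0, zeta = 2.5, 0.6                # filter corner frequency (Hz) and damping

def envelope(t, a=5.0, b=0.5):
    """Rises from zero, peaks, then decays: shapes the stationary process."""
    return (t ** a) * math.exp(-b * t) if t > 0 else 0.0

w0 = 2.0 * math.pi * f0
x = v = 0.0
acc = []
for k in range(n):
    t = k * dt
    w = random.gauss(0.0, 1.0) / math.sqrt(dt)   # discretized white noise
    a_f = w - 2.0 * zeta * w0 * v - w0 ** 2 * x  # filter (oscillator) response
    v += a_f * dt
    x += v * dt
    acc.append(envelope(t) * a_f)                # modulated ground acceleration
```

Because the envelope multiplies an already stationary filtered process, the long-period content is distorted for structures with periods beyond the faulting duration, which is the error the filtered shot-noise alternative avoids.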
ERIC Educational Resources Information Center
Walter, Edward J.
1977-01-01
Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)
ERIC Educational Resources Information Center
Pakiser, Louis C.
One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…
Physical and stochastic models of earthquake clustering
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Murru, Maura; Catalli, Flaminia
2006-04-01
The phenomenon of earthquake clustering, i.e., the increase of occurrence probability for seismic events close in space and time to other previous earthquakes, has been modeled both by statistical and by physical processes. From a statistical viewpoint, the so-called epidemic model (ETAS) introduced by Ogata in 1988 and its variations have become fairly well known in the seismological community. Tests on real seismicity and comparison with a plain time-independent Poissonian model through likelihood-based methods have reliably proved their validity. On the other hand, in the last decade many papers have been published on the so-called Coulomb stress change principle, based on the theory of elasticity, showing qualitatively that an increase of the Coulomb stress in a given area is usually associated with an increase of seismic activity. More specifically, the rate-and-state theory developed by Dieterich in the '90s has been able to give a physical justification to the phenomenon known as the Omori law. According to this law, a mainshock is followed by a series of aftershocks whose frequency decreases in time as an inverse power law. In this study we give an outline of the above-mentioned stochastic and physical models, and build up an approach by which these models can be merged in a single algorithm and statistically tested. The application to the seismicity of Japan from 1970 to 2003 shows that the new model incorporating the physical concept of the rate-and-state theory performs no worse than the purely stochastic model with only two free parameters. The numerical results obtained in these applications are related to physical characteristics of the model, such as the stress change produced by an earthquake close to its edges, and to the A and σ parameters of the rate-and-state constitutive law.
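The Omori law mentioned above is usually used in its modified form, n(t) = K/(t + c)^p; a minimal sketch with hypothetical parameter values:

```python
# Modified Omori law for the aftershock rate: n(t) = K / (t + c)^p.
# Parameter values (K, c, p) are hypothetical.

def omori_rate(t_days, k=100.0, c=0.05, p=1.1):
    return k / (t_days + c) ** p

def omori_count(t1, t2, k=100.0, c=0.05, p=1.1):
    """Expected number of aftershocks in [t1, t2] (analytic integral, p != 1)."""
    g = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return k * (g(t2) - g(t1))

ratio = omori_rate(1.0) / omori_rate(10.0)   # roughly a 12-fold decay here
```

The analytic integral is what likelihood-based tests of ETAS-type models evaluate when comparing expected and observed aftershock counts in a time window.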
The failure of earthquake failure models
Gomberg, J.
2001-01-01
In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain the triggering of seismicity by transient or "dynamic" stress changes, such as the stress changes associated with passing seismic waves. The models of this class share the feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip and crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ, or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations, such as compaction of saturated fault gouge leading to pore-pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps the assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.
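The defining feature of this model class, a failure measure growing at an accelerating rate under constant-rate loading, can be illustrated with a toy calculation; the growth law and constants below are hypothetical stand-ins for rate-state or crack-growth laws:

```python
# Toy "accelerating failure" model: the failure variable (slip speed) grows at
# a rate proportional to both the load and its current value, so constant-rate
# loading produces runaway growth and a single well-defined failure time.
# Growth law and constants are hypothetical stand-ins for rate-state friction
# or subcritical crack growth.

def failure_time(pulse_amp=0.0, pulse_window=(1.0, 1.1)):
    v, t, dt = 1e-6, 0.0, 1e-3      # slip speed, time, step
    while v < 1.0:                  # failure threshold on slip speed
        stress = t + (pulse_amp if pulse_window[0] <= t < pulse_window[1] else 0.0)
        v *= 1.0 + 2.0 * stress * dt   # dv/dt ∝ stress × v (accelerating)
        t += dt
    return t

t_quiet = failure_time()
t_pulsed = failure_time(pulse_amp=5.0)   # brief transient early in the cycle
# The transient merely advances the single failure time; it does not produce a
# decaying sequence of triggered events, which is the inconsistency noted above.
```

This is the heuristic core of the argument: a transient load advances failure but leaves no mechanism for a finite-duration triggered sequence.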
Modeling fast and slow earthquakes at various scales.
Ide, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
Conservative perturbation theory for nonconservative systems
NASA Astrophysics Data System (ADS)
Shah, Tirth; Chattopadhyay, Rohitashwa; Vaidya, Kedar; Chakraborty, Sagar
2015-12-01
In this paper, we show how to use canonical perturbation theory for dissipative dynamical systems capable of showing limit-cycle oscillations. Thus, our work surmounts the hitherto perceived barrier for canonical perturbation theory that it can be applied only to a class of conservative systems, viz., Hamiltonian systems. In the process, we also find a Hamiltonian structure for an important subset of the Liénard system, a paradigmatic system for modeling an isolated and asymptotic oscillatory state. We discuss the possibility of extending our method to encompass an even wider range of nonconservative systems.
Classical mechanics of nonconservative systems.
Galley, Chad R
2013-04-26
Hamilton's principle of stationary action lies at the foundation of theoretical physics and is applied in many other disciplines from pure mathematics to economics. Despite its utility, Hamilton's principle has a subtle pitfall that often goes unnoticed in physics: it is formulated as a boundary value problem in time but is used to derive equations of motion that are solved with initial data. This subtlety can have undesirable effects. I present a formulation of Hamilton's principle that is compatible with initial value problems. Remarkably, this leads to a natural formulation for the Lagrangian and Hamiltonian dynamics of generic nonconservative systems, thereby filling a long-standing gap in classical mechanics. Thus, dissipative effects, for example, can be studied with new tools that may have applications in a variety of disciplines. The new formalism is demonstrated by two examples of nonconservative systems: an object moving in a fluid with viscous drag forces and a harmonic oscillator coupled to a dissipative environment. PMID:23679733
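Schematically, Galley's construction doubles the degrees of freedom and takes a "physical limit" only after variation. A simplified sketch (notation condensed here, and the coupling K is taken velocity-independent in this sketch; the general formalism also allows velocity dependence) is:

```latex
S[q_1, q_2] \;=\; \int_{t_i}^{t_f} dt \,\Bigl[\, L(q_1, \dot q_1) \;-\; L(q_2, \dot q_2) \;+\; K(q_1, q_2, t) \,\Bigr],
```

where both paths are varied with data fixed only at the initial time, and afterwards one sets \(q_1 = q_2 = q\):

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot q} \;-\; \frac{\partial L}{\partial q} \;=\; \left.\frac{\partial K}{\partial q_1}\right|_{q_1 = q_2 = q}.
```

The right-hand side supplies the generic nonconservative force; allowing K to depend on the velocities as well is what produces, for example, linear drag terms.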
NASA Astrophysics Data System (ADS)
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, both to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) calibration of the seismic intensity attenuation model using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers using past damage observations in the country. The Benouar (1994) ground-motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to vulnerability indexes 10% to 40% larger for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by means of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
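The damage-grade cost factors quoted in the abstract (10%, 20%, 35%, 75%, 100% of rebuild value for EMS-98 grades 1 through 5) combine with a damage-grade distribution into an expected loss. A minimal sketch, in which the grade probabilities and portfolio value are invented for illustration:

```python
# Rebuild-cost factors per EMS-98 damage grade, as quoted in the abstract.
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def mean_damage_ratio(grade_probs):
    """Expected repair cost as a fraction of total rebuild value."""
    return sum(p * COST_FACTOR[g] for g, p in grade_probs.items())

def expected_loss(grade_probs, total_value):
    return mean_damage_ratio(grade_probs) * total_value

# Hypothetical portfolio where most buildings see only light damage.
probs = {1: 0.40, 2: 0.30, 3: 0.20, 4: 0.08, 5: 0.02}
loss = expected_loss(probs, total_value=1_000_000)
# mean damage ratio 0.25, i.e. an expected loss of 250,000
```

Real catastrophe models of course derive the grade probabilities from hazard intensity and vulnerability curves; this only illustrates the final aggregation step.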
Towards Modelling slow Earthquakes with Geodynamics
NASA Astrophysics Data System (ADS)
Regenauer-Lieb, K.; Yuen, D. A.
2006-12-01
We explore a new, properly scaled, thermal-mechanical geodynamic model [1] that can generate timescales now very close to those of earthquakes and of the same order as slow earthquakes. In our simulations we encounter two basically different bifurcation phenomena: one in which the shear zone nucleates in the ductile field, and a second which is fully associated with elasto-plastic (brittle, pressure-dependent) displacements. A quartz/feldspar composite slab has both modes operating simultaneously at three different depth levels. The bottom of the crust is predominantly controlled by the elasto-visco-plastic mode while the top is controlled by the elasto-plastic mode. The exchange of the two modes appears to communicate on a sub-horizontal layer in a flip-flop fashion, which may yield a fractal-like signature in time and collapses onto a critical temperature, which for crustal rocks is around 500-580 K, in the middle of the brittle-ductile transition zone. Near the critical temperature, stresses close to the ideal strength can be reached locally, at the meter scale. Investigations of the thermal-mechanical properties under such extreme conditions are pivotal for understanding the physics of earthquakes. 1. Regenauer-Lieb, K., Weinberg, R. & Rosenbaum, G. The effect of energy feedbacks on continental strength. Nature 442, 67-70 (2006).
Quasiperiodic Events in an Earthquake Model
Ramos, O.; Maaloey, K.J.; Altshuler, E.
2006-03-10
We introduce a modification of the Olami-Feder-Christensen earthquake model [Phys. Rev. Lett. 68, 1244 (1992)] in order to improve the resemblance with the Burridge-Knopoff mechanical model and with possible laboratory experiments. A constant and finite force continually drives the system, resulting in instantaneous relaxations. Dynamical disorder is added to the thresholds following a narrow distribution. We find quasiperiodic behavior in the avalanche time series with a period proportional to the degree of dissipation of the system. Periodicity is not as robust as criticality when the threshold force distribution widens, or when an increasing noise is introduced in the values of the dissipation.
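The underlying Olami-Feder-Christensen dynamics is simple enough to sketch: drive all sites uniformly until the most loaded one fails, then redistribute a fraction alpha of the failed site's force to its neighbours (alpha < 0.25 is dissipative). A minimal toy version with narrowly disordered thresholds, loosely echoing the modification described above (all parameter values here are illustrative, not the paper's):

```python
import random

def ofc_avalanche(L=16, alpha=0.18, steps=200, jitter=0.05, seed=1):
    """Minimal Olami-Feder-Christensen sketch on an L x L lattice.

    alpha < 0.25 makes the redistribution dissipative; failure thresholds
    are drawn from a narrow distribution. Returns the avalanche-size series.
    """
    rng = random.Random(seed)
    F = [[rng.random() for _ in range(L)] for _ in range(L)]
    Fth = [[1.0 + rng.uniform(-jitter, jitter) for _ in range(L)]
           for _ in range(L)]
    sizes = []
    for _ in range(steps):
        # Uniform drive: load every site until the most loaded one fails.
        gap = min(Fth[i][j] - F[i][j]
                  for i in range(L) for j in range(L)) + 1e-9
        for i in range(L):
            for j in range(L):
                F[i][j] += gap
        # Relaxation: topple unstable sites, giving alpha*F to neighbours.
        unstable = [(i, j) for i in range(L) for j in range(L)
                    if F[i][j] >= Fth[i][j]]
        size = 0
        while unstable:
            i, j = unstable.pop()
            if F[i][j] < Fth[i][j]:
                continue  # already relaxed by an earlier topple
            size += 1
            f, F[i][j] = F[i][j], 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # open boundaries
                    F[ni][nj] += alpha * f
                    if F[ni][nj] >= Fth[ni][nj]:
                        unstable.append((ni, nj))
        sizes.append(size)
    return sizes
```

The avalanche-size series returned here is the quantity whose statistics (power laws, quasiperiodicity) such studies analyze.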
ERIC Educational Resources Information Center
Roper, Paul J.; Roper, Jere Gerard
1974-01-01
Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)
NASA Technical Reports Server (NTRS)
April, G. C.; Liu, H. A.
1975-01-01
Total coliform group bacteria were selected to expand the mathematical modeling capabilities of the hydrodynamic and salinity models to understand their relationship to commercial fishing ventures within bay waters and to gain a clear insight into the effect that rivers draining into the bay have on water quality conditions. Parametric observations revealed that temperature factors and river flow rate have a pronounced effect on the concentration profiles, while wind conditions showed only slight effects. An examination of coliform group loading concentrations at constant river flow rates and temperature shows these loading changes have an appreciable influence on total coliform distribution within Mobile Bay.
NASA Astrophysics Data System (ADS)
Evje, Steinar; Wang, Wenjun; Wen, Huanyao
2016-09-01
In this paper, we consider a compressible two-fluid model with constant viscosity coefficients and unequal pressure functions P^+ ≠ P^-. As mentioned in the seminal work by Bresch, Desjardins, et al. (Arch Rational Mech Anal 196:599-629, 2010) for the compressible two-fluid model, where P^+ = P^- (common pressure) is used and capillarity effects are accounted for in terms of a third-order derivative of density, the case of constant viscosity coefficients cannot be handled in their setting. Besides, their analysis relies on a special choice for the density-dependent viscosity [refer also to another reference (Commun Math Phys 309:737-755, 2012) by Bresch, Huang and Li for a study of the same model in one dimension but without capillarity effects]. In this work, we obtain the global solution and its optimal decay rate (in time) with constant viscosity coefficients and some smallness assumptions. In particular, capillary pressure is taken into account in the sense that ΔP = P^+ − P^- = f ≠ 0, where the difference function f is assumed to be strictly decreasing near the equilibrium relative to the fluid corresponding to P^-. This assumption plays a key role in the analysis and appears to have an essential stabilization effect on the model in question.
Parity nonconservation in hydrogen.
Dunford, R. W.; Holt, R. J.
2011-01-01
We discuss the prospects for parity violation experiments in atomic hydrogen and deuterium to contribute to testing the Standard Model (SM). We find that, if parity experiments in hydrogen can be done, they remain highly desirable because there is negligible atomic-physics uncertainty and low-energy tests of weak neutral current interactions are needed to probe for new physics beyond the SM. Analysis of a generic APV experiment in deuterium indicates that a 0.3% measurement of C_1D requires development of a slow (77 K) metastable beam of ≈5 × 10^14 D(2S) s^-1 per hyperfine component. The advent of UV radiation from free electron laser (FEL) technology could allow production of such a beam.
Radiation reaction as a non-conservative force
NASA Astrophysics Data System (ADS)
Aashish, Sandeep; Haque, Asrarul
2016-09-01
We study a system of a finite size charged particle interacting with a radiation field by exploiting Hamilton’s principle for a non-conservative system recently introduced by Galley [1]. This formulation leads to the equation of motion of the charged particle that turns out to be the same as that obtained by Jackson [2]. We show that the radiation reaction stems from the non-conservative part of the effective action for a charged particle. We notice that a charge interacting with a radiation field modeled as a heat bath affords a way to justify that the radiation reaction is a non-conservative force. The topic is suitable for graduate courses on advanced electrodynamics and classical theory of fields.
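For orientation: the abstract notes that the resulting equation of motion matches Jackson's; for a point charge, that standard result is the Abraham-Lorentz equation (Gaussian units), in which the radiation-reaction term enters alongside the external force:

```latex
m\,\dot{\mathbf v} \;=\; \mathbf F_{\mathrm{ext}} \;+\; m\,\tau\,\ddot{\mathbf v},
\qquad
\tau \;=\; \frac{2 e^{2}}{3 m c^{3}}.
```

The third-derivative (jerk) term is precisely the non-conservative piece that the effective-action treatment above isolates.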
Noise associated with nonconservative forces in optical traps
NASA Astrophysics Data System (ADS)
de Messieres, Michel; Denesyuk, Natalia A.; La Porta, Arthur
2011-09-01
It is known that for a particle held in an optical trap the interaction of thermal fluctuations with a nonconservative scattering force can cause a persistent nonequilibrium probability flux in the particle position. We investigate position fluctuations associated with this nonequilibrium flux analytically and through simulation. We introduce a model which reproduces the nonequilibrium effects, and in which the magnitude of additional position fluctuations can be calculated in closed form. The ratio of additional nonconservative fluctuations to direct thermal fluctuations scales inversely with the square root of trap power, and is small for typical experimental parameters. In a simulated biophysical experiment the nonconservative scattering force does not significantly increase the observed fluctuations in the length of a double-stranded DNA tether.
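The mechanism can be illustrated with an overdamped Langevin sketch: a harmonic restoring force plus a small nonconservative coupling whose curl is nonzero, so thermal noise drives a persistent mean circulation. This is a generic toy model with invented parameters, not the paper's calibrated trap model:

```python
import math
import random

def simulate_trap(steps=20000, dt=1e-4, kx=1.0, kz=0.5, eps=0.3,
                  kT=1.0, gamma=1.0, seed=2):
    """Overdamped Langevin sketch of a trapped bead (illustrative units).

    The force field F = (-kx*x, -kz*z + eps*x) has nonzero curl for
    eps != 0, mimicking a nonconservative scattering contribution.
    Returns the signed area swept per unit time as a circulation proxy.
    """
    rng = random.Random(seed)
    x = z = 0.0
    sigma = math.sqrt(2.0 * kT * dt / gamma)  # thermal step amplitude
    circ = 0.0
    for _ in range(steps):
        dx = (-kx * x) / gamma * dt + sigma * rng.gauss(0.0, 1.0)
        dz = (-kz * z + eps * x) / gamma * dt + sigma * rng.gauss(0.0, 1.0)
        circ += x * dz - z * dx  # signed area element of the trajectory
        x += dx
        z += dz
    return circ / (steps * dt)
```

With eps = 0 the force is conservative and the mean circulation vanishes in the long-time limit; a nonzero eps produces the nonequilibrium probability flux described above.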
Jaiswal, Kishor; Wald, David J.; Earle, Paul; Porter, Keith A.; Hearne, Mike
2011-01-01
Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
Future WGCEP Models and the Need for Earthquake Simulators
NASA Astrophysics Data System (ADS)
Field, E. H.
2008-12-01
The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).
Modelling the elements of country vulnerability to earthquake disasters.
Asef, M R
2008-09-01
Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often causes an earthquake of a certain characteristic to have different impacts on the affected region. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of concerned countries, and successfully explained which countries are more vulnerable to major disasters.
Foreshocks and Aftershocks in Simple Earthquake Models
NASA Astrophysics Data System (ADS)
Tiampo, K. F.; Klein, W.; Dominguez, R.; Kazemian, J.; González, P. J.
2014-12-01
Natural earthquake fault systems are highly heterogeneous in space; inhomogeneities occur because the earth is made of a variety of materials that have different strengths and dissipate stress differently. Because the spatial arrangement of these materials is dependent on the geologic history, the distribution of these various materials can be quite complex and occur over a wide range of length scales. Despite their inhomogeneous nature, real faults are often modeled as spatially homogeneous systems. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen (OFC) and Rundle-Jackson-Brown (RJB) cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or 'asperity cells', into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics the behavior seen in natural fault systems. We observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a mainshock that is followed by a tail of decreasing activity (aftershocks). These recurrent large events occur at regular intervals, as is often observed in historic seismicity, and the time between events and their magnitude are a function of the stress dissipation parameter. The relative length of the foreshock to aftershock sequence depends on the amount of stress dissipation in the system, resulting in relatively long aftershock sequences when the stress dissipation is large versus long foreshock sequences when the stress dissipation is weak. This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism. We find that
NASA Astrophysics Data System (ADS)
Hovius, Niels; Marc, Odin; Meunier, Patrick
2016-04-01
Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that, above a critical magnitude, earthquakes would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, have not yet been considered. A new seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We have compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters, compared to the fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes insufficiently steep or earthquake sources sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We have explored the long-term topographic contribution of earthquake sequences, with a Gutenberg-Richter distribution or with a repeating, characteristic earthquake magnitude. In these models, the seismogenic layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently dipping earthquakes). Our results indicate that earthquakes probably have a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may
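The Brownian relaxation oscillator described above is easy to simulate: the load state rises at a steady tectonic rate plus Brownian perturbations, an event fires at first passage of the threshold, and the state resets to the ground level. A minimal sketch with illustrative (not the paper's) parameter values:

```python
import math
import random

def simulate_recurrence(n_events=500, rate=1.0, sigma=0.7,
                        threshold=1.0, dt=1e-3, seed=3):
    """Sketch of a Brownian relaxation oscillator for recurrent events.

    The load state X(t) drifts upward at `rate` with Brownian noise of
    amplitude `sigma`; an event occurs at first passage of `threshold`,
    after which X resets to the ground level (0). Returns inter-event
    times, which follow a Brownian passage-time (inverse Gaussian)
    distribution with mean threshold/rate.
    """
    rng = random.Random(seed)
    intervals = []
    for _ in range(n_events):
        load, t = 0.0, 0.0
        while load < threshold:
            load += rate * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        intervals.append(t)
    return intervals
```

Two of the distribution's noteworthy properties are visible directly in the sample: no zero-length interval ever occurs (the probability of immediate rerupture is zero), and the sample mean sits near threshold/rate.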
First Results of the Regional Earthquake Likelihood Models Experiment
Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.
2010-01-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U. S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).
Tullis, T E
1996-01-01
The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact, the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668
Hutchings, L.
1992-01-01
This report outlines a method of using empirical Green's functions in an earthquake simulation program EMPSYN that provides realistic seismograms from potential earthquakes. The theory for using empirical Green's functions is developed, implementation of the theory in EMPSYN is outlined, and an example is presented where EMPSYN is used to synthesize observed records from the 1971 San Fernando earthquake. To provide useful synthetic ground motion data from potential earthquakes, synthetic seismograms should model frequencies from 0.5 to 15.0 Hz, the full wave-train energy distribution, and absolute amplitudes. However, high-frequency arrivals are stochastically dependent upon the inhomogeneous geologic structure and irregular fault rupture. The fault rupture can be modeled, but the stochastic nature of faulting is largely an unknown factor in the earthquake process. The effect of inhomogeneous geology can readily be incorporated into synthetic seismograms by using small earthquakes to obtain empirical Green's functions. Small earthquakes with source corner frequencies higher than the site recording limit f_max, or much higher than the frequency of interest, effectively have impulsive point-fault dislocation sources, and their recordings are used as empirical Green's functions. Since empirical Green's functions are actual recordings at a site, they include the effects on seismic waves from all geologic inhomogeneities and include all recordable frequencies, absolute amplitudes, and all phases. They scale only in amplitude with differences in seismic moment. They can provide nearly the exact integrand to the representation relation. Furthermore, since their source events have spatial extent, they can be summed to simulate fault rupture without loss of information, thereby potentially computing the exact representation relation for an extended source earthquake.
FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.
Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.
1985-01-01
The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
ERIC Educational Resources Information Center
Hernandez, Hildo
2000-01-01
Examines the types of damage experienced by California State University, Northridge, during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)
Shedlock, Kaye M.; Pakiser, Louis Charles
1998-01-01
One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.
Gershenzon, N I; Bykov, V G; Bambakidis, G
2009-05-01
The one-dimensional Frenkel-Kontorova (FK) model, well known from the theory of dislocations in crystal materials, is applied to the simulation of the process of nonelastic stress propagation along transform faults. Dynamic parameters of plate boundary earthquakes as well as slow earthquakes and afterslip are quantitatively described, including propagation velocity along the strike, plate boundary velocity during and after the strike, stress drop, displacement, extent of the rupture zone, and spatiotemporal distribution of stress and strain. The three fundamental speeds of plate movement, earthquake migration, and seismic waves are shown to be connected in the framework of the continuum FK model. The magnitude of the strain wave velocity is a strong (almost exponential) function of accumulated stress or strain. It changes from a few km/s during earthquakes to a few dozen km per day, month, or year during afterslip and interearthquake periods. Results of the earthquake parameter calculation based on real data are in reasonable agreement with measured values. The distributions of aftershocks in this model are consistent with the Omori law for the temporal distribution and a 1/r law for the spatial distribution.
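The discrete FK model is a chain of harmonically coupled blocks sitting in a periodic substrate potential, and a localized slip front (kink) is its basic propagating object. A generic numerical sketch, not the paper's calibrated fault model (all parameters are illustrative, and the kink initial condition is borrowed from the continuum sine-Gordon limit):

```python
import math

def simulate_fk_chain(n=64, steps=400, dt=0.05, k=1.0, damping=0.2):
    """Minimal 1-D Frenkel-Kontorova chain with a single kink.

    Blocks are harmonically coupled (stiffness k) and sit in a sinusoidal
    substrate potential 1 - cos(u); weak damping removes energy. Returns
    (initial_energy, final_energy); with damping on, energy decays.
    """
    # Continuum sine-Gordon kink as the initial displacement profile.
    u = [4.0 * math.atan(math.exp(i - n / 2)) for i in range(n)]
    v = [0.0] * n

    def energy():
        e = sum(0.5 * vi * vi for vi in v)
        e += sum(0.5 * k * (u[i + 1] - u[i]) ** 2 for i in range(n - 1))
        e += sum(1.0 - math.cos(ui) for ui in u)
        return e

    e0 = energy()
    for _ in range(steps):
        a = []
        for i in range(n):
            left = u[i - 1] if i > 0 else u[0]      # free boundaries
            right = u[i + 1] if i < n - 1 else u[n - 1]
            a.append(k * (left - 2.0 * u[i] + right)
                     - math.sin(u[i]) - damping * v[i])
        for i in range(n):  # semi-implicit Euler update
            v[i] += dt * a[i]
            u[i] += dt * v[i]
    return e0, energy()
```

Adding a uniform driving force to the acceleration term sets the kink in motion, which is the discrete analogue of the strain wave whose velocity the abstract discusses.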
Earthquake locations and seismic velocity models for Southern California
NASA Astrophysics Data System (ADS)
Lin, Guoqing
Earthquake locations are fundamental to studies of earthquake physics, fault orientations and Earth's deformation. Improving earthquake location accuracy has been an important goal in seismology for the past few decades. In this dissertation, I consider several methods to improve both relative and absolute earthquake locations. Chapter 2 is devoted to the comparison of different relative earthquake location techniques based on synthetic data, including the double-difference and source-specific station term (SSST) methods. The shrinking box SSST algorithm not only provides improvements in relative earthquake locations similar to those of other techniques, but also improves absolute location accuracy compared to the simple SSST method. Chapter 4 describes and documents the COMPLOC software package for implementing the shrinking box SSST algorithm. Chapter 3 shows how absolute locations for quarry seismicity can be obtained by using remote sensing data, which is useful for providing absolute reference locations for three-dimensional velocity inversions and for constraining the shallow crustal structure in simultaneous earthquake location and velocity inversions. Chapter 5 presents and tests a method to estimate local Vp/Vs ratios for compact clusters of similar earthquakes using the precise P- and S-differential times obtained from waveform cross-correlation. Chapter 6 describes a new three-dimensional seismic velocity model for southern California obtained using the "composite event method" applied to the SIMULPS tomographic inversion algorithm. Based on this velocity model and waveform cross-correlation, Chapter 7 describes how a new earthquake location catalog is obtained for about 450,000 southern California earthquakes between 1981 and 2005.
Earthquake research: Premonitory models and the physics of crustal distortion
NASA Technical Reports Server (NTRS)
Whitcomb, J. H.
1981-01-01
Seismic, gravity, and electrical resistivity data, believed to be most relevant to the development of earthquake premonitory models of the crust, are presented. Magnetotellurics (MT) are discussed. Radon investigations are reviewed.
Earthquake Forecasting in Northeast India using Energy Blocked Model
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, D. K.
2009-12-01
In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) from 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw = 8.7), the 1934 Bihar-Nepal earthquake (Mw = 8.3) and the 1950 Upper Assam earthquake (Mw = 8.7), signifies the possibility of great earthquakes from this region in the future. The regional seismicity map for the study region is prepared by plotting the earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by a subduction zone formed at the junction of the Indian and Eurasian plates; it shows a dense clustering of earthquake events and hosted the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis, and has experienced great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan events with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own prominent style of tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, the forecasting of major earthquakes for each source zone is carried out. In this model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked, which can be utilized for forecasting major earthquakes.
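The bookkeeping behind an energy-blocked estimate can be sketched in a few lines. The catalog below is synthetic, and the Gutenberg-Richter energy relation log10 E = 1.5M + 4.8 (E in joules) is one common magnitude-energy conversion; neither is taken from this study:

```python
import numpy as np

# Convert magnitudes to radiated energy, accumulate it, and compare with
# a uniform "supply" line; the shortfall is the blocked energy available
# for future earthquakes. Synthetic catalog for illustration only.

def seismic_energy(m):
    """Gutenberg-Richter magnitude-energy relation, E in joules."""
    return 10.0 ** (1.5 * m + 4.8)

years = np.array([1900.0, 1920.0, 1935.0, 1950.0, 1970.0, 1990.0, 2005.0])
mags = np.array([7.0, 6.5, 8.0, 7.5, 6.8, 7.2, 6.9])

cum_released = np.cumsum(seismic_energy(mags))
# Uniform supply rate estimated from total release over the catalog span:
rate = cum_released[-1] / (years[-1] - years[0])
supply = rate * (years - years[0])
blocked = supply - cum_released  # > 0 means energy is still accumulating
```

Note that each unit of magnitude corresponds to a factor of about 31.6 in energy, so the largest events dominate the cumulative release.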
Toward a comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.
Earthquake recurrence models fail when earthquakes fail to reset the stress field
Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.
2012-01-01
Parkfield's regularly occurring M6 mainshocks, about every 25 years, have over two decades stoked seismologists' hopes to successfully predict an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as counter example, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.
Dynamics of a nonconserving Davydov monomer
NASA Astrophysics Data System (ADS)
Silva, P. A. S.; Cruzeiro, L.
2006-08-01
The Davydov-Scott model describes the transfer of energy along hydrogen-bonded chains, like those that stabilize the structure of α helices. It is based on the hypothesis that amide I excitations are created (by the hydrolysis of ATP, for instance) and kept in the system. Recent experimental results confirm that the energy associated with amide I excitations does indeed last for tens of picoseconds in proteins and model systems. However, the Davydov-Scott model cannot describe the conversion of that energy into work, because it conserves the number of excitations. With the aim of describing conformational changes, we consider, in this paper, a nonconserving generalization of the model, which is found to describe essentially a contraction of the hydrogen bond adjacent to the site where an excitation is present. Unlike the one-site Davydov-Scott model, that contraction is time dependent because the number of excitations is not conserved. However, considering the time average of the dynamical variables, the results reported here tend to the known results of the Davydov-Scott model.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to the failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
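The BPT density is that of an inverse Gaussian distribution, so the hazard behavior quoted above is easy to reproduce numerically. A sketch in time units of the mean (μ = 1) with the generic α = 0.5:

```python
import numpy as np

# Brownian passage time (inverse Gaussian) density for mean mu and
# aperiodicity alpha; the hazard rate f/(1-F) is computed with a
# trapezoid-rule CDF. With alpha = 0.5 the hazard should cross the mean
# rate 1/mu near t = mu/2 and level off near 2/mu at large times.

def bpt_pdf(t, mu=1.0, alpha=0.5):
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-((t - mu) ** 2) / (2.0 * alpha**2 * mu * t))

t = np.linspace(1e-6, 20.0, 20001)
dt = t[1] - t[0]
f = bpt_pdf(t)
F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))
hazard = f / np.maximum(1.0 - F, 1e-15)
mean = float(np.sum(0.5 * (t[1:] * f[1:] + t[:-1] * f[:-1]) * dt))
```

The grid, truncation point, and numerical CDF are illustrative choices; the distribution itself (and the α = 0.5 value) are as stated in the abstract.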
Modeling earthquake activity using a memristor-based cellular grid
NASA Astrophysics Data System (ADS)
Vourkas, Ioannis; Sirakoulis, Georgios Ch.
2013-04-01
Earthquakes are among the most devastating natural phenomena because of their immediate and long-term severe consequences. Earthquake activity modeling, especially in areas known to experience frequent large earthquakes, could lead to improvements in infrastructure development that will prevent possible loss of lives and property damage. An earthquake process is inherently a nonlinear complex system, and lately scientists have become interested in finding possible analogues of earthquake dynamics. The majority of the models developed so far were based on a mass-spring model of either one or two dimensions. An early approach towards the reordering and improvement of existing models, presenting the capacitor-inductor (LC) analogue in which the LC circuit resembles a mass-spring system and simulates earthquake activity, was also published recently. Electromagnetic oscillation occurs when energy is transferred between the capacitor and the inductor; this energy transformation is similar to the mechanical oscillation that takes place in the mass-spring system. A few years ago, memristor-based oscillators were used as learning circuits exposed to a train of voltage pulses that mimic environment changes. The mathematical foundation of the memristor (memory resistor) as the fourth fundamental passive element was expounded by Leon Chua and later extended to a broader class of memristors, known as memristive devices and systems. This class of two-terminal passive circuit elements with memory performs both information processing and storage of computational data on the same physical platform. Importantly, the states of these devices adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry and providing analog parallel computation. In this work, a memristor-based cellular grid is used to model earthquake activity. An LC contour along with a memristor is used to model seismic activity.
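The LC analogue invoked above rests on elementary circuit dynamics. A minimal sketch (component values arbitrary; no memristor included) shows the energy shuttling between capacitor and inductor while the total stays constant, just as kinetic and potential energy exchange in a mass-spring system:

```python
# LC circuit: L di/dt = -q/C, dq/dt = i. A semi-implicit (symplectic)
# Euler step keeps the total energy q^2/(2C) + L i^2/2 nearly constant,
# mirroring the kinetic/potential exchange of a mass-spring oscillator.
# Component values are illustrative, not taken from the paper.

L_ind, C = 1.0, 1.0
dt, steps = 1.0e-3, 10000
q, i = 1.0, 0.0            # initial charge on the capacitor, zero current
energies = []
for _ in range(steps):
    i += dt * (-q / (L_ind * C))   # update current from capacitor voltage
    q += dt * i                    # then update charge with the new current
    energies.append(q * q / (2.0 * C) + L_ind * i * i / 2.0)
```

The semi-implicit update is chosen because a naive explicit Euler step would spuriously grow the oscillation energy over long runs.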
Tensile earthquakes: Theory, modeling, and inversion
NASA Astrophysics Data System (ADS)
Vavryčuk, Václav
2011-12-01
Tensile earthquakes are earthquakes which combine shear and tensile motions on a fault during the rupture process. The geometry of faulting is described by four angles: strike, dip, rake, and slope. The strike, dip, and rake define the orientation of the fault normal and the tangential component of the dislocation vector along the fault. The slope defines the deviation of the dislocation vector from the fault. The strike, dip, and rake are determined ambiguously from moment tensors, as for shear sources. The slope is determined uniquely and has the same value for both complementary solutions. The moment tensors of tensile earthquakes are characterized by significant non-double-couple (non-DC) components comprising both the compensated linear vector dipole (CLVD) and the isotropic (ISO) components. In isotropic media, the CLVD and ISO percentages should have the same sign and should be linearly related for earthquakes that occurred in the same focal area. The slope of the linear trend between the CLVD and ISO components defines the velocity ratio νP/νS in the focal area. The parameters of tensile earthquakes can be retrieved from their moment tensors. The procedure yields the angles describing the geometry of faulting as well as the νP/νS ratio in the focal area. The accuracy of the νP/νS ratio can be increased if a set of moment tensors of earthquakes that occurred in the same focal area is analyzed. The calculation of the νP/νS ratio from moment tensors is a promising method which might find applications in tomography of the focal area or in monitoring fluid flow during seismic activity. If the νP/νS ratio is found and well constrained, the parameters of tensile earthquakes can be inverted directly from observed data using a constrained nonlinear inversion. In this inversion, the parameter space can be limited by fixing the νP/νS ratio or forcing the νP/νS ratio to lie within some physically reasonable limits.
Retrospective tests of hybrid operational earthquake forecasting models for Canterbury
NASA Astrophysics Data System (ADS)
Rhoades, D. A.; Liukis, M.; Christophersen, A.; Gerstenberger, M. C.
2016-01-01
The Canterbury, New Zealand, earthquake sequence, which began in September 2010, occurred in a region of low crustal deformation and previously low seismicity. Because the ensuing seismicity in the region is likely to remain above previous levels for many years, a hybrid operational earthquake forecasting model for Canterbury was developed to inform decisions on building standards and urban planning for the rebuilding of Christchurch. The model estimates occurrence probabilities for magnitudes M ≥ 5.0 in the Canterbury region for each of the next 50 yr. It combines two short-term, two medium-term and four long-term forecasting models. The weight accorded to each individual model in the operational hybrid was determined by an expert elicitation process. A retrospective test of the operational hybrid model and of an earlier informally developed hybrid model in the whole New Zealand region has been carried out. The individual and hybrid models were installed in the New Zealand Earthquake Forecast Testing Centre and used to make retrospective annual forecasts of earthquakes with magnitude M > 4.95 from 1986 on, for time-lags up to 25 yr. All models underpredict the number of earthquakes due to an abnormally large number of earthquakes in the testing period since 2008 compared to those in the learning period. However, the operational hybrid model is more informative than any of the individual time-varying models for nearly all time-lags. Its information gain relative to a reference model of least information decreases as the time-lag increases, becoming zero at a time-lag of about 20 yr. An optimal hybrid model with the same mathematical form as the operational hybrid model was computed for each time-lag from the 26-yr test period. The time-varying component of the optimal hybrid is dominated by the medium-term models for time-lags up to 12 yr and has hardly any impact on the optimal hybrid model for greater time-lags. The optimal hybrid model is considerably more
An empirical model for global earthquake fatality estimation
Jaiswal, Kishor; Wald, David
2010-01-01
We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. [DOI: 10.1193/1.3480331]
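The two-parameter lognormal form can be sketched directly. In the snippet below, θ (the intensity at which the fatality rate reaches 50%) and β (its spread) are invented illustration values, not PAGER's fitted country parameters, and intensity is treated as a plain number:

```python
from math import erf, log, sqrt

# Fatality rate as a lognormal CDF of shaking intensity S:
# nu(S) = Phi(ln(S / theta) / beta). Expected deaths are the
# exposure-weighted sum of nu over intensity levels.
# theta, beta and the exposure table are hypothetical.

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fatality_rate(intensity, theta=12.0, beta=0.2):
    return norm_cdf(log(intensity / theta) / beta)

def expected_fatalities(exposure, theta=12.0, beta=0.2):
    # exposure: {intensity level: population exposed at that level}
    return sum(pop * fatality_rate(s, theta, beta)
               for s, pop in exposure.items())

exposure = {6.0: 1_000_000, 7.0: 500_000, 8.0: 100_000, 9.0: 10_000}
deaths = expected_fatalities(exposure)
```

Fitting θ and β per country, as the abstract describes, would amount to minimizing the hindcast residual of `expected_fatalities` against recorded death tolls.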
Slimplectic Integrators: Variational Integrators for Nonconservative systems
NASA Astrophysics Data System (ADS)
Tsang, David
2016-05-01
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. Here we present the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to a newly developed principle of stationary nonconservative action (Galley, 2013, Galley et al 2014). As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
Slimplectic Integrators: Variational Integrators for Nonconservative systems
NASA Astrophysics Data System (ADS)
Tsang, David
2016-01-01
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the "slimplectic" integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting-Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
Non-conservative mass transfers in Algols
NASA Astrophysics Data System (ADS)
Erdem, A.; Öztürk, O.
2014-06-01
We applied a revised model for non-conservative mass transfer in semi-detached binaries to 18 Algol-type binaries showing orbital period increase or decrease in their parabolic O-C diagrams. The combined effect of mass transfer and magnetic braking due to stellar wind was considered when interpreting the orbital period changes of these 18 Algols. Mass transfer was found to be the dominant mechanism for the increase in orbital period of 10 Algols (AM Aur, RX Cas, DK Peg, RV Per, WX Sgr, RZ Sct, BS Sct, W Ser, BD Vir, XZ Vul) while magnetic braking appears to be the responsible mechanism for the decrease in that of 8 Algols (FK Aql, S Cnc, RU Cnc, TU Cnc, SX Cas, TW Cas, V548 Cyg, RY Gem). The peculiar behaviour of orbital period changes in three W Ser-type binary systems (W Ser, itself a prototype, RX Cas and SX Cas) is discussed. The empirical linear relation between orbital period (P) and its rate of change (dP/dt) was also revised.
Assessing a 3D smoothed seismicity model of induced earthquakes
NASA Astrophysics Data System (ADS)
Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan
2016-04-01
As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
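The smoothing idea, each past event contributing a spatial kernel weighted by recency, can be sketched as follows; the Gaussian and exponential kernels and their bandwidths are illustrative choices, not the seismogenic-index model used in the study:

```python
import numpy as np

# Space-time smoothed seismicity: every past event adds a Gaussian bump
# in space, discounted exponentially by its age, so recent nearby events
# dominate the forecast rate. Kernel shapes and bandwidths are
# illustrative assumptions, not the model variants tested in the study.

def smoothed_rate(grid, events, sigma=0.5, tau=1.0):
    """grid: (m, 2) points (x, y); events: (n, 3) rows of (x, y, age)."""
    rate = np.zeros(len(grid))
    for ex, ey, age in events:
        d2 = (grid[:, 0] - ex) ** 2 + (grid[:, 1] - ey) ** 2
        rate += np.exp(-d2 / (2.0 * sigma**2)) * np.exp(-age / tau)
    return rate

grid = np.array([[0.0, 0.0], [2.0, 0.0]])
events = np.array([[0.0, 0.0, 0.1],    # recent event at the first grid point
                   [0.1, 0.0, 5.0]])   # older event nearby
rates = smoothed_rate(grid, events)
```

Shortening `tau` gives more weight to recent events, which is the knob the abstract reports as improving short-term forecast performance.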
Scaling of earthquake models with inhomogeneous stress dissipation.
Dominguez, Rachele; Tiampo, Kristy; Serino, C A; Klein, W
2013-02-01
Natural earthquake fault systems are highly nonhomogeneous. The inhomogeneities occur because the earth is made of a variety of materials which hold and dissipate stress differently. In this work, we study scaling in earthquake fault models which are variations of the Olami-Feder-Christensen and Rundle-Jackson-Brown models. We use the scaling to explore the effect of spatial inhomogeneities due to damage and inhomogeneous stress dissipation in the earthquake-fault-like systems when the stress transfer range is long, but not necessarily longer than the length scale associated with the inhomogeneities of the system. We find that the scaling depends not only on the amount of damage, but also on the spatial distribution of that damage.
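The OFC model named above is simple to state: load a grid uniformly until some site reaches a threshold, then topple it, passing a fraction α of its stress to each neighbor (α < 1/4 makes the dynamics dissipative). A minimal homogeneous sketch with open boundaries (grid size, α, and run length are arbitrary, and no damage field is included):

```python
import numpy as np

# Minimal Olami-Feder-Christensen cellular automaton. Each drive step
# raises all sites so the maximum reaches threshold; a toppling site
# resets to zero and gives alpha * (its stress) to each of its four
# neighbors, so stress leaks out at open boundaries and, for
# alpha < 0.25, at every toppling. Parameters are illustrative.

rng = np.random.default_rng(0)

def ofc_avalanches(n=32, alpha=0.2, threshold=1.0, steps=2000):
    stress = rng.uniform(0.0, threshold, size=(n, n))
    sizes = []
    for _ in range(steps):
        stress += threshold - stress.max()   # uniform loading
        size = 0
        while True:
            over = np.argwhere(stress >= threshold)
            if len(over) == 0:
                break
            for i, j in over:
                s = stress[i, j]
                stress[i, j] = 0.0
                size += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < n and 0 <= nj < n:   # open boundaries
                        stress[ni, nj] += alpha * s
        sizes.append(size)
    return sizes

sizes = ofc_avalanches()
```

The inhomogeneous variants studied in the paper would replace the single α with a spatially varying dissipation field; the avalanche-size statistics of `sizes` are what the scaling analysis operates on.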
Earthquake nucleation mechanisms and periodic loading: Models, Experiments, and Observations
NASA Astrophysics Data System (ADS)
Dahmen, K.; Brinkman, B.; Tsekenis, G.; Ben-Zion, Y.; Uhl, J.
2010-12-01
The project has two main goals: (a) improve the understanding of how earthquakes are nucleated, with specific focus on the seismic response to periodic stresses (such as tidal or seasonal variations); (b) use the results of (a) to infer the possible existence of precursory activity before large earthquakes. A number of mechanisms have been proposed for the nucleation of earthquakes, including frictional nucleation (Dieterich 1987) and fracture (Lockner 1999, Beeler 2003). We study the relation between the observed rates of triggered seismicity and the period and amplitude of cyclic loadings, and whether the observed seismic activity in response to periodic stresses can be used to identify the correct nucleation mechanism (or combination of mechanisms). A generalized version of the Ben-Zion and Rice model for disordered fault zones, and results from related recent studies on dislocation dynamics and magnetization avalanches in slowly magnetized materials, are used in the analysis (Ben-Zion et al. 2010; Dahmen et al. 2009). The analysis makes predictions for the statistics of macroscopic failure events of sheared materials in the presence of added cyclic loading, as a function of the period, amplitude, and noise in the system. The employed tools include analytical methods from statistical physics, the theory of phase transitions, and numerical simulations. The results will be compared to laboratory experiments and observations. References: Beeler, N.M., D.A. Lockner (2003). Why earthquakes correlate weakly with the solid Earth tides: effects of periodic stress on the rate and probability of earthquake occurrence. J. Geophys. Res.-Solid Earth 108, 2391-2407. Ben-Zion, Y. (2008). Collective Behavior of Earthquakes and Faults: Continuum-Discrete Transitions, Evolutionary Changes and Corresponding Dynamic Regimes, Rev. Geophysics, 46, RG4006, doi:10.1029/2008RG000260. Ben-Zion, Y., Dahmen, K. A. and J. T. Uhl (2010). A unifying phase diagram for the dynamics of sheared solids
The Common Forces: Conservative or Nonconservative?
ERIC Educational Resources Information Center
Keeports, David
2006-01-01
Of the forces commonly encountered when solving problems in Newtonian mechanics, introductory texts usually limit illustrations of the definitions of conservative and nonconservative forces to gravity, spring forces, kinetic friction and fluid resistance. However, at the expense of very little class time, the question of whether each of the common…
Justification of a "Crucial" Experiment: Parity Nonconservation.
ERIC Educational Resources Information Center
Franklin, Allan; Smokler, Howard
1981-01-01
Presents history, nature of evidence evaluated, and philosophical questions to justify the view that experiments on parity nonconservation were "crucial" experiments in the sense that they decided unambiguously and within a short period of time for the appropriate scientific community, between two or more competing theories or classes of theories.…
Dynamic models of an earthquake and tsunami offshore Ventura, California
Ryan, Kenny J.; Geist, Eric L.; Barall, Michael; Oglesby, David D.
2015-01-01
The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses the final seafloor displacement from the rupture model as an initial condition to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by the State of California's current reference inundation line.
NASA Astrophysics Data System (ADS)
Setyonegoro, W.
2016-05-01
Earthquake disasters have caused considerable casualties and material losses. This research aims to predict the return period of earthquakes by identifying earthquake mechanisms, with Sumatra as the case study area. To predict earthquakes, historical earthquake data are used to train an ANFIS (adaptive neuro-fuzzy inference system); the historical data set is compiled into intervals of daily average earthquake occurrence per year. The output to be obtained is a model of the return period of earthquake events as a daily average per year. Once the return period models have been learned by ANFIS, polarity recognition is performed through image recognition on the focal sphere using the principal component analysis (PCA) method. As a result, the model's prediction of the return period of earthquake events, for the average monthly return period, showed a correlation coefficient of 0.014562.
Combined GPS and InSAR models of postseismic deformation from the Northridge Earthquake
NASA Technical Reports Server (NTRS)
Donnellan, A.; Parker, J. W.; Peltzer, G.
2002-01-01
Models of combined Global Positioning System and Interferometric Synthetic Aperture Radar data collected in the region of the Northridge earthquake indicate that significant afterslip on the main fault occurred following the earthquake.
Dynamics of earthquake faulting: Two-dimensional lattice model
NASA Astrophysics Data System (ADS)
Shi, Baoping
I present a computer simulation investigation of the dynamics of earthquake faulting and associated ground motion by using numerical methods. The major goal is to increase our understanding of the earthquake dynamic rupture process with associated stick-slip motion accompanied by fault opening. I particularly focus on the rupture mechanism that affects the rupture propagation and the change of shear stress at which it radiates seismic energy. To help interpret numerical results, I discuss several earthquake faulting models of dynamic rupture and compare their results with what is actually observed experimentally from the foam rubber experiment. I start with a review of previous research work, concentrating on physical experimental results and numerical results. I then review the numerical method of a lattice model in investigating the fracture mechanics which addresses the dynamic behavior regarding the lattice properties of an elastic solid. Next, I present my numerical characteristics of a dynamic rupture in the earthquake faulting process. The dynamic rupture process is interpreted within the combined framework of dynamic systems, non-linear elasticity, and numerical simulation. I conclude by investigating the importance of the fault's geometrical effect and by studying the rupture pulse propagation during stick-slip motion. The dissertation ends with recommendations for future research.
NASA Astrophysics Data System (ADS)
Zschau, J.
2009-04-01
Earthquake risk, like natural risks in general, has become a highly dynamic and globally interdependent phenomenon. Due to the "urban explosion" in the Third World, an increasingly complex cross-linking of critical infrastructure and lifelines in the industrial nations, and a growing globalisation of the world's economies, we are presently facing a dramatic increase in our society's vulnerability to earthquakes in practically all seismic regions on our globe. Such fast, global changes cannot be captured with conventional earthquake risk models anymore. The sciences in this field are therefore asked to come up with new solutions that are no longer exclusively aimed at the best possible quantification of present risks, but that also keep an eye on their changes with time and allow these to be projected into the future. This applies not only to the vulnerability component of earthquake risk, but also to its hazard component, which has been realized to be time-dependent, too. The challenges of earthquake risk dynamics and globalisation have recently been accepted by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD-GSF), which initiated the "Global Earthquake Model (GEM)", a public-private partnership for establishing an independent standard to calculate, monitor and communicate earthquake risk globally, raise awareness and promote mitigation.
Renormalized dissipation in the nonconservatively forced Burgers equation
Krommes, J.A.
2000-01-19
A previous calculation of the renormalized dissipation in the nonconservatively forced one-dimensional Burgers equation, which encountered a catastrophic long-wavelength divergence proportional to k_min^(-3), is reconsidered. In the absence of velocity shear, analysis of the eddy-damped quasi-normal Markovian closure predicts only a benign logarithmic dependence on k_min. The original divergence is traced to an inconsistent resonance-broadening type of diffusive approximation, which fails in the present problem. Ballistic scaling of renormalized pulses is retained, but such scaling does not, by itself, imply a paradigm of self-organized criticality. An improved scaling formula for a model with velocity shear is also given.
Desk-top model buildings for dynamic earthquake response demonstrations
Brady, A. Gerald
1992-01-01
Models of buildings that illustrate dynamic resonance behavior when excited by hand are designed and built. Two types of buildings are considered, one with columns stronger than floors, the other with columns weaker than floors. Combinations and variations of these two types are possible. Floor masses and column stiffnesses are chosen in order that the frequency of the second mode is approximately five cycles per second, so that first and second modes can be excited manually. The models are expected to be resonated by hand by schoolchildren or persons unfamiliar with the dynamic resonant response of tall buildings, to gain an understanding of structural behavior during earthquakes. Among other things, this experience will develop a level of confidence in the builder and experimenter should they be in a high-rise building during an earthquake, sensing both these resonances and other violent shaking.
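The two-building setup above can be sketched as a two-degree-of-freedom shear model. The snippet below uses invented floor masses and story stiffnesses (not the paper's design values) to compute the two natural frequencies; for equal floors and stories the second-to-first frequency ratio is the classic factor of about 2.62:

```python
import numpy as np

# Hypothetical two-story shear building: equal lumped floor masses and equal
# story stiffnesses.  Values are illustrative, chosen at desk-top model scale.
m = 2.0        # kg, each floor
k = 1.0e3      # N/m, each story's lateral stiffness

M = np.diag([m, m])
K = np.array([[2 * k, -k],
              [-k,     k]])

# Undamped free vibration: K v = w^2 M v.  With equal masses, M^-1 K stays
# symmetric, so eigvalsh applies and returns eigenvalues in ascending order.
w2 = np.linalg.eigvalsh(np.linalg.solve(M, K))
freqs = np.sqrt(w2) / (2 * np.pi)   # natural frequencies in Hz, mode 1 then 2
```

With these numbers the second mode lands near the abstract's target of roughly five cycles per second, and the mode-frequency ratio is independent of the particular m and k chosen.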
Modeling of features of slow earthquakes in a dynamical framework
NASA Astrophysics Data System (ADS)
Yamashita, T.
2010-12-01
Slow earthquakes exhibit a striking contrast with ordinary earthquakes. Rupture speeds of slow slip events are four orders of magnitude smaller than those of ordinary earthquakes. Ide et al. (2007) found that the seismic moment of slow earthquakes is linearly proportional to the characteristic duration, which differs from the relation for ordinary earthquakes. It is also known that slow slip events are frequently coupled with tremor. We now simulate such features of slow earthquakes on the basis of the fault model developed by Suzuki and Yamashita (2009, 2010). Key ingredients of the model are fluid flow, shear heating, and inelastic pore creation. We assume a fault in a thermoporoelastic medium saturated with fluid. The inelastic porosity is assumed to increase with evolving slip. The shear heating builds up the fluid pressure on the fault, whereas the pore creation lowers it. Since slip is promoted by high fluid pressure according to the Coulomb law of friction, the relative dominance of these two effects determines the nature of slip. Our 1D analysis showed that slip-weakening and -strengthening emerge in the ranges Su < -P0 and Su > -P0 (Suzuki and Yamashita, 2010); shear heating and pore creation are dominant in the former and latter ranges, respectively. Here, Su is a parameter proportional to the pore-creation rate; Su' and P0 are proportional to the permeability and to the initial fluid pressure, respectively. We found in the 2D modeling that slow fault growth can be simulated if we assume Su >> -P0 (Suzuki and Yamashita, 2009). Suzuki and Yamashita (2009) showed that the fluid inflow triggered by the pore creation tends to weaken the degree of slip-strengthening in the range Su >> -P0, which causes slow fault growth whose speed depends on the fluid inflow rate. However, if the value of Su is large enough, a nucleated event stops its growth soon after nucleation because of intense slip-strengthening. Suzuki and Yamashita (2009) assumed that slip is
Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps
NASA Astrophysics Data System (ADS)
Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.
2014-12-01
Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern: which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire-station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and the funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.
Theory and application of experimental model analysis in earthquake engineering
NASA Astrophysics Data System (ADS)
Moncarz, P. D.
The feasibility and limitations of small-scale model studies in earthquake engineering research and practice are considered, with emphasis on dynamic modeling theory, a study of the mechanical properties of model materials, the development of suitable model construction techniques, and an evaluation of the accuracy of prototype response prediction through model case studies on components and simple steel and reinforced concrete structures. It is demonstrated that model analysis can be used in many cases to obtain quantitative information on the seismic behavior of complex structures which cannot be analyzed confidently by conventional techniques. Methodologies for model testing and response evaluation are developed in the project, and applications of model analysis in seismic response studies on various types of civil engineering structures (buildings, bridges, dams, etc.) are evaluated.
The Dynamics of Sandpile Model and Its Application to Earthquakes
NASA Astrophysics Data System (ADS)
Gong, Yunfan
2007-03-01
Just from the simple yet widespread power laws, it seems unlikely that self-organized criticality (SOC) can be differentiated from other mechanisms proposed for power-law relationships. Here we report the SOC phenomenon in a sandpile model driven by chaos. We characterize SOC by analyzing time series from the system. Surprisingly, we find that the microscopic dynamics of the complex sandpile system can be best approximated by a very simple first-order autoregressive (AR) model. Meanwhile, the AR model can well reproduce almost all power-law behaviors of the sandpile model, suggesting a similar dynamics between the complex sandpile system and the simple first-order AR model. Next, real earthquake time series, including the Harvard catalog and source time functions (STFs), are analyzed along the same lines. The first-order linear dynamics fitted from the STFs is in excellent agreement with that of the sandpile model, whereas the optimal second-order dynamics fitted from the STFs is a false mode and should be rejected. Our results support that earthquakes can be considered as an SOC process and suggest that they may be governed by sandpile models with higher-order (≥2) dynamics.
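A minimal sketch of the pipeline described above, assuming a standard 2D Bak-Tang-Wiesenfeld sandpile (the abstract's "driven by chaos" variant is not reproduced here) and a Yule-Walker estimate of the first-order AR coefficient. Grid size, drive length, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 20, 2000
z = np.zeros((N, N), dtype=int)   # sand heights; sites topple at height 4

def drive_once():
    """Drop one grain at a random site, relax the pile, and return the
    avalanche size (number of topplings).  Open boundaries lose grains."""
    i, j = rng.integers(0, N), rng.integers(0, N)
    z[i, j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        i, j = stack.pop()
        while z[i, j] >= 4:          # topple until this site is stable
            z[i, j] -= 4
            size += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < N and 0 <= nj < N:
                    z[ni, nj] += 1
                    stack.append((ni, nj))
    return size

sizes = np.array([drive_once() for _ in range(STEPS)])

# First-order AR fit via the Yule-Walker relation: a1 = lag-1 autocov / var
x = sizes - sizes.mean()
a1 = float(x[:-1] @ x[1:]) / float(x @ x)
```

The fitted coefficient a1 summarizes the lag-1 linear memory of the avalanche-size series, which is the quantity the abstract compares against the dynamics fitted from earthquake catalogs and STFs.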
Modeling warning times for Israel's earthquake early warning system
NASA Astrophysics Data System (ADS)
Pinsky, Vladimir
2015-01-01
In June 2012, the Israeli government approved the proposal to create an earthquake early warning system (EEWS) that would provide timely alarms for schools and colleges in Israel. A network configuration was chosen, consisting of a staggered line of ˜100 stations along the main regional faults: the Dead Sea fault and the Carmel fault, and an additional ˜40 stations spread more or less evenly over the country. A hybrid approach to the EEWS alarm was suggested, where a P-wave-based system will be combined with the S-threshold method. The former utilizes first arrivals to several stations closest to the event for prompt location and determination of the earthquake's magnitude from the first 3 s of the waveform data. The latter issues alarms when the acceleration of the surface movement exceeds a threshold for at least two neighboring stations. The threshold will be chosen as the peak acceleration level corresponding to a magnitude 5 earthquake at short distance range (5-10 km). The warning times or lead times, i.e., times between the alarm signal arrival and the arrival of the damaging S-waves, are considered for the P, S, and hybrid EEWS methods. For each of the approaches, the P- and S-wave travel times and the alarm times were calculated using a standard 1D velocity model and some assumptions regarding the EEWS data latencies. Then, a definition of alarm effectiveness was introduced as a measure of the trade-off between the warning time and the shaking intensity. A number of strong earthquake scenarios, together with anticipated shaking intensities at important targets, namely cities with high populations, are considered. The scenarios demonstrated in probabilistic terms how the alarm effectiveness varies depending on the target distance from the epicenter and event magnitude.
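The lead-time bookkeeping for the P-wave-based alert can be sketched with constant crustal wave speeds in place of the study's 1D velocity model. The 3 s data window comes from the abstract; the wave speeds and telemetry latency below are assumed values, not the study's:

```python
# Rough lead-time estimate for a P-wave-based early warning alert.
# Assumed constant wave speeds; a real system uses a 1D velocity model.
VP, VS = 6.2, 3.5   # km/s, assumed crustal P- and S-wave speeds

def lead_time(d_station_km, d_target_km, window_s=3.0, latency_s=2.0):
    """Warning time at the target: S-wave arrival minus alert issuance.
    Alert time = P travel to the nearest station + 3 s of waveform data
    (from the abstract) + an assumed telemetry/processing latency."""
    alert_time = d_station_km / VP + window_s + latency_s
    s_arrival = d_target_km / VS
    return s_arrival - alert_time
```

A target 100 km from the epicenter, with the nearest station 10 km away, gets about 22 s of warning under these assumptions; a target only 10 km away gets a negative value, i.e. it lies inside the blind zone where the alert cannot outrun the S-wave.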
Comparison of Short-term and Long-term Earthquake Forecast Models for Southern California
NASA Astrophysics Data System (ADS)
Helmstetter, A.; Kagan, Y. Y.; Jackson, D. D.
2004-12-01
Many earthquakes are triggered in part by preceding events. Aftershocks are the most obvious examples, but many large earthquakes are preceded by smaller ones. The large fluctuations of seismicity rate due to earthquake interactions thus provide a way to improve earthquake forecasting significantly. We have developed a model to estimate daily earthquake probabilities in Southern California, using the Epidemic Type Earthquake Sequence model [Kagan and Knopoff, 1987; Ogata, 1988]. The forecasted seismicity rate is the sum of a constant external loading and of the aftershocks of all past earthquakes. The background rate is estimated by smoothing past seismicity. Each earthquake triggers aftershocks with a rate that increases exponentially with its magnitude and decreases with time following Omori's law. We use an isotropic kernel to model the spatial distribution of aftershocks for small (M≤5.5) mainshocks, and a smoothing of the location of early aftershocks for larger mainshocks. The model also assumes that all earthquake magnitudes follow the Gutenberg-Richter law with a uniform b-value. We use a maximum likelihood method to estimate the model parameters and test the short-term and long-term forecasts. A retrospective test using a daily update of the forecasts between 1985/1/1 and 2004/3/10 shows that the short-term model decreases the uncertainty of an earthquake occurrence by a factor of about 10.
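The forecasted rate described above (a constant background plus Omori-decaying aftershock terms whose productivity grows exponentially with magnitude) can be sketched as follows. The parameter values are placeholders for illustration, not the fitted Southern California values, and the spatial kernels are omitted:

```python
import numpy as np

# Illustrative epidemic-type parameters (not the study's fitted values):
# MU    background rate (events/day)
# K, ALPHA  aftershock productivity and its magnitude dependence
# C, P  Omori-law constants; M0  reference magnitude
MU, K, ALPHA, C, P, M0 = 0.5, 0.02, 0.8, 0.01, 1.1, 3.0

def etas_rate(t, times, mags):
    """Seismicity rate at time t: background plus one Omori term per
    past event, with productivity 10**(ALPHA * (m - M0))."""
    times, mags = np.asarray(times, float), np.asarray(mags, float)
    past = times < t
    omori = (K * 10.0 ** (ALPHA * (mags[past] - M0))
             * (t - times[past] + C) ** (-P))
    return MU + float(omori.sum())
```

Shortly after a magnitude-6 event the rate is dominated by its aftershock term, then decays toward the background MU following the Omori power law.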
Calculation of parity nonconservation in neutral ytterbium
Dzuba, V. A.; Flambaum, V. V.
2011-04-15
We use configuration interaction and many-body perturbation theory techniques to calculate spin-independent and spin-dependent parts of the parity-nonconserving amplitudes of the transitions between the 6s{sup 2} {sup 1}S{sub 0} ground state and the 6s5d {sup 3}D{sub 1} excited state of {sup 171}Yb and {sup 173}Yb. The results are presented in a form convenient for extracting spin-dependent interaction constants (such as anapole moment) from the measurements.
Optimized volume models of earthquake-triggered landslides.
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data-fitting methods, i.e. log-transform-based linear and original-data-based nonlinear least squares, were applied to the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The result from the relationship of quake magnitude and entire landslide volume related to individual earthquakes is much less than that from this study, which reminds us of the need to update the power-law relationship. PMID:27404212
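The log-transform-based linear fit of the conventional "volume-area" power law can be sketched on synthetic data; the exponent, prefactor, and scatter below are invented for illustration and are not the Wenchuan values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic landslide areas (m^2) and volumes (m^3) drawn from an assumed
# power law V = alpha * A**gamma with multiplicative log-normal scatter.
alpha_true, gamma_true = 0.05, 1.35          # invented, for illustration only
A = 10 ** rng.uniform(2, 5, size=1000)       # areas spanning 10^2..10^5 m^2
V = alpha_true * A ** gamma_true * 10 ** rng.normal(0.0, 0.1, size=1000)

# Log-transform-based linear least squares:
#   log10 V = log10 alpha + gamma * log10 A
gamma_fit, log_alpha_fit = np.polyfit(np.log10(A), np.log10(V), 1)

# Summing the per-landslide volumes gives the regional total the study reports.
total_volume = float(V.sum())
```

The study's preferred alternative, nonlinear least squares on the untransformed data, weights large landslides more heavily than this log-space fit does, which is one reason the two methods can disagree.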
NASA Astrophysics Data System (ADS)
Urrutia, J. D.; Bautista, L. A.; Baccay, E. B.
2014-04-01
The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons, affected families, and total cost of damage. To quantify the direct damage from earthquakes to human beings and properties given the magnitude, intensity, depth of focus, location of epicentre and time duration, regression models were constructed. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines in the inclusive years 1968 to 2012. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology. Data on damages and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. The mathematical models made are as follows: This study will be of great value in emergency planning, initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
A nonconservative scheme for isentropic gas dynamics
Chen, Gui-Qiang; Liu, Jian-Guo
1994-05-01
In this paper, we construct a second-order nonconservative scheme for the system of isentropic gas dynamics to capture the physical invariant regions for preventing negative density, to treat the vacuum singularity, and to keep the local entropy from dramatically increasing near shock waves. The main difference in the construction of the scheme discussed here is that we use piecewise linear functions to approximate the Riemann invariants w and z instead of the physical variables ρ and m. Our scheme is a natural extension of the schemes for scalar conservation laws and can be implemented numerically with ease because the system is diagonalized in this coordinate system. Another advantage of using Riemann invariants is that the Hessian matrix of any weak entropy has no singularity in the Riemann invariant plane w-z, whereas the Hessian matrices of the weak entropies have singularities at the vacuum points in the physical plane ρ-m. We prove that this scheme converges to an entropy solution for the Cauchy problem with L^∞ initial data. By convergence here we mean that there is a subsequence converging to a generalized solution satisfying the entropy condition. As long as the entropy solution is unique, the whole sequence converges to a physical solution. This shows that this kind of scheme is quite reliable from a theoretical point of view. In addition to being interested in the scheme itself, we wish to provide an approach to rigorously analyze nonconservative finite difference schemes.
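The change of variables at the heart of the scheme, from the physical variables (ρ, m) to the Riemann invariants (w, z), can be sketched for a γ-law gas with pressure p = κρ^γ; the constants γ and κ below are illustrative:

```python
import numpy as np

GAMMA, KAPPA = 1.4, 1.0   # illustrative gas constants, p = KAPPA * rho**GAMMA

def sound_speed(rho):
    """c = sqrt(p'(rho)) for p = KAPPA * rho**GAMMA."""
    return np.sqrt(GAMMA * KAPPA) * rho ** ((GAMMA - 1) / 2)

def to_riemann(rho, m):
    """(rho, m) -> Riemann invariants (w, z) for isentropic gas dynamics:
    w = u + 2c/(gamma-1), z = u - 2c/(gamma-1), with u = m/rho."""
    u = m / rho
    c = sound_speed(rho)
    return u + 2 * c / (GAMMA - 1), u - 2 * c / (GAMMA - 1)

def from_riemann(w, z):
    """Invert the map: w + z fixes the velocity, w - z fixes the sound
    speed, and the sound speed determines the density."""
    u = (w + z) / 2
    c = (GAMMA - 1) * (w - z) / 4
    rho = (c / np.sqrt(GAMMA * KAPPA)) ** (2 / (GAMMA - 1))
    return rho, rho * u
```

Because the system is diagonal in (w, z), a scheme can update piecewise-linear reconstructions of w and z directly, then map back to (ρ, m); vacuum corresponds to w = z, where the inverse map gives ρ = 0.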
Criticality and universality in a generalized earthquake model
Boulter, C.J.; Miller, G.
2005-01-01
We propose that an appropriate prototype for modeling self-organized criticality in dissipative systems is a generalized version of the two-variable cellular automata model introduced by Hergarten and Neugebauer [Phys. Rev. E 61, 2382 (2000)]. We show that the model predicts exponents for the event size distribution which are consistent with physically observed results for dissipative phenomena such as earthquakes. In addition we provide evidence that the model is critical based on both scaling analyses and direct observation of the distribution and behavior of the two variables in the interior of the lattice. We further argue that for reasonably large lattices the results are universal for all dissipative choices of the model parameters.
Newmark displacement model for landslides induced by the 2013 Ms 7.0 Lushan earthquake, China
NASA Astrophysics Data System (ADS)
Yuan, Renmao; Deng, Qinghai; Cunningham, Dickson; Han, Zhujun; Zhang, Dongli; Zhang, Bingliang
2016-01-01
Predicting approximate earthquake-induced landslide displacements is helpful for assessing earthquake hazards and designing slopes to withstand future earthquake shaking. In this work, the basic methodology outlined by Jibson (1993) is applied to derive the Newmark displacement of landslides based on strong ground-motion recordings during the 2013 Lushan Ms 7.0 earthquake. By analyzing the relationships between Arias intensity, Newmark displacement, and critical acceleration for the Lushan earthquake, the Jibson93 formula and its modified versions are shown to be applicable to the Lushan earthquake dataset. Different empirical equations with new fitting coefficients for estimating Newmark displacement are then developed for comparative analysis. The results indicate that a modified model has a better goodness of fit and a smaller estimation error than the original Jibson93 formula, suggesting that the modified model is more suitable for the Lushan earthquake dataset. The analysis also suggests that a global equation is not ideally suited to directly estimate the Newmark displacements of landslides induced by one specific earthquake; rather, it is empirically better to perform a new multivariate regression analysis to derive new coefficients for the global equation using the dataset of the specific earthquake. The results presented in this paper can be applied to a future co-seismic landslide hazard assessment to inform reconstruction efforts in the area affected by the 2013 Lushan Ms 7.0 earthquake, and to future disaster prevention and mitigation.
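The quantity the regressions above predict, the Newmark displacement, comes from a rigid sliding-block integration of a strong-motion record against a critical acceleration. A minimal one-way sliding sketch follows; the acceleration pulse is synthetic, not a Lushan record:

```python
import numpy as np

def newmark_displacement(acc, dt, ac):
    """Rigid-block Newmark integration (one-way sliding).  The block
    accelerates while ground acceleration exceeds the critical value ac,
    decelerates below it, and stops when its sliding velocity reaches zero."""
    v = 0.0   # relative sliding velocity (m/s)
    d = 0.0   # cumulative downslope displacement (m)
    for a in acc:
        if v > 0.0 or a > ac:
            v += (a - ac) * dt
            v = max(v, 0.0)      # sliding never reverses in this sketch
            d += v * dt
    return d

# Synthetic 1 Hz acceleration pulse, 3 m/s^2 peak (illustrative only)
dt = 0.01
t = np.arange(0.0, 2.0, dt)
acc = 3.0 * np.sin(2 * np.pi * 1.0 * t)
d = newmark_displacement(acc, dt, ac=1.0)
```

As expected physically, lowering the critical acceleration increases the computed displacement, and a critical acceleration above the record's peak yields zero displacement.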
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.
2015-12-01
Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of the earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined to a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration, rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity.
Numerical modeling of glacial earthquakes induced by iceberg capsize
NASA Astrophysics Data System (ADS)
Sergeant, A.; Yastrebov, V.; Castelnau, O.; Mangeney, A.; Stutzmann, E.; Montagner, J. P.; Burton, J. C.
2015-12-01
Glacial earthquakes are a class of seismic events of magnitude up to 5, occurring primarily in Greenland at the margins of large marine-terminated glaciers with near-grounded termini. They are caused by the calving of cubic-kilometer-scale unstable icebergs, which penetrate the full glacier thickness and, driven by buoyancy forces, capsize against the calving front. These phenomena produce seismic energy, including surface waves with dominant energy at periods of 10-150 s, whose seismogenic source is compatible with the contact force exerted on the terminus by the iceberg while it capsizes. A reverse motion and posterior rebound of the terminus have also been measured and associated with the fluctuation of this contact force. Using a finite element model of the iceberg and glacier terminus coupled with a simplified fluid-structure interaction model, we simulate the calving and capsize of icebergs. Contact and frictional forces are measured on the terminus and compared with laboratory experiments. We also study the influence of various factors, such as iceberg geometry, calving style, and terminus interface. Extended to field environments, the simulation results are compared with forces obtained by seismic waveform inversion of recorded glacial earthquakes.
Short-term forecasting of Taiwanese earthquakes using a universal model of fusion-fission processes.
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M A; Johnson, Neil F
2014-01-10
Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow.
The Global Earthquake Model and Disaster Risk Reduction
NASA Astrophysics Data System (ADS)
Smolka, A. J.
2015-12-01
Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito/Ecuador. In agreement with GEM's collaborative approach, all
A Hidden Markov Approach to Modeling Interevent Earthquake Times
NASA Astrophysics Data System (ADS)
Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.
2003-12-01
A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and that there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k × k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k = 2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady-state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event followed the first exponential distribution, and after the other three fourths it followed the second. Three- and four-state models were also fit; the data were inconsistent with a three-state model but were well fit by a four-state model.
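The two-state process described above can be sketched with a short simulation. The transition probabilities, event count and seed below are illustrative assumptions (only the ~5-day and ~95-day means echo the fitted values); they are chosen so that the chain spends roughly one fourth of its time in the fast state, matching the steady-state split in the abstract.

```python
import random

def simulate_hmm_interevent_times(n, means, trans, state0=0, seed=42):
    """Simulate interevent times from a two-state hidden Markov model
    with exponential emissions. `means` are the state-specific mean
    waiting times; `trans[i][0]` is the probability of moving to state 0."""
    rng = random.Random(seed)
    state = state0
    times = []
    for _ in range(n):
        # exponential waiting time with the current state's mean
        times.append(rng.expovariate(1.0 / means[state]))
        # move to the next hidden state according to the Markov chain
        state = 0 if rng.random() < trans[state][0] else 1
    return times

# Illustrative transition matrix with stationary distribution (1/4, 3/4),
# and the ~5-day / ~95-day means reported in the abstract.
times = simulate_hmm_interevent_times(
    n=5000, means=[5.0, 95.0],
    trans=[[0.7, 0.3], [0.1, 0.9]])
mean_time = sum(times) / len(times)
```

With stationary weights (1/4, 3/4) the long-run mean waiting time is 0.25 × 5 + 0.75 × 95 = 72.5 days, which the simulated mean should approach.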
Magnetic moment nonconservation in magnetohydrodynamic turbulence models.
Dalena, S; Greco, A; Rappazzo, A F; Mace, R L; Matthaeus, W H
2012-07-01
The fundamental assumptions of the adiabatic theory do not apply in the presence of sharp field gradients or in the presence of well-developed magnetohydrodynamic turbulence. For this reason, in such conditions the magnetic moment μ is no longer expected to be constant. This can influence particle acceleration and have considerable implications in many astrophysical problems. Starting with the resonant interaction between ions and a single parallel propagating electromagnetic wave, we derive expressions for the magnetic moment trapping width Δμ (defined as the half peak-to-peak difference in the particle magnetic moments) and the bounce frequency ω_b. We perform test-particle simulations to investigate magnetic moment behavior when resonance overlapping occurs and during the interaction of a ring-beam particle distribution with a broadband slab spectrum. We find that the changes of magnetic moment and changes of pitch angle are related when the level of magnetic fluctuations is low, δB/B_0 = 10^-3 to 10^-2, where B_0 is the constant and uniform background magnetic field. Stochasticity arises for intermediate fluctuation values and its effect on pitch angle is the isotropization of the distribution function f(α). This is a transient regime during which the magnetic moment distribution f(μ) exhibits a characteristic one-sided long tail and starts to be influenced by the onset of spatial parallel diffusion, i.e., the variance ⟨(Δz)^2⟩ grows linearly in time as in normal diffusion. With strong fluctuations f(α) becomes completely isotropic, spatial diffusion sets in, and the f(μ) behavior is closely related to the sampling of the varying magnetic field associated with that spatial diffusion.
Parity nonconservation in radioactive atoms: An experimental perspective
Vieira, D.
1994-11-01
The measurement of parity nonconservation (PNC) in atoms constitutes an important test of electroweak interactions in nuclei. Great progress has been made over the last 20 years in performing these measurements with ever increasing accuracies. To date the experimental accuracies have reached a level of 1 to 2%. In all cases, except for cesium, the theoretical atomic structure uncertainties now limit the comparison of these measurements to the predictions of the standard model. New measurements involving the ratio of Stark interference transition rates for a series of Cs or Fr radioisotopes are foreseen as a way of eliminating these atomic structure uncertainties. The use of magneto-optical traps to collect and concentrate the much smaller number of radioactive atoms that are produced is considered to be one of the key steps in realizing these measurements. Plans for how these measurements will be done and progress made to date are outlined.
Phase response curves for models of earthquake fault dynamics
NASA Astrophysics Data System (ADS)
Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen
2016-06-01
We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and amplitude and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
Regional Earthquake Likelihood Models: A realm on shaky grounds?
NASA Astrophysics Data System (ADS)
Kossobokov, V.
2005-12-01
Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who rush to apply the fuzzy language of highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, cannot objectively quantify the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models against the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines forecast precision nor provides a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for 'tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations. As a result, since the date of publication in Nature the United States Geological Survey website delivers to the public, emergency
NASA Astrophysics Data System (ADS)
Daniell, James
2010-05-01
This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure has been developed to provide a framework for optimisation of a Global Earthquake Modelling process through: 1) Overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost and technology); 2) Preliminary research, acquisition and familiarisation with all available ELE software packages; 3) Assessment of these 30+ software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4) Loss analysis for a deterministic earthquake (Mw7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment), a capacity spectrum based method HAZUS (HAZards United States) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach) software which was adapted for use in order to compare the different processes needed for the production of damage, economic and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data. Keywords: OPAL, displacement-based, DBELA, earthquake loss estimation, earthquake loss assessment, open source, HAZUS
Likelihood- and residual-based evaluation of medium-term earthquake forecast models for California
NASA Astrophysics Data System (ADS)
Schneider, Max; Clements, Robert; Rhoades, David; Schorlemmer, Danijel
2014-09-01
Seven competing models for forecasting medium-term earthquake rates in California are quantitatively evaluated using the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP). The model class consists of contrasting versions of the Every Earthquake a Precursor According to Size (EEPAS) and Proximity to Past Earthquakes (PPE) modelling approaches. Models are ranked by their performance on likelihood-based tests, which measure the consistency between a model forecast and observed earthquakes. To directly compare one model against another, we run a classical paired t-test and its non-parametric alternative on an information gain score based on the forecasts. These test scores are complemented by several residual-based methods, which offer detailed spatial information. The experiment period covers 2009 June-2012 September, when California experienced 23 earthquakes above the magnitude threshold. Though all models fail to capture seismicity during an earthquake sequence, spatio-temporal differences between models also emerge. The overall best-performing model has strong time- and magnitude-dependence, weights all earthquakes equally as medium-term precursors of larger events and has a full set of fitted parameters. Models with this time- and magnitude-dependence offer a statistically significant advantage over simpler baseline models. In addition, models that down-weight aftershocks when forecasting larger events have a desirable feature in that they do not overpredict following an observed earthquake sequence. This tendency towards overprediction differs between the simpler model, which is based on fewer parameters, and more complex models that include more parameters.
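The paired t-test on per-event information gain described above can be sketched as follows, assuming simple Poisson-rate forecasts evaluated at the observed earthquakes; the rate values and helper names are hypothetical, not CSEP code.

```python
import math

def information_gain(rates_a, rates_b):
    """Per-event information gain of model A over model B: the
    log-likelihood ratio of the two forecast rates at each observed
    earthquake (simplified Poisson view, illustrative only)."""
    return [math.log(a / b) for a, b in zip(rates_a, rates_b)]

def paired_t_statistic(diffs):
    """Classical paired t-statistic on the per-event gains:
    mean difference over its standard error."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical forecast rates of two models at 8 observed events.
rates_a = [0.9, 1.2, 0.8, 1.5, 1.1, 0.7, 1.3, 1.0]
rates_b = [0.5, 0.9, 0.7, 1.0, 0.6, 0.5, 0.9, 0.8]
gains = information_gain(rates_a, rates_b)
t_stat = paired_t_statistic(gains)  # positive when model A outperforms B
```

A large positive t-statistic (compared against the t distribution with n-1 degrees of freedom) indicates a statistically significant information gain of model A over model B.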
Modeling subduction megathrust earthquakes: Insights from a visco-elasto-plastic analog model
NASA Astrophysics Data System (ADS)
Dominguez, Stéphane; Malavieille, Jacques; Mazzotti, Stéphane; Martin, Nicolas; Caniven, Yannick; Cattin, Rodolphe; Soliva, Roger; Peyret, Michel; Lallemand, Serge
2015-04-01
As illustrated recently by the 2004 Sumatra-Andaman and the 2011 Tohoku earthquakes, subduction megathrust earthquakes generate heavy economic and human losses. Better constraining how such destructive seismic events nucleate and generate crustal deformation is a major societal issue, but it is also a difficult scientific challenge. Indeed, several limiting factors must first be overcome, related to the difficulty of analyzing deformation undersea, of accessing the deep sources of earthquakes, and of integrating the characteristic time scales of seismic processes. With this aim, we have developed an experimental approach to complement the numerical modeling techniques classically used to analyze available geological and geophysical observations on subduction earthquakes. The objective was to design a kinematically and mechanically first-order scaled analogue model of a subduction zone capable of reproducing not only megathrust earthquakes but also realistic seismic-cycle deformation phases. The model rheology is based on multi-layered visco-elasto-plastic materials to take into account the mechanical behavior of the overriding lithospheric plate. The elastic deformation of the subducting oceanic plate is also simulated. The seismogenic zone is characterized by a frictional plate interface whose mechanical properties can be adjusted to modify seismic coupling. Preliminary results show that this subduction model succeeds in reproducing the main deformation phases associated with the seismic cycle (interseismic elastic loading, coseismic rupture and post-seismic relaxation). By studying model kinematics and mechanical behavior, we expect to improve our understanding of seismic deformation processes and better constrain the role of physical parameters (fault friction, rheology, ...) as well as boundary conditions (loading rate, ...) on the seismic cycle and megathrust earthquake dynamics. We expect that results of this project will lead to significant improvement on interpretation of
Cooling magma model for deep volcanic long-period earthquakes
NASA Astrophysics Data System (ADS)
Aso, Naofumi; Tsai, Victor C.
2014-11-01
Deep long-period events (DLP events) or deep low-frequency earthquakes (deep LFEs) are deep earthquakes that radiate low-frequency seismic waves. While tectonic deep LFEs on plate boundaries are thought to be slip events, there have only been a limited number of studies on the physical mechanism of volcanic DLP events around the Moho (crust-mantle boundary) beneath volcanoes. One reasonable mechanism capable of producing their initial fractures is the effect of thermal stresses. Since ascending magma diapirs tend to stagnate near the Moho, where the vertical gradient of density is high, we suggest that cooling magma may play an important role in volcanic DLP event occurrence. Assuming an initial thermal perturbation of 400°C within a tabular magma of half width 41 m or a cylindrical magma of 74 m radius, thermal strain rates within the intruded magma are higher than tectonic strain rates of ~10^-14 s^-1 and produce a total strain of 2 × 10^-4. Shear brittle fractures generated by the thermal strains can produce a compensated linear vector dipole mechanism as observed and potentially also explain the harmonic seismic waveforms from an excited resonance. In our model, we predict a correlation between the particular shape of the cluster and the orientation of focal mechanisms, which is partly supported by observations of Aso and Ide (2014). To assess the generality of our cooling magma model as a cause for volcanic DLP events, additional work on relocations and focal mechanisms is essential and would be important to understanding the physical processes causing volcanic DLP events.
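The strain-rate claim can be checked with back-of-the-envelope arithmetic: the cooling time of the tabular intrusion is set by thermal diffusion across its half width, and the mean thermal strain rate is the total strain divided by that time. The thermal diffusivity used here is a typical textbook value for rock and is an assumption, not a number taken from the paper.

```python
# Order-of-magnitude check of the cooling-magma numbers quoted above.
KAPPA = 1.0e-6        # thermal diffusivity of rock, m^2/s (assumed value)
HALF_WIDTH = 41.0     # tabular magma half width, m (from the abstract)
TOTAL_STRAIN = 2e-4   # total thermal strain (from the abstract)
SECONDS_PER_YEAR = 3.15e7

cooling_time = HALF_WIDTH ** 2 / KAPPA      # diffusive timescale, s (~50 yr)
strain_rate = TOTAL_STRAIN / cooling_time   # mean thermal strain rate, 1/s
```

The result, roughly 10^-13 s^-1, sits comfortably above the tectonic rate of ~10^-14 s^-1, consistent with the abstract's argument.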
Modeling earthquake rate changes in Oklahoma and Arkansas: possible signatures of induced seismicity
Llenos, Andrea L.; Michael, Andrew J.
2013-01-01
The rate of ML≥3 earthquakes in the central and eastern United States increased beginning in 2009, particularly in Oklahoma and central Arkansas, where fluid injection has occurred. We find evidence that suggests these rate increases are man‐made by examining the rate changes in a catalog of ML≥3 earthquakes in Oklahoma, which had a low background seismicity rate before 2009, as well as rate changes in a catalog of ML≥2.2 earthquakes in central Arkansas, which had a history of earthquake swarms prior to the start of injection in 2009. In both cases, stochastic epidemic‐type aftershock sequence models and statistical tests demonstrate that the earthquake rate change is statistically significant, and both the background rate of independent earthquakes and the aftershock productivity must increase in 2009 to explain the observed increase in seismicity. This suggests that a significant change in the underlying triggering process occurred. Both parameters vary, even when comparing natural to potentially induced swarms in Arkansas, which suggests that changes in both the background rate and the aftershock productivity may provide a way to distinguish man‐made from natural earthquake rate changes. In Arkansas we also compare earthquake and injection well locations, finding that earthquakes within 6 km of an active injection well tend to occur closer together than those that occur before, after, or far from active injection. Thus, like a change in productivity, a change in interevent distance distribution may also be an indicator of induced seismicity.
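A minimal sketch of the temporal ETAS (epidemic-type aftershock sequence) conditional intensity used in such analyses; the parameter values and event list below are illustrative assumptions, not the fitted Oklahoma or Arkansas values.

```python
import math

def etas_rate(t, events, mu, K, alpha, c, p, m0):
    """Conditional intensity of a temporal ETAS model:
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) * (t - t_i + c)**(-p),
    summing over past events (t_i, m_i). mu is the background rate of
    independent earthquakes; K and alpha control aftershock productivity."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

# Hypothetical catalog: (time in days, magnitude).
past = [(0.0, 4.5), (1.0, 3.2), (2.5, 3.8)]
params = dict(mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=3.0)

quiet = etas_rate(10.0, [], **params)   # no past events: background only
busy = etas_rate(2.6, past, **params)   # just after a swarm: elevated rate
```

In studies like the one above, a rate increase that cannot be fit without raising both mu (background rate) and K (productivity) is the statistical signature of a changed triggering process.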
Earthquake response analysis of RC bridges using simplified modeling approaches
NASA Astrophysics Data System (ADS)
Lee, Do Hyung; Kim, Dookie; Park, Taehyo
2009-07-01
In this paper, simplified modeling approaches describing the hysteretic behavior of reinforced concrete bridge piers are proposed. For this purpose, flexure-axial and shear-axial interaction models are developed and implemented into a nonlinear finite element analysis program. Comparative verifications for reinforced concrete columns prove that the analytical predictions obtained with the new formulations show good correlation with experimental results under various levels of axial forces and section types. In addition, analytical correlation studies for the inelastic earthquake response of reinforced concrete bridge structures are also carried out using the simplified modeling approaches. Relatively good agreement is observed in the results between the current modeling approach and the elaborated fiber models. It is thus encouraging that the present developments and approaches are capable of identifying the contribution of deformation mechanisms correctly. Subsequently, the present developments can be used as a simple yet effective tool for the deformation capacity evaluation of reinforced concrete columns in general and reinforced concrete bridge piers in particular.
NASA Astrophysics Data System (ADS)
Funning, G. J.; Ferreira, A. M.; Parsons, B. E.
2008-12-01
In the 15 years since the first InSAR study of the 1992 Landers earthquake, the first event to be studied using InSAR, over 50 events have been studied wholly or jointly using InSAR. This constitutes a rich archive of published studies that can be mined for information on earthquake phenomenology. Empirical earthquake scaling relationships, as can be inferred from estimates of fault dimensions, slip and moment for multiple earthquakes, are extensively used in seismic hazard forecasting, and also constitute a means of placing constraints on the bulk mechanical behaviour of the seismogenic upper crust. As a source of such data, studies that utilise information from InSAR have an advantage over seismic methods in that in many cases, a key parameter, the fault length, can be measured directly from the observations. In addition, in cases of good interferogram coherence, the high spatial density of surface deformation observations that InSAR affords can place tight constraints on fault width and other important parameters. We present here a preliminary survey of earthquake scaling relationships as supported by the existing archive of InSAR earthquake studies. We find that for events with Mw > 6, the data support moment scaling with the square of fault length, in keeping with the studies of Scholz and others, and imply proportionality between fault average slip and fault length. There are currently too few datapoints for great earthquakes (Mw > 8) to assess any proposed change in scaling for such events. Scatterplots of average slip versus fault length show two broad fields -- an area of high slip-to-length ratios (> 2 × 10^-5) which are predominantly associated with faults with low long-term slip rates, predominantly from intraplate settings, and an area of lower slip-to-length ratios (< 2 × 10^-5) which typically are larger events from faults with higher long-term slip rates (e.g. the North Anatolian and Kunlun faults, and the Peru-Chile subduction zone). In addition
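The inferred proportionality between average slip and fault length implies the quoted moment-length-squared scaling once the rupture width saturates, since M0 = mu·A·D with A = L·W. A sketch with assumed values (the shear modulus and fixed down-dip width are typical crustal assumptions, not numbers from the survey; the slip-to-length ratio matches the survey's field boundary):

```python
MU = 3.0e10            # crustal shear modulus, Pa (typical assumed value)
WIDTH = 15.0e3         # saturated down-dip fault width, m (assumed fixed)
SLIP_TO_LENGTH = 2e-5  # slip-to-length ratio from the survey above

def seismic_moment(length_m):
    """M0 = mu * A * D, with A = L*W and average slip D = ratio * L,
    so M0 grows as L^2 once the rupture width has saturated."""
    avg_slip = SLIP_TO_LENGTH * length_m
    return MU * length_m * WIDTH * avg_slip  # N*m

m0_50 = seismic_moment(50e3)    # 50 km long rupture
m0_100 = seismic_moment(100e3)  # doubling L quadruples M0
```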
Stein, Ross S.
2008-01-01
The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M = 6.65 (A = 537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from that used in Working Group (2003).
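For illustration, the two equally weighted relations can be written as simple functions of rupture area. The coefficients below are as commonly quoted in the literature for Ellsworth-B and the Hanks-Bakun bilinear relation; treat them as assumptions rather than the Working Group's exact values.

```python
import math

def ellsworth_b(area_km2):
    """Ellsworth-B magnitude-area relation: Mw = 4.2 + log10(A)."""
    return 4.2 + math.log10(area_km2)

def hanks_bakun(area_km2):
    """Hanks & Bakun bilinear relation with a slope change at
    A = 537 km^2 (coefficients as commonly quoted; assumed here)."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

mw_eb = ellsworth_b(1000.0)   # 7.2 for a 1000 km^2 rupture
mw_hb = hanks_bakun(1000.0)   # ~7.07: steeper branch above the break
```

For large ruptures the two relations diverge, which is why the Working Group averages them with equal weights rather than committing to either.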
Earthquake Response Modeling for a Parked and Operating Megawatt-Scale Wind Turbine
Prowell, I.; Elgamal, A.; Romanowitz, H.; Duggan, J. E.; Jonkman, J.
2010-10-01
Demand parameters for turbines, such as tower moment demand, are primarily driven by wind excitation and dynamics associated with operation. For that purpose, computational simulation platforms have been developed, such as FAST, maintained by the National Renewable Energy Laboratory (NREL). For seismically active regions, building codes also require the consideration of earthquake loading. Historically, it has been common to use simple building code approaches to estimate the structural demand from earthquake shaking, as an independent loading scenario. Currently, International Electrotechnical Commission (IEC) design requirements include the consideration of earthquake shaking while the turbine is operating. Numerical and analytical tools used to consider earthquake loads for buildings and other static civil structures are not well suited for modeling simultaneous wind and earthquake excitation in conjunction with operational dynamics. Through the addition of seismic loading capabilities to FAST, it is possible to simulate earthquake shaking in the time domain, which allows consideration of non-linear effects such as structural nonlinearities, aerodynamic hysteresis, control system influence, and transients. This paper presents a FAST model of a modern 900-kW wind turbine, which is calibrated based on field vibration measurements. With this calibrated model, both coupled and uncoupled simulations are conducted looking at the structural demand for the turbine tower. Response is compared under the conditions of normal operation and potential emergency shutdown due to earthquake-induced vibrations. The results highlight the availability of a numerical tool for conducting such studies, and provide insights into the combined wind-earthquake loading mechanism.
Acceleration modeling of moderate to large earthquakes based on realistic fault models
NASA Astrophysics Data System (ADS)
Arvidsson, R.; Toral, J.
2003-04-01
Strong motion is affected by distance to the earthquake, local crustal structure, focal mechanism, and azimuth to the source. However, the faulting process is also of importance: the development of rupture (i.e., directivity), the slip distribution on the fault, the extent of the fault, and the rupture velocity. We have modelled these parameters for earthquakes that occurred in three tectonic zones close to the Panama Canal. We included in the modeling directivity, distributed slip, discrete faulting, fault depth and expected focal mechanism. The distributed slip is based on fault models that we previously produced for other earthquakes in the region. Such previous examples show that maximum intensities in some cases coincide with areas of high slip on the fault. Our acceleration modeling also gives values similar to the few observations that have been made for moderate to small earthquakes in the range M = 5-6.2. The modeling indicates that events located in the Caribbean might cause strong motion in the lower frequency spectra, where high-frequency Rayleigh waves dominate.
Esmer, Oezcan
2006-11-29
This paper first evaluates the earthquake prediction method (1999) used by the US Geological Survey as the lead example and also reviews the recent models. Secondly, it points out the ongoing debate on the predictability of earthquake recurrences and lists the main claims of both sides. The traditional methods and the 'frequentist' approach used in determining earthquake probabilities cannot end the complaints that earthquakes are unpredictable. It is argued that the prevailing 'crisis' in seismic research corresponds to the Pre-Maxent Age of the current situation. The period of Kuhnian 'crisis' should give rise to a new paradigm based on the information-theoretic framework, including the inverse problem, Maxent and Bayesian methods. The paper aims to show that information-theoretic methods shall provide the required 'Methodica Firma' for earthquake prediction models.
Nonconservative current-induced forces: A physical interpretation.
Todorov, Tchavdar N; Dundas, Daniel; Paxton, Anthony T; Horsfield, Andrew P
2011-01-01
We give a physical interpretation of the recently demonstrated nonconservative nature of interatomic forces in current-carrying nanostructures. We start from the analytical expression for the curl of these forces, and evaluate it for a point defect in a current-carrying system. We obtain a general definition of the capacity of electrical current flow to exert a nonconservative force, and thus do net work around closed paths, by a formal noninvasive test procedure. Second, we show that the gain in atomic kinetic energy over time, generated by nonconservative current-induced forces, is equivalent to the uncompensated stimulated emission of directional phonons. This connection with electron-phonon interactions quantifies explicitly the intuitive notion that nonconservative forces work by angular momentum transfer.
NASA Astrophysics Data System (ADS)
Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.
2011-09-01
We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy of the earthquake coordinates. The model was constructed using the VELEST program, which calculates minimum one-dimensional velocity models from the travel times of seismic waves.
Finite-Source Modeling of the South Napa Earthquake
NASA Astrophysics Data System (ADS)
Dreger, D. S.; Huang, M. H.; Wooddell, K. E.; Taira, T.; Luna, B.
2014-12-01
On August 24, 2014, an Mw 6.0 earthquake struck south-southwest of the city of Napa, California. As part of the Berkeley Seismological Laboratory (BSL) Alarm Response, a seismic moment tensor solution and a preliminary finite-source model were estimated. The preliminary finite-source model used high-quality three-component strong-motion recordings, instrument-corrected and integrated to displacement, from 8 stations of the BSL BK network located between 30 and 200 km. The BSL focal mechanism (strike = 155, dip = 82, rake = -172) and a constant rise time and rupture velocity were assumed. The GIL7 plane-layered velocity model was used to compute Green's functions with a frequency-wavenumber integration approach. The preliminary model from these stations indicates the rupture was unilateral to the NNW and up-dip, with an average slip of 42 cm and a peak slip of 102 cm. The total scalar moment was found to be 1.15 × 10^25 dyne-cm, giving Mw 6.0. The strong directivity of the rupture likely explains the observed elevated local strong ground motions and the extensive damage to buildings in Napa and surrounding residential areas. In this study we will reevaluate the seismic moment tensor of the mainshock and larger aftershocks, and incorporate local strong-motion waveforms, GPS, and InSAR deformation data to better constrain the finite-source model. While the hypocenter and focal parameters used in the preliminary model are consistent with the mapped surface trace of the West Napa fault, the mapped surface slip lies approximately 2 km to the west. Furthermore, there is a pronounced change in strike of the mapped surface offsets at the northern end. We will investigate the location of the fault model and the fit to the joint data set, and examine the possibility of multi-segment fault models to account for these apparently inconsistent observations.
Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L
2007-02-09
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
NASA Astrophysics Data System (ADS)
Satake, K.; Fujii, Y.; Harada, T.; Namegaya, Y.
2012-04-01
The 11 March tsunami from the off Pacific coast of Tohoku earthquake (M 9.0) was recorded instrumentally on coastal and offshore gauges. The ocean bottom pressure (OBP) and GPS wave gauges within the source area, in particular, showed two-stage tsunami waveforms, i.e., a gradual increase of water level followed by an impulsive tsunami wave. The coastal run-up and inundation heights were also measured by many researchers, and a large peak appeared around Miyako in Iwate prefecture. Our previous tsunami waveform inversion (Fujii et al., 2011, Earth Planets and Space), which assumed a simultaneous rupture of subfaults, indicated that the largest slip (~48 m) occurred near the trench axis off Miyagi. However, the coastal tsunami heights computed from this model did not reproduce the distribution of the measured tsunami heights. Here we introduce a multiple time-window analysis assuming a constant rupture velocity and estimate the slip distribution both in space and time. We also use tsunami waveforms recorded at more gauges than in the previous study: in total, 11 OBP gauges, 10 GPS wave gauges, and 32 coastal tide or wave gauges. The new result indicates that the fault slip propagated from the epicenter and took about 3 minutes to reach the northern and southern ends of the source area. The large slip along the Japan trench axis is more extended than in the previous result, with a maximum slip of 37 m. The northern rupture includes the source area of the 1896 Sanriku earthquake, although very large slip occurred just to the south of the 1896 source. This offshore slip is responsible for the second-stage tsunami, i.e., the large impulsive peak. Large slip on the plate interface (< 30 m) around the epicenter is responsible for the first-stage tsunami, i.e., the initial water rise and the large tsunami inundation in the Sendai plain. This slip on the plate interface is very similar to a model of the 869 Jogan earthquake, which we previously proposed, and produces large (> a
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ???6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ???6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ???6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
Numerical modelling of iceberg calving force responsible for glacial earthquakes
NASA Astrophysics Data System (ADS)
Sergeant, Amandine; Yastrebov, Vladislav; Castelnau, Olivier; Mangeney, Anne; Stutzmann, Eleonore; Montagner, Jean-Paul
2016-04-01
Glacial earthquakes are a class of seismic events of magnitude up to 5, occurring primarily in Greenland, at the margins of large marine-terminated glaciers with near-grounded termini. They are caused by the calving of cubic-kilometer-scale unstable icebergs that penetrate the full glacier thickness and, driven by buoyancy forces, capsize against the calving front. These phenomena produce seismic energy, including surface waves with dominant energy at periods between 10 and 150 s, whose seismogenic source is compatible with the contact force exerted on the terminus by the iceberg while it capsizes. A reverse motion and posterior rebound of the terminus have also been measured and associated with the fluctuation of this contact force. Using a finite element model of the iceberg and glacier terminus coupled with a simplified fluid-structure interaction model, we simulate the calving and capsize of icebergs. Contact and frictional forces are measured on the terminus and compared with laboratory experiments. We also study the influence of geometric factors on the force history, amplitude, and duration at the laboratory and field scales. We present first insights into the force and the generated seismic waves, exploring different scenarios for iceberg capsizing.
A model of return intervals between earthquake events
NASA Astrophysics Data System (ADS)
Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger
2016-06-01
Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical of anomalous diffusion. However, the origin of this behavior is still under debate. Some studies attribute the scaling behavior to correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
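The two-ingredient model described above (power-law renewal mainshocks plus Omori-rate aftershocks) can be sketched as follows; all parameter values (mu, k, c, p, horizon) are illustrative choices, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_waits(n, mu=1.5, t_min=1.0):
    # Pareto-distributed waiting times between mainshocks: p(t) ~ t^-(1+mu)
    return t_min * (1.0 - rng.random(n)) ** (-1.0 / mu)

def omori_aftershocks(t_main, k=10.0, c=0.1, p=1.1, horizon=10.0):
    # Nonhomogeneous Poisson process with Omori rate n(t) = k / (c + t)^p,
    # generated by thinning proposals against the peak rate at t = 0
    rate_max = k / c ** p
    t, events = 0.0, []
    while t < horizon:
        t += rng.exponential(1.0 / rate_max)
        if t < horizon and rng.random() < (k / (c + t) ** p) / rate_max:
            events.append(t_main + t)
    return events

mains = np.cumsum(powerlaw_waits(200))
catalog = sorted(t for m in mains for t in [m] + omori_aftershocks(m))
```

A synthetic catalog built this way can then be fed to the same diffusion entropy and standard deviation analyses applied to the real data.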
Simulational Studies of a Two-Dimensional Burridge-Knopoff Model for Earthquakes
NASA Astrophysics Data System (ADS)
Ross, John Bernard
1993-01-01
A two-dimensional cellular automaton version of the Burridge-Knopoff (BK) model for earthquakes is studied. The model consists of a lattice of blocks connected by springs, subject to static friction and driven at a rate v by an externally applied force. A block ruptures provided that its total stress matches or exceeds static friction. The distance it moves is proportional to the total stress, a fraction alpha of which it releases to each of its neighbors, while 1 - q*alpha leaves the system, where q is the number of neighbors. The BK model with nearest-neighbor (q = 4) and long-range (q = 24) interactions is simulated for spatially uniform and random static friction on lattices with periodic, open, closed, and fixed boundary conditions. In the nearest-neighbor model, the system appears to have a spinodal critical point at v = v_{c} in all cases except for closed boundaries and uniform thresholds, where the system appears to be self-organized critical. The dynamics of the model is always periodic or quasiperiodic for non-closed boundaries and uniform thresholds. The stress is "quantized" in multiples of the loader force in this case. A mean-field theory is presented from which v_{c} and the dominant period of oscillation are derived; both agree well with the data. v_{c} varies inversely with the number of neighbors to which each block is attached and, as a result, goes to zero as the range of the springs goes to infinity. This is consistent with the behavior of a spinodal critical point as the range of interactions goes to infinity. The quasistatic limit of tectonic loading is thus recovered.
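The toppling rule described above can be sketched as a minimal cellular automaton; lattice size, the uniform threshold, and the loading rate are illustrative choices, and periodic boundaries stand in for the several boundary conditions studied:

```python
import numpy as np

rng = np.random.default_rng(1)
L, alpha, q = 32, 0.2, 4           # lattice size, transfer fraction, neighbors
threshold = 1.0                     # uniform static-friction threshold
stress = rng.random((L, L)) * threshold

def drive_and_relax(stress, v=1e-3):
    """One loading step: add v everywhere, then topple blocks whose stress
    reaches the threshold, passing alpha of the released stress to each of
    the q nearest neighbors (the fraction 1 - q*alpha is dissipated)."""
    stress += v
    size = 0
    while True:
        over = np.argwhere(stress >= threshold)
        if len(over) == 0:
            return size
        for i, j in over:
            s, stress[i, j] = stress[i, j], 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                stress[(i + di) % L, (j + dj) % L] += alpha * s  # periodic BCs
            size += 1

sizes = [drive_and_relax(stress) for _ in range(5000)]
```

With alpha = 0.2 the rule is nonconservative (q*alpha = 0.8 < 1), so each avalanche dissipates stress, as in the nearest-neighbor case above.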
Use of GPS and InSAR Technology and its Further Development in Earthquake Modeling
NASA Technical Reports Server (NTRS)
Donnellan, A.; Lyzenga, G.; Argus, D.; Peltzer, G.; Parker, J.; Webb, F.; Heflin, M.; Zumberge, J.
1999-01-01
Global Positioning System (GPS) data are useful for understanding both interseismic and postseismic deformation. Models of GPS data suggest that the lower crust, lateral heterogeneity, and fault slip, all provide a role in the earthquake cycle.
NASA Astrophysics Data System (ADS)
Zhang, Zhe; Xun, Zhi-Peng; Wu, Ling; Chen, Yi-Li; Xia, Hui; Hao, Da-Peng; Tang, Gang
2016-06-01
In order to study the effects of the microscopic details of fractal substrates on the scaling behavior of the growth model, a generalized linear fractal Langevin-type equation, ∂h/∂t = (-1)^{m+1} ν ∇^{m z_rw} h (where z_rw is the dynamic exponent of a random walk on the substrate), driven by nonconserved and conserved noise is proposed and investigated theoretically by employing scaling analysis. The corresponding dynamic scaling exponents are obtained.
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions
So, Emily; Spence, Robin
2013-01-01
Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid casualty estimation after an event for humanitarian response. Both events resulted in surprisingly high numbers of deaths, injuries, and people made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level, and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
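The semi-empirical chain described above (building stock by class, then damage rates, then fatality rates given damage) can be illustrated with a toy calculation; every number here is invented for illustration and is not a CEQID value:

```python
# Toy illustration: building counts per class -> damage rates at a given
# intensity -> fatality rates given damage. All numbers are invented.
building_stock = {"adobe": 1000, "rc_frame": 500, "timber": 2000}
damage_rate    = {"adobe": 0.30, "rc_frame": 0.05, "timber": 0.02}  # P(collapse | intensity)
fatality_rate  = {"adobe": 0.10, "rc_frame": 0.08, "timber": 0.01}  # deaths per occupant, given collapse
occupants_per_building = 4

expected_deaths = sum(
    building_stock[c] * damage_rate[c] * fatality_rate[c] * occupants_per_building
    for c in building_stock
)
print(round(expected_deaths))  # → 130 (toy numbers)
```

The point of the structure is that casualty totals are dominated by the vulnerable building classes (here, adobe), which is why class-resolved damage rates matter.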
NASA Astrophysics Data System (ADS)
Jiang, Chang-Sheng; Wu, Zhong-Liang
2005-05-01
Long-term seismic activity prior to the December 26, 2004, off the west coast of northern Sumatra, Indonesia, MW = 9.0 earthquake was investigated using the Harvard CMT catalogue. It is observed that before this great earthquake there was an accelerating moment release (AMR) process with a temporal scale of a quarter century and a spatial scale of 1500 km. Within this spatial range, the MW = 9.0 event falls onto the piece-wise power-law-like frequency-magnitude distribution. Therefore, from the perspective of the critical-point-like model of earthquake preparation, the failure to forecast/predict the approach and/or the size of this earthquake is not due to a physically intrinsic unpredictability of earthquakes.
NASA Astrophysics Data System (ADS)
Furlong, Kevin P.; Govers, Rob; Herman, Matthew
2016-04-01
Subduction zone megathrusts host the largest and deadliest earthquakes on the planet. Over the past decades (primarily since the 2004 Sumatra event) our ability to observe the build-up of slip deficit along these plate boundary zones has improved substantially with the development of relatively dense observing systems along major subduction zones. One, perhaps unexpected, result from these observations is a range of present-day behavior along the boundaries. Some regions show displacements (almost always observed on the upper plate along the boundary) consistent with elastic deformation driven by a fully locked plate interface, while other plate boundary segments (oftentimes along the same plate boundary system) show little or no plate-motion-directed displacement. This latter case is often interpreted as reflecting little to no coupling along the plate boundary interface. What is unclear is whether this spatial variation in apparent plate boundary interface behavior reflects true spatial differences in plate interface properties and mechanics, or rather temporal behavior of the plate boundary during the earthquake cycle. In our integrated observational and modeling analyses, we have come to the conclusion that much of what is seen as diverse behavior along subduction margins represents different times in the earthquake cycle (relative to recurrence rate and material properties) rather than fundamental differences in subduction zone mechanics. Our model-constrained conceptual framework accounts for the following generalized observations: 1. Coseismic displacements are enhanced in the "near-trench" region. 2. Post-seismic relaxation varies with time and position landward, i.e., there is a propagation of the transition point from "post" (i.e., trenchward) to "inter" (i.e., landward) seismic displacement behavior. 3. Displacements immediately post-EQ (interpreted to be associated with "after slip" on the megathrust?). 4. The post-EQ transient response can
Earthquake sequencing: chimera states with Kuramoto model dynamics on directed graphs
NASA Astrophysics Data System (ADS)
Vasudevan, K.; Cavers, M.; Ware, A.
2015-09-01
Earthquake sequencing studies allow us to investigate empirical relationships among the spatio-temporal parameters describing the complexity of earthquake properties. We have recently studied the relevance of Markov chain models for drawing information from global earthquake catalogues. In those studies, we considered directed graphs as graph-theoretic representations of the Markov chain model and analyzed their properties. Here, we look at earthquake sequencing itself as a directed graph. In general, earthquakes are occurrences resulting from significant stress interactions among faults; as a result, stress-field fluctuations evolve continuously. We propose that they are akin to the dynamics of the collective behavior of weakly coupled non-linear oscillators. Since mapping global stress-field fluctuations in real time at all scales is an impossible task, we consider an earthquake zone as a proxy for a collection of weakly coupled oscillators, the dynamics of which would be appropriate for the ubiquitous Kuramoto model. In the present work, we apply the Kuramoto model with phase lag to the non-linear dynamics on a directed graph of a sequence of earthquakes. For directed graphs with certain properties, the Kuramoto model yields synchronization, and the inclusion of non-local effects evokes chimera states, i.e., the co-existence of synchronous and asynchronous behavior of oscillators. In this paper, we show how we build the directed graphs derived from global seismicity data. Then, we present conditions under which chimera states can occur and, subsequently, point out the role of the Kuramoto model in understanding the evolution of synchronous and asynchronous regions. We surmise that the emergence of chimera states will motivate detailed investigation of the present and other mathematical models, to generate global chimera-state maps similar to global seismicity maps for earthquake forecasting studies.
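The Kuramoto dynamics with phase lag on a directed graph can be sketched as follows; the random adjacency matrix and all parameter values are illustrative, not derived from a seismicity catalogue:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, alpha, dt = 50, 2.0, 0.3, 0.01    # oscillators, coupling, phase lag, time step
omega = rng.normal(0.0, 0.1, N)         # natural frequencies
A = (rng.random((N, N)) < 0.2).astype(float)  # random directed adjacency matrix
np.fill_diagonal(A, 0.0)
theta = rng.uniform(0, 2 * np.pi, N)

def step(theta):
    # Kuramoto-Sakaguchi update on a directed graph:
    # dtheta_i/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i - alpha)
    diff = theta[None, :] - theta[:, None] - alpha   # diff[i, j] = theta_j - theta_i - alpha
    return theta + dt * (omega + (K / N) * (A * np.sin(diff)).sum(axis=1))

for _ in range(20000):
    theta = step(theta)

r = abs(np.exp(1j * theta).mean())  # order parameter: r = 1 means full synchrony
```

Chimera states would show up as subsets of nodes with high local order coexisting with drifting (asynchronous) subsets, which is what the local version of the order parameter r is used to detect.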
Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software (OPAL)
NASA Astrophysics Data System (ADS)
Daniell, J. E.
2011-07-01
This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure was created to provide a framework for optimisation of a Global Earthquake Modelling process through: 1. overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost, and technology); 2. preliminary research, acquisition, and familiarisation for available ELE software packages; 3. assessment of these software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4. loss analysis for a deterministic earthquake (Mw = 7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment, Crowley et al., 2006), a capacity spectrum based method HAZUS (HAZards United States, FEMA, USA, 2003) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach, Lindholm et al., 2007) software which was adapted for use in order to compare the different processes needed for the production of damage, economic, and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data.
Modeling of electromagnetic E-layer waves before earthquakes
NASA Astrophysics Data System (ADS)
Meister, Claudia-Veronika; Hoffmann, Dieter H. H.
2013-04-01
A dielectric model for electromagnetic (EM) waves in the Earth's E-layer is developed. It is assumed that these waves are driven by acoustic-type waves caused by earthquake precursors. The dynamics of the plasma system and of the EM waves is described using multi-component magnetohydrodynamic (MHD) theory. The acoustic waves are introduced as a neutral-gas wind. Momentum transfer between the charged particles in the MHD system occurs mainly via collisions with the neutral gas. From the MHD system, relations for the velocity fluctuations of the particles are found, which consist of products of the electric-field fluctuations with coefficients α that depend only on the plasma background parameters. A fast FORTRAN program is developed to calculate these coefficients (solution of 9x9 matrix equations). Models of the altitudinal scales of the background plasma parameters and of the fluctuations of the plasma parameters and the EM field are introduced. In addition, for the electric wave field, a method is obtained to calculate the altitudinal scale ? of the amplitude (based on the Poisson equation and the known coefficients α). Finally, a general dispersion relation is found in which α, ? and the altitudinal profile of ? appear as parameters (found beforehand in the numerical model). Thus, the dispersion relations of EM waves excited by acoustic-type waves during times of seismic activity may be studied numerically. An expression for the related temperature fluctuations is also derived, which depends on the dispersion of the excited EM waves, α, ?, and the background plasma parameters, so that heating processes in the atmosphere may be investigated.
NASA Astrophysics Data System (ADS)
Crowley, H.; Modica, A.
2009-04-01
Loss estimates have been shown in various studies to be highly sensitive to the methodology employed, the seismicity and ground-motion models, the vulnerability functions, and assumed replacement costs (e.g. Crowley et al., 2005; Molina and Lindholm, 2005; Grossi, 2000). It is clear that future loss models should explicitly account for these epistemic uncertainties. Indeed, a cause of frequent concern in the insurance and reinsurance industries is precisely the fact that for certain regions and perils, available commercial catastrophe models often yield significantly different loss estimates. Of equal relevance to many users is the fact that updates of the models sometimes lead to very significant changes in the losses compared to the previous version of the software. In order to model the epistemic uncertainties that are inherent in loss models, a number of different approaches for the hazard, vulnerability, exposure and loss components should be clearly and transparently applied, with the shortcomings and benefits of each method clearly exposed by the developers, such that the end-users can begin to compare the results and the uncertainty in these results from different models. This paper looks at an application of a logic-tree type methodology to model the epistemic uncertainty in the vulnerability component of a loss model for Tunisia. Unlike other countries which have been subjected to damaging earthquakes, there has not been a significant effort to undertake vulnerability studies for the building stock in Tunisia. Hence, when presented with the need to produce a loss model for a country like Tunisia, a number of different approaches can and should be applied to model the vulnerability. These include empirical procedures which utilise observed damage data, and mechanics-based methods where both the structural characteristics and response of the buildings are analytically modelled. Some preliminary applications of the methodology are presented and discussed
Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2
Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.
2008-01-01
This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to hereafter as ERM 2). By definition, Type-A faults are those with relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.
NASA Astrophysics Data System (ADS)
DeVries, Phoebe M. R.; Krastev, Plamen G.; Meade, Brendan J.
2016-07-01
Over the past 80 years, eight MW > 6.7 strike-slip earthquakes west of 40° longitude have ruptured the North Anatolian fault (NAF) from east to west. The series began with the 1939 Erzincan earthquake in eastern Turkey, and the most recent, the 1999 MW = 7.4 Izmit earthquake, extended the pattern of ruptures into the Sea of Marmara in western Turkey. The mean time between seismic events in this westward progression is 8.5 ± 11 years (67% confidence interval), much greater than the timescale of seismic wave propagation (seconds to minutes). The delayed triggering of these earthquakes may be explained by the propagation of earthquake-generated diffusive viscoelastic fronts within the upper mantle that slowly increase the Coulomb failure stress change (ΔCFS) at adjacent hypocenters. Here we develop three-dimensional stress transfer models with an elastic upper crust coupled to a viscoelastic Burgers-rheology mantle. Both the Maxwell (ηM = 4 × 10^18-1 × 10^19 Pa s) and Kelvin (ηK = 1 × 10^18-1 × 10^19 Pa s) viscosities are constrained by studies of geodetic observations before and after the 1999 Izmit earthquake. We combine this geodetically constrained rheological model with the observed sequence of large earthquakes since 1939 to calculate the time evolution of ΔCFS along the North Anatolian fault due to viscoelastic stress transfer. Apparent threshold values of mean ΔCFS at which the earthquakes in the eight-decade sequence occur are between ~0.02 and ~3.15 MPa and may exceed the magnitude of static ΔCFS values by as much as 177%. By 2023, we infer that the mean time-dependent stress change along the northern NAF strand in the Marmara Sea near Istanbul, which may have previously ruptured in 1766, may reach the mean apparent time-dependent stress thresholds of the previous NAF earthquakes.
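The relevance of the quoted Maxwell viscosities to the ~8.5-yr inter-event time can be checked with the characteristic Maxwell relaxation time τ = η_M/μ. The shear modulus μ = 30 GPa used here is an assumed typical upper-mantle value, not a number stated in the text:

```python
# Maxwell relaxation time tau = eta / mu for the quoted mantle viscosities.
# mu = 30 GPa is an assumed typical shear modulus (not given in the abstract).
YEAR = 365.25 * 24 * 3600.0  # seconds per year
mu = 30e9                     # Pa
for eta in (4e18, 1e19):      # Pa s, the quoted Maxwell viscosity range
    tau_years = eta / mu / YEAR
    print(f"eta = {eta:.0e} Pa s -> tau = {tau_years:.1f} yr")
```

Under this assumption the relaxation time falls between roughly 4 and 11 years, i.e., the same order as the mean inter-event time, which is consistent with viscoelastic stress transfer pacing the sequence.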
“SLIMPLECTIC” INTEGRATORS: VARIATIONAL INTEGRATORS FOR GENERAL NONCONSERVATIVE SYSTEMS
Tsang, David; Turner, Alec; Galley, Chad R.; Stein, Leo C.
2015-08-10
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
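As a much simpler illustration of the underlying idea (a fixed-step integrator that incorporates a dissipative, velocity-dependent force directly into the update), the sketch below integrates a damped harmonic oscillator with a Verlet-style scheme. This is a toy stand-in, not the slimplectic algorithm of the Letter or its public code:

```python
# Toy damped-oscillator integration: x'' = -k x - c x'.
# A Verlet-style scheme with the damping force handled semi-implicitly;
# illustrative only, NOT the slimplectic integrator itself.
def damped_verlet(x0, v0, k=1.0, c=0.1, h=0.01, steps=2000):
    """Integrate the damped oscillator for `steps` steps of size h."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - c * v
        x_new = x + h * v + 0.5 * h * h * a
        v_half = v + 0.5 * h * a          # half-step velocity
        a_new = -k * x_new - c * v_half   # force at the new position
        v = v_half + 0.5 * h * a_new
        x = x_new
    return x, v

x, v = damped_verlet(1.0, 0.0)
energy = 0.5 * v * v + 0.5 * x * x
print(energy)  # well below the initial energy of 0.5, as dissipation requires
```

Unlike this toy, the slimplectic construction derives the update from a discrete nonconservative action, which is what gives it the long-term fidelity in momenta and energy flow described above.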
Instability model for recurring large and great earthquakes in southern California
Stuart, W.D.
1985-01-01
The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.
An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling
Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.
2009-01-01
We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes; and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
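At its core, the exposure computation described above amounts to summing a co-registered population grid over discrete shaking-intensity bins. A hypothetical sketch (grid values invented for illustration, not EXPO-CAT data):

```python
# Hypothetical sketch of the exposure calculation: sum population over
# grid cells falling in each (rounded) intensity bin.
from collections import defaultdict

def exposure_by_intensity(cells):
    """cells: iterable of (mmi, population) pairs for co-registered grid cells."""
    totals = defaultdict(int)
    for mmi, pop in cells:
        totals[round(mmi)] += pop  # bin by nearest whole intensity unit
    return dict(totals)

# toy grid: (Modified Mercalli Intensity, people in cell)
grid = [(5.2, 1000), (6.7, 250), (7.1, 400), (8.4, 30)]
print(exposure_by_intensity(grid))  # {5: 1000, 7: 650, 8: 30}
```

Aggregating such per-intensity totals across historical events is what allows the loss-model calibration described in the abstract.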
Modeling the foreshock sequence prior to the 2011, MW9.0 Tohoku, Japan, earthquake
NASA Astrophysics Data System (ADS)
Marsan, D.; Enescu, B.
2012-06-01
The 2011 MW9.0 Tohoku earthquake, Japan, was preceded by a 2 day-long foreshock sequence, initiated by a MW7.3 earthquake. We analyze this foreshock sequence, with the aim of detecting possible aseismic deformation transients that could have driven its evolution. Continuous broad-band recordings at F-net stations are processed to identify as exhaustive a set of mJMA > 1.2 earthquakes as possible. We moreover directly quantify with these recordings the changes in detection level associated with changes in seismic or environmental noise. This earthquake data set is then modeled, to show that the whole sequence can be readily explained without the need to invoke aseismic transients. The observation of a 3-hour-long low-frequency noise increase, concurrent with an apparent migration of seismicity toward the epicenter of the impending MW9.0 mega-thrust earthquake, however, suggests that some premonitory slip could have played a role in loading the asperity whose failure initiated the MW9.0 shock. We thus propose that this aseismic slip, if it really existed, played only a minor role in triggering the foreshock sequence and displacing it southward, as compared to earthquake interaction mechanisms that allow earthquakes to trigger one another.
SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records
Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.
2013-01-01
This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
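A rigid-block (Newmark) analysis of the kind the program performs can be sketched in a few lines: relative sliding starts when ground acceleration exceeds the critical (yield) acceleration and stops when the relative velocity returns to zero. The following is a simplified one-way-sliding illustration with a synthetic pulse, not SLAMMER's implementation:

```python
# Minimal rigid-block (Newmark) sliding analysis with a synthetic pulse.
import math

def newmark_displacement(accel, dt, a_crit):
    """accel: ground acceleration series (m/s^2), dt: time step (s),
    a_crit: critical acceleration (m/s^2); returns sliding displacement (m)."""
    v, d = 0.0, 0.0
    for a in accel:
        # the block slides when ground accel exceeds the yield accel, and
        # keeps sliding (decelerating) until relative velocity reaches zero
        a_rel = (a - a_crit) if (a > a_crit or v > 0.0) else 0.0
        v = max(v + a_rel * dt, 0.0)  # one-way sliding: no reversal
        d += v * dt
    return d

dt = 0.001
pulse = [3.0 * math.sin(math.pi * i * dt) for i in range(1000)]  # 1 s half-sine
d = newmark_displacement(pulse, dt, a_crit=1.0)
print(round(d, 3))  # roughly 0.5 m of permanent displacement
```

The decoupled and fully coupled methods mentioned above additionally model the dynamic response of the sliding mass, which this rigid-block toy omits.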
Breit interaction and parity nonconservation in many-electron atoms
Dzuba, V. A.; Flambaum, V. V.; Safronova, M. S.
2006-02-15
We present accurate ab initio nonperturbative calculations of the Breit correction to the parity nonconserving (PNC) amplitudes of the 6s-7s and 6s-5d3/2 transitions in Cs, the 7s-8s and 7s-6d3/2 transitions in Fr, the 6s-5d3/2 transition in Ba+, the 7s-6d3/2 transition in Ra+, and the 6p1/2-6p3/2 transition in Tl. The results for the 6s-7s transition in Cs and the 7s-8s transition in Fr are in good agreement with other calculations. We demonstrate that higher-order many-body corrections to the Breit interaction are especially important for the s-d PNC amplitudes. We confirm good agreement of the PNC measurements for cesium and thallium with the standard model.
Modeling And Economics Of Extreme Subduction Earthquakes: Two Case Studies
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Emerson, D.; Perea, N.; Moulinec, C.
2008-05-01
The destructive effects of large-magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) have been shown in recent centuries. For example, on 7/04/1845 and 19/09/1985, two TSS earthquakes with Ms 7+ and 8.1 occurred on the coasts of the states of Guerrero and Michoacan; the economic losses for the latter were about 7 billion US dollars. Also, the largest instrumentally observed TSS earthquake in Mexico, Ms 8.2, occurred in the Colima-Jalisco region on 3/06/1932, and on 9/10/1995 another similar, Ms 7.4 event occurred in the same region; the latter produced economic losses of hundreds of thousands of US dollars. The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it might vary from decades to centuries [1]. Therefore there is a lack of strong ground motion records for extreme TSS earthquakes in Mexico, which, as mentioned above, recently had an important economic impact on MC and potentially could have one on G. In this work we obtained samples of broadband synthetics [2,3] expected in MC and G, associated with extreme (plausible) magnitude Mw 8.5 TSS scenario earthquakes, with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economic impacts of the proposed extreme TSS earthquake scenarios for MC and G were considered as follows: for MC, by using a risk-acceptability criterion, the probabilities of exceedance of the maximum seismic responses of its construction stock under the assumed scenarios, and the estimated economic losses observed for the 19/09/1985 earthquake; and for G, by estimating the expected economic losses, based on a seismic vulnerability assessment of its construction stock under the extreme seismic scenario considered. [1] Nishenko S.P. and Singh S.K., BSSA 77, 6, 1987. [2] Cabrera E., Chavez M., Madariaga R., Mai M., Frisenda M., Perea N., AGU Fall Meeting, 2005. [3] Chavez M., Olsen K
ARMA models for earthquake ground motions. Seismic safety margins research program
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.; Oliver, R. M.; Pister, K. S.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
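The simplest member of the ARMA family, an AR(1) process, already illustrates the parameter-estimation step: the autoregressive coefficient can be recovered from a discrete series by least squares. A toy sketch on synthetic data (a seeded random series, not a digitized accelerogram):

```python
# Fit an AR(1) model x[t] = phi * x[t-1] + e[t] by least squares
# on a synthetic series; a toy version of ARMA identification.
import random

random.seed(0)
phi_true = 0.8
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0, 1))

# least-squares estimate: phi_hat = sum(x[t-1]*x[t]) / sum(x[t-1]^2)
num = sum(a * b for a, b in zip(x[:-1], x[1:]))
den = sum(a * a for a in x[:-1])
phi_hat = num / den
print(round(phi_hat, 2))  # close to the true value of 0.8
```

The report's models are higher-order ARMA processes fitted by maximum likelihood, but the same idea applies: identify the order, estimate the coefficients, then test the residuals.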
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each may perform differently depending on data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
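One elementary form such a Bayesian combination could take (an illustrative sketch, not the actual CDM design) is precision-weighted fusion of independent Gaussian magnitude estimates, which is the conjugate Bayesian update for known variances:

```python
# Illustrative precision-weighted fusion of independent Gaussian
# magnitude estimates; a sketch, not the ShakeAlert CDM.
def combine(estimates):
    """estimates: list of (mean, sigma) pairs; returns fused (mean, sigma)."""
    w = [1.0 / (s * s) for _, s in estimates]       # precisions
    mean = sum(wi * m for wi, (m, _) in zip(w, estimates)) / sum(w)
    sigma = (1.0 / sum(w)) ** 0.5                   # fused uncertainty shrinks
    return mean, sigma

# hypothetical reports from three algorithms: (magnitude, 1-sigma)
m, s = combine([(6.1, 0.3), (6.4, 0.2), (6.2, 0.5)])
print(round(m, 2), round(s, 2))  # fused estimate near 6.3 with sigma ~0.16
```

The real CDM must additionally weigh the plausibility of each report against observed shaking, but the fused-uncertainty behavior shown here is the core benefit of a probabilistic mediator.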
An improved geodetic source model for the 1999 Mw 6.3 Chamoli earthquake, India
NASA Astrophysics Data System (ADS)
Xu, Wenbin; Bürgmann, Roland; Li, Zhiwei
2016-04-01
We present a distributed slip model for the 1999 Mw 6.3 Chamoli earthquake of north India using interferometric synthetic aperture radar (InSAR) data from both ascending and descending orbits and Bayesian estimation of confidence levels and trade-offs of the model geometry parameters. The results of fault-slip inversion in an elastic half-space show that the earthquake ruptured a 9° (+3.4/-2.2) northeast-dipping plane with a maximum slip of ~1 m. The fault plane is located at a depth of ~15.9 (+1.1/-3.0) km and is ~120 km north of the Main Frontal Thrust, implying that the rupture plane was on the northernmost detachment near the mid-crustal ramp of the Main Himalayan Thrust. The InSAR-determined moment is 3.35 × 10^18 N m with a shear modulus of 30 GPa, equivalent to Mw 6.3, which is smaller than the seismic moment estimates of Mw 6.4-6.6. Possible reasons for this discrepancy include the trade-off between moment and depth, uncertainties in seismic moment tensor components for shallow dip-slip earthquakes, and the role of earth structure models in the inversions. The released seismic energy from recent earthquakes in the Garhwal region is far less than the accumulated strain energy since the 1803 Ms 7.5 earthquake, implying substantial hazard of future great earthquakes.
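The quoted moment-to-magnitude conversion can be checked directly with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.05) for M0 in N m:

```python
# Check the moment-to-magnitude conversion quoted above.
import math

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori moment magnitude for a seismic moment in N m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.05)

mw = moment_magnitude(3.35e18)
print(round(mw, 1))  # 6.3, matching the abstract
```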
Dynamic Models of Earthquakes and Tsunamis in the Santa Barbara Channel, California
NASA Astrophysics Data System (ADS)
Oglesby, David; Ryan, Kenny; Geist, Eric
2016-04-01
The Santa Barbara Channel and the adjacent Ventura Basin in California are the location of a number of large faults that extend offshore and could potentially produce earthquakes of magnitude greater than 7. The area is also home to hundreds of thousands of coastal residents. To properly evaluate the earthquake and tsunami hazard in this region requires the characterization of possible earthquake sources as well as the analysis of tsunami generation, propagation and inundation. Toward this end, we perform spontaneous dynamic earthquake rupture models of potential events on the Pitas Point/Lower Red Mountain faults, a linked offshore thrust fault system. Using the 3D finite element method, a realistic nonplanar fault geometry, and rate-state friction, we find that this fault system can produce an earthquake of up to magnitude 7.7, consistent with estimates from geological and paleoseismological studies. We use the final vertical ground deformation from our models as initial conditions for the generation and propagation of tsunamis to the shore, where we calculate inundation. We find that path and site effects lead to large tsunami amplitudes northward and eastward of the fault system, and in particular we find significant tsunami inundation in the low-lying cities of Ventura and Oxnard. The results illustrate the utility of dynamic earthquake modeling to produce physically plausible slip patterns and associated seafloor deformation that can be used for tsunami generation.
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Krastev, P. G.; Meade, B. J.
2015-12-01
Over the past 80 years, 8 MW > 6.7 strike-slip earthquakes west of 40° longitude have ruptured the North Anatolian fault (NAF), largely from east to west. The series began with the 1939 Erzincan earthquake in eastern Turkey, and the most recent 1999 MW = 7.4 Izmit earthquake extended the pattern of ruptures into the Sea of Marmara in western Turkey. The mean time between seismic events in this westward progression is 8.5 ± 11 years (67% confidence interval), much greater than the timescale of seismic wave propagation (seconds to minutes). The delayed triggering of these earthquakes may be explained by the propagation of earthquake-generated diffusive viscoelastic fronts within the upper mantle that slowly increase the Coulomb failure stress change (CFS) at adjacent hypocenters. Here we develop three-dimensional stress transfer models with an elastic upper crust coupled to a viscoelastic Burgers rheology mantle. Both the Maxwell (ηM = 10^18.6-10^19.0 Pa s) and Kelvin (ηK = 10^18.0-10^19.0 Pa s) viscosities are constrained by viscoelastic block models that simultaneously explain geodetic observations of deformation before and after the 1999 Izmit earthquake. We combine this geodetically constrained rheological model with the observed sequence of large earthquakes since 1939 to calculate the time evolution of CFS changes along the North Anatolian Fault due to viscoelastic stress transfer. Critical values of mean CFS at which the earthquakes in the eight-decade sequence occur are between -0.007 and 2.946 MPa and may exceed the magnitude of static CFS values by as much as 180%. The variability of four orders of magnitude in critical triggering stress may reflect along-strike variations in NAF strength. Based on the median and mean of these critical stress values, we infer that the NAF strand in the northern Marmara Sea near Istanbul, which previously ruptured in 1509, may reach a critical stress level between 2015 and 2032.
Bounded solutions for nonconserving-parity pseudoscalar potentials
Castro, Antonio S. de; Malheiro, Manuel; Lisboa, Ronai
2004-12-02
The Dirac equation is analyzed for nonconserving-parity pseudoscalar radial potentials in 3+1 dimensions. It is shown that, despite the nonconservation of parity, this general problem can be reduced to a Sturm-Liouville problem of nonrelativistic fermions in spherically symmetric effective potentials. The search for bounded solutions is carried out for the power-law and Yukawa potentials. The methodology of effective potentials allows us to conclude that the existence of bound-state solutions depends on whether the potential leads to a definite effective potential-well structure or to an effective potential less singular than -1/(4r^2).
Locating and Modeling Regional Earthquakes with Broadband Waveform Data
NASA Astrophysics Data System (ADS)
Tan, Y.; Zhu, L.; Helmberger, D.
2003-12-01
Retrieving source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory stage in some places with a dense seismic network, such as TriNet, Southern California, a worthwhile revisit of the historical events in these places, or an effective, real-time investigation of small events in many other places, where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 sec for Pnl, and 8-100 sec for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for available waveform data, which could be as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green's function library constructed from a regionalized 1-D model together with necessary calibration information, and adopts a grid search strategy for both hypocenter and
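The timing corrections described above are typically measured as the lag that maximizes the cross-correlation between observed and synthetic waveforms. A minimal sketch with synthetic traces (not the authors' code):

```python
# Find the sample lag that best aligns an observed trace with a synthetic.
import math

def best_lag(obs, syn, max_lag):
    """Return the lag (in samples) maximizing sum(obs[i+lag] * syn[i])."""
    def score(lag):
        return sum(obs[i + lag] * syn[i]
                   for i in range(len(syn))
                   if 0 <= i + lag < len(obs))
    return max(range(-max_lag, max_lag + 1), key=score)

syn = [math.sin(0.1 * i) for i in range(200)]
obs = [0.0] * 3 + syn[:-3]   # observed trace arrives 3 samples late
print(best_lag(obs, syn, 10))  # 3
```

In the method above, such lags are measured phase by phase (Pnl, Rayleigh, Love) on calibration events and then reused as path corrections when locating new events.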
Cross-cultural comparisons between the earthquake preparedness models of Taiwan and New Zealand.
Jang, Li-Ju; Wang, Jieh-Jiuh; Paton, Douglas; Tsai, Ning-Yu
2016-04-01
Taiwan and New Zealand are both located in the Pacific Rim where 81 per cent of the world's largest earthquakes occur. Effective programmes for increasing people's preparedness for these hazards are essential. This paper tests the applicability of the community engagement theory of hazard preparedness in two distinct cultural contexts. Structural equation modelling analysis provides support for this theory. The paper suggests that the close fit between theory and data that is achieved by excluding trust supports the theoretical prediction that familiarity with a hazard negates the need to trust external sources. The results demonstrate that the hazard preparedness theory is applicable to communities that have previously experienced earthquakes and are therefore familiar with the associated hazards and the need for earthquake preparedness. The paper also argues that cross-cultural comparisons provide opportunities for collaborative research and learning as well as access to a wider range of potential earthquake risk management strategies.
Modelling and Observing the 8.8 Chile and 9.0 Japan Earthquakes Using GOCE
NASA Astrophysics Data System (ADS)
Broerse, T.; Visser, P.; Bouman, J.; Fuchs, M.; Vermeersen, B.; Schmidt, M.
2011-07-01
Large earthquakes do not only heavily deform the crust in the vicinity of the fault, they also change the gravity field of the area affected by the earthquake due to mass redistribution in the upper layers of the Earth. Besides that, for sub-oceanic earthquakes, deformation of the ocean floor causes relative sea-level changes and mass redistribution of water that again has a significant effect on the gravity field. Such sub-oceanic earthquakes occurred on 27 February 2010 in central Chile with a magnitude of Mw 8.8 and on 11 March 2011 with a magnitude of Mw 9.0 near the east coast of Honshu, Japan. This makes both potential candidates for detecting co-seismic gravity changes in the GOCE gradiometer data. We will assess the detectability of gravity field changes in the GOCE gravity gradients by modelling these earthquakes using a forward model, as well as taking differences of GOCE data before and after the respective earthquakes.
Analysing Post-Seismic Deformation of Izmit Earthquake with Insar, Gnss and Coulomb Stress Modelling
NASA Astrophysics Data System (ADS)
Alac Barut, R.; Trinder, J.; Rizos, C.
2016-06-01
On August 17th 1999, a Mw 7.4 earthquake struck the city of Izmit in the north-west of Turkey. This event was one of the most devastating earthquakes of the twentieth century. The epicentre of the Izmit earthquake was on the North Anatolian Fault (NAF), which is one of the most active right-lateral strike-slip faults on Earth. This earthquake offers an opportunity to study how strain is accommodated in an inter-segment region of a large strike-slip fault. In order to determine the post-seismic effects of the Izmit earthquake, the authors modelled Coulomb stress changes of the aftershocks, as well as using the deformation measurement techniques of Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS). The authors have shown that InSAR and GNSS observations over a time period of three months after the earthquake, combined with Coulomb stress change modelling, can explain the fault zone expansion, as well as the deformation of the northern region of the NAF. It was also found that there is strong agreement between the InSAR and GNSS results for the post-seismic phases of investigation, with differences less than 2 mm, and the standard deviation of the differences is less than 1 mm.
Time-predictable model applicability for earthquake occurrence in northeast India and vicinity
NASA Astrophysics Data System (ADS)
Panthi, A.; Shanker, D.; Singh, H. N.; Kumar, A.; Paudyal, H.
2011-03-01
Northeast India and its vicinity is one of the seismically most active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model using earthquake data as reported by the earthquake catalogues of the National Geophysical Data Centre, the National Earthquake Information Centre, the United States Geological Survey, and the book prepared by Gupta et al. (1986) for the period 1906-2008. The events having a surface wave magnitude of Ms ≥ 5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified by the observation of clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the preceding mainshock magnitude (Mp) and not on that of the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form log T = cMp + a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35 in northeast India and its adjoining regions. The smaller-than-average value of "c" implies that earthquake occurrence in this region differs from that at plate boundaries. The results derived can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
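Plugging the fitted parameters into log T = cMp + a gives the expected repeat time directly; for example, with c = 0.21 and a = 0.35 (time units as in the authors' catalogue):

```python
# Expected repeat time from the fitted time-predictable relation
# log10(T) = c * Mp + a, with the paper's c = 0.21 and a = 0.35.
def repeat_time(mp, c=0.21, a=0.35):
    """Repeat time after a mainshock of magnitude mp."""
    return 10 ** (c * mp + a)

t = repeat_time(7.0)
print(round(t, 1))  # about 66 time units for an Mp = 7.0 mainshock
```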
PAGER-CAT: A composite earthquake catalog for calibrating global fatality models
Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.
2009-01-01
We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy-to-use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of ShakeMaps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are
NASA Astrophysics Data System (ADS)
Haeussler, P. J.; Witter, R. C.; Wang, K.
2013-12-01
The October 28, 2012 Mw 7.8 Haida Gwaii, British Columbia, earthquake was the second largest historical earthquake recorded in Canada. Earthquake seismology and GPS geodesy show this was an underthrusting event, in agreement with prior studies that indicated oblique underthrusting of the Haida Gwaii by the Pacific plate. Coseismic deformation is poorly constrained by geodesy, with only six GPS sites and two tide gauge stations anywhere near the rupture area. In order to better constrain the coseismic deformation, we measured the upper limit of sessile intertidal organisms at 26 sites relative to sea level. We primarily measured the positions of bladder weed (Fucus distichus - 617 observations) and the common acorn barnacle (Balanus balanoides - 686 observations). Physical conditions control the upper limit of sessile intertidal organisms, so we tried to find the quietest water conditions, with steep, but not overhanging, faces where slosh from wave motion was minimized. We focused on the western side of the islands as rupture models indicated that the greatest displacement was there. However, we were also looking for calm-water sites in bays located as close as possible to the often tumultuous Pacific Ocean. In addition, we made 322 measurements of sea level that will be used to develop a precise tidal model and to evaluate the position of the organisms with respect to a common sea level datum. We anticipate the resolution of the method will be about 20-30 cm. The sites were focused on the western side of the Haida Gwaii from Wells Bay in the south up to Otard Bay in the north, with 5 transects across strike. We also collected data at the town of Masset, which lies outside the deformation zone of the earthquake. We observed dried and desiccated bands of fucus and barnacles at two sites on the western coast of southern Moresby Island (Gowgia Bay and Wells Bay). Gowgia Bay had the strongest evidence of uplift, with fucus that was dried out and apparently dead. A
Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting
NASA Astrophysics Data System (ADS)
Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie
2014-05-01
Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions and find significant variation in the relative performance of the models depending upon the input data.
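The Coulomb ingredient of such hybrid models reduces to resolving a scalar failure-stress change on receiver faults. A minimal sketch of that quantity follows; the function name and numerical values are illustrative assumptions, not taken from the paper:

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Coulomb failure stress change: dCFS = d_tau + mu' * d_sigma_n.

    delta_shear  -- shear stress change in the slip direction (MPa)
    delta_normal -- normal stress change, unclamping positive (MPa)
    mu_eff       -- assumed effective friction coefficient

    Positive dCFS moves a receiver fault toward failure; hybrid models
    use this field to redistribute the statistical forecast rates.
    """
    return delta_shear + mu_eff * delta_normal

# 0.05 MPa of added shear plus 0.1 MPa of unclamping:
print(coulomb_stress_change(0.05, 0.1))  # ~0.09 MPa
```

The sensitivity to fault geometry discussed above enters through `delta_shear` and `delta_normal`, which depend on how slip and receiver-fault orientation are specified.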
Locating earthquakes in west Texas oil fields using 3-D anisotropic velocity models
Hua, Fa; Doser, D.; Baker, M. (Dept. of Geological Sciences)
1993-02-01
Earthquakes within the War-Wink gas field, Ward County, Texas, that have been located with a 1-D velocity model occur near the edges and top of a naturally occurring overpressured zone. Because the War-Wink field is a structurally controlled anticline with significant velocity anisotropy associated with the overpressured zone and finely layered evaporites, the authors have attempted to re-locate earthquakes using a 3-D anisotropic velocity model. Preliminary results with this model give the unsatisfactory result that many earthquakes previously located at the top of the overpressured zone (3-3.5 km) moved into the evaporites (1-1.5 km) above the field. They believe that this result could be caused by: (1) aliasing the velocity model; or (2) problems in determining the correct location minima when several minima exist. They are currently attempting to determine which of these causes is more likely for the unsatisfactory result observed.
Modelling psychological responses to the Great East Japan earthquake and nuclear incident.
Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O
2012-01-01
The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11-13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events.
An exact renormalization model for earthquakes and material failure: Statics and dynamics
Newman, W.I.; Gabrielov, A.M.; Durand, T.A.; Phoenix, S.L.; Turcotte, D.L.
1993-09-12
Earthquake events are well known to exhibit a variety of empirical scaling laws. Accordingly, renormalization methods offer some hope for understanding why earthquake statistics behave in a similar way over orders of magnitude of energy. We review the progress made in the use of renormalization methods in approaching the earthquake problem. In particular, earthquake events have been modeled by previous investigators as hierarchically organized bundles of fibers with equal load sharing. We consider by computational and analytic means the failure properties of such bundles of fibers, a problem that may be treated exactly by renormalization methods. We show, independent of the specific properties of an individual fiber, that the stress and time thresholds for failure of fiber bundles obey universal, albeit different, scaling laws with respect to the size of the bundles. The application of these results to fracture processes in earthquake events and in engineering materials helps to provide insight into some of the observed patterns and scaling: in particular, the apparent weakening of earthquake faults and composite materials with respect to size, and the apparent emergence of relatively well-defined stresses and times when failure is seemingly assured.
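The equal-load-sharing bundle can be explored numerically. In this sketch the fiber strength thresholds are drawn from a uniform(0, 1) distribution, which is a simplifying assumption chosen for illustration (the paper treats hierarchical bundles more generally); for this choice the mean strength per fiber approaches the deterministic limit max x(1-x) = 0.25 as the bundle grows, illustrating the size scaling discussed above.

```python
import numpy as np

def bundle_strength(n, rng):
    """Peak load per fiber carried by an equal-load-sharing bundle of n
    fibers with uniform(0, 1) strength thresholds (a Daniels-type
    bundle; an illustrative flat version of the hierarchical model)."""
    thresholds = np.sort(rng.random(n))
    # After the k weakest fibers fail, n - k fibers share the load, so the
    # bundle survives load thresholds[k] * (n - k) at that stage; the
    # bundle strength is the maximum such stage load.
    stage_loads = thresholds * (n - np.arange(n))
    return stage_loads.max() / n

rng = np.random.default_rng(42)
for n in (100, 1000, 10000):
    mean_strength = np.mean([bundle_strength(n, rng) for _ in range(20)])
    print(n, round(float(mean_strength), 3))  # tends toward 0.25
```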
Nonconservative current-driven dynamics: beyond the nanoscale
Todorov, Tchavdar N; Dundas, Daniel
2015-01-01
Long metallic nanowires combine crucial factors for nonconservative current-driven atomic motion. These systems have degenerate vibrational frequencies, clustered about a Kohn anomaly in the dispersion relation, that can couple under current to form nonequilibrium modes of motion growing exponentially in time. Such motion is made possible by nonconservative current-induced forces on atoms, and we refer to it generically as the waterwheel effect. Here the connection between the waterwheel effect and the stimulated directional emission of phonons propagating along the electron flow is discussed in an intuitive manner. Nonadiabatic molecular dynamics show that waterwheel modes self-regulate by reducing the current and by populating modes at nearby frequencies, leading to a dynamical steady state in which nonconservative forces are counter-balanced by the electronic friction. The waterwheel effect can be described by an appropriate effective nonequilibrium dynamical response matrix. We show that the current-induced parts of this matrix in metallic systems are long-ranged, especially at low bias. This nonlocality is essential for the characterisation of nonconservative atomic dynamics under current beyond the nanoscale. PMID:26665086
NASA Astrophysics Data System (ADS)
Ryan, Kenny
Earthquakes and their corresponding tsunamis pose significant hazard to populated regions around the world. Therefore, it is critically important that we seek to more fully understand the physics of the combined earthquake-tsunami system. One way to address this goal is through numerical modeling. The work discussed herein focuses on combining dynamic earthquake and tsunami models through the use of the Finite Element Method (FEM) and the Finite Difference Method (FDM). Dynamic earthquake models account for the force that the entire fault system exerts on each individual element of the model for each time step, so that earthquake rupture takes a path based on the physics of the model; dynamic tsunami models can incorporate water height variations to produce water wave formation, propagation, and inundation. Chapter 1 provides an introduction to some important concepts and equations of elastodynamics and fluid dynamics as well as a brief example of the FEM. In Chapter 2, we investigate the 3-D effects of realistic fault dynamics on slip, free surface deformation, and resulting tsunami formation from an Mw 9 megathrust earthquake offshore Southern Alaska. Corresponding tsunami models, which use a FDM to solve linear long-wave equations, match sea floor deformation, in time, to the free surface deformation from the rupture simulations. Tsunamis generated in this region could have large adverse effects on Pacific coasts. In Chapter 3, we construct a 3-D dynamic rupture model of an earthquake on a reverse fault structure offshore Southern California to model the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. The corresponding tsunami model uses final seafloor displacements from the rupture model as initial conditions to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. In Chapter 4, we begin to evaluate 2-D earthquake source parameters
NASA Astrophysics Data System (ADS)
Rong, Y.; Bird, P.; Jackson, D. D.
2016-04-01
The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models that were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of potentially higher seismic activity in these zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from tectonic moment rate, but lower than what the historical data show. For the other two
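The comparison above rests on converting a geodetic strain rate into a tectonic moment rate and weighing it against the moment released by earthquakes. A rough sketch of both conversions; every numerical value here is an illustrative stand-in, not a SHARE or Global Strain Rate Map parameter:

```python
def tectonic_moment_rate(strain_rate, area, thickness, mu=3.0e10, coupling=1.0):
    """Kostrov-style scalar moment rate (N*m/yr) for a zone deforming at
    `strain_rate` (1/yr) over `area` (m^2) with seismogenic `thickness`
    (m); the shear modulus mu and full seismic coupling are assumptions."""
    return 2.0 * mu * coupling * strain_rate * area * thickness

def seismic_moment(mw):
    """Scalar seismic moment (N*m) from moment magnitude
    (Hanks-Kanamori convention)."""
    return 10.0 ** (1.5 * mw + 9.05)

# A hypothetical zone straining at 3e-9/yr over 200 km x 100 km, 15 km thick:
m0_rate = tectonic_moment_rate(3e-9, 200e3 * 100e3, 15e3)
# How many Mw 7.0 events per year would balance that moment budget?
print(m0_rate / seismic_moment(7.0))
```

Comparing this balancing rate with the rate implied by a zone's magnitude-frequency distribution is, in essence, the consistency check the abstract describes.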
NASA Astrophysics Data System (ADS)
Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray
2013-04-01
Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is on the lower side compared to other earthquake-prone areas of the globe. However, the risk level is high enough to warrant the attention of African governments and the donor community. The latest earthquake hazard map for sub-Saharan Africa was produced in 1999 and updating it is long overdue, as construction activity is booming all over sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and assessing all possible sources of data for the catalogue as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model in the region. Current progress indicates that the region is more seismically active than previously thought. This demands the coordinated effort of the regional experts to systematically compile all available information for a better output so as to mitigate earthquake risk in sub-Saharan Africa.
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
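The unknown-date case can be sketched by averaging the conditional renewal probability over the stationary distribution of time since the last event, which collapses to a single integral of the recurrence survival function. The lognormal recurrence model and all parameter values below are illustrative choices for the sketch, not the paper's preferred model:

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution."""
    if t <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def renewal_prob_unknown_date(T, mean=100.0, aperiodicity=0.5, n=20000):
    """P(event within the next T yr) when the date of the last event is
    unknown: averaging over the stationary time-since-last-event density
    (1 - F(t))/mean gives P = (1/mean) * integral_0^T (1 - F(t)) dt."""
    sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
    mu = math.log(mean) - 0.5 * sigma ** 2
    dt = T / n
    return sum((1.0 - lognorm_cdf((i + 0.5) * dt, mu, sigma)) * dt
               for i in range(n)) / mean

def poisson_prob(T, mean=100.0):
    """Time-independent Poisson probability of at least one event in T yr."""
    return 1.0 - math.exp(-T / mean)

# Forecast duration equal to 30% of the mean recurrence interval:
print(round(renewal_prob_unknown_date(30.0), 3), round(poisson_prob(30.0), 3))
```

With these illustrative parameters the renewal probability exceeds the Poisson value by more than 10%, consistent with the regime described in the abstract.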
NASA Astrophysics Data System (ADS)
Iwata, T.
2014-12-01
In the analysis of seismic activity, assessment of earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; the consideration of such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude where the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering the area in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
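The Ogata & Katsura (1993) model referred to above multiplies the Gutenberg-Richter density by a cumulative-normal detection rate, and the estimated parameter is the mean of that normal: the magnitude at which 50% of events are detected. A stdlib-only sketch; taking Mc as the 99%-detection magnitude is an illustrative convention, not necessarily the paper's:

```python
import math

def detection_rate(m, mu, sigma):
    """q(M): probability that an event of magnitude M is detected,
    modelled as a cumulative normal; mu is the 50%-detection magnitude."""
    return 0.5 * (1.0 + math.erf((m - mu) / (sigma * math.sqrt(2.0))))

def detected_density(m, b, mu, sigma):
    """Unnormalised magnitude density of *detected* events:
    Gutenberg-Richter exponential decay times the detection rate."""
    beta = b * math.log(10.0)
    return math.exp(-beta * m) * detection_rate(m, mu, sigma)

def completeness_magnitude(mu, sigma, q=0.99):
    """Magnitude above which a fraction q of events is detected, found
    by bisecting the detection-rate curve (keeps the sketch stdlib-only)."""
    lo, hi = mu, mu + 10.0 * sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if detection_rate(mid, mu, sigma) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(completeness_magnitude(mu=1.0, sigma=0.3), 2))  # ~ mu + 2.33*sigma
```

A weekday/weekend or hour-of-day variation in detectability corresponds to letting `mu` vary with time, which is what the smoothing-spline estimation in the abstract provides.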
NASA Astrophysics Data System (ADS)
Fielding, E. J.; Sladen, A.; Simons, M.; Rosen, P. A.; Yun, S.; Li, Z.; Avouac, J.; Leprince, S.
2010-12-01
Earthquake responders need to know where the earthquake has caused damage and what is the likely intensity of damage. The earliest information comes from global and regional seismic networks, which provide the magnitude and locations of the main earthquake hypocenter and moment tensor centroid and also the locations of aftershocks. Location accuracy depends on the availability of seismic data close to the earthquake source. Finite fault models of the earthquake slip can be derived from analysis of seismic waveforms alone, but the results can have large errors in the location of the fault ruptures and spatial distribution of slip, which are critical for estimating the distribution of shaking and damage. Geodetic measurements of ground displacements with GPS, LiDAR, or radar and optical imagery provide key spatial constraints on the location of the fault ruptures and distribution of slip. Here we describe the analysis of interferometric synthetic aperture radar (InSAR) and sub-pixel correlation (or pixel offset tracking) of radar and optical imagery to measure ground coseismic displacements for recent large earthquakes, and lessons learned for rapid assessment of future events. These geodetic imaging techniques have been applied to the 2010 Leogane, Haiti; 2010 Maule, Chile; 2010 Baja California, Mexico; 2008 Wenchuan, China; 2007 Tocopilla, Chile; 2007 Pisco, Peru; 2005 Kashmir; and 2003 Bam, Iran earthquakes, using data from ESA Envisat ASAR, JAXA ALOS PALSAR, NASA Terra ASTER and CNES SPOT5 satellite instruments and the NASA/JPL UAVSAR airborne system. For these events, the geodetic data provided unique information on the location of the fault or faults that ruptured and the distribution of slip that was not available from the seismic data and allowed the creation of accurate finite fault source models. In many of these cases, the fault ruptures were on previously unknown faults or faults not believed to be at high risk of earthquakes, so the area and degree of
Interevent times in a new alarm-based earthquake forecasting model
NASA Astrophysics Data System (ADS)
Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed
2013-09-01
This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
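Reading the abstract's definition literally, the MR score is the inverse of the index of dispersion (Fano factor) of interevent times, i.e. mean divided by variance. A minimal sketch of that statistic; the windowing, declustering and ERS sampling of the actual method are omitted:

```python
import statistics

def moment_ratio(interevent_times):
    """MR score as the inverse of the index of dispersion of interevent
    times (mean / variance), per the abstract's definition. Higher MR
    indicates more regular, background-like activity; clustered activity
    inflates the variance and drives MR down."""
    mean = statistics.fmean(interevent_times)
    return mean / statistics.pvariance(interevent_times)

# Regular (clock-like) sequences score far higher than clustered ones:
regular = [10, 11, 9, 10, 10, 11, 9, 10]
clustered = [1, 1, 1, 30, 1, 1, 1, 30]
print(moment_ratio(regular), moment_ratio(clustered))
```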
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.
2015-12-01
We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equations (GMPEs) of Atkinson and Boore (2003) fit the behaviour of subduction events best. Likewise, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) fit crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear velocity down to 30 m depth) from analysis of topographic slope and microtremor array measurements to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as seismic sources there produce earthquakes at short intervals and/or their last events occurred long ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
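The final step, turning source recurrence rates and a GMPE into hazard curves, can be sketched as a classical Cornell-type exceedance calculation. The source rates, median motions and sigma below are invented stand-ins for illustration, not values for Myanmar:

```python
import math

def hazard_curve_point(pga, sources):
    """Annual probability that peak ground acceleration exceeds `pga` (g)
    at a site, combining Poissonian source rates with a lognormal
    ground-motion model; each source is (annual_rate, median_pga, sigma_ln)."""
    rate = 0.0
    for annual_rate, median_pga, sigma_ln in sources:
        # P(PGA > pga | event) from the lognormal GMPE scatter:
        z = (math.log(pga) - math.log(median_pga)) / sigma_ln
        p_exceed = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
        rate += annual_rate * p_exceed
    # Convert the total exceedance rate to an annual probability:
    return 1.0 - math.exp(-rate)

# Two hypothetical sources: (rate per yr, median PGA in g, lognormal sigma)
sources = [(0.05, 0.12, 0.6), (0.01, 0.30, 0.6)]
print(hazard_curve_point(0.2, sources))
```

Evaluating this over a grid of `pga` values yields the hazard curve; repeating it over map gridpoints, with Vs30-dependent GMPE terms, yields the hazard map.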
Family number non-conservation induced by the supersymmetric mixing of scalar leptons
Levine, M.J.S.
1987-08-01
The most egregious aspect of (N = 1) supersymmetric theories is that each particle state is accompanied by a 'super-partner', a state with identical quantum numbers save that it differs in spin by one half unit. For the leptons these are scalars and are called ''sleptons'', or scalar leptons. These consist of the charged sleptons (selectron, smuon, stau) and the scalar neutrinos ('sneutrinos'). We examine a model of supersymmetry with soft breaking terms in the electroweak sector. Explicit mixing among the scalar leptons results in a number of effects, principally non-conservation of lepton family number. Comparison with experiment permits us to place constraints upon the model. 49 refs., 12 figs.
Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide
Miles, Scott B.; Keefer, David K.
2007-01-01
This report describes the complete design of a comprehensive areal model of earthquake-induced landslides (CAMEL), presenting the design process and technical specification of CAMEL. It also provides a guide to using the CAMEL source code and the template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates areal landslide concentration (number of landslides per square kilometer) for six aggregated types of earthquake-induced landslides: three types each for rock and soil.
Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models
NASA Astrophysics Data System (ADS)
Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.
2012-04-01
The present paper aims at the application of hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation, associated with the actual stress field, which is not accessible to direct observation. The models generalize the hidden Markov models by considering the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied in Greece and its surrounding lands, concerning a complete data sample with strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 up to the present. The earthquakes that occurred are grouped according to their magnitudes, and the cases of two and three magnitude ranges for a corresponding number of states are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality. The confidence interval for the proposed estimator is given. We assume the state space of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time for the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.
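The generative structure of a hidden semi-Markov model can be illustrated with a toy simulator: a hidden stress level persists for a random sojourn, emitting a magnitude class at each event, and self-transitions are excluded because durations are modelled explicitly. All states, sojourn laws and probabilities below are invented for illustration, not the paper's estimates:

```python
import random

def simulate_hsmm(n_events, states, sojourn, emit, trans, seed=1):
    """Minimal hidden semi-Markov sketch: stay in a hidden state for a
    random number of events (sojourn), emit one magnitude class per
    event, then jump to another state (no self-transitions)."""
    rng = random.Random(seed)
    s = rng.choice(states)
    out = []
    while len(out) < n_events:
        stay = sojourn[s](rng)  # sojourn length measured in events
        for _ in range(stay):
            if len(out) == n_events:
                break
            out.append(rng.choices(list(emit[s]),
                                   weights=list(emit[s].values()))[0])
        s = rng.choices(states, weights=[trans[s][t] for t in states])[0]
    return out

states = ["low", "high"]
sojourn = {"low": lambda r: r.randint(2, 6), "high": lambda r: r.randint(1, 3)}
emit = {"low": {"M6.5-7.0": 0.8, "M>=7.0": 0.2},
        "high": {"M6.5-7.0": 0.3, "M>=7.0": 0.7}}
trans = {"low": {"low": 0.0, "high": 1.0},
         "high": {"low": 1.0, "high": 0.0}}
print(simulate_hsmm(10, states, sojourn, emit, trans))
```

Estimation inverts this picture: given only the emitted magnitude classes, the sojourn and emission parameters are fit and the hidden stress level at each event is decoded.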
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
How Fault Geometry Affects Dynamic Rupture Models of Earthquakes in San Gorgonio Pass, CA
NASA Astrophysics Data System (ADS)
Tarnowski, J. M.; Oglesby, D. D.; Cooke, M. L.; Kyriakopoulos, C.
2015-12-01
We use 3D dynamic finite element models to investigate potential rupture paths of earthquakes propagating along faults in the western San Gorgonio Pass (SGP) region of California. The SGP is a structurally complex area along the southern California portion of the San Andreas fault system (SAF). It has long been suspected that this structural knot, which consists of the intersection of various non-planar strike-slip and thrust fault segments, may inhibit earthquake rupture propagation between the San Bernardino and Banning strands of the SAF. The above condition may limit the size of potential earthquakes in the region. Our focus is on the San Bernardino strand of the SAF and the San Gorgonio Pass Fault zone, where the fault connectivity is not well constrained. We use the finite element code FaultMod (Barall, 2009) to investigate how fault connectivity, nucleation location, and initial stresses influence rupture propagation and ground motion, including the likelihood of through-going rupture in this region. Preliminary models indicate that earthquakes that nucleate on the San Bernardino strand and propagate southward do not easily transfer rupture to the thrust faults of the San Gorgonio Pass fault zone. However, under certain assumptions, earthquakes that nucleate along the San Gorgonio Pass fault zone can transfer rupture to the San Bernardino strand.
NASA Astrophysics Data System (ADS)
Sobolev, Stephan; Muldashev, Iskander
2016-04-01
According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, a mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time-scale, including a narrow subduction channel with low static friction, for two settings: one similar to Southern Chile in the region of the great Chile Earthquake of 1960, and one similar to Japan in the region of the Tohoku Earthquake of 2011. We next introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models start to generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformations of the Chile and Japan earthquakes. To follow the deformation process in detail through an entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to between a minute and 5 years during postseismic and interseismic processes. We show that for the Chile earthquake visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations over the range from one day to four years after the event. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can be best distinguished.
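The adaptive time-stepping idea (tens of seconds coseismically, up to years interseismically) can be sketched with a hypothetical control rule. The abstract does not specify the actual algorithm at this level of detail, so the inverse-slip-rate scaling, the reference velocity, and all parameter names below are assumptions for illustration only.

```python
def adaptive_dt(slip_rate, dt_min=40.0, dt_max=5 * 365.25 * 86400.0, v_ref=1.0):
    """Pick an integration step (seconds) inversely proportional to the
    current peak slip rate (m/s), clamped between a coseismic minimum
    of 40 s and an interseismic maximum of 5 years.

    Hypothetical rule: fast slip (~1 m/s) -> smallest step; slow creep
    -> step grows until it saturates at the multi-year ceiling.
    """
    dt = dt_min * v_ref / max(slip_rate, 1e-12)  # guard against zero rate
    return min(max(dt, dt_min), dt_max)
```

A driver loop would call this each step, so the solver automatically refines through a rupture and coarsens through the interseismic period.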
Rupture models for the A.D. 900-930 Seattle fault earthquake from uplifted shorelines
ten Brink, U.S.; Song, J.; Bucknam, R.C.
2006-01-01
A major earthquake on the Seattle fault, Washington, ca. A.D. 900-930 was first inferred from uplifted shorelines and tsunami deposits. Despite follow-up geophysical and geological investigations, the rupture parameters of the earthquake and the geometry of the fault remain uncertain. Here we estimate the fault geometry, slip direction, and magnitude of the earthquake by modeling shoreline elevation change. The best-fitting model geometry is a reverse fault with a shallow roof ramp consisting of at least two back thrusts. The best-fitting rupture is SW-NE oblique reverse slip with horizontal shortening of 15 m, a rupture depth of 12.5 km, and magnitude Mw = 7.5. © 2006 Geological Society of America.
GEM1: First-year modeling and IT activities for the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Anderson, G.; Giardini, D.; Wiemer, S.
2009-04-01
GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global
Bakun, W.H.; Scotti, O.
2006-01-01
Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with epicentral distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors; Journal compilation © 2006 RAS.
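The bootstrap confidence regions mentioned above resample intensity assignments in two dimensions; a minimal one-dimensional percentile-bootstrap sketch conveys the idea. The function, its parameters, and the data are illustrative assumptions, not the paper's procedure.

```python
import random
import statistics

def bootstrap_ci(values, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic.

    Sketch of the resampling idea behind the reproducible confidence
    regions for historical epicentres: resample the data with
    replacement, recompute the statistic, and take empirical quantiles.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    boot_stats = sorted(
        stat([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = boot_stats[int(alpha / 2 * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The same machinery with a 2-D location statistic yields a confidence region rather than an interval.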
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
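The smoothed-seismicity component of such a forecast can be sketched with a fixed-bandwidth 2-D Gaussian kernel density over past epicentres. This is a minimal illustration only: the paper optimizes the bandwidths by retrospective likelihood experiments, also smooths fault moment rate, and the coordinates, bandwidth, and events below are assumed, not taken from the model.

```python
import math

def kde_density(x, y, events, bandwidth=25.0):
    """2-D Gaussian kernel density of past epicentres at point (x, y),
    in arbitrary map units (e.g. km). Each past event contributes a
    Gaussian bump of width `bandwidth`; the result integrates to 1."""
    h2 = bandwidth ** 2
    norm = 1.0 / (2.0 * math.pi * h2 * len(events))
    return norm * sum(
        math.exp(-((x - ex) ** 2 + (y - ey) ** 2) / (2.0 * h2))
        for ex, ey in events
    )

# Hypothetical catalogue: density is higher inside a cluster than far away.
events = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
near = kde_density(0.5, 0.5, events, bandwidth=1.0)
far = kde_density(50.0, 50.0, events, bandwidth=1.0)
```

The forecast's spatial term would then be a weighted sum of this density and an analogous fault-moment-rate density.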
Hasumi, Tomohiro
2008-11-13
We studied the statistical properties of interoccurrence time, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the critical phase of a fault system in nature changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
NASA Astrophysics Data System (ADS)
Rotondi, Renata; Varini, Elisa
2016-04-01
The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts combine long- and short-term models, but the results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
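A bathtub-shaped hazard of the kind described above can be illustrated by adding a decreasing-hazard Weibull (aftershock-like, shape < 1) to an increasing-hazard Weibull (foreshock-like, shape > 1). This additive construction and its parameters are an illustrative assumption, not the paper's generalized-Weibull family.

```python
def weibull_hazard(t, k, lam):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1): decreasing in
    time for shape k < 1, increasing for k > 1 (t must be > 0)."""
    return (k / lam) * (t / lam) ** (k - 1)

def bathtub_hazard(t, k1=0.5, lam1=1.0, k2=3.0, lam2=10.0):
    """Sum of a decreasing and an increasing Weibull hazard: high early
    (aftershocks), low in the middle, rising late (foreshocks).
    All shape/scale values here are illustrative."""
    return weibull_hazard(t, k1, lam1) + weibull_hazard(t, k2, lam2)
```

Plotting `bathtub_hazard` over an inter-leader interval reproduces the early-cluster / quiet-middle / late-rise pattern the model encodes.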
NASA Astrophysics Data System (ADS)
Oth, Adrien; Wenzel, Friedemann; Radulian, Mircea
2007-06-01
Several source parameters (source dimensions, slip, particle velocity, static and dynamic stress drop) are determined for the moderate-size October 27th, 2004 ( MW = 5.8), and the large August 30th, 1986 ( MW = 7.1) and March 4th, 1977 ( MW = 7.4) Vrancea (Romania) intermediate-depth earthquakes. For this purpose, the empirical Green's functions method of Irikura [e.g. Irikura, K. (1983). Semi-Empirical Estimation of Strong Ground Motions during Large Earthquakes. Bull. Dis. Prev. Res. Inst., Kyoto Univ., 33, Part 2, No. 298, 63-104., Irikura, K. (1986). Prediction of strong acceleration motions using empirical Green's function, in Proceedings of the 7th Japan earthquake engineering symposium, 151-156., Irikura, K. (1999). Techniques for the simulation of strong ground motion and deterministic seismic hazard analysis, in Proceedings of the advanced study course seismotectonic and microzonation techniques in earthquake engineering: integrated training in earthquake risk reduction practices, Kefallinia, 453-554.] is used to generate synthetic time series from recordings of smaller events (with 4 ≤ MW ≤ 5) in order to estimate several parameters characterizing the so-called strong motion generation area, which is defined as an extended area with homogeneous slip and rise time and, for crustal earthquakes, corresponds to an asperity of about 100 bar stress release [Miyake, H., T. Iwata and K. Irikura (2003). Source characterization for broadband ground-motion simulation: Kinematic heterogeneous source model and strong motion generation area. Bull. Seism. Soc. Am., 93, 2531-2545.] The parameters are obtained by acceleration envelope and displacement waveform inversion for the 2004 and 1986 events and MSK intensity pattern inversion for the 1977 event using a genetic algorithm. The strong motion recordings of the analyzed Vrancea earthquakes as well as the MSK intensity pattern of the 1977 earthquake can be well reproduced using relatively small strong motion
NASA Astrophysics Data System (ADS)
So, E.
2010-12-01
Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Increasing our understanding of what contributes to casualties in earthquakes involves coordinated data-gathering efforts amongst disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What's more, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise, including engineering, public health and medicine. Research is needed to find consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need it. Coupling the theories and findings from
Forecast model for great earthquakes at the Nankai Trough subduction zone
Stuart, W.D.
1988-01-01
An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.
Interoccurrence time statistics in the two-dimensional Burridge-Knopoff earthquake model
Hasumi, Tomohiro
2007-08-15
We have numerically investigated the statistical properties of the so-called interoccurrence time or waiting time, i.e., the time interval between successive earthquakes, based on the two-dimensional (2D) spring-block (Burridge-Knopoff) model, selecting the velocity-weakening property as the constitutive friction law. The statistical properties of the frequency distribution and the cumulative distribution of the interoccurrence time are discussed by tuning the dynamical parameters, namely the stiffness and frictional properties of the fault. We optimize these model parameters to reproduce the interoccurrence time statistics in nature; the frequency and cumulative distributions can be described by a power law and a Zipf-Mandelbrot-type power law, respectively. In the optimal case, the b value of the Gutenberg-Richter law and the ratio of wave propagation velocity are in agreement with those derived from real earthquakes. As the magnitude threshold is increased, the interoccurrence time distribution tends to follow an exponential distribution. Hence it is suggested that a temporal sequence of earthquakes, aside from small-magnitude events, is a Poisson process, as observed in nature. We found that the interoccurrence time statistics derived from the 2D BK (original) model efficiently reproduce those of real earthquakes, so that the model can be regarded as realistic from the viewpoint of interoccurrence time statistics.
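The cumulative (survivor) distribution that both this abstract and the related one above fit with a Zipf-Mandelbrot-type power law is easy to compute empirically. The sketch below builds the empirical survivor function P(T > t) from a sample of interoccurrence times; the sample values are purely illustrative.

```python
def survivor_function(times):
    """Empirical survivor function P(T > t) from a sample of
    interoccurrence times, returned as (t, P(T > t)) pairs sorted by t.
    This is the curve one would compare against a Zipf-Mandelbrot-type
    power law a / (t + c)**b (parameter names here are generic)."""
    xs = sorted(times)
    n = len(xs)
    # After the i-th smallest value, n - i - 1 observations exceed it.
    return [(t, (n - i - 1) / n) for i, t in enumerate(xs)]
```

Fitting the power-law form to these pairs (e.g. by least squares in log-log space) reproduces the analysis described in the abstract at sketch level.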
Neural network models for earthquake magnitude prediction using multiple seismicity indicators.
Panakkat, Ashif; Adeli, Hojjat
2007-02-01
Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracy compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.
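The four evaluation measures named above have standard definitions on a 2x2 forecast contingency table, sketched below. I assume the paper's "true skill score or R score" is the usual Hanssen-Kuipers form POD - POFD; the counts used are illustrative.

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Standard categorical forecast scores from a contingency table:
    probability of detection (POD), false alarm ratio (FAR),
    frequency bias, and true skill score (TSS = POD - POFD)."""
    pod = hits / (hits + misses)                       # fraction of events caught
    far = false_alarms / (hits + false_alarms)         # fraction of alarms that were false
    bias = (hits + false_alarms) / (hits + misses)     # forecast/observed event ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    return {"POD": pod, "FAR": far, "bias": bias, "TSS": pod - pofd}

# Hypothetical monthly forecasts: 8 hits, 2 misses, 4 false alarms,
# 16 correct negatives.
scores = skill_scores(8, 2, 4, 16)
```

A TSS of 0 means no skill over random forecasting; 1 is a perfect forecast.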
Steady-state statistical mechanics of model and real earthquakes (Invited)
NASA Astrophysics Data System (ADS)
Main, I. G.; Naylor, M.
2010-12-01
We derive an analytical expression for entropy production in earthquake populations based on Dewar’s formulation, including flux (tectonic forcing) and source (earthquake population) terms, and apply it to the Olami-Feder-Christensen (OFC) numerical model for earthquake dynamics. Assuming the commonly-observed power-law rheology between driving stress and remote strain rate, we test the hypothesis that maximum entropy production (MEP) is a thermodynamic driver for self-organized ‘criticality’ (SOC) in the model. MEP occurs when the global elastic strain is near, but strictly sub-critical, with small relative fluctuations in macroscopic strain energy expressed by a low seismic efficiency, and broad-bandwidth power-law scaling of frequency and rupture area. These phenomena, all as observed in natural earthquake populations, are hallmarks of the broad conceptual definition of SOC, which to date has often in practice included self-organizing systems in a near but strictly sub-critical state. In contrast the precise critical point represents a state of minimum entropy production in the model. In the MEP state the strain field retains some memory of past events, expressed as coherent ‘domains’, implying a degree of predictability, albeit strongly limited in practice by the proximity to criticality, our inability to map the stress field at an equivalent resolution to the numerical model, and finite temporal sampling effects in real data.
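A basic building block of the entropy accounting described above is the Shannon entropy of a discretized energy distribution. The one-liner below shows only that building block; the paper's flux/source decomposition of entropy production is not reproduced, and the probabilities are illustrative.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy S = -sum_i p_i ln p_i (in nats) of a normalized
    probability distribution, e.g. binned strain or radiated energies.
    Zero-probability bins contribute nothing by convention."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)
```

For a uniform two-bin distribution S = ln 2, the maximum-disorder case; any concentration of probability lowers S.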
Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand
NASA Astrophysics Data System (ADS)
Francois-Holden, C.; Zhao, J.
2012-12-01
The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington City to become more resilient, through an encompassing study of the likelihood of large earthquakes, and of the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate-boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we consider a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations of large historical subduction events worldwide translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan, earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground
A model for earthquakes near Palisades Reservoir, southeast Idaho
Schleicher, David
1975-01-01
The Palisades Reservoir seems to be triggering earthquakes: epicenters are concentrated near the reservoir, and quakes are concentrated in spring, when the reservoir level is highest or is rising most rapidly, and in fall, when the level is lowest. Both spring and fall quakes appear to be triggered by minor local stresses superposed on regional tectonic stresses; faulting is postulated to occur when the effective normal stress across a fault is decreased by a local increase in pore-fluid pressure. The spring quakes tend to occur when the reservoir level suddenly rises: increased pore pressure pushes apart the walls of the graben flooded by the reservoir, thus decreasing the effective normal stress across faults in the graben. The fall quakes tend to occur when the reservoir level is lowest: water that gradually infiltrated poorly permeable (fault-gouge?) zones during high reservoir stands is then under anomalously high pressure, which decreases the effective normal stress across faults in the poorly permeable zones.
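The triggering mechanism described above (pore pressure lowering effective normal stress) is the classic Mohr-Coulomb criterion with effective stress. The sketch below shows that criterion; the friction coefficient, cohesion, and stress values are illustrative assumptions, not values from the Palisades study.

```python
def coulomb_failure(shear_stress, normal_stress, pore_pressure,
                    mu=0.6, cohesion=0.0):
    """Mohr-Coulomb failure check with effective normal stress
    sigma_eff = sigma_n - p (all stresses in the same units, e.g. bar).
    Raising pore pressure p lowers the frictional resistance
    mu * sigma_eff, which is the reservoir-triggering mechanism
    invoked in the abstract. mu and cohesion are illustrative."""
    strength = cohesion + mu * (normal_stress - pore_pressure)
    return shear_stress >= strength
```

With a shear stress of 50 and a normal stress of 100, a fault that is stable at zero pore pressure fails once pore pressure reaches about 17 (since 0.6 x (100 - 17) is approximately 50), mirroring the spring high-stand and fall infiltration cases.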
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change point analysis in ETAS-transformed time with methods already developed for Poisson processes.
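The temporal ETAS model invoked above has a standard conditional intensity: a background rate plus Omori-law aftershock terms triggered by each past event (Ogata, 1988). The sketch below implements that standard form; the parameter values in the usage example are illustrative, not the paper's fitted values.

```python
import math

def etas_rate(t, history, mu, K, c, alpha, p, m0):
    """Conditional intensity of the temporal ETAS model,
    lambda(t) = mu + sum_i K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p,
    summing over past events (t_i, m_i) with t_i < t. mu is the
    background rate; the Omori-type kernel gives each event an
    aftershock rate that decays as a power law in time."""
    return mu + sum(
        K * math.exp(alpha * (m - m0)) / (t - ti + c) ** p
        for ti, m in history
        if ti < t
    )

# Illustrative parameters: one M4 event at t = 5 elevates the rate at t = 10.
params = dict(mu=0.5, K=0.1, c=0.01, alpha=1.0, p=1.1, m0=3.0)
quiet = etas_rate(10.0, [], **params)
after = etas_rate(10.0, [(5.0, 4.0)], **params)
```

Simulating from this intensity (e.g. by thinning) is how the rate uncertainties described in the abstract would be propagated.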
Viscoelastic shear zone model of a strike-slip earthquake cycle
Pollitz, F.F.
2001-01-01
I examine the behavior of a two-dimensional (2-D) strike-slip fault system embedded in a 1-D elastic layer (schizosphere) overlying a uniform viscoelastic half-space (plastosphere) and within the boundaries of a finite width shear zone. The viscoelastic coupling model of Savage and Prescott [1978] considers the viscoelastic response of this system, in the absence of the shear zone boundaries, to an earthquake occurring within the upper elastic layer, steady slip beneath a prescribed depth, and the superposition of the responses of multiple earthquakes with characteristic slip occurring at regular intervals. So formulated, the viscoelastic coupling model predicts that sufficiently long after initiation of the system, (1) average fault-parallel velocity at any point is the average slip rate of that side of the fault and (2) far-field velocities equal the same constant rate. Because of the sensitivity to the mechanical properties of the schizosphere-plastosphere system (i.e., elastic layer thickness, plastosphere viscosity), this model has been used to infer such properties from measurements of interseismic velocity. Such inferences exploit the predicted behavior at a known time within the earthquake cycle. By modifying the viscoelastic coupling model to satisfy the additional constraint that the absolute velocity at prescribed shear zone boundaries is constant, I find that even though the time-averaged behavior remains the same, the spatiotemporal pattern of surface deformation (particularly its temporal variation within an earthquake cycle) is markedly different from that predicted by the conventional viscoelastic coupling model. These differences are magnified as plastosphere viscosity is reduced or as the recurrence interval of periodic earthquakes is lengthened. Application to the interseismic velocity field along the Mojave section of the San Andreas fault suggests that the region behaves mechanically like a ~600-km-wide shear zone accommodating 50 mm/yr fault
A friction to flow constitutive law and its application to a 2-D modeling of earthquakes
NASA Astrophysics Data System (ADS)
Shimamoto, Toshihiko; Noda, Hiroyuki
2014-11-01
Establishment of a constitutive law from friction to high-temperature plastic flow has long been a challenging task for solving problems such as modeling earthquakes and plate interactions. Here we propose an empirical constitutive law that describes this transitional behavior using only friction and flow parameters, with good agreements with experimental data on halite shear zones. The law predicts steady state and transient behaviors, including the dependence of the shear resistance of fault on slip rate, effective normal stress, and temperature. It also predicts a change in velocity weakening to velocity strengthening with increasing temperature, similar to the changes recognized for quartz and granite gouge under hydrothermal conditions. A slight deviation from the steady state friction law due to the involvement of plastic deformation can cause a large change in the velocity dependence. We solved seismic cycles of a fault across the lithosphere with the law using a 2-D spectral boundary integral equation method, revealing dynamic rupture extending into the aseismic zone and rich evolution of interseismic creep including slow slip prior to earthquakes. Seismic slip followed by creep is consistent with natural pseudotachylytes overprinted with mylonitic deformation. Overall fault behaviors during earthquake cycles are insensitive to transient flow parameters. The friction-to-flow law merges "Christmas tree" strength profiles of the lithosphere and rate dependency fault models used for earthquake modeling on a unified basis. Strength profiles were drawn assuming a strain rate for the flow regime, but we emphasize that stress distribution evolves reflecting the fault behavior. A fault zone model was updated based on the earthquake modeling.
NASA Astrophysics Data System (ADS)
Ke, M. C.
2015-12-01
Large-scale earthquakes often cause serious economic losses and many deaths. Because the magnitude, time, and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operations are essential for reducing earthquake damage. To understand the disaster risk posed by earthquakes, earthquake-simulation technology is commonly used to build earthquake scenarios, with point sources, fault-line sources, and fault-plane sources as the usual seismic source models. These models have served earthquake risk assessment and emergency operations well, but the accuracy of the assessment results can still be improved. This program invites experts and scholars from Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records together with geological and geophysical data to build three-dimensional models of the subsurface planes of active faults. The purpose is to replace projected fault planes with subsurface fault planes that are closer to the truth, so that the accuracy of earthquake-prevention analyses can be upgraded with this database. The three-dimensional data will then be applied at different stages of disaster prevention. Before a disaster, earthquake risk analyses based on the three-dimensional fault-plane data come closer to the real damage. During a disaster, the three-dimensional fault-plane data can help infer the aftershock distribution and the areas of serious damage. In 2015 the program used 14 geological profiles to build three-dimensional data for the Hsinchu and Hsincheng faults. Other active faults will be completed in 2018 and applied to earthquake disaster prevention.
NASA Astrophysics Data System (ADS)
Larroque, C.; Scotti, O.; Ioualalen, M.; Hassoun, V.; Migeon, S.
2012-04-01
Early in the morning of February 23, 1887, a major damaging earthquake hit the towns along the Italian and French Riviera. The earthquake was followed by a tsunami wave with a maximum runup of 2 m near Imperia. At least 600 people died, mainly due to building collapses. The "Ligurian earthquake" occurred at the junction between the southern French-Italian Alps and the Ligurian Basin in the western Mediterranean. For such a historical event, the epicentre and the equivalent magnitude are difficult to characterize with a high degree of precision, and the tectonic fault responsible for the earthquake is still debated today. The recent MALISAR marine geophysical survey allowed the identification of a set of N60-70°E recent scarps at the foot of the northern Ligurian margin, revealing a large system of active faults. The scarps correspond to cumulative reverse faulting, with a minor strike-slip component, consistent with the present-day kinematics of earthquakes. Since we have also identified submarine failures in the time range of the Ligurian earthquake, we addressed the question of a submarine slide-induced tsunami. However, the maximum volume involved in these submarine slides was in the range of 0.005 km3, which appears too small to trigger a tsunami with the observed extent and run-up characteristics. Therefore, we propose that the rupture of fault segments belonging to the 80-km-long northern Ligurian fault system is the source of the 1887 Ligurian earthquake. We investigate the macroseismic data from the historical databases SISFRANCE-08 and DBMI-04 using several models of intensity attenuation with distance and focal depth. Modelling results are consistent with an offshore location, indicating an epicentre around 43.70°-43.78°N and 7.81°-8.07°E with a magnitude Mw in the range 6.3-7.5. A refinement of this magnitude range is discussed in the light of the tsunami modelling. Numerous earthquake source scenarios have been tested with
3D Spontaneous Rupture Models of Large Earthquakes on the Hayward Fault, California
NASA Astrophysics Data System (ADS)
Barall, M.; Harris, R. A.; Simpson, R. W.
2008-12-01
We are constructing 3D spontaneous rupture computer simulations of large earthquakes on the Hayward and central Calaveras faults. The Hayward fault has a geologic history of producing many large earthquakes (Lienkaemper and Williams, 2007), with its most recent large event a M6.8 earthquake in 1868. Future large earthquakes on the Hayward fault are not only possible, but probable (WGCEP, 2008). Our numerical simulation efforts use information about the complex 3D fault geometry of the Hayward and Calaveras faults and information about the geology and physical properties of the rocks that surround the Hayward and Calaveras faults (Graymer et al., 2005). Initial stresses on the fault surface are inferred from geodetic observations (Schmidt et al., 2005), seismological studies (Hardebeck and Aron, 2008), and from rate-and-state simulations of the interseismic interval (Stuart et al., 2008). In addition, friction properties on the fault surface are inferred from laboratory measurements of adjacent rock types (Morrow et al., 2008). We incorporate these details into forward 3D computer simulations of dynamic rupture propagation, using the FaultMod finite-element code (Barall, 2008). The 3D fault geometry is constructed using a mesh-morphing technique, which starts with a vertical planar fault and then distorts the entire mesh to produce the desired fault geometry. We also employ a grid-doubling technique to create a variable-resolution mesh, with the smallest elements located in a thin layer surrounding the fault surface, which provides the higher resolution needed to model the frictional behavior of the fault. Our goals are to constrain estimates of the lateral and depth extent of future large Hayward earthquakes, and to explore how the behavior of large earthquakes may be affected by interseismic stress accumulation and aseismic slip.
Thurber, C.; Zhang, H.; Waldhauser, F.; Hardebeck, J.; Michael, A.; Eberhart-Phillips, D.
2006-01-01
We present a new three-dimensional (3D) compressional wavespeed (Vp) model for the Parkfield region, taking advantage of the recent seismicity associated with the 2003 San Simeon and 2004 Parkfield earthquake sequences to provide increased model resolution compared to the work of Eberhart-Phillips and Michael (1993) (EPM93). Taking the EPM93 3D model as our starting model, we invert the arrival-time data from about 2100 earthquakes and 250 shots recorded on both permanent network and temporary stations in a region 130 km northeast-southwest by 120 km northwest-southeast. We include catalog picks and cross-correlation and catalog differential times in the inversion, using the double-difference tomography method of Zhang and Thurber (2003). The principal Vp features reported by EPM93 and Michelini and McEvilly (1991) are recovered, but with locally improved resolution along the San Andreas Fault (SAF) and near the active-source profiles. We image the previously identified strong wavespeed contrast (faster on the southwest side) across most of the length of the SAF, and we also improve the image of a high Vp body on the northeast side of the fault reported by EPM93. This narrow body is at about 5- to 12-km depth and extends approximately from the locked section of the SAF to the town of Parkfield. The footwall of the thrust fault responsible for the 1983 Coalinga earthquake is imaged as a northeast-dipping high wavespeed body. In between, relatively low wavespeeds (<5 km/sec) extend to as much as 10-km depth. We use this model to derive absolute locations for about 16,000 earthquakes from 1966 to 2005 and high-precision double-difference locations for 9,000 earthquakes from 1984 to 2005, and also to determine focal mechanisms for 446 earthquakes. These earthquake locations and mechanisms show that the seismogenic fault is a simple planar structure. The aftershock sequence of the 2004 mainshock concentrates into the same structures defined by the pre-2004 seismicity
NASA Astrophysics Data System (ADS)
Kawada, Y.; Nagahama, H.; Omori, Y.; Yasuoka, Y.; Shinogi, M.
2006-12-01
Large earthquakes are often preceded by accelerated moment release, defined by the rate of cumulative Benioff strain following a power-law time-to-failure relation. This temporal seismicity pattern is investigated in terms of an irreversible thermodynamic model. The model is governed by the Helmholtz free energy defined by the macroscopic stress-strain relation and internal state variables (generalized coordinates). Damage and damage evolution are represented by the internal state variables. In this framework, each of the huge number of internal state variables has its own specific relaxation time, while the set of their time evolutions shows a temporal power-law behavior. The irreversible thermodynamic model reduces to a fiber-bundle model and an experimentally based constitutive law of rocks, and predicts the form of accelerated moment release. Based on the model, we can also discuss the increase in atmospheric radon concentration prior to the 1995 Kobe earthquake.
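The power-law time-to-failure behavior described above can be sketched numerically. The following is a hedged illustration, not the authors' formulation: it computes per-event Benioff strain from magnitudes via the Gutenberg-Richter energy relation and evaluates the commonly used cumulative-strain form eps(t) = A + B(tf - t)^m with B < 0 and 0 < m < 1. All parameter values and function names are placeholders of my own.

```python
import math

def benioff_strain(magnitude):
    # Benioff strain of one event: square root of radiated energy, using
    # the Gutenberg-Richter energy relation log10 E = 1.5 M + 4.8 (E in J).
    return math.sqrt(10.0 ** (1.5 * magnitude + 4.8))

def time_to_failure(t, A, B, tf, m):
    # Power-law time-to-failure form for the cumulative Benioff strain:
    # eps(t) = A + B * (tf - t)**m, with B < 0 and 0 < m < 1, so the
    # strain-release rate accelerates as t approaches the failure time tf.
    return A + B * (tf - t) ** m
```

With B < 0 the curve rises ever more steeply toward tf, which is the signature of accelerated moment release; fitting A, B, tf, and m to an observed cumulative-strain series is how the failure time is usually estimated in retrospective studies.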
Visualizing the 2009 Samoan and Sumatran Earthquakes using Google Earth-based COLLADA models
NASA Astrophysics Data System (ADS)
de Paor, D. G.; Brooks, W. D.; Dordevic, M.; Ranasinghe, N. R.; Wild, S. C.
2009-12-01
Earthquake hazards are generally analyzed by a combination of graphical focal mechanism or centroid moment tensor solutions (aka geophysical beach balls), contoured fault plane maps, and shake maps or tsunami damage surveys. In regions of complex micro-plate tectonics, it can be difficult to visualize spatial and temporal relations among earthquakes, aftershocks, and associated tectonic and volcanic structures using two-dimensional maps and cross sections alone. Developing the techniques originally described by D.G. De Paor & N.R. Williams (EOS Trans. AGU S53E-05, 2006), we can view the plate tectonic setting, geophysical parameters, and societal consequences of the 2009 Samoan and Sumatran earthquakes on the Google Earth virtual globe. We use XML-based COLLADA models to represent the subsurface structure and standard KML to overlay map data on the digital terrain model. Unlike traditional geophysical beach ball figures, our models are three-dimensional and located at correct depth, and they optionally show nodal planes, which are useful in relating the orientation of one earthquake to the hypocenters of its neighbors. With the aid of the new Google Earth application program interface (GE API), we can use web page-based JavaScript controls to lift structural models from the subsurface in Google Earth and generate serial sections along strike. Finally, we use the built-in features of the Google Earth web browser plug-in to create a virtual tour of damage sites with hyperlinks to web-based field reports. These virtual globe visualizations may help complement existing KML and HTML resources of the USGS Earthquake Hazards Program and The Global CMT Project.
NASA Astrophysics Data System (ADS)
Dempsey, David; Suckale, Jenny
2016-05-01
Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method, which models a landslide as a rigid-plastic block sliding on an inclined plane, provides a useful way of predicting approximate landslide displacements. It estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling of landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate both rigorous and simplified Newmark sliding-block analysis as well as a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included, along with a search interface for selecting records based on a wide variety of record properties. Utilities allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. The program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OS X, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
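The rigorous rigid-block calculation that the report's Java programs implement can be sketched in a few lines. This is a minimal illustration under my own naming and unit conventions, not the report's code: the block slides whenever ground acceleration exceeds the critical (yield) acceleration, the relative velocity is integrated while sliding, and sliding stops when that velocity returns to zero.

```python
def newmark_displacement(accel, dt, a_crit):
    """Rigid-block Newmark displacement for one acceleration-time history.

    accel  -- ground acceleration samples (m/s^2), downslope positive
    dt     -- sample interval (s)
    a_crit -- critical (yield) acceleration of the block (m/s^2)
    Returns the cumulative downslope displacement of the block (m).
    """
    vel = 0.0   # velocity of the block relative to the ground
    disp = 0.0
    for a in accel:
        if vel > 0.0 or a > a_crit:
            # Sliding: the block accelerates relative to the ground at
            # (a - a_crit); sliding stops once the relative velocity
            # drops back to zero.
            vel = max(vel + (a - a_crit) * dt, 0.0)
            disp += vel * dt
    return disp
```

A record whose peak acceleration never exceeds a_crit produces zero displacement, which is why the critical acceleration (a function of slope geometry and strength) controls whether a given strong-motion record is damaging for a given slope.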
Investigations of Earthquake Source Processes Based on Fault Models with Variable Friction Rheology
NASA Astrophysics Data System (ADS)
Kaneko, Yoshihiro
Ample experimental and observational evidence suggests that friction properties on natural faults vary spatially. In the lab, rock friction depends on temperature and confining pressure and it can be either velocity weakening or velocity strengthening, leading to either unstable or stable slip. Such variations in friction rheology can explain patterns of seismic and aseismic fault slip inferred from field observations. This thesis studies earthquake source processes using models with relatively simple but conceptually important patterns of velocity-weakening and velocity-strengthening friction that can arise on natural faults. Based on numerical and analytical modeling, we explore the consequences of such patterns for earthquake sequences, interseismic coupling, earthquake nucleation processes, aftershock occurrence, peak ground motion in the vicinity of active faults, and seismic slip budget at shallow depths. The velocity-dependence of friction is embedded into the framework of logarithmic rate and state friction laws. In addition to using existing boundary integral methods, which are accurate and efficient in simulating slip on planar faults embedded in homogeneous elastic media, the thesis develops spectral element methods to consider single dynamic ruptures and long-term histories of seismic and aseismic slip in models with layered bulk properties. The results of this thesis help to understand a number of observed fault slip phenomena, such as variability in earthquake patterns and its relation to interseismic coupling, seismic quiescence following decay of aftershocks at inferred rheological transitions, instances of poor correlation between static stress changes and aftershock occurrence, the lack of universally observed supershear rupture near the free surface, and coseismic slip deficit of large strike-slip earthquakes at shallow depths. The models, approaches, and numerical methods developed in the thesis motivate and enable consideration of many other
Viscoelastic-coupling model for the earthquake cycle driven from below
Savage, J.C.
2000-01-01
In a linear system the earthquake cycle can be represented as the sum of a solution which reproduces the earthquake cycle itself (viscoelastic-coupling model) and a solution that provides the driving force. We consider two cases, one in which the earthquake cycle is driven by stresses transmitted along the schizosphere and a second in which the cycle is driven from below by stresses transmitted along the upper mantle (i.e., the schizosphere and upper mantle, respectively, act as stress guides in the lithosphere). In both cases the driving stress is attributed to steady motion of the stress guide, and the upper crust is assumed to be elastic. The surface deformation that accumulates during the interseismic interval depends solely upon the earthquake-cycle solution (viscoelastic-coupling model) not upon the driving source solution. Thus geodetic observations of interseismic deformation are insensitive to the source of the driving forces in a linear system. In particular, the suggestion of Bourne et al. [1998] that the deformation that accumulates across a transform fault system in the interseismic interval is a replica of the deformation that accumulates in the upper mantle during the same interval does not appear to be correct for linear systems.
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
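The single-scenario, single-GMPM conditional spectrum calculation that the paper generalizes can be sketched for one period. This is a hedged illustration with my own function name: it returns the conditional ln-mean and ln-standard deviation of spectral acceleration at a period Ti, given that ln Sa at the conditioning period T* lies eps_star standard deviations above its predicted mean. The paper's contribution, the exact treatment of multiple causal earthquakes and multiple GMPMs, is not reproduced here.

```python
import math

def conditional_spectrum_point(mu_ln_sa, sigma_ln_sa, rho, eps_star):
    """Conditional mean and standard deviation of ln Sa(Ti), given that
    ln Sa(T*) is eps_star standard deviations above its GMPM mean.

    mu_ln_sa, sigma_ln_sa -- GMPM ln-mean and ln-std of Sa at period Ti
    rho                   -- correlation of epsilon between Ti and T*
    """
    mu_cond = mu_ln_sa + rho * eps_star * sigma_ln_sa
    sigma_cond = sigma_ln_sa * math.sqrt(1.0 - rho ** 2)
    return mu_cond, sigma_cond
```

At the conditioning period itself (rho = 1) the conditional standard deviation collapses to zero, while at weakly correlated periods (rho near 0) the target spectrum retains nearly the full GMPM variability; this width is what distinguishes the CS from a conditional mean spectrum alone.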
Non-conservative evolution in Algols: where is the matter?
NASA Astrophysics Data System (ADS)
Deschamps, R.; Braun, K.; Jorissen, A.; Siess, L.; Baes, M.; Camps, P.
2015-05-01
Context. There is indirect evidence of non-conservative evolutions in Algols. However, the systemic mass-loss rate is poorly constrained by observations and generally set as a free parameter in binary-star evolution simulations. Moreover, systemic mass loss may lead to observational signatures that still need to be found. Aims: Within the "hotspot" ejection mechanism, some of the material that is initially transferred from the companion star via an accretion stream is expelled from the system due to the radiative energy released on the gainer's surface by the impacting material. The objective of this paper is to retrieve observable quantities from this process and to compare them with observations. Methods: We investigate the impact of the outflowing gas and the possible presence of dust grains on the spectral energy distribution (SED). We used the 1D plasma code Cloudy and compared the results with the 3D Monte-Carlo radiative transfer code Skirt for dusty simulations. The circumbinary mass-distribution and binary parameters were computed with state-of-the-art binary calculations done with the Binstar evolution code. Results: The outflowing material reduces the continuum flux level of the stellar SED in the optical and UV. Because of the time-dependence of this effect, it may help to distinguish between different ejection mechanisms. If present, dust leads to observable infrared excesses, even with low dust-to-gas ratios, and traces the cold material at large distances from the star. By searching for this dust emission in the WISE catalogue, we found a small number of Algols showing infrared excesses, among which the two rather surprising objects SX Aur and CZ Vel. We find that some binary B[e] stars show the same strong Balmer continuum as we predict with our models. However, direct evidence of systemic mass loss is probably not observable in genuine Algols, since these systems no longer eject mass through the hotspot mechanism. Furthermore, owing to its high
Source Model from ALOS-2 ScanSAR of the 2015 Nepal Earthquakes
NASA Astrophysics Data System (ADS)
Liu, Youtian; Ge, Linlin; Ng, Alex Hay-Man
2016-06-01
The 2015 Gorkha, Nepal earthquake sequence started with a magnitude Mw 7.8 main shock and continued with several large aftershocks, most notably a second major shock of Mw 7.3. Both earthquake events were captured with ALOS-2 ScanSAR images to determine the coseismic surface deformation and the source models. In this paper, the displacement maps are presented and the corresponding modelling results discussed. The single-fault model of the main shock suggests nearly 6 m of right-lateral oblique slip on a fault striking 292° and dipping gently northeast at 7°, indicating that the main shock occurred on a thrust fault. A single-fault model for the Mw 7.3 event, with a strike of 312° and a dip of 11°, was likewise derived from the observations. Both models show fault planes with similar strikes and gentle northeast dips, underlining the seismic risk that remains after the main shock.
REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION
Walter, W R; Pasyanos, M E; Matzel, E; Gok, R; Sweeney, J; Ford, S R; Rodgers, A J
2008-07-08
We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling (e.g. Ford et al., 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing seismic
NASA Astrophysics Data System (ADS)
Beyreuther, Moritz; Carniel, Roberto; Wassermann, Joachim
2008-10-01
A possible interaction of (volcano-)tectonic earthquakes with the continuous seismic noise recorded on the volcanic island of Tenerife was recently suggested. Also, the zone close to the Las Canadas caldera has recently shown an unusually high number of near (<25 km), possibly volcano-tectonic, earthquakes, indicating signs of reawakening of the volcano and putting high pressure on risk analysts. For both tasks, consistent earthquake catalogues provide valuable information, so there is a strong demand for automatic detection and classification methodologies that generate such catalogues. We therefore adopt methodologies from speech recognition, where statistical models called Hidden Markov Models (HMMs) are widely used for spotting words in continuous audio data. In this study, HMMs are used to detect and classify volcano-tectonic and/or tectonic earthquakes in continuous seismic data. The HMM detection and classification is then evaluated and discussed for a one-month period of continuous seismic data at a single seismic station. Being a stochastic process, an HMM makes it possible to attach a confidence measure to each classification, essentially evaluating how "sure" the algorithm is when classifying a certain earthquake; this provides helpful information for the seismological analyst when cataloguing earthquakes. Combined with the confidence measure, the HMM detection and classification can provide earthquake statistics precise enough both for further evidence on the interaction between seismic noise and (volcano-)tectonic earthquakes and for incorporation into an automatic early warning system.
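The core of HMM-based classification, finding the most likely hidden-state path through a sequence of observations, can be illustrated with a toy Viterbi decoder. This is only a hedged stand-in for the approach described above: a real system uses continuous seismic features and HMMs trained on reference events, whereas here the states, symbols, and probability tables are invented for demonstration.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path of a discrete-observation HMM,
    computed in the log domain for numerical stability."""
    # Initial column of log probabilities, one entry per state.
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s given observation o.
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s] * emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

With "sticky" transition probabilities, the decoder segments a symbol stream into contiguous noise and earthquake intervals rather than flipping state on every noisy sample, which is exactly the behavior wanted when spotting events in continuous records.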
Real time forecasts through physical and stochastic models of earthquake clustering
NASA Astrophysics Data System (ADS)
Murru, M.; Console, R.; Catalli, F.; Falcone, G.
2005-12-01
The phenomenon of earthquake interaction has become a popular subject of study because it can shed light on the physical processes leading to earthquakes, and because it has potential value for short-term earthquake forecasting and hazard mitigation. In this study we start from a purely stochastic approach, the epidemic-type aftershock sequence (ETAS) model introduced by Ogata in 1988, and its variations. We then build an approach in which this model and the rate-and-state constitutive law introduced by Dieterich in the 1990s are merged into a single algorithm and statistically tested. Tests on real seismicity, and comparison with a plain time-independent Poissonian model through likelihood-based methods, have reliably demonstrated their validity. The models are suitable for real-time forecasting of seismic activity. For the low-magnitude Italian seismicity recorded from 1987 to 2005, the new model incorporating the physical concept of rate-and-state theory performs no better than the purely stochastic model. Nevertheless, it has the advantage of needing fewer free parameters and provides new insights into the physics of the seismogenic process.
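The ETAS conditional intensity at the heart of the stochastic approach can be sketched directly. This is an illustrative implementation of the standard ETAS form, not the authors' calibrated algorithm: the rate at time t is a constant background plus an Omori-Utsu aftershock term for every prior event, and all parameter values below are placeholders rather than fitted ones.

```python
import math

def etas_rate(t, mu, events, K=0.05, c=0.01, alpha=1.2, p=1.1, m0=2.0):
    """ETAS conditional intensity:
        lambda(t) = mu + sum_i K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p
    where events is a list of (t_i, m_i) pairs with t_i the origin time and
    m_i the magnitude of each prior earthquake, and m0 a reference magnitude.
    """
    rate = mu  # time-independent background seismicity rate
    for t_i, m_i in events:
        if t_i < t:
            # Omori-Utsu decay, scaled by a productivity term that grows
            # exponentially with the triggering event's magnitude.
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate
```

In a real-time forecast, this intensity is re-evaluated as new events enter the catalogue; the physical variant discussed above replaces the empirical Omori-Utsu kernel with a rate-and-state response to the stress change of each event.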
Evaluation of CAMEL - comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
A new comprehensive areal model of earthquake-induced landslides (CAMEL) has been developed to assist in planning decisions related to disaster risk reduction. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using fuzzy logic systems and geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL has been empirically evaluated with respect to disrupted landslides (Category I) using a case study of the 1989 M = 6.9 Loma Prieta, CA earthquake. In this case, CAMEL performs best in comparison to disrupted slides and falls in soil. For disrupted rock falls and slides, CAMEL's performance was slightly poorer. The model predicted a low occurrence of rock avalanches, when none in fact occurred. A similar comparison with the Loma Prieta case study was also conducted using a simplified Newmark displacement model. The area-under-the-curve method of evaluation was used to draw comparisons between both models, revealing improved performance with CAMEL. CAMEL should not, however, be viewed as a strict alternative to Newmark displacement models. CAMEL can be used to integrate Newmark displacements with other, previously incompatible, types of knowledge. © 2008 Elsevier B.V.
Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions
NASA Astrophysics Data System (ADS)
Kim, A.; Dreger, D.; Larsen, S.
2008-12-01
We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and the detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects finite fault inverse solutions. Various studies (e.g., Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. The fault zone at Parkfield is also wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta earthquake, using the same source parameterization and data but different Green's functions, and found that the models were quite different. This indicates that the choice of velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions. In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0
NASA Astrophysics Data System (ADS)
Shibazaki, B.; Ito, Y.; Ujiie, K.
2010-12-01
Recent observations reveal that very-low-frequency (VLF) earthquakes occur in the shallow subduction zones of the Nankai trough, Hyuganada, and off the coast of Tokachi, Japan (Obara and Ito, 2005; Asano et al., 2008; Obana and Kodaira, 2009). The ongoing scientific drilling project, the Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE), involves sampling the cores of seismogenic faults and conducting analyses, experiments, and in-situ borehole measurements at the Nankai trough where VLF earthquakes occur. The data obtained in this project will be used to develop a model of VLF earthquakes that integrates seismological observations, laboratory experimental results, and geological observations. In the present study, we first perform 2D quasi-dynamic modeling of VLF earthquakes in an elastic half-space on the basis of a rate- and state-dependent friction law. We set a local unstable zone within a shallow stable zone. To explain the very low stress drops and short recurrence intervals of VLF earthquakes, the effective stress is assumed to be around 0.2 MPa. The results indicate that VLF earthquakes are unstable slips that occur under high pore pressure conditions. The probable causes of the high pore pressure along the faults of VLF earthquakes are sediment compaction and the dehydration that occurs during the smectite-to-illite transition in the shallow subduction zone. We then model the generation process of VLF earthquakes by considering splay faults and the occurrence of large subduction earthquakes. We set local unstable zones with high pore pressure in the stable splay fault zones. We assume a long-term average slip velocity on the splay faults, with shear stress accumulating as fault slip lags behind this long-term motion. Depending on the frictional properties of the shallow splay faults, two types of VLF earthquakes can occur. When the effective stress is low all over the splay faults, the rupture of large earthquakes propagates to the
Source model and ground shaking of the 2015 Gorkha, Nepal Mw7.8 earthquake
NASA Astrophysics Data System (ADS)
Wei, S.; Wang, T.; Lindsey, E. O.; Avouac, J. P.; Graves, R. W.; Hubbard, J.; Hill, E.; Barbot, S.; Tapponnier, P.; Karakas, C.; Helmberger, D. V.
2015-12-01
The 2015 Mw7.8 Gorkha, Nepal earthquake ruptured a previously locked portion of the Main Himalayan Thrust fault (MHT) that had not slipped in a large event since 1833 (Mw7.6). The earthquake was well recorded by geodetic (SAR, InSAR and GPS) and seismic instruments. In particular, high-rate (5 Hz) GPS channels provide waveform records at local distances, with three stations located directly above the major asperities of the earthquake. Here we derive a kinematic rupture model of the earthquake by jointly inverting the seismic and geodetic data, using a Kostrov source time function with variable rise times. Our inversion result indicates that the earthquake had a weak initiation and ruptured unilaterally along strike towards the ESE, with an average rupture speed of 3.0 km/s and a total duration of ~60 s. In the preferred model, slip in the beginning portion of the rupture had longer rise times than in the strongest ruptures, which took place at ~22 s and ~35 s after the origin, located 30 km to the northwest and 20 km to the east of the Kathmandu valley, respectively. The horizontal vibration and amplification of ground shaking in the valley were well recorded by one of the GPS stations (NAST) and the accelerometric station (KANTP), with a dominant frequency of 0.25 Hz. A simplified basin model with a top shear wave speed of 250 m/s and geometry constrained by a previous micro-tremor study can largely explain the amplification and vibration, as realized in 3D staggered-grid finite difference simulations. This study shows that ground shaking can be strongly affected by complexities of the rupture and velocity structure.
Fault modeling of the 2012 Wutai, Taiwan earthquake and its tectonic implications
NASA Astrophysics Data System (ADS)
Chiang, Pan-Hsin; Hsu, Ya-Ju; Chang, Wu-Lung
2016-01-01
The Mw 5.9 Wutai earthquake of 26 February 2012 occurred at a depth of 26 km in southern Taiwan, where the rupture is not related to any known geologic structures. To characterize the rupture source of the mainshock, we employ an elastic half-space model and GPS coseismic displacements to invert for the optimal fault geometry and coseismic slip distribution. With observed coseismic horizontal and vertical displacements both less than 10 mm, our preferred fault model strikes 312° and dips 30° to the northeast, and exhibits reverse slip of 28-112 mm and left-lateral slip of 9-45 mm. The estimated geodetic moment of the Wutai earthquake is 1.3 × 1018 N-m, equivalent to an Mw 6.0 earthquake. The Wutai epicentral area is characterized by NE-SW compression, as evidenced by slaty cleavage orientations and the interpretation of stress tensor inversions of earthquake focal mechanisms. Using the stress drops of the Wutai and the nearby 2010 Mw 6.4 Jiashian earthquakes, we obtain a lower bound of ~0.002 for the coefficient of friction on the fault. On the other hand, the crustal thickness contrast in southern Taiwan provides an upper bound of 1.67 × 1012 N/m on the average horizontal compressive force transmitted through the Taiwan mountain belt, and gives an estimate of the maximum friction coefficient of 0.03. The order-of-magnitude difference between the upper and lower bounds for the coefficient of friction suggests that other fault systems may also support substantial differential stress in the lithosphere.
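The moment-to-magnitude equivalence quoted above can be checked with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.05), with M0 in N-m (a quick sketch; the function name is ours):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori moment magnitude from seismic moment in N-m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.05)

mw = moment_magnitude(1.3e18)   # geodetic moment quoted in the abstract
print(round(mw, 2))             # ≈ 6.04, consistent with "equivalent to an Mw 6.0"
```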
NASA Astrophysics Data System (ADS)
Cavers, M. S.; Vasudevan, K.
2015-10-01
Directed graph representation of a Markov chain model to study global earthquake sequencing leads to a time series of state-to-state transition probabilities that includes the spatio-temporally linked recurrent events in the record-breaking sense. A state refers to a configuration comprised of zones with either the occurrence or non-occurrence of an earthquake in each zone in a pre-determined time interval. Since the time series is derived from non-linear and non-stationary earthquake sequencing, we use known analysis methods to glean new information. We apply decomposition procedures such as ensemble empirical mode decomposition (EEMD) to study the state-to-state fluctuations in each of the intrinsic mode functions, and subject the intrinsic mode functions derived from the time series to a detailed analysis to extract the information content of the time series. We also investigate the influence of random noise on the data-driven state-to-state transition probabilities. We then consider a second aspect of earthquake sequencing that is closely tied to its time-correlative behaviour: we extend the Fano factor and Allan factor analysis to the time series of state-to-state transition frequencies of a Markov chain. Our results support not only the usefulness of the intrinsic mode functions in understanding the time series but also the presence of power-law behaviour, as exemplified by the Fano factor and the Allan factor.
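The Fano and Allan factors used above have simple count-based definitions: the Fano factor is the variance-to-mean ratio of event counts per window, and the Allan factor compares successive window counts. A minimal sketch (function names are ours), with a Poisson control for which both factors are near 1 and clustering pushes them above 1:

```python
import numpy as np

def fano_factor(counts):
    """Fano factor: variance-to-mean ratio of counts per window."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

def allan_factor(counts):
    """Allan factor: mean squared difference of successive window
    counts, normalized by twice the mean count."""
    counts = np.asarray(counts, dtype=float)
    d = np.diff(counts)
    return (d ** 2).mean() / (2.0 * counts.mean())

# For a homogeneous Poisson process both factors are ~1; power-law
# growth with window size is the clustering signature discussed above.
rng = np.random.default_rng(1)
poisson_counts = rng.poisson(lam=5.0, size=10000)
print(fano_factor(poisson_counts), allan_factor(poisson_counts))
```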
Wesson, R.L.
1987-01-01
The San Juan Bautista earthquake of October 3, 1972 (ML = 4.8), located along the San Andreas fault in central California, initiated an aftershock sequence characterized by a subtle, but perceptible, tendency for aftershocks to spread to the northwest and southeast along the fault zone. The apparent dimension of the aftershock zone along strike increased from about 7-10 km within a few days of the earthquake to about 20 km eight months later. In addition, the mainshock initiated a period of accelerated fault creep, which was observed at two creepmeters situated astride the trace of the San Andreas fault within about 15 km of the epicenter of the mainshock. The creep rate gradually returned to the pre-earthquake rate after about 3 years. Both the spreading of the aftershocks and the rapid surface creep are interpreted as reflecting a period of rapid creep in the fault zone, representing the readjustment of stress and displacement following the failure of a "stuck" patch or asperity during the San Juan Bautista earthquake. Numerical calculations suggest that the behavior of the fault zone is consistent with that of a material characterized by a viscosity of about 3.6 × 1014 P, although the real rheology is likely to be more complicated. In this model, the mainshock represents the failure of an asperity that slips only during earthquakes. Aftershocks represent the failure of second-order asperities which are dragged along by the creeping fault zone.
NASA Astrophysics Data System (ADS)
Castaldo, Raffaele; Tizzani, Pietro
2016-04-01
Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquake phenomena. We exploit structural-geological data in a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy), and the Mw 7.8 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse, and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume linear elastic behavior of the involved media under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero). The finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading), the deformation model reaches a stable equilibrium; (ii) the co-seismic stage simulates, through a distributed slip along the active fault, the released stresses. To constrain the model solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old and new generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL-1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models, then we compare these with the recorded ground deformation fields in order to select the best boundary settings and parameters. Finally
The 2015, Mw 6.5, Leucas (Ionian Sea, Greece) earthquake: Seismological and Geodetic Modelling
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Taymaz, Tuncay; Yolsal-Çevikbilen, Seda; Eken, Tuna; Moschas, Fanis; Stiros, Stathis
2016-04-01
A cluster of earthquakes (6
NASA Astrophysics Data System (ADS)
Furumura, Takashi; Imai, Kentaro; Maeda, Takuto
2011-02-01
Based on many recent findings, such as geodetic data from Japan's GEONET nationwide GPS network and geological investigations of the tsunami-inundated Ryujin Lake in Kyushu, we present a revised source rupture model for the great 1707 Hoei earthquake that occurred in the Nankai Trough off southwestern Japan. The source rupture area of the new Hoei earthquake source model extends further, to the Hyuga-nada, more than 70 km beyond the currently accepted location at the westernmost end of Shikoku. Numerical simulation of the tsunami using the new source rupture model more consistently explains the distribution of the very high tsunami observed along the Pacific coast from western Shikoku to Kyushu. A simulation of the tsunami runup into Ryujin Lake using the onshore tsunami estimated by the new model demonstrates the tsunami inundation process: inflow and outflow speeds affect the transport and deposition of sand in the lake and around the channel connecting it to the sea. Tsunamis from the 684 Tenmu, 1361 Shōhei, and 1707 Hoei earthquakes deposited sand in Ryujin Lake and around the channel connecting it to the sea, but lesser tsunamis from other earthquakes were unable to reach Ryujin Lake. This irregular behavior suggests that in addition to the regular Nankai Trough earthquake cycle of 100-150 years, there is a hyperearthquake cycle of 300-500 years. These greater earthquakes produce the largest tsunamis from western Shikoku to Kyushu.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
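A standard deviation quoted in natural-log units maps to a multiplicative one-sigma ground-motion factor of exp(σ); a quick check of the range quoted above:

```python
import math

# sigma in natural-log units -> one-sigma multiplicative factor exp(sigma)
for sigma in (0.5, 0.7, 0.8):
    print(f"sigma = {sigma}: one-sigma factor = {math.exp(sigma):.2f}")
# e.g. sigma = 0.7 means one standard deviation spans roughly a factor of 2
```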
Using GPS to Rapidly Detect and Model Earthquakes and Transient Deformation Events
NASA Astrophysics Data System (ADS)
Crowell, Brendan W.
The rapid modeling and detection of earthquakes and transient deformation is a problem of extreme societal importance for earthquake early warning and rapid hazard response. To date, GPS data is not used in earthquake early warning or rapid source modeling even in Japan or California where the most extensive geophysical networks exist. This dissertation focuses on creating algorithms for automated modeling of earthquakes and transient slip events using GPS data in the western United States and Japan. First, I focus on the creation and use of high-rate GPS and combined seismogeodetic data for applications in earthquake early warning and rapid slip inversions. Leveraging data from earthquakes in Japan and southern California, I demonstrate that an accurate magnitude estimate can be made within seconds using P wave displacement scaling, and that a heterogeneous static slip model can be generated within 2-3 minutes. The preliminary source characterization is sufficiently robust to independently confirm the extent of fault slip used for rapid assessment of strong ground motions and improved tsunami warning in subduction zone environments. Secondly, I investigate the automated detection of transient slow slip events in Cascadia using daily positional estimates from GPS. Proper geodetic characterization of transient deformation is necessary for studies of regional interseismic, coseismic and postseismic tectonics, and miscalculations can affect our understanding of the regional stress field. I utilize the relative strength index (RSI) from financial forecasting to create a complete record of slow slip from continuous GPS stations in the Cascadia subduction zone between 1996 and 2012. I create a complete history of slow slip across the Cascadia subduction zone, fully characterizing the timing, progression, and magnitude of events. Finally, using a combination of continuous and campaign GPS measurements, I characterize the amount of extension, shear and subsidence in the
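The relative strength index borrowed from financial forecasting has a simple definition: average gains over average losses in a sliding window, rescaled to 0-100. A minimal SMA-based sketch (the implementation details, e.g. the 14-sample window, are illustrative assumptions, not necessarily those used in the dissertation):

```python
import numpy as np

def rsi(series, window=14):
    """Simple (SMA-based) relative strength index of a 1-D series.

    Values near 100 indicate a sustained upward move; applied to GPS
    position time series, sustained extremes can flag transient slip.
    """
    series = np.asarray(series, dtype=float)
    delta = np.diff(series)
    gains = np.where(delta > 0, delta, 0.0)
    losses = np.where(delta < 0, -delta, 0.0)
    out = np.full(series.shape, np.nan)   # undefined until window fills
    for i in range(window, len(series)):
        avg_gain = gains[i - window:i].mean()
        avg_loss = losses[i - window:i].mean()
        if avg_loss == 0.0:
            out[i] = 100.0
        else:
            rs = avg_gain / avg_loss
            out[i] = 100.0 - 100.0 / (1.0 + rs)
    return out

# A monotonically increasing position series saturates the index at 100
print(rsi(np.arange(30.0))[-1])
```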
Modeling the 2012 Wharton Basin Earthquakes off-Sumatra; Complete Lithospheric Failure (Invited)
NASA Astrophysics Data System (ADS)
Wei, S.; Helmberger, D. V.; Avouac, J.
2013-12-01
A sequence of large strike-slip earthquakes occurred west of the Sunda Trench beneath the Wharton Basin. First reports indicate that the mainshock was extremely complex, involving three to four subevents (Mw>8) with a maze of aftershocks. We investigate slip models of the two largest earthquakes by joint inversion of regional and teleseismic waveform data. Using the Mw7.2 foreshock, we developed hybrid Green's functions for the regional stations to approximate the mixture of oceanic and continental paths. The mainshock fault geometry is defined based on the back-projection results, point-source mechanisms, aftershock distribution, and fine tuning via grid searches. The fault system contains three faults, labeled F1 (89°/289° for dip/strike), F2 (74°/20°) and F3 (60°/310°). The inversion indicates that the main rupture consisted of a cascade of high stress drop asperities (up to 30 MPa), extending as deep as 50 km. The rupture propagated smoothly from one fault to the next (F1, F2 and F3 in sequence) with rupture velocities of 2.0-2.5 km/s. The whole process lasted about 200 s and the major moment release (>70%) took place on the N-S oriented F2. The Mw8.2 aftershock happened about 2 hrs later on a N-S oriented fault with a relatively short duration (~60 s) and also ruptured as deep as 50 km. The slip distributions suggest that the earthquake sequence was part of a broad left-lateral shear zone between the Australian and Indian plates, and ruptured the whole lithosphere. These earthquakes apparently reactivated existing fracture zones and were probably triggered by unclamping of the great Sumatran earthquake of 2004.
Modeling the 2012 Wharton basin earthquakes off-Sumatra: Complete lithospheric failure
NASA Astrophysics Data System (ADS)
Wei, Shengji; Helmberger, Don; Avouac, Jean-Philippe
2013-07-01
A sequence of large strike-slip earthquakes occurred west of Sunda Trench beneath the Wharton Basin. First reports indicate that the main shock was extremely complex, involving three to four subevents (Mw > 8) with a maze of aftershocks. We investigate slip models of the two largest earthquakes by joint inversion of regional and teleseismic waveform data. Using the Mw7.2 foreshock, we developed hybrid Green's functions for the regional stations to approximate the mixture of oceanic and continental paths. The main shock fault geometry is defined based on the back projection results, point-source mechanisms, aftershock distribution, and fine tuning via grid searches. The fault system contains three faults, labeled F1 (89°/289° for dip/strike), F2 (74°/20°), and F3 (60°/310°). The inversion indicates that the main rupture consisted of a cascade of high-stress drop asperities (up to 30 MPa), extending as deep as 50 km. The rupture propagated smoothly from one fault to the next (F1, F2, and F3 in sequence) with rupture velocities of 2.0-2.5 km/s. The whole process lasted about 200 s, and the major moment release (>70%) took place on the N-S oriented F2. The Mw8.2 aftershock happened about 2 h later on a N-S oriented fault with a relatively short duration (~60 s) and also ruptured as deep as 50 km. The slip distributions suggest that the earthquake sequence was part of a broad left-lateral shear zone between the Australian and Indian plates and ruptured the whole lithosphere. These earthquakes apparently reactivated existing fracture zones and were probably triggered by unclamping of the great Sumatran earthquake of 2004.
Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models
Geist, Eric L.
2015-01-01
Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least-variable slip examined, involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad are subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined, involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines in which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed prior to the tsunami coda being completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.
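The triad-resonance bookkeeping can be sketched from the plane-beach (Ursell) edge-wave dispersion relation, ω² = g|k| sin((2n+1)β) for mode n on a beach of slope β; resonance requires the wavenumbers and frequencies of two waves to sum to those of the third. The beach slope value and function names below are illustrative assumptions, not parameters from the study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def edge_wave_omega(k, n, beta):
    """Angular frequency of a mode-n edge wave on a plane beach of
    slope beta (radians), from omega^2 = g*|k|*sin((2n+1)*beta)."""
    return math.sqrt(G * abs(k) * math.sin((2 * n + 1) * beta))

def triad_mismatch(w1, w2, w3, beta=0.005):
    """Each wave is a (wavenumber, mode) pair. Triad resonance requires
    k1 + k2 = k3 and omega1 + omega2 = omega3; returns the two residuals,
    which are both ~0 for a resonant triad."""
    (k1, n1), (k2, n2), (k3, n3) = w1, w2, w3
    dk = k1 + k2 - k3
    dw = (edge_wave_omega(k1, n1, beta) + edge_wave_omega(k2, n2, beta)
          - edge_wave_omega(k3, n3, beta))
    return dk, dw
```

Scanning candidate (k, n) pairs with `triad_mismatch` is one way to identify near-resonant triads for a given slip spectrum.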
A post-seismic deformation model after the 2010 earthquakes in Latin America
NASA Astrophysics Data System (ADS)
Sánchez, Laura; Drewes, Hermann; Schmidt, Michael
2015-04-01
The Maule 2010 earthquake in Chile generated the largest displacements of geodetic observation stations ever observed in terrestrial reference systems. Coordinate changes reached up to 4 meters, and deformations were measurable at distances of more than 1000 km from the epicentre. The station velocities in the regions adjacent to the epicentre changed dramatically after the earthquake; while they were oriented eastward at approximately 2 cm/year before the event, they are now directed westward at about 1 cm/year. The 2010 Baja California earthquake in Mexico produced displacements at the decimetre level, also followed by anomalous velocity changes. The main problem for geodetic applications is that there is no reliable reference system available for practical use in the region. For geophysical applications, we have to redefine the tectonic structure in South America. The area south of 35° S … 40° S was considered a stable part of the South American plate. Now we see that there are large and extended crustal deformations. The paper presents a new multi-year velocity model computed from the Geocentric Reference System of the Americas (SIRGAS) including only the four years after the seismic events (mid-2010 … mid-2014). These velocities are used to derive a continuous deformation model of the entire Latin American region from Mexico to Tierra del Fuego. The model is compared with the previous SIRGAS velocity model (VEMOS2009), computed before the earthquakes.
Hamilton's law and the stability of nonconservative continuous systems
NASA Astrophysics Data System (ADS)
Bailey, C. D.
1980-03-01
The application of Hamilton's law of varying action to a nonconservative continuous system (a beam column) was demonstrated without the use of variational principles, D'Alembert's principle, differential equations, or work functions. Eigenvalues from the direct analytical solution are compared to eigenvalues from the exact solution for a wide range of the load parameter. Curves of eigenvalues vs load magnitude for the lowest four modes of the Beck problem are presented. First and second normalized modes for a tension load, no load, and the critical compressive load are plotted.
NASA Astrophysics Data System (ADS)
Borrero, Jose C.; Kalligeris, Nikos; Lynett, Patrick J.; Fritz, Hermann M.; Newman, Andrew V.; Convers, Jaime A.
2014-12-01
On 27 August 2012 (04:37 UTC; 26 August, 10:37 p.m. local time) a magnitude Mw = 7.3 earthquake occurred off the coast of El Salvador and generated a surprisingly large local tsunami. Following the event, local and international tsunami teams surveyed the tsunami effects in El Salvador and northern Nicaragua. The tsunami reached a maximum height of ~6 m, with inundation of up to 340 m inland along a 25 km section of coastline in eastern El Salvador. Less severe inundation was reported in northern Nicaragua. In the far field, the tsunami was recorded by a DART buoy and tide gauges at several locations in the eastern Pacific Ocean but did not cause any damage. The field measurements and recordings are compared to numerical modeling results using initial conditions of tsunami generation based on finite-fault earthquake and tsunami inversions and a uniform slip model.
NASA Astrophysics Data System (ADS)
Huang, Ying; Bevans, W. J.; Xiao, Hai; Zhou, Zhi; Chen, Genda
2012-04-01
During or after an earthquake event, building systems often experience large strains due to shaking effects, as observed during recent earthquakes, causing permanent inelastic deformation. In addition to the inelastic deformation induced by the earthquake itself, post-earthquake fires associated with short circuits in electrical systems and leakage from gas devices can further strain the already damaged structures, potentially leading to progressive collapse of buildings. Under these harsh environments, measurements of the affected building by various sensors can provide only limited structural health information. Finite element model analysis, on the other hand, if validated by predesigned experiments, can provide detailed structural behavior information for the entire structure. In this paper, a temperature-dependent nonlinear 3-D finite element model (FEM) of a one-story steel frame is set up in ABAQUS, based on the material properties of steel cited from EN 1993-1-2 and AISC manuals. The FEM is validated by testing the modeled steel frame in simulated post-earthquake environments. Comparisons between the FEM analysis and the experimental results show that the FEM predicts the structural behavior of the steel frame in post-earthquake fire conditions reasonably well. With experimental validation, FEM analysis of critical structures in these harsh environments could provide continuous predictions of structural behavior, better assisting firefighters in their rescue efforts and saving fire victims.
Numerical model of the glacially-induced intraplate earthquakes and faults formation
NASA Astrophysics Data System (ADS)
Petrunin, Alexey; Schmeling, Harro
2016-04-01
According to plate tectonics, most earthquakes are caused by moving lithospheric plates and are located mainly at plate boundaries. However, some significant seismic events are located far away from these active areas, and the nature of such intraplate earthquakes remains unclear. It is assumed that the triggering of seismicity in eastern Canada and northern Europe might be a result of glacier retreat during a glacial-interglacial cycle (GIC). Previous numerical models show that the impact of glacial loading and the subsequent isostatic adjustment is able to trigger seismicity on pre-existing faults, especially during the deglaciation stage. However, these models do not explain strong glaciation-induced historical earthquakes (M5-M7). Moreover, numerous studies report a connection between glacier dynamics and the location and age of major faults in regions that underwent glaciation during the last glacial maximum. This probably implies that the GIC might be responsible for the formation of these fault systems. Our numerical model provides an analysis of the stress-strain evolution during the GIC using the finite volume approach realised in the numerical code Lapex 2.5D, which is able to operate with large strains and visco-elasto-plastic rheology. To simulate self-organizing faults, a damage rheology model is implemented within the code, which makes it possible not only to visualize faulting but also to estimate the energy release during the seismic cycle. The modeling domain includes a two-layered crust, the lithospheric mantle, and the asthenosphere, which makes it possible to simulate the elasto-plastic response of the lithosphere to glaciation-induced loading (unloading) and viscous isostatic adjustment. We have considered three scenarios for the model: horizontal extension, compression, and fixed boundary conditions. Modeling results generally confirm suppressed seismic activity during glaciation phases, whereas the retreat of a glacier triggers earthquakes for several thousand years. Tip of the glacier
Regolith modeling and its relation to earthquake induced building damage: A remote sensing approach
NASA Astrophysics Data System (ADS)
Shafique, Muhammad; van der Meijde, Mark; Ullah, Saleem
2011-07-01
Regolith thickness is known to be a major factor influencing the intensity of earthquake-induced ground shaking and, consequently, building damage. It is, however, often simplified or ignored due to its variable and complex nature. To evaluate the role of regolith thickness in earthquake-induced building damage, a remote sensing based methodology is developed to model the spatial variation of regolith thickness, based on DEM-derived topographic attributes and geology. Regolith thickness samples were evenly collected in geological formations at sites representative of the topographic attributes. Topographic attributes (elevation, slope, TWI, distance from stream) computed from the ASTER-derived DEM, together with a geology map, were used to explore their role in the spatial variation of regolith thickness. Stepwise regression was used to model the spatial variation of regolith thickness in the erosional landscape of the study area. Topographic attributes and geology explain 60% of the regolith thickness variation in the study area. To test whether the modeled regolith thickness can be used to predict seismically induced building damage, it is compared with the 2005 Kashmir earthquake induced building damage derived from high-resolution remote sensing images and field data. The comparison shows that structural damage increases with increasing regolith thickness. The predicted regolith thickness can be used for demarcating sites prone to amplified seismic response.
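The stepwise-regression step can be sketched as greedy forward selection over the predictor columns (a minimal OLS-based sketch; the stopping rule, threshold, and synthetic data are our assumptions, not the study's):

```python
import numpy as np

def _rss(cols, X, y):
    """Residual sum of squares of an OLS fit (with intercept) on the columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def forward_stepwise(X, y, max_terms=4, min_improvement=1e-3):
    """Greedy forward selection: at each step add the predictor that most
    reduces the RSS, stopping when the relative improvement is negligible."""
    selected = []
    current = float(np.sum((y - y.mean()) ** 2))  # intercept-only RSS
    while len(selected) < max_terms:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        if not candidates:
            break
        rss = {j: _rss(selected + [j], X, y) for j in candidates}
        best = min(rss, key=rss.get)
        if (current - rss[best]) / current < min_improvement:
            break
        selected.append(best)
        current = rss[best]
    return selected

# Synthetic check: four columns stand in for elevation, slope, TWI, and
# stream distance; the response depends only on column 1, which forward
# selection should pick first.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = 3.0 * X[:, 1] + 0.01 * rng.standard_normal(200)
print(forward_stepwise(X, y))
```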
A bilinear source-scaling model for M-log a observations of continental earthquakes
Hanks, T.C.; Bakun, W.H.
2002-01-01
The Wells and Coppersmith (1994) M-log A data set for continental earthquakes (where M is moment magnitude and A is fault area) and the regression lines derived from it are widely used in seismic hazard analysis for estimating M, given A. Their relations are well determined, whether for the full data set of all mechanism types or for the subset of strike-slip earthquakes. Because the coefficient of the log A term is essentially 1 in both their relations, they are equivalent to constant stress-drop scaling, at least for M ≤ 7, where most of the data lie. For M > 7, however, both relations increasingly underestimate the observations with increasing M. This feature, at least for strike-slip earthquakes, is strongly suggestive of L-model scaling at large M. Using constant stress-drop scaling (Δσ = 26.7 bars) for M ≤ 6.63 and L-model scaling (average fault slip ū = αL, where L is fault length and α = 2.19 × 10-5) at larger M, we obtain the relations M = log A + 3.98 ± 0.03 for A ≤ 537 km2 and M = 4/3 log A + 3.07 ± 0.04 for A > 537 km2. These prediction equations of our bilinear model fit the Wells and Coppersmith (1994) data set well in their respective ranges of validity, the transition magnitude corresponding to A = 537 km2 being M = 6.71.
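The two branches quoted above can be checked for continuity at the transition area (a quick numerical sketch; the function name is ours):

```python
import math

def mw_from_area(area_km2):
    """Bilinear Hanks-Bakun (2002) M-log A relations quoted in the abstract."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98           # constant stress-drop branch
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07  # L-model branch

# Both branches meet at the transition area of 537 km^2:
print(round(mw_from_area(537.0), 2))                      # 6.71 (low branch)
print(round((4 / 3) * math.log10(537.0) + 3.07, 2))       # 6.71 (high branch)
```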
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2015-12-01
The 24 August 2014 Mw 6.0 South Napa, California earthquake produced significant offsets on 12 borehole strainmeters in the San Francisco Bay area. These strainmeters are located between 24 and 80 km from the source, and the observed offsets ranged up to 400 parts per billion (ppb), which exceeds their nominal precision by a factor of 100. However, the observed offsets in tidally calibrated strains have an RMS deviation of 130 ppb from the strains predicted by a previously published moment tensor derived from seismic data. Here, I show that this large misfit can be reduced by a combination of better tidal calibration and better modeling of the strain field from the earthquake. Borehole strainmeters require in-situ calibration, which historically has been accomplished by comparing their measurements of Earth tides with the strain tides predicted by a model. Although borehole strainmeters accurately measure the deformation within the borehole, the long-wavelength strain signals from tides or other tectonic processes recorded in the borehole are modified by the presence of the borehole and the elastic properties of the grout and the instrument. Previous analyses of surface-mounted strainmeter data and their relationship with the predicted tides suggest that tidal models could be in error by 30%. The poor fit of the borehole strainmeter data from this earthquake can be improved by simultaneously varying the components of the model tides by up to 30% and making small adjustments to the point-source model of the earthquake, which reduces the RMS misfit from 130 to 18 ppb. This suggests that calibrations derived solely from tidal models limit the accuracy of borehole strainmeters. On the other hand, the revised calibration derived here becomes testable against strain measurements from future large Bay Area events.
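The calibration idea above, rescaling the model tide within a ±30% band to reduce RMS misfit, can be illustrated with a toy one-parameter grid search on synthetic data; the amplitude, noise level, and "true" miscalibration factor below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 800)                        # time in days
model_tide = 50.0 * np.sin(2 * np.pi * t / 0.5175)    # ppb, M2-like period (~12.42 h)
true_factor = 1.25                                    # pretend the tidal model is 25% off
observed = true_factor * model_tide + rng.normal(scale=2.0, size=t.size)

# Search the +/-30% calibration band for the factor minimizing RMS misfit.
factors = np.linspace(0.7, 1.3, 601)
rms = [np.sqrt(np.mean((observed - f * model_tide) ** 2)) for f in factors]
best = factors[int(np.argmin(rms))]
```

The grid search recovers the miscalibration factor to within the noise level, analogous to how adjusting the model-tide components reduces the strainmeter misfit in the study (the real problem adjusts several tidal components and the source model jointly).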
NASA Astrophysics Data System (ADS)
Glasscoe, Margaret T.; Wang, Jun; Pierce, Marlon E.; Yoder, Mark R.; Parker, Jay W.; Burl, Michael C.; Stough, Timothy M.; Granat, Robert A.; Donnellan, Andrea; Rundle, John B.; Ma, Yu; Bawden, Gerald W.; Yuen, Karen
2015-08-01
Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision making utilizing remote sensing data and modeling software to provide decision support for earthquake disaster management and response. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, allow us to provide both long-term planning information for disaster management decision makers and short-term information following earthquake events (i.e., identifying areas where the greatest deformation and damage have occurred and where emergency services may need to be focused). This information is in turn delivered through standards-compliant web services for desktop and hand-held devices.
A single-force model for the 1975 Kalapana, Hawaii, Earthquake
NASA Astrophysics Data System (ADS)
Eissler, Holly K.; Kanamori, Hiroo
1987-05-01
A single force mechanism is investigated as the source of long-period seismic radiation from the 1975 Kalapana, Hawaii, earthquake (Ms = 7.1). The observed Love wave radiation pattern determined from the spectra of World-Wide Standard Seismograph Network and High Gain Long Period records at 100 s is two-lobed with azimuth, consistent with a near-horizontal single force acting opposite (strike ˜330°) to the observed displacement direction of the earthquake; this pattern is inconsistent with the expected double-couple pattern. Assuming a form of the force time history of a one-cycle sinusoid, the total duration of the event estimated from Rayleigh waves at two International Deployment of Accelerometers stations is approximately 180 s. The peak amplitude f0 of the time function is 1 × 10^15 N from amplitudes of Love and Rayleigh waves. The interpretation is that the bulk of the seismic radiation was produced by large-scale slumping of a large area of the south flank of Kilauea volcano. The single force is a crude representation of the effect on the earth of the motion of a partially decoupled large slide mass. Using the mass estimated from the tsunami generation area (~10^15-10^16 kg), the peak acceleration of the slide block (0.1-1 m s^-2) inferred from the seismic force is comparable with the acceleration due to gravity on a gently inclined plane. The slump model for the Kalapana earthquake is also more qualitatively consistent with the large horizontal deformation (8 m on land) and tsunami associated with the earthquake, which are difficult to explain with the conventional double-couple source model. The single-force source has been used previously to model the long-period seismic waves from the landslide accompanying the eruption of Mount St. Helens volcano, and may explain other anomalous seismic events as being due to massive slumping of sediments or unconsolidated material and not to elastic dislocation.
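The contrast between the two-lobed single-force pattern and the four-lobed double-couple pattern can be sketched with idealized azimuthal amplitude functions (a schematic, not the actual 100-s spectral analysis): a horizontal single force radiates Love waves roughly as |cos(theta)| in azimuth, while a vertical strike-slip double couple radiates as |cos(2*theta)|.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)

# Idealized azimuthal Love-wave amplitude patterns:
single_force = np.abs(np.cos(theta))          # horizontal single force: two lobes
double_couple = np.abs(np.cos(2.0 * theta))   # vertical strike-slip double couple: four lobes

def count_lobes(pattern):
    """Count local maxima of the amplitude pattern around the full circle (periodic)."""
    left = np.roll(pattern, 1)
    right = np.roll(pattern, -1)
    return int(np.sum((pattern > left) & (pattern >= right)))
```

Counting the lobes numerically distinguishes the two mechanisms, which is the essence of the radiation-pattern argument made in the abstract.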
NASA Astrophysics Data System (ADS)
Rollins, Christopher; Barbot, Sylvain; Avouac, Jean-Philippe
2015-05-01
Due to its location on a transtensional section of the Pacific-North American plate boundary, the Salton Trough is a region featuring large strike-slip earthquakes within a regime of shallow asthenosphere, high heat flow, and complex faulting, and so postseismic deformation there may feature enhanced viscoelastic relaxation and afterslip that is particularly detectable at the surface. The 2010 El Mayor-Cucapah earthquake was the largest shock in the Salton Trough since 1892 and occurred close to the US-Mexico border, and so the postseismic deformation recorded by the continuous GPS network of southern California provides an opportunity to study the rheology of this region. Three-year postseismic transients extracted from GPS displacement time-series show four key features: (1) 1-2 cm of cumulative uplift in the Imperial Valley and 1 cm of subsidence in the Peninsular Ranges, (2) relatively large cumulative horizontal displacements 150 km from the rupture in the Peninsular Ranges, (3) rapidly decaying horizontal displacement rates in the first few months after the earthquake in the Imperial Valley, and (4) sustained horizontal velocities, following the rapid early motions, that were still visibly ongoing 3 years after the earthquake. Kinematic inversions show that the cumulative 3-year postseismic displacement field can be well fit by afterslip on and below the coseismic rupture, though these solutions require afterslip with a total moment equivalent to at least an earthquake and higher slip magnitudes than those predicted by coseismic stress changes. Forward modeling shows that stress-driven afterslip and viscoelastic relaxation in various configurations within the lithosphere can reproduce the early and later horizontal velocities in the Imperial Valley, while Newtonian viscoelastic relaxation in the asthenosphere can reproduce the uplift in the Imperial Valley and the subsidence and large westward displacements in the Peninsular Ranges. We present two forward
Helicity non-conserving form factor of the proton
Voutier, E.; Furget, C.; Knox, S.
1994-04-01
The study of hadron structure in the high Q^2 range contributes to the understanding of the mechanisms responsible for the confinement of quarks and gluons. Among the numerous experimental candidates sensitive to these mechanisms, the helicity non-conserving form factor of the proton is a privileged observable since it is controlled by non-perturbative effects. The authors investigate here the feasibility of high Q^2 measurements of this form factor by means of the recoil polarization method in the context of the CEBAF 8 GeV facility. For that purpose, they discuss the development of a high energy proton polarimeter, based on H(p⃗,pp) elastic scattering, to be placed at the focal plane of a new hadron spectrometer. It is shown that this experimental method significantly improves the knowledge of the helicity non-conserving form factor of the proton up to 10 GeV^2/c^2.
NASA Astrophysics Data System (ADS)
Juanes, R.; Jha, B.; Hager, B. H.; Shaw, J. H.; Plesch, A.; Astiz, L.; Dieterich, J. H.; Frohlich, C.
2016-07-01
Seismicity induced by fluid injection and withdrawal has emerged as a central element of the scientific discussion around subsurface technologies that tap into water and energy resources. Here we present the application of coupled flow-geomechanics simulation technology to the post mortem analysis of a sequence of damaging earthquakes (Mw = 6.0 and 5.8) in May 2012 near the Cavone oil field, in northern Italy. This sequence raised the question of whether these earthquakes might have been triggered by activities due to oil and gas production. Our analysis strongly suggests that the combined effects of fluid production and injection from the Cavone field were not a driver for the observed seismicity. More generally, our study illustrates that computational modeling of coupled flow and geomechanics permits the integration of geologic, seismotectonic, well log, fluid pressure and flow rate, and geodetic data and provides a promising approach for assessing and managing hazards associated with induced seismicity.
M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model
Parsons, Thomas E.
2006-01-01
Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
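The renewal logic described above, in which recovery time is the ratio of coseismic stress drop to tectonic stressing rate, reduces to a one-line calculation; the numbers below are hypothetical illustrations, not values from the model.

```python
def stress_recovery_time_yr(stress_drop_mpa, stressing_rate_mpa_per_yr):
    """Years of tectonic loading needed to recover a coseismic stress drop."""
    return stress_drop_mpa / stressing_rate_mpa_per_yr

# Hypothetical numbers: a 3 MPa stress drop loading at 0.02 MPa/yr.
recurrence_yr = stress_recovery_time_yr(3.0, 0.02)   # about 150 years
rate_per_yr = 1.0 / recurrence_yr                    # expected event frequency, used as a
                                                     # proxy for recurrence in a renewal model
```

In the study this ratio is evaluated at points throughout a 3D finite element model, with the stressing rate derived from geodetic displacements, rather than from two scalars as in this sketch.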
Bayesian inversion for finite fault earthquake source models I—theory and algorithm
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.
2013-09-01
The estimation of finite fault earthquake source models is an inherently underdetermined problem: there is no unique solution to the inverse problem of determining the rupture history at depth as a function of time and space when our data are limited to observations at the Earth's surface. Bayesian methods allow us to determine the set of all plausible source model parameters that are consistent with the observations, our a priori assumptions about the physics of the earthquake source and wave propagation, and models for the observation errors and the errors due to the limitations in our forward model. Because our inversion approach does not require inverting any matrices other than covariance matrices, we can restrict our ensemble of solutions to only those models that are physically defensible while avoiding the need to restrict our class of models based on considerations of numerical invertibility. We only use prior information that is consistent with the physics of the problem rather than some artifice (such as smoothing) needed to produce a unique optimal model estimate. Bayesian inference can also be used to estimate model-dependent and internally consistent effective errors due to shortcomings in the forward model or data interpretation, such as poor Green's functions or extraneous signals recorded by our instruments. Until recently, Bayesian techniques have been of limited utility for earthquake source inversions because they are computationally intractable for problems with as many free parameters as typically used in kinematic finite fault models. Our algorithm, called cascading adaptive transitional metropolis in parallel (CATMIP), allows sampling of high-dimensional problems in a parallel computing framework. CATMIP combines the Metropolis algorithm with elements of simulated annealing and genetic algorithms to dynamically optimize the algorithm's efficiency as it runs. The algorithm is a generic Bayesian Markov Chain Monte Carlo sampler; it works
Packaged Fault Model for Geometric Segmentation of Active Faults Into Earthquake Source Faults
NASA Astrophysics Data System (ADS)
Nakata, T.; Kumamoto, T.
2004-12-01
In Japan, the empirical formula proposed by Matsuda (1975), based mainly on the length of historical surface fault ruptures and magnitude, is generally applied to estimate the size of future earthquakes from the extent of existing active faults for seismic hazard assessment. The validity of the active fault length, and the definition of the individual segment boundaries where propagating ruptures terminate, are therefore crucial to the reliability and accuracy of such assessments. It is, however, unlikely that we can clearly identify behavioral earthquake segments from observations of surface faulting during the historical period, because most active faults in Japan have recurrence intervals longer than 1000 years. Besides, uncertainties in the datasets obtained mainly from fault trenching studies are quite large for fault grouping/segmentation. This is why new methods or criteria should be applied for active fault grouping/segmentation, and one candidate is a geometric criterion for active faults. Matsuda (1990) used "five kilometers" as a critical distance for grouping and separating neighboring active faults. On the other hand, Nakata and Goto (1998) proposed geometric criteria such as (1) branching features of active fault traces and (2) characteristic patterns of vertical-slip distribution along the fault traces as tools to predict the rupture length of future earthquakes. Branching during fault rupture propagation is regarded as an effective energy dissipation process and can result in final rupture termination. With respect to the characteristic pattern of vertical-slip distribution, especially where strike-slip components are present, the up-thrown sides along the faults are in general located on the fault blocks in the direction of relative strike-slip. Applying these new geometric criteria to high-resolution active fault distribution maps, fault grouping/segmentation can be conducted more practically. We tested this model
Stability of earthquake clustering models: criticality and branching ratios.
Zhuang, Jiancang; Werner, Maximilian J; Harte, David S
2013-12-01
We study the stability conditions of a class of branching processes prominent in the analysis and modeling of seismicity. This class includes the epidemic-type aftershock sequence (ETAS) model as a special case, but more generally comprises models in which the magnitude distribution of direct offspring depends on the magnitude of the progenitor, such as the branching aftershock sequence (BASS) model and another recently proposed branching model based on a dynamic scaling hypothesis. These stability conditions are closely related to the concepts of the criticality parameter and the branching ratio. The criticality parameter summarizes the asymptotic behavior of the population after sufficiently many generations, determined by the maximum eigenvalue of the transition equations. The branching ratio is defined as the proportion of triggered events among all events. Based on the results for the generalized case, we show that the branching ratio of the ETAS model is identical to its criticality parameter because its magnitude density is separable from the full intensity. More generally, however, these two values differ and thus place separate conditions on model stability. As an illustration of the difference and of the importance of the stability conditions, we employ a version of the BASS model, reformulated to ensure the possibility of stationarity. In addition, we analyze the magnitude distributions of successive generations of the BASS model via analytical and numerical methods, and find that the compound density differs substantially from a Gutenberg-Richter distribution, unless the process is essentially subcritical (branching ratio less than 1) or the magnitude dependence between the parent event and the direct offspring is weak.
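For the standard ETAS ingredients, productivity kappa(m) = K*exp(alpha*(m - m0)) and an exponential (Gutenberg-Richter) magnitude pdf s(m) = beta*exp(-beta*(m - m0)) that does not depend on the parent, the criticality parameter has the closed form n = K*beta/(beta - alpha) for beta > alpha, and coincides with the branching ratio as the abstract states. The sketch below checks the closed form numerically; the parameter values are illustrative, not fitted.

```python
import numpy as np

# Illustrative ETAS-style parameters (base-e exponents); stability requires beta > alpha.
K, alpha, beta, m0 = 0.15, 1.8, 2.3, 3.0

m = np.linspace(m0, m0 + 40.0, 400001)
kappa = K * np.exp(alpha * (m - m0))       # mean direct offspring of a magnitude-m event
s = beta * np.exp(-beta * (m - m0))        # exponential magnitude pdf

f = kappa * s
dm = m[1] - m[0]
n_numeric = float(np.sum(f[1:] + f[:-1]) * dm / 2.0)   # trapezoidal integral of kappa*s
n_analytic = K * beta / (beta - alpha)                 # closed form; here also the branching ratio
```

With these values n is below 1, i.e., the process is subcritical; in the more general models discussed in the abstract, where offspring magnitudes depend on the parent, the branching ratio and criticality parameter no longer coincide.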
NASA Astrophysics Data System (ADS)
Maurer, J.; Segall, P.
2015-12-01
Understanding and predicting earthquake magnitudes from injection-induced seismicity is critically important for estimating hazard due to injection operations. A particular problem has been that the largest event often occurs post shut-in. A rigorous analysis would require modeling all stages of earthquake nucleation, propagation, and arrest, and not just initiation. We present a simple conceptual model for predicting the distribution of earthquake magnitudes during and following injection, building on the analysis of Segall & Lu (2015). The analysis requires several assumptions: (1) the distribution of source dimensions follows a Gutenberg-Richter distribution; (2) in environments where the background ratio of shear to effective normal stress is low, the size of induced events is limited by the volume perturbed by injection (e.g., Shapiro et al., 2013; McGarr, 2014), and (3) the perturbed volume can be approximated by diffusion in a homogeneous medium. Evidence for the second assumption comes from numerical studies that indicate the background ratio of shear to normal stress controls how far an earthquake rupture, once initiated, can grow (Dunham et al., 2011; Schmitt et al., submitted). We derive analytical expressions that give the rate of events of a given magnitude as the product of three terms: the time-dependent rate of nucleations, the probability of nucleating on a source of given size (from the Gutenberg-Richter distribution), and a time-dependent geometrical factor. We verify our results using simulations and demonstrate characteristics observed in real induced sequences, such as time-dependent b-values and the occurrence of the largest event post injection. We compare results to Segall & Lu (2015) as well as example datasets. Future work includes using 2D numerical simulations to test our results and assumptions; in particular, investigating how background shear stress and fault roughness control rupture extent.
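Assumption (2) above cites, among others, the McGarr (2014) volume bound, which caps the maximum seismic moment by the injected volume, M0_max = G * dV. A sketch of that bound, with an assumed crustal shear modulus (the abstract does not state one):

```python
import math

def mcgarr_max_mw(injected_volume_m3, shear_modulus_pa=3.0e10):
    """McGarr (2014) bound M0_max = G * dV, converted to moment magnitude.

    The shear modulus default is an assumed, typical crustal value.
    """
    m0_max = shear_modulus_pa * injected_volume_m3       # seismic moment, N*m
    return (2.0 / 3.0) * math.log10(m0_max) - 6.07       # Hanks-Kanamori Mw (SI units)

mw_cap = mcgarr_max_mw(1.0e5)   # bound for a hypothetical 1e5 m^3 injected volume
```

This is only one ingredient of the conceptual model described above, which additionally folds in the Gutenberg-Richter source-size distribution and a diffusion-based estimate of the perturbed volume's growth in time.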
Macgregor-Scott, N.; Walter, A.
1988-01-01
Crustal velocity structure for the region near Coalinga, California, has been derived from both earthquake and explosion seismic phase data recorded along a NW-SE seismic-refraction profile on the western flank of the Great Valley east of the Diablo Range. Comparison of the two data sets reveals P-wave phases in common which can be correlated with changes in the velocity structure below the earthquake hypocenters. In addition, the earthquake records reveal secondary phases at station ranges of less than 20 km that could be the result of S- to P-wave conversions at velocity interfaces above the earthquake hypocenters. Two-dimensional ray-trace modeling of the P-wave travel times resulted in a P-wave velocity model for the western flank of the Great Valley comprising: 1) a 7- to 9-km-thick section of sedimentary strata with velocities similar to those found elsewhere in the Great Valley (1.6 to 5.2 km s^-1); 2) a middle crust extending to about 14 km depth with velocities comparable to those reported for the Franciscan assemblage in the Diablo Range (5.6 to 5.9 km s^-1); and 3) a 13- to 14-km-thick lower crust with velocities similar to those reported beneath the Diablo Range and the Great Valley (6.5 to 7.3 km s^-1). This lower crust may have been derived from subducted oceanic crust that was thickened by accretionary underplating or crustal shortening. -Authors
Redefining Earthquakes and the Earthquake Machine
ERIC Educational Resources Information Center
Hubenthal, Michael; Braile, Larry; Taber, John
2008-01-01
The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…
Analogue models of subduction megathrust earthquakes: improving rheology and monitoring technique
NASA Astrophysics Data System (ADS)
Brizzi, Silvia; Corbi, Fabio; Funiciello, Francesca; Moroni, Monica
2015-04-01
Most of the world's great earthquakes (Mw > 8.5, usually known as mega-earthquakes) occur at shallow depths along the subduction thrust fault (STF), i.e., the frictional interface between the subducting and overriding plates. The spatiotemporal occurrence of mega-earthquakes and their governing physics remain ambiguous, as tragically demonstrated by the underestimation of recent megathrust events (i.e., 2011 Tohoku). To help unravel the seismic cycle at the STF, analogue modelling has become a key tool. The first properly scaled analogue models with realistic geometries (i.e., wedge-shaped) suitable for studying interplate seismicity have been realized using granular elasto-plastic [e.g., Rosenau et al., 2009] and viscoelastic materials [i.e., Corbi et al., 2013]. In particular, viscoelastic laboratory experiments realized with type A gelatin at 2.5 wt% simulate, in a simplified yet robust way, the basic physics governing the subduction seismic cycle and the related rupture process. Despite the strength of this approach, analogue earthquakes are not perfectly comparable to their natural prototype. In this work, we try to improve subduction seismic cycle analogue models by modifying the rheological properties of the analogue material and adopting a new image analysis technique (i.e., PEP - ParticlE and Prediction velocity). We test the influence of lithosphere elasticity by using type A gelatin at a greater concentration (i.e., 6 wt%). Results show that gelatin elasticity plays an important role in controlling the seismogenic behaviour of the STF, tuning the mean and the maximum magnitude of analogue earthquakes. In particular, by increasing gelatin elasticity, we observe a decreasing mean magnitude, while the maximum magnitude remains the same. Experimental results therefore suggest that lithosphere elasticity could be one of the parameters that tune the seismogenic behaviour of the STF. Increasing gelatin elasticity also improves the similarity with the natural prototype in terms of coseismic
The mass balance of earthquakes and earthquake sequences
NASA Astrophysics Data System (ADS)
Marc, O.; Hovius, N.; Meunier, P.
2016-04-01
Large, compressional earthquakes cause surface uplift as well as widespread mass wasting. Knowledge of their trade-off is fragmentary. Combining a seismologically consistent model of earthquake-triggered landsliding and an analytical solution of coseismic surface displacement, we assess how the mass balance of single earthquakes and earthquake sequences depends on fault size and other geophysical parameters. We find that intermediate-size earthquakes (Mw 6-7.3) may cause more erosion than uplift, controlled primarily by seismic source depth and landscape steepness, and less so by fault dip and rake. Such earthquakes can limit topographic growth, but our model indicates that both smaller and larger earthquakes (Mw < 6, Mw > 7.3) systematically cause mountain building. Earthquake sequences with a Gutenberg-Richter magnitude distribution have a greater tendency toward predominant erosion than repeating earthquakes of the same magnitude, unless a fault can produce earthquakes of Mw > 8.
Non-conservative evolution in short-period interacting binaries with the BINSTAR code
NASA Astrophysics Data System (ADS)
Deschamps, Romain; Siess, Lionel; Braun, Killian; Jorissen, Alain; Davis, Philip
2014-09-01
Systemic mass loss in interacting binaries such as those of the Algol type has been inferred since the 1950s. There is indeed growing indirect evidence that some Algols follow non-conservative evolution, but still no direct detection of large mass outflows. As a result, little is known about the possible ejection mechanism, the total amount of mass ejected, or the specific angular momentum carried away by this outflow. In order to reconcile stellar models and observations, we compute Algol models with the state-of-the-art binary star evolution code BINSTAR. We investigate systemic mass loss within the hotspot paradigm, where large outflows of material form from the accretion impact during the mass transfer phase. We then study the impact of this outflow on the spectral energy distribution of the system with the radiative transfer codes CLOUDY and SKIRT.
Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.
2008-01-01
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
Relation of landslides triggered by the Kiholo Bay earthquake to modeled ground motion
Harp, Edwin L.; Hartzell, Stephen H.; Jibson, Randall W.; Ramirez-Guzman, L.; Schmitt, Robert G.
2014-01-01
The 2006 Kiholo Bay, Hawaii, earthquake triggered high concentrations of rock falls and slides in the steep canyons of the Kohala Mountains along the north coast of Hawaii. Within these mountains and canyons a complex distribution of landslides was triggered by the earthquake shaking. In parts of the area, landslides were preferentially located on east‐facing slopes, whereas in other parts of the canyons no systematic pattern prevailed with respect to slope aspect or vertical position on the slopes. The geology within the canyons is homogeneous, so we hypothesize that the variable landslide distribution is the result of localized variation in ground shaking; therefore, we used a state‐of‐the‐art, high‐resolution ground‐motion simulation model to see if it could reproduce the landslide‐distribution patterns. We used a 3D finite‐element analysis to model earthquake shaking using a 10 m digital elevation model and slip on a finite‐fault model constructed from teleseismic records of the mainshock. Ground velocity time histories were calculated up to a frequency of 5 Hz. Dynamic shear strain also was calculated and compared with the landslide distribution. Results were mixed for the velocity simulations, with some areas showing correlation of landslide locations with peak modeled ground motions but many other areas showing no such correlation. Results were much improved for the comparison with dynamic shear strain. This suggests that (1) rock falls and slides are possibly triggered by higher frequency ground motions (velocities) than those in our simulations, (2) the ground‐motion velocity model needs more refinement, or (3) dynamic shear strain may be a more fundamental measurement of the decoupling process of slope materials during seismic shaking.
Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor
2016-01-01
Background: A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and of the built environment in order to improve communities’ preparedness and response capabilities and to mitigate future consequences. Methods: An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model’s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results: The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. Conclusion: The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647
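The step of entering epidemiological effect measures into a loss model via logistic regression equations can be sketched as an odds-ratio rescaling of a baseline injury probability; the baseline probability and odds ratio below are hypothetical, not the study's fitted effect measures.

```python
def adjust_probability(p_baseline, odds_ratio):
    """Shift a baseline injury probability by an effect measure (odds ratio) on the odds scale."""
    odds = p_baseline / (1.0 - p_baseline) * odds_ratio
    return odds / (1.0 + odds)

# Hypothetical illustration: a 2% baseline severe-injury probability and a
# combined odds ratio of 1.8 for a high-vulnerability population group.
p_adjusted = adjust_probability(0.02, 1.8)
```

An odds ratio of 1 leaves the probability unchanged, while values above 1 raise it, which is how human-related vulnerability factors can push the casualty estimate above the traditional model's prediction.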
Dell’Acqua, F.; Gamba, P.; Jaiswal, K.
2012-01-01
This paper discusses spatial aspects of the global exposure dataset and mapping needs for earthquake risk assessment. We discuss this in the context of development of a Global Exposure Database for the Global Earthquake Model (GED4GEM), which requires compilation of a multi-scale inventory of assets at risk, for example, buildings, populations, and economic exposure. After defining the relevant spatial and geographic scales of interest, different procedures are proposed to disaggregate coarse-resolution data, to map them, and if necessary to infer missing data by using proxies. We discuss the advantages and limitations of these methodologies and detail the potentials of utilizing remote-sensing data. The latter is used especially to homogenize an existing coarser dataset and, where possible, replace it with detailed information extracted from remote sensing using the built-up indicators for different environments. Present research shows that the spatial aspects of earthquake risk computation are tightly connected with the availability of datasets of the resolution necessary for producing sufficiently detailed exposure. The global exposure database designed by the GED4GEM project is able to manage datasets and queries of multiple spatial scales.
The Mw 6.9 Sikkim Earthquake and Modeling of Ground Motions to Determine the Causative Fault
NASA Astrophysics Data System (ADS)
Chopra, Sumer; Sharma, Jyoti; Sutar, Anup; Bansal, B. K.
2014-07-01
In this study, source parameters of the September 18, 2011 Mw 6.9 Sikkim earthquake were determined using acceleration records. These parameters were then used to generate strong motion at a number of sites using the stochastic finite-fault modeling technique to constrain the causative fault plane for this earthquake. The average values of corner frequency, seismic moment, stress drop and source radius were 0.12 Hz, 3.07 × 10^26 dyne-cm, 115 bars and 9.68 km, respectively. The fault plane solution showed strike-slip movement with two nodal planes oriented along two prominent lineaments in the region, the NE-oriented Kanchendzonga and NW-oriented Tista lineaments. The ground motions were estimated considering both nodal planes as causative faults, and the results, in terms of peak ground accelerations (PGA) and Fourier spectra, were then compared with the actual recordings. We found that the NW-SE-striking nodal plane along the Tista lineament may have been the causative fault for the Sikkim earthquake, as its PGA estimates are comparable with the observed recordings. We also observed that the Fourier spectrum is not a good parameter for deciding the causative fault plane.
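The quoted source parameters can be cross-checked with the standard Brune (1970) relations linking corner frequency, source radius and stress drop. A sketch, in which the shear-wave speed is an assumed value rather than one reported in the abstract:

```python
import math

# Brune source relations: r = 2.34*beta/(2*pi*fc), dsigma = 7*M0/(16*r^3).
M0 = 3.07e19            # seismic moment in N*m (3.07e26 dyne-cm)
fc = 0.12               # corner frequency, Hz
beta = 3500.0           # shear-wave speed, m/s (assumed)

r = 2.34 * beta / (2.0 * math.pi * fc)   # source radius, m (~10.9 km)
dsigma = 7.0 * M0 / (16.0 * r ** 3)      # stress drop, Pa (~10.5 MPa ~ 105 bar)
```

The values come out near the reported 9.68 km and 115 bars; the residual difference is consistent with a slightly different assumed shear-wave speed or averaging procedure.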
NASA Technical Reports Server (NTRS)
Pulinets, S.; Ouzounov, D.
2010-01-01
The paper presents the concept of a complex multidisciplinary approach to clarifying the nature of short-term earthquake precursors observed in the atmosphere, in atmospheric electricity, and in the ionosphere and magnetosphere. Our approach is based on the fundamental principles of tectonics, in which an earthquake is the ultimate result of the relative movement of tectonic plates and blocks of different sizes. Different kinds of gases (methane, helium, hydrogen, and carbon dioxide) leaking from the crust, including through underwater seismically active faults, can serve as carrier gases for radon. Radon's action on atmospheric gases is similar to the effect of cosmic rays in the upper layers of the atmosphere: air ionization and the formation of water-condensation nuclei on the ions. Condensation of water vapor is accompanied by the release of latent heat, which is the main cause of the observed atmospheric thermal anomalies. Formation of large ion clusters changes the conductivity of the atmospheric boundary layer and the parameters of the global electric circuit over active tectonic faults. Variations of atmospheric electricity are the main source of ionospheric anomalies over seismically active areas. The Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model can explain most of these events as a synergy between different ground-surface, atmospheric and ionospheric processes and the anomalous variations that are usually termed short-term earthquake precursors. A newly developed approach, the Interdisciplinary Space-Terrestrial Framework (ISTF), can also provide verification of these precursory processes in seismically active regions. The main outcome of this paper is a unified concept for the systematic validation of different types of earthquake precursors united by a common physical basis.
Analysis of 2012 M8.6 Indian Ocean earthquake coseismic slip model based on GPS data
NASA Astrophysics Data System (ADS)
Maulida, Putra; Meilano, Irwan; Gunawan, Endra; Efendi, Joni
2016-05-01
Continuous GPS (CGPS) data from the Sumatran GPS Array and the Indonesian Geospatial Agency (BIG) in Sumatra are processed to estimate the best-fit coseismic model of the 2012 M8.6 Indian Ocean earthquake. For GPS data processing, we used the GPS Analysis at Massachusetts Institute of Technology (GAMIT) 10.5 software and the Global Kalman Filter (GLOBK) to generate position time series for each GPS station and to estimate the coseismic offset due to the earthquake. The result of the GPS processing indicates that the earthquake caused northeastward displacement of up to 25 cm in northern Sumatra. Results also show subsidence in northern Sumatra, while central Sumatra shows northwestward displacement; however, the quality of the vertical data prevents us from determining whether subsidence or uplift is associated with the earthquake. Based on the GPS coseismic data, we evaluate the coseismic slip models of the Indian Ocean earthquake produced by previous studies [1], [2], [3]. We calculate coseismic displacements using a half-space model with each earthquake slip model as input and compare them with the displacements derived from the GPS data.
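The coseismic offset estimated from a position time series is, at its simplest, the coefficient of a step function in a least-squares fit. A minimal sketch on synthetic data; real GAMIT/GLOBK processing additionally handles reference frames, correlated noise and other signals:

```python
import numpy as np

def coseismic_offset(t, y, t_eq):
    """Least-squares fit of y(t) = a + b*t + c*H(t - t_eq).
    The Heaviside coefficient c is the coseismic offset estimate."""
    H = (t >= t_eq).astype(float)
    G = np.column_stack([np.ones_like(t), t, H])   # design matrix
    m, *_ = np.linalg.lstsq(G, y, rcond=None)
    return m[2]

# Synthetic daily east-component series (metres) with a 25 cm step
# at t_eq = 100 days, a small secular trend, and 3 mm white noise.
rng = np.random.default_rng(0)
t = np.arange(200.0)
y = 0.001 * t + 0.25 * (t >= 100) + rng.normal(0.0, 0.003, t.size)
c = coseismic_offset(t, y, 100.0)   # close to 0.25 m
```

Fitting the trend and the step jointly matters: estimating the step as a simple before/after mean difference would absorb part of the secular motion into the offset.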
Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben
NASA Astrophysics Data System (ADS)
Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.
2016-08-01
Earthquake absolute location errors that can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can have an impact on field development and economic consequences. The approach, using state-of-the-art techniques, covers both location uncertainty and location inaccuracy (bias). It consists, first, of creating a 3-D synthetic cloud of seismic events in the reservoir and calculating the seismic traveltimes to a monitoring network under assumed propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by seismic onset-time picking uncertainties and inaccuracies are quantified in 3-D. Effects induced by erroneous assumptions about the velocity model are also modelled. In particular, 1-D velocity model uncertainties, a local 3-D perturbation of the velocity and a 3-D geostructural model are considered. The approach is applied to the site of Rittershoffen (Alsace, France), one of the deep geothermal fields of the Upper Rhine Graben. This example allows setting realistic scenarios based on knowledge of the site. In this case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well-log data provided a reference 1-D velocity model used for the synthetic earthquake relocation. The 3-D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a
Model parameter estimation bias induced by earthquake magnitude cut-off
NASA Astrophysics Data System (ADS)
Harte, D. S.
2016-02-01
We evaluate the bias in parameter estimates of the epidemic-type aftershock sequence (ETAS) model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models: first, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude; second, the assumption in the Gutenberg-Richter relationship that the number of events increases exponentially as magnitude decreases. These two assumptions are confounded with the magnitude-truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
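The interaction between the Gutenberg-Richter assumption and a magnitude cutoff is easy to demonstrate: above the completeness magnitude, GR magnitudes are exponentially distributed, and Aki's maximum-likelihood b-value estimator is biased when the assumed cutoff disagrees with the true completeness level. A sketch with illustrative values (this is the generic GR/b-value effect, not the ETAS estimation itself):

```python
import numpy as np

rng = np.random.default_rng(42)
b_true, Mc = 1.0, 3.0
beta = b_true * np.log(10.0)
# GR magnitudes: exponential with rate beta above the completeness level Mc.
mags = Mc + rng.exponential(1.0 / beta, size=100_000)

def b_value(m, m_cut):
    """Aki's maximum-likelihood b-value estimator above a cutoff m_cut."""
    m = m[m >= m_cut]
    return np.log10(np.e) / (m.mean() - m_cut)

b_ok = b_value(mags, 3.0)    # cutoff matches completeness: b near 1.0
b_bad = b_value(mags, 2.5)   # cutoff below completeness: b biased low
```

The second estimate is biased low because the catalogue is missing the small events that the assumed cutoff of 2.5 implies should be there, inflating the mean magnitude above the cutoff.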
NASA Astrophysics Data System (ADS)
Dabaghi, Mayssa Nabil
A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as the natural variability of motions for a given earthquake and site scenario. By fitting the model to a database of recorded near-fault ground motions with known earthquake source and site characteristics, empirical "observations" of the model parameters are obtained. These observations are used to develop predictive equations for the model parameters in terms of a small number of earthquake source and site characteristics. Functional forms for the predictive equations that are consistent with seismological theory are employed. A site-based simulation procedure that employs the proposed stochastic model and predictive equations is developed to generate synthetic near-fault ground motions at a site. The procedure is formulated in terms of information about the earthquake design scenario that is normally available to a design engineer. Not all near-fault ground motions contain a forward directivity pulse, even when the conditions for such a pulse are favorable. The proposed procedure produces pulselike and non-pulselike motions in the same proportions as they naturally occur among recorded near-fault ground motions for a given design scenario. The proposed models and simulation procedure are validated by several means. Synthetic ground motion time series with fitted parameter values are compared with the corresponding recorded motions. The proposed empirical predictive relations are compared to similar relations available in the literature. The overall simulation procedure is
NASA Astrophysics Data System (ADS)
Belmont, Patrick; Stout, Justin
2013-04-01
Fine sediment is routed through landscapes and channel networks in a highly unsteady and non-uniform manner, potentially experiencing deposition and re-suspension many times during transport from source to sink. Developing a better understanding of sediment routing at the landscape scale is an intriguing challenge from a modeling perspective because it requires consideration of a multitude of processes that interact and vary in space and time. From an applied perspective, an improved understanding of sediment routing is essential for predicting how conservation and restoration practices within a watershed will influence water quality, to support land and water management decisions. Two key uncertainties in predicting sediment routing at the landscape scale are 1) determining the proportion of suspended sediment that is derived from terrestrial (soil) erosion versus channel (bank) erosion, and 2) constraining the proportion of sediment that is temporarily stored and re-suspended within the channel-floodplain complex. Sediment fingerprinting that utilizes a suite of conservative and non-conservative geochemical tracers associated with suspended sediment can provide insight regarding both of these key uncertainties. Here we present a model that tracks suspended sediment with associated conservative and non-conservative geochemical tracers. The model assumes that particle residence times are described by a bimodal distribution wherein some fraction of sediment is transported through the system in a relatively short time (< 1 year) and the remainder experiences temporary storage (of variable duration) within the channel-floodplain complex. We use the model to explore the downstream evolution of non-conservative tracers under equilibrium conditions (i.e., exchange between the channel and floodplain is allowed, but no net change in channel-floodplain storage can occur) to illustrate how the process of channel-floodplain storage and re-suspension can potentially bias
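The bimodal residence-time idea above has a compact analytical core: a non-conservative tracer decays during floodplain storage, so resuspended "old" sediment carries a different tracer signature than freshly eroded sediment. A sketch with hypothetical parameter values (only the 210Pb half-life is a physical constant):

```python
import numpy as np

f_fast = 0.4          # fraction transported to the outlet within a year (assumed)
tau = 50.0            # mean storage time of the stored fraction, years (assumed)
half_life = 22.3      # excess 210Pb half-life, years (a commonly used tracer)
lam = np.log(2.0) / half_life

# Expected tracer activity of stored sediment (relative to fresh sediment
# at activity 1.0): average of exp(-lam*t) over an exponential
# storage-time distribution with mean tau gives 1/(1 + lam*tau).
stored_activity = 1.0 / (1.0 + lam * tau)

# Activity of the mixture delivered downstream.
mix = f_fast * 1.0 + (1.0 - f_fast) * stored_activity
```

Because `mix` is well below 1.0, a sample interpreted as if all sediment were freshly eroded would misattribute sources, which is exactly the bias in non-conservative tracers that the model is designed to expose.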
NASA Astrophysics Data System (ADS)
Alexandrakis, C.; Löberich, E.; Kieslich, A.; Calo, M.; Vavrycuk, V.; Buske, S.
2015-12-01
Earthquake swarms, fluid migration and gas springs are indications of ongoing geodynamic processes within the West Bohemia seismic zone, located at the Czech-German border. The possible relationship between the fluids, gas and seismicity is of particular interest and has motivated numerous past, ongoing and future studies, including a multidisciplinary monitoring proposal through the International Continental Scientific Drilling Program (ICDP). The most seismically active area within the West Bohemia seismic zone is near the Czech town of Nový Kostel. The Nový Kostel zone experiences frequent swarms of several hundred to several thousand earthquakes over periods of weeks to months. The seismicity is always located in the same area and depth range (~5-15 km); however, the activated fault segments and planes differ. For example, the 2008 swarm activated faults along the southern end of the seismic zone, the 2011 swarm activated the northern segment, and the recent 2014 swarm activated the middle of the seismic zone. This indicates changes in the local stress field, and may relate to fluid migration and/or the complicated tectonic situation. The West Bohemia Seismic Network (WEBNET) is ideally located for studying the Nový Kostel swarm area and provides good azimuthal coverage. Here, we use the high-quality P- and S-wave arrival picks recorded by WEBNET to calculate swarm-dependent velocity models for the 2008 and 2011 swarms, and an averaged (swarm-independent) model using earthquakes recorded between 1991 and 2011. To this end, we use double-difference tomography to calculate P- and S-wave velocity models. The models are compared and examined in terms of swarm-dependent velocities and structures. Since the P-to-S velocity ratio is particularly sensitive to the presence of pore fluids, we derive ratio models directly from the inverted P- and S-wave models in order to investigate the potential influence of fluids on the seismicity. Finally, clustering
Specifying initial stress for dynamic heterogeneous earthquake source models
Andrews, D.J.; Barall, M.
2011-01-01
Dynamic rupture calculations using heterogeneous stress drop that is random and self-similar with a power-law spatial spectrum have great promise of producing realistic ground-motion predictions. We present procedures to specify initial stress for random events with a target rupture length and target magnitude. The stress function is modified in the depth dimension to account for the brittle-ductile transition at the base of the seismogenic zone. Self-similar fluctuations in stress drop are tied in this work to the long-wavelength stress variation that determines rupture length. Heterogeneous stress is related to friction levels in order to relate the model to physical concepts. In a variant of the model, there are high-stress asperities with low background stress. This procedure has a number of advantages: (1) rupture stops naturally, not at artificial barriers; (2) the amplitude of short-wavelength fluctuations of stress drop is not arbitrary: the spectrum is fixed to the long-wavelength fluctuation that determines rupture length; and (3) large stress drop can be confined to asperities occupying a small fraction of the total rupture area, producing slip distributions with enhanced peaks.
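A random, self-similar stress-drop function with a power-law spatial spectrum, as used above for initial conditions, is conventionally generated by filtering white noise in the wavenumber domain. A minimal 1-D sketch (the spectral decay exponent and normalization are assumptions, not values from the paper):

```python
import numpy as np

def self_similar_stress(n, decay=1.0, seed=0):
    """Random 1-D stress-drop profile with power-law spatial spectrum
    |F(k)| ~ k**(-decay): random phases, deterministic spectral amplitudes."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-decay)          # zero the k=0 (mean) component
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
    spec = amp * np.exp(1j * phase)
    field = np.fft.irfft(spec, n)
    return field / field.std()           # unit fluctuation amplitude

stress = self_similar_stress(1024)
```

Because the zero-wavenumber component is removed, the profile fluctuates around zero; in the paper's construction the long-wavelength component is instead tied to the target rupture length, and the short-wavelength amplitudes follow from it rather than being free parameters.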
NASA Astrophysics Data System (ADS)
Shimamoto, T.; Noda, H.
2014-12-01
Establishing a constitutive law spanning friction to high-temperature plastic flow has long been a goal for problems such as modeling earthquakes and plate interactions. Here we propose an empirical constitutive law that describes this transitional behavior using only friction and flow parameters, in good agreement with experimental data on halite shear zones. The law predicts a complete spectrum of steady-state and transient behaviors, including the dependence of the shear resistance of a fault on slip rate, effective normal stress and temperature. It predicts a change from velocity weakening to velocity strengthening with increasing temperature, very similar to the change recognized for granite under hydrothermal conditions. It is surprising that a slight deviation from the steady-state friction law due to the involvement of plastic deformation can cause such a large change in the velocity dependence. We solved seismic cycles of a fault across the lithosphere with the friction-to-flow law using a 2D spectral boundary integral equation method, revealing dynamic rupture extending into the aseismic zone and a very rich evolution of interseismic creep, including slow slip prior to earthquakes. Seismic slip followed by creep is consistent with natural pseudotachylytes overprinted with mylonitic deformation. Our friction-to-flow law merges "Christmas-tree" strength profiles of the lithosphere and the rate-dependent fault models used for earthquake modeling on a unified basis. Conventionally, strength profiles were drawn assuming a strain rate for the flow regime, but we emphasize that the stress distribution evolves to reflect the fault behavior. A fault-zone model was updated based on the earthquake modeling.
Physical modeling of volcanic tremor as repeating stick-slip earthquakes
NASA Astrophysics Data System (ADS)
Dmitrieva, K.; Dunham, E. M.
2011-12-01
One proposed explanation for volcanic tremor is the occurrence of repeating earthquakes, leading to a quasi-periodic signal on seismograms. A constant time interval between events leads, through the Dirac-comb effect, to spectral peaks in the frequency domain, with the fundamental frequency given by the reciprocal of the interevent time. Gliding harmonic tremor, in which the frequencies of the spectral peaks vary with time, was observed before the 2009 eruption of Redoubt Volcano in Alaska [A. Hotovec, S. Prejean, J. Vidale and J. Gomberg, J. Volc. Geotherm. Res., submitted]. The fundamental frequency grew from 1 to over 20 Hz over the few minutes prior to the explosions, with seismicity then ceasing for 10 s before each explosion. We investigate the viability of the repeating-earthquake theory using well-established physical models of earthquake cycles on frictional faults. Hotovec et al. locate the repeating earthquakes near the conduit, at about 1 km depth below the vent. They estimate a source dimension of 10-100 m, assuming typical earthquake magnitude scaling laws. We analyze the fault mechanics with a spring-slider model with stiffness κ ~ μ/R, where μ is the shear modulus and R is the fault dimension. The fault obeys a rate-and-state friction law. In response to a constant shear stressing rate α, the fault can either slide at constant velocity V = α/κ or undergo stick-slip oscillations. We perform a stability analysis on this system to determine the critical values of the parameters governing the stick-slip and stable-sliding regimes. At high stressing rates it is necessary to consider inertial effects, captured here through the radiation-damping approximation. Radiation damping stabilizes the system at sufficiently high α, namely α > α_cr = κ²Lq/η, where q = σ(b-a)/(κL) - 1, η = μ/c is the radiation damping parameter, c is the shear wave velocity, L is the state evolution distance, σ is the normal stress, and a and b are the usual rate and state friction
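The stability threshold quoted in the abstract is straightforward to evaluate numerically. A sketch with illustrative parameter values (all numbers below are assumptions chosen to put the slider in the stick-slip regime, not values from the study):

```python
# Spring-slider with rate-and-state friction and radiation damping:
# stable sliding requires alpha > alpha_cr = kappa^2 * L * q / eta,
# with q = sigma*(b - a)/(kappa*L) - 1 (q > 0 means stick-slip is possible).
mu = 30e9            # shear modulus, Pa
c = 3500.0           # shear-wave speed, m/s
R = 100.0            # fault dimension, m
kappa = mu / R       # spring stiffness kappa ~ mu/R, Pa/m
sigma = 10e6         # effective normal stress, Pa
a, b = 0.010, 0.015  # rate-and-state friction parameters (b > a: weakening)
L = 1e-4             # state evolution distance, m

eta = mu / c                          # radiation damping parameter, Pa*s/m
q = sigma * (b - a) / (kappa * L) - 1.0
alpha_cr = kappa ** 2 * L * q / eta   # critical stressing rate, Pa/s
```

With these numbers `alpha_cr` comes out near 0.7 MPa/s: a very high stressing rate, which is consistent with the idea that only the rapid pre-eruptive loading at Redoubt could drive the oscillations toward the stable-sliding (quiescent) regime.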
Non-conservative perturbations of homoclinic snaking scenarios
NASA Astrophysics Data System (ADS)
Knobloch, Jürgen; Vielitz, Martin
2016-01-01
Homoclinic snaking refers to the continuation of homoclinic orbits to an equilibrium E near a heteroclinic cycle connecting E and a periodic orbit P. Typically homoclinic snaking appears in one-parameter families of reversible, conservative systems. Here we discuss perturbations of this scenario which are both non-reversible and non-conservative. We treat this problem analytically in the spirit of the work [3]. The continuation of homoclinic orbits happens with respect to both the original continuation parameter μ and the perturbation parameter λ. The continuation curves are parametrised by the dwelling time L of the homoclinic orbit near P. It turns out that λ (L) tends to zero while the μ vs. L diagram displays isolas or criss-cross snaking curves in a neighbourhood of the original snakes-and-ladder structure. In the course of our studies we adapt both Fenichel coordinates near P and the analysis of Shilnikov problems near P to the present situation.
Rare nonconservative LRP6 mutations are associated with metabolic syndrome.
Singh, Rajvir; Smith, Emily; Fathzadeh, Mohsen; Liu, Wenzhong; Go, Gwang-Woong; Subrahmanyan, Lakshman; Faramarzi, Saeed; McKenna, William; Mani, Arya
2013-09-01
A rare mutation in LRP6 has been shown to underlie autosomal dominant coronary artery disease (CAD) and metabolic syndrome in an Iranian kindred. The prevalence and spectrum of LRP6 mutations in the disease population of the United States is not known. Two hundred white Americans with early-onset familial CAD and metabolic syndrome and 2,000 healthy Northern European controls were screened for nonconservative mutations in LRP6. Three novel mutations were identified, which cosegregated with the metabolic traits in the kindreds of the affected subjects; none were found in the controls. All three mutations reside in the second propeller domain, which plays a critical role in ligand binding. Two of the mutations substituted highly conserved arginines in the second YWTD domain and the third substituted a conserved glycosylation site. Functional characterization of one of the variants showed that it impairs Wnt signaling and acts as a loss-of-function mutation.
The FrPNC experiment at TRIUMF: Atomic parity non-conservation in francium
NASA Astrophysics Data System (ADS)
Aubin, S.; Gomez, E.; Behr, J. A.; Pearson, M. R.; Sheng, D.; Zhang, J.; Collister, R.; Melconian, D.; Flambaum, V. V.; Sprouse, G. D.; Orozco, L. A.; Gwinner, G.
2012-09-01
The FrPNC collaboration has begun the construction of an on-line laser cooling and trapping apparatus at TRIUMF to measure atomic parity non-conservation (PNC) and the nuclear anapole moment in a string of artificially produced francium isotopes. Atomic PNC experiments provide unique high precision tests of the electroweak sector of the Standard Model at very low energies. Furthermore, precision measurements of spin-dependent atomic PNC can determine nuclear anapole moments and probe the weak force within the nucleus. Francium is an excellent candidate for precision measurements of atomic PNC due to its simple electronic structure and enhanced parity violation: both the optical PNC and anapole moment signals are expected to be over an order of magnitude larger than in cesium.
Pradhan, Mohan R; Pal, Arumay; Hu, Zhongqiao; Kannan, Srinivasaraghavan; Chee Keong, Kwoh; Lane, David P; Verma, Chandra S
2016-02-01
Aggregation is an irreversible form of protein complexation and often toxic to cells. The process entails partial or major unfolding that is largely driven by hydration. We model the role of hydration in aggregation using "Dehydrons." "Dehydrons" are unsatisfied backbone hydrogen bonds in proteins that seek shielding from water molecules by associating with ligands or proteins. We find that the residues at aggregation interfaces have hydrated backbones, and in contrast to other forms of protein-protein interactions, are under less evolutionary pressure to be conserved. Combining evolutionary conservation of residues and extent of backbone hydration allows us to distinguish regions on proteins associated with aggregation (non-conserved dehydron-residues) from other interaction interfaces (conserved dehydron-residues). This novel feature can complement the existing strategies used to investigate protein aggregation/complexation.
Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model
Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,
2013-01-01
In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of
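The "grand inversion" described above is, structurally, a large underdetermined least-squares problem for non-negative rupture rates, sampled with simulated annealing. A toy sketch of that structure (dimensions, data and the cooling schedule are all illustrative, far smaller and simpler than UCERF3's):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.random((5, 12))            # 5 data constraints, 12 rupture rates
r_true = np.abs(rng.random(12))
d = G @ r_true                     # synthetic data vector

def energy(r):
    """Squared data misfit ||G r - d||^2 to be minimized."""
    return float(np.sum((G @ r - d) ** 2))

r = np.ones(12)
e = energy(r)
T = 1.0
for _ in range(30_000):
    trial = np.abs(r + rng.normal(0.0, 0.02, 12))   # keep rates non-negative
    et = energy(trial)
    # Metropolis acceptance: always take improvements, sometimes accept worse.
    if et < e or rng.random() < np.exp((e - et) / T):
        r, e = trial, et
    T *= 0.9997                    # geometric cooling schedule

# r now fits the data, but many rate vectors do: with 12 unknowns and
# 5 constraints the problem is underdetermined, which is why UCERF3
# samples a range of models rather than reporting a single solution.
```

The non-negativity constraint, enforced here crudely by reflecting trial steps, is the detail that rules out plain linear least squares and motivates the annealing approach.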
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Donnellan, A.; Parker, J. W.; Stough, T. M.; Burl, M. C.; Pierce, M.; Wang, J.; Ma, Y.; Rundle, J. B.; yoder, M. R.; Bawden, G. W.
2012-12-01
Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new decision-making capabilities that utilize remote sensing data and modeling software to provide decision support for earthquake disaster management and response. Geodetic imaging data, including interferometric synthetic aperture radar (InSAR) and GPS, have a rich scientific heritage in earthquake research. Survey-grade GPS was developed in the 1980s, and the first InSAR image of an earthquake was produced for the 1992 Landers event. As more of these types of data have become available, they have also shown great utility in providing key information for disaster response. Work has been done to translate these data into useful and actionable information for decision makers in the event of an earthquake disaster. In addition to observed data, modeling tools provide essential preliminary estimates while data are still being collected and/or processed, which can be refined as data products become available. Now, with more data and better models, we are able to apply these to responders who need easy tools and routinely produced data products. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, allow us to provide both long-term planning information for disaster management decision makers and short-term information following earthquake events (i.e., identifying areas where the greatest deformation and damage have occurred and emergency services may need to be focused). E-DECIDER has taken advantage of the legacy of Earth science data, including MODIS, Landsat, SCIGN, PBO, UAVSAR, and modeling tools such as those developed by QuakeSim, in order to deliver successful decision support products for earthquake disaster response. The project has
Non-Conservative Variational Approximation for Nonlinear Schrodinger Equations and its Applications
NASA Astrophysics Data System (ADS)
Rossi, Julia M.
Recently, Galley [Phys. Rev. Lett. 110, 174301 (2013)] proposed an initial value problem formulation of Hamilton's principle applied to non-conservative systems. Here, we explore this formulation for complex partial differential equations of the nonlinear Schrodinger (NLS) type, using the non-conservative variational approximation (NCVA) outlined by Galley. We compare the formalism of the NCVA to two variational techniques used in dissipative systems; namely, the perturbed variational approximation and a generalization of the so-called Kantorovitch method. We showcase the relevance of the NCVA method by exploring test case examples within the NLS setting including combinations of linear and density dependent loss and gain. We also present an example applied to exciton-polariton condensates that intrinsically feature loss and a spatially dependent gain term. We also study a variant of the NLS used in optical systems called the Lugiato-Lefever (LL) model applied to (i) spontaneous temporal symmetry breaking instability in a coherently-driven optical Kerr resonator observed experimentally by Xu and Coen in Opt. Lett. 39, 3492 (2014) and (ii) temporal tweezing of cavity solitons in a passive loop of optical fiber pumped by a continuous-wave laser beam observed experimentally by Jang, Erkintalo, Coen, and Murdoch in Nat. Commun. 6, 7370 (2015). For application (i) we perform a detailed stability analysis and analyze the temporal bifurcation structure of stationary symmetric configurations and the emerging asymmetric states as a function of the pump power. For intermediate pump powers a pitchfork loop is responsible for the destabilization of symmetric states towards stationary asymmetric ones while at large pump powers we find the emergence of periodic asymmetric solutions via a Hopf bifurcation. For application (ii) we study the existence and dynamics of cavity solitons through phase-modulation of the holding beam. We find parametric regions for the manipulation of
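The doubled-variable construction that the NCVA builds on can be stated compactly. A sketch of the assumed form of Galley's action (following Phys. Rev. Lett. 110, 174301), with the non-conservative content carried by a coupling term K:

```latex
% Degrees of freedom are doubled, q -> (q_1, q_2); conservative dynamics
% enters through L, non-conservative effects through K:
S[q_1, q_2] = \int_{t_i}^{t_f}
  \Big[ L(q_1, \dot q_1) - L(q_2, \dot q_2)
        + K(q_1, q_2, \dot q_1, \dot q_2) \Big]\, dt .
```

After extremizing S, the "physical limit" q_1 = q_2 = q is taken, so K contributes a generalized non-conservative force to the Euler-Lagrange equations; the NCVA applies this construction to the NLS field with a parameterized trial ansatz in place of q.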
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
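At its core, a geodetic slip inversion of this kind is a regularized least-squares problem: Green's functions map slip on fault patches to surface displacements, and a smoothing constraint stabilizes the solution. A toy sketch under stated assumptions — the Green's functions here are a random stand-in, not the study's layered-elastic kernels, and the patch/station counts are only loosely inspired by the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: G maps slip on n_patch fault patches to displacement
# components at GPS stations (random stand-in for elastic Green's functions).
n_obs, n_patch = 213 * 3, 44
G = rng.normal(size=(n_obs, n_patch))
true_slip = np.exp(-0.5 * ((np.arange(n_patch) - 22) / 5.0) ** 2)  # smooth patch
d = G @ true_slip + 0.001 * rng.normal(size=n_obs)  # noisy synthetic "GPS" data

# Damped least squares: minimize ||G m - d||^2 + beta^2 ||L m||^2,
# with L a first-difference roughness operator.
beta = 0.1
L = np.eye(n_patch) - np.eye(n_patch, k=1)
A = np.vstack([G, beta * L])
b = np.concatenate([d, np.zeros(n_patch)])
m, *_ = np.linalg.lstsq(A, b, rcond=None)

print("peak recovered slip:", m.max())
```

In a well-conditioned, low-noise problem like this toy one, the smooth slip patch is recovered almost exactly; in real inversions the trade-off parameter beta controls the balance between data fit and model roughness.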
Inverse and Forward Modeling of The 2014 Iquique Earthquake with Run-up Data
NASA Astrophysics Data System (ADS)
Fuentes, M.
2015-12-01
The April 1, 2014 Mw 8.2 Iquique earthquake excited a moderate tsunami that triggered the national tsunami alert. This earthquake was located in the well-known seismic gap in northern Chile, which had a high seismic potential (~ Mw 9.0) following the two main large historic events of 1868 and 1877. Nonetheless, studies of the seismic source performed with seismic data inversions suggest that the event exhibited a main patch located around 19.8° S at 40 km depth, with a seismic moment equivalent to Mw = 8.2. Thus, a large seismic deficit remains in the gap, capable of releasing an event of Mw = 8.8-8.9. To understand the importance of the tsunami threat in this zone, a seismic source modeling of the Iquique earthquake is performed. A new approach based on stochastic k2 seismic sources is presented. A set of such sources is generated and, for each one, a full numerical tsunami model is run to obtain the run-up heights along the coastline. The results are compared with the available field run-up measurements and with the tide gauges that registered the signal. The comparison is not uniform; it penalizes discrepancies more heavily close to the peak run-up location. This criterion identifies, from the set of scenarios, the seismic source that best explains the observations in a statistical sense. On the other hand, an L2-norm minimization is used to invert the seismic source by comparing the peak nearshore tsunami amplitude (PNTA) with the run-up observations. This method searches a space of solutions for the best seismic configuration, retrieving the Green's function coefficients that explain the field measurements. The results obtained confirm that a concentrated down-dip slip patch adequately models the run-up data.
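A stochastic k² source prescribes a slip spectrum that falls off as k^-2 beyond a corner wavenumber, with randomized phases. A minimal 1-D sketch of the idea — the grid size, fault length, and corner wavenumber below are assumed values, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D stochastic k^-2 slip: build the amplitude spectrum, randomize phases,
# and inverse-FFT to get a slip distribution along strike.
n = 256                                  # grid points along strike
L_fault = 200.0                          # fault length in km (assumed)
k = np.fft.rfftfreq(n, d=L_fault / n)    # wavenumbers, cycles/km
kc = 1.0 / L_fault                       # corner wavenumber (assumed)

amp = 1.0 / (1.0 + (k / kc) ** 2)        # flat below kc, ~k^-2 above it
phase = np.exp(2j * np.pi * rng.random(k.size))
phase[0] = 1.0                           # keep the mean slip real and positive
slip = np.fft.irfft(amp * phase, n=n)
slip -= slip.min()                       # shift so slip is non-negative

print("slip min/max:", slip.min(), slip.max())
```

Each new random phase draw gives a different scenario with the same spectral falloff, which is how an ensemble of sources for the tsunami runs can be generated.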
Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment
NASA Technical Reports Server (NTRS)
Rebbapragada, Umaa; Oommen, Thomas
2011-01-01
On January 12th, 2010, a catastrophic magnitude 7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
Stephenson, William J.
2007-01-01
INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about 122°W to 129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.
McCrory, Patricia A.; Blair, J. Luke; Oppenheimer, David H.; Walter, Stephen R.
2004-01-01
We present an updated model of the Juan de Fuca slab beneath southern British Columbia, Washington, Oregon, and northern California, and use this model to separate earthquakes occurring above and below the slab surface. The model is based on depth contours previously published by Fluck and others (1997). Our model attempts to rectify a number of shortcomings in the original model and update it with new work. The most significant improvements include (1) a gridded slab surface in geo-referenced (ArcGIS) format, (2) continuation of the slab surface to its full northern and southern edges, (3) extension of the slab surface from 50-km depth down to 110-km beneath the Cascade arc volcanoes, and (4) revision of the slab shape based on new seismic-reflection and seismic-refraction studies. We have used this surface to sort earthquakes and present some general observations and interpretations of seismicity patterns revealed by our analysis. For example, deep earthquakes within the Juan de Fuca Plate beneath western Washington define a linear trend that may mark a tear within the subducting plate. Also, earthquakes associated with the northern strands of the San Andreas Fault abruptly terminate at the inferred southern boundary of the Juan de Fuca slab. In addition, we provide files of earthquakes above and below the slab surface and a 3-D animation, or fly-through, showing a shaded-relief map with plate boundaries, the slab surface, and hypocenters for use as a visualization tool.
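Sorting hypocenters against a gridded slab surface reduces to interpolating the surface depth at each epicenter and comparing it with the hypocentral depth. A toy 1-D transect version of the idea — the planar slab geometry and the four hypocenters are invented for illustration:

```python
import numpy as np

# Slab depth tabulated at grid nodes along a trench-normal transect (assumed
# planar geometry), then interpolated at each hypocenter location.
x_grid = np.linspace(0.0, 300.0, 31)      # km from trench
slab_depth = 5.0 + 0.35 * x_grid          # slab surface depth, km

# Hypocenters as (distance from trench in km, depth in km):
quakes = np.array([[50.0, 15.0], [50.0, 40.0], [200.0, 60.0], [250.0, 110.0]])

surface_at_quake = np.interp(quakes[:, 0], x_grid, slab_depth)
above = quakes[:, 1] < surface_at_quake   # crustal / overriding-plate events
below = ~above                            # intraslab events

print("above slab:", quakes[above])
print("below slab:", quakes[below])
```

The real model does the same comparison in 2-D against the gridded ArcGIS surface, but the above/below classification logic is identical.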
Focal Depth of the WenChuan Earthquake Aftershocks from modeling of Seismic Depth Phases
NASA Astrophysics Data System (ADS)
Luo, Y.; Zeng, X.; Chong, J.; Ni, S.; Chen, Y.
2008-12-01
After the 05/12/2008 great WenChuan earthquake in Sichuan Province of China, tens of thousands of earthquakes occurred, hundreds of them stronger than M4. Those aftershocks provide valuable information about seismotectonics and rupture processes for the mainshock; in particular, an accurate spatial distribution of aftershocks is very informative for determining rupture fault planes. However, focal depth cannot be well resolved with first arrivals alone, recorded by the relatively sparse network in Sichuan Province, so a 3D seismicity distribution is difficult to obtain even though horizontal locations can be determined with an accuracy of 5 km. Instead, local/regional depth phases such as sPmP, sPn, sPL and teleseismic pP, sP are very sensitive to depth and can be readily modeled to determine depth with an accuracy of 2 km. With a reference 1D velocity structure resolved from receiver functions and seismic refraction studies, local/regional depth phases such as sPmP, sPn and sPL are identified by comparing observed waveforms with synthetic seismograms computed by generalized ray theory and reflectivity methods. For teleseismic depth phases, well observed for M5.5 and stronger events, we developed an algorithm to invert both depth and focal mechanism from P and SH waveforms. We also employed the Cut and Paste (CAP) method developed by Zhao and Helmberger to model mechanism and depth with local waveforms, which constrains depth by fitting Pnl waveforms and the relative weight between surface waves and Pnl. After modeling all the depth phases for hundreds of events, we find that most of the M4 earthquakes occur between 2-18 km depth, with aftershock depths ranging 4-12 km in the southern half of the Longmenshan fault, while aftershocks in the northern half feature a larger depth range, up to 18 km. Therefore the seismogenic zone in the northern segment is deeper than in the southern segment. All the aftershocks occur in the upper crust, given that the Moho is deeper than 40 km, or even 60 km west of the
Geist, Eric L.; Titov, Vasily V.; Arcas, Diego; Pollitz, Fred F.; Bilek, Susan L.
2007-01-01
Results from different tsunami forecasting and hazard assessment models are compared with observed tsunami wave heights from the 26 December 2004 Indian Ocean tsunami. Forecast models are based on initial earthquake information and are used to estimate tsunami wave heights during propagation. An empirical forecast relationship based only on seismic moment provides a close estimate to the observed mean regional and maximum local tsunami runup heights for the 2004 Indian Ocean tsunami but underestimates mean regional tsunami heights at azimuths in line with the tsunami beaming pattern (e.g., Sri Lanka, Thailand). Standard forecast models developed from subfault discretization of earthquake rupture, in which deep-ocean sea level observations are used to constrain slip, are also tested. Forecast models of this type use tsunami time-series measurements at points in the deep ocean. As a proxy for the 2004 Indian Ocean tsunami, a transect of deep-ocean tsunami amplitudes recorded by satellite altimetry is used to constrain slip along four subfaults of the M >9 Sumatra–Andaman earthquake. This proxy model performs well in comparison to observed tsunami wave heights, travel times, and inundation patterns at Banda Aceh. Hypothetical tsunami hazard assessment models based on end-member estimates for average slip and rupture length (Mw 9.0–9.3) are compared with tsunami observations. Using average slip (low end member) and rupture length (high end member) (Mw 9.14) consistent with many seismic, geodetic, and tsunami inversions adequately estimates tsunami runup in most regions, except the extreme runup in the western Aceh province. The high slip that occurred in the southern part of the rupture zone linked to runup in this location is a larger fluctuation than expected from standard stochastic slip models. In addition, excess moment release (∼9%) deduced from geodetic studies in comparison to seismic moment estimates may generate additional tsunami energy, if the
NASA Astrophysics Data System (ADS)
Munnangi, P.
2015-12-01
The Bay Area is one of the world's most vulnerable places to earthquakes, and being ready is vital to survival. The purpose of this study was to determine the distribution of places affected in a magnitude 7.0 Hayward earthquake and the effectiveness of earthquake early warning (EEW) in this scenario. We manipulated three variables: the location of the epicenter, the station placement, and the algorithm used for early warning. To compute the blind zone and warning times, we calculated the P and S wave velocities using data from the Northern California Earthquake Catalog and the radius of the blind zone using appropriate statistical models. We developed a linear regression model directly relating warning time and distance from the epicenter. We used Google Earth to plot three hypothetical epicenters on the Hayward Fault and determine which establishments would be affected. By varying the locations, the blind zones and warning times changed. As the radius from the epicenter increased, the warning times also increased. The intensity decreased as the distance from the epicenter grew. We determined which cities were most vulnerable and produced a list of cities and their predicted warning times in this hypothetical scenario. For example, for the epicenter in northern Hayward, the cities at most risk were San Pablo, Richmond, and surrounding cities, while the cities at least risk were Gilroy, Modesto, Lincoln, and other cities within that radius. To find optimal station placement, we chose two cities with stations placed variable distances apart from each other. There was more variability in scattered stations than dense stations, suggesting stations placed closer together are more effective since they provide precise warnings. We compared the algorithms ElarmS, which is currently used in the California Integrated Seismic Network (CISN), and Onsite, which is a single-sensor approach that uses one to two stations, by calculating the blind zone and warning times for each
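The warning-time and blind-zone geometry described above has a simple back-of-envelope form: the S wave does the damage, so the warning at epicentral distance d is the S arrival time minus the time needed to detect the P wave at a nearby station and issue the alert. The velocities, station distance, and processing latency below are illustrative assumptions, not the study's fitted values.

```python
# Typical crustal velocities (km/s) and an assumed alerting latency (s).
VP, VS = 6.2, 3.6
T_PROC = 4.0

def warning_time(d_km, d_station_km=10.0):
    """Seconds of warning at epicentral distance d_km."""
    t_alert = d_station_km / VP + T_PROC   # P detection + processing
    t_s = d_km / VS                        # S-wave arrival
    return t_s - t_alert

def blind_zone_radius(d_station_km=10.0):
    """Distance inside which the S wave beats the alert (warning <= 0)."""
    return VS * (d_station_km / VP + T_PROC)

for d in (20, 50, 100):
    print(f"{d:>4} km: {warning_time(d):5.1f} s of warning")
print("blind zone radius:", round(blind_zone_radius(), 1), "km")
```

This also shows why dense station spacing matters: shrinking `d_station_km` directly shrinks the blind zone, consistent with the study's finding that closely spaced stations give more useful warnings.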
NASA Astrophysics Data System (ADS)
Alvarado, Patricia; Ramos, Victor A.
2011-04-01
We investigate the seismic properties of modern crustal seismicity in the northwestern Sierras Pampeanas of the Andean retroarc region of Argentina. We modelled the complete regional seismic broadband waveforms of two crustal earthquakes that occurred in the Sierra de Velasco on 28 May 2002 and in the Sierra de Ambato on 7 September 2004. For each earthquake we obtained the seismic moment tensor inversion (SMTI) and tested for its focal depth. Our results indicate mainly thrust focal mechanism solutions of magnitudes Mw 5.8 and 6.2 and focal depths of 10 and 8 km, respectively. These results represent the largest seismicity and shallowest focal depths in the last 100 years in this region. The SMTI 2002 and 2004 solutions are consistent with previous determinations for crustal seismicity in this region that also used seismic waveform modelling. Taken together, the results for crustal seismicity of magnitudes ≥5.0 in the last 30 years are consistent with an average horizontal P-axis with an azimuth of 125° and a T-axis with an azimuth of 241° and a plunge of 58°. This modern crustal seismicity and the historical earthquakes are associated with two active reverse faulting systems of opposite vergences bounding the eastern margin of the Sierra de Velasco in the south and the southwestern margin of the Sierra de Ambato in the north. Strain recorded by focal mechanisms of the larger seismicity is very consistent over this region and is in good agreement with neotectonic activity during the last 11,000 years reported by Costa (2008) and Casa et al. (in press); this shows that the dominant deformation in this part of the Sierras Pampeanas is mainly controlled by contraction. Seismic deformation related to the propagation of thrusts and long-lived shear zones in this area permits us to disregard previous proposals, which suggested an extensional or sinistral regime for the geomorphic evolution since the Pleistocene.
NASA Astrophysics Data System (ADS)
McCormack, K. A.; Hesse, M. A.; Stadler, G.
2015-12-01
Remote sensing and geodetic measurements are providing a new wealth of spatially distributed, time-series data that have the ability to improve our understanding of co-seismic rupture and post-seismic processes in subduction zones. We formulate a Bayesian inverse problem to infer the slip distribution on the plate interface using an elastic finite element model and GPS surface deformation measurements. We present an application to the co-seismic displacement during the 2012 earthquake on the Nicoya Peninsula in Costa Rica, which is uniquely positioned close to the Middle America Trench and directly over the seismogenic zone of the plate interface. The results of our inversion are then used as an initial condition in a coupled poroelastic forward model to investigate the role of poroelastic effects on post-seismic deformation and stress transfer. From this study we identify a horseshoe-shaped rupture area with a maximum slip of approximately 2.5 meters surrounding a locked patch that is likely to release stress in the future. We model the co-seismic pore pressure change as well as the pressure evolution and resulting deformation in the months after the earthquake. The results of the forward model indicate that earthquake-induced pore pressure changes dissipate quickly near the surface, resulting in relaxation of the surface in the seven to ten days following the earthquake. Near the subducting slab interface, pore pressure changes are an order of magnitude larger and may persist for many months after the earthquake.
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.
2016-01-01
The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench; most of the seismic gap was left unbroken, leaving the possibility of a future large earthquake in the region.
Stanley, Dal; Villaseñor, Antonio; Benz, Harley
1999-01-01
The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This
NASA Astrophysics Data System (ADS)
Ioki, Kei; Tanioka, Yuichiro
2016-01-01
Paleotsunami research has revealed that a great earthquake occurred off eastern Hokkaido, Japan, and generated a large tsunami in the 17th century. Tsunami deposits from this event have been found far inland from the Pacific coast in eastern Hokkaido. A previous study estimated the fault model of the 17th century great earthquake by comparing locations of lowland tsunami deposits and computed tsunami inundation areas. Tsunami deposits were also traced on a high cliff near the coast, as high as 18 m above sea level. A recent paleotsunami study also traced tsunami deposits at other high cliffs along the Pacific coast. The fault model estimated by the previous study cannot explain the tsunami deposit data at high cliffs near the coast. In this study, we estimated a fault model of the 17th century great earthquake that explains both the widespread lowland tsunami deposit areas and the tsunami deposit data at high cliffs near the coast. We found that the distributions of lowland tsunami deposits were mainly explained by a wide rupture area at the plate interface in the Tokachi-Oki and Nemuro-Oki segments. Tsunami deposits at high cliffs near the coast were mainly explained by a very large slip of 25 m at the shallow part of the plate interface near the trench in those segments. The total seismic moment of the 17th century great earthquake was calculated to be 1.7 × 10^22 Nm (Mw 8.8). The 2011 great Tohoku earthquake ruptured a large area off Tohoku, and a very large slip amount was found at the shallow part of the plate interface near the trench. The 17th century great earthquake had the same characteristics as the 2011 great Tohoku earthquake.
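The quoted Mw 8.8 follows directly from the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment in N·m (standard IASPEI formula)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# The abstract's total moment of 1.7e22 N·m:
mw = moment_magnitude(1.7e22)
print(round(mw, 1))  # → 8.8
```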
Irrera, Alessia; Magazzù, Alessandro; Artoni, Pietro; Simpson, Stephen H; Hanna, Simon; Jones, Philip H; Priolo, Francesco; Gucciardi, Pietro Giuseppe; Maragò, Onofrio M
2016-07-13
We measure, by photonic torque microscopy, the nonconservative rotational motion arising from the transverse components of the radiation pressure on optically trapped, ultrathin silicon nanowires. Unlike spherical particles, we find that nonconservative effects have a significant influence on the nanowire dynamics in the trap. We show that the extreme shape of the trapped nanowires yields a transverse component of the radiation pressure that results in an orbital rotation of the nanowire about the trap axis. We study the resulting motion as a function of optical power and nanowire length, discussing its size-scaling behavior. These shape-dependent nonconservative effects have implications for optical force calibration and optomechanics with levitated nonspherical particles.
NASA Technical Reports Server (NTRS)
Reches, Ze'ev; Schubert, Gerald; Anderson, Charles
1994-01-01
We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models are for antiplane strain in a 25-km-thick crustal layer having a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle, which lasts 160 years, a constant plate velocity v_p/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models which are 10-100 times less viscous than a quartzite-like rheology. Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude
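The rheology described above, viscosity exponential in inverse absolute temperature and nonlinear in shear stress, is the usual power-law creep form eta_eff = tau / (2 * strain_rate) with strain_rate = A * tau^n * exp(-Q/RT). The prefactor, activation energy, and stress exponent below are rough illustrative numbers, not the paper's calibrated quartzite or diabase parameters:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def eta_eff(tau_mpa, T_kelvin, A=1e-20, Q=1.9e5, n=3.0):
    """Effective viscosity (Pa s) for power-law creep with Arrhenius temperature
    dependence; A, Q, n are illustrative quartzite-like values (assumed)."""
    tau = tau_mpa * 1e6                                   # MPa -> Pa
    strain_rate = A * tau ** n * math.exp(-Q / (R * T_kelvin))
    return tau / (2.0 * strain_rate)

# At fixed stress, hotter lower crust is orders of magnitude less viscous:
ratio = eta_eff(10.0, 600.0) / eta_eff(10.0, 800.0)
print("viscosity ratio, 600 K vs 800 K:", ratio)
```

The strong temperature sensitivity this exhibits is why the model's transient behavior depends so heavily on the assumed crustal geotherm and rock type.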
Helmstetter, A; Sornette, D
2002-12-01
The epidemic-type aftershock sequence (ETAS) model is a simple stochastic process modeling seismicity, based on the two best-established empirical laws, the Omori law (power-law decay ~1/t^(1+theta) of seismicity after an earthquake) and the Gutenberg-Richter law (power-law distribution of earthquake energies). In order to also describe the space distribution of seismicity, we use in addition a power-law distribution ~1/r^(1+mu) of distances between triggered and triggering earthquakes. The ETAS model has been studied for the last two decades to model real seismicity catalogs and to obtain short-term probabilistic forecasts. Here, we present a mapping between the ETAS model and a class of CTRW (continuous time random walk) models, based on the identification of their corresponding master equations. This mapping allows us to use the wealth of results previously obtained on anomalous diffusion of CTRW. After translating into the relevant variable for the ETAS model, we provide a classification of the different regimes of diffusion of seismic activity triggered by a mainshock. Specifically, we derive the relation between the average distance between aftershocks and the mainshock as a function of the time from the mainshock and of the joint probability distribution of the times and locations of the aftershocks. The different regimes are fully characterized by the two exponents theta and mu. Our predictions are checked by careful numerical simulations. We stress the distinction between the "bare" Omori law describing the seismic rate activated directly by a mainshock and the "renormalized" Omori law taking into account all possible cascades from mainshocks to aftershocks of aftershocks of aftershocks, and so on. In particular, we predict that seismic diffusion or subdiffusion occurs and should be observable only when the observed Omori exponent is less than 1, because this signals the operation of the renormalization of the bare Omori law, also at the
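The "bare" Omori ingredient of ETAS, the direct aftershock rate of a single mainshock, can be sampled as a non-homogeneous Poisson process by Ogata-style thinning. This is a sketch of that one ingredient only (no cascading generations); the values of K, c, and theta are illustrative, not fitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Modified-Omori rate lambda(t) = K / (c + t)^(1+theta).
K, c, theta = 50.0, 0.1, 0.2
T = 100.0  # days

def omori_rate(t):
    return K / (c + t) ** (1.0 + theta)

# Thinning: because the rate is decreasing, the rate at the current time
# bounds all future rates, so it is a valid local upper bound.
times, t = [], 0.0
while t < T:
    lam_max = omori_rate(t)
    t += rng.exponential(1.0 / lam_max)
    if t < T and rng.random() < omori_rate(t) / lam_max:
        times.append(t)
times = np.array(times)

early = int(np.sum(times < 1.0))                    # events in day 1
late = int(np.sum((times > 50.0) & (times < 51.0))) # events in day 50-51
print("day 1:", early, "| day 50-51:", late)
```

The strong front-loading of events (roughly 150 in the first day versus well under one per day by day 50, for these parameters) is the Omori decay; a full ETAS simulation would let each sampled event spawn its own Omori sequence, producing the renormalized law the abstract discusses.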
Modeling Recent Large Earthquakes Using the 3-D Global Wave Field
NASA Astrophysics Data System (ADS)
Hjörleifsdóttir, V.; Kanamori, H.; Tromp, J.
2003-04-01
We use the spectral-element method (SEM) to accurately compute waveforms at periods of 40 s and longer for three recent large earthquakes using 3D Earth models and finite source models. The Mw 7.6, Jan 26, 2001, Bhuj, India event had a small rupture area and is well modeled at long periods with a point source. We use this event as a calibration event to investigate the effects of 3-D Earth models on the waveforms. The Mw 7.9, Nov 11, 2001, Kunlun, China, event exhibits a large directivity (an asymmetry in the radiation pattern) even at periods longer than 200 s. We used the source time function determined by Kikuchi and Yamanaka (2001) and the overall pattern of slip distribution determined by Lin et al. to guide the waveform modeling. The large directivity is consistent with a long fault, at least 300 km, and an average rupture speed of 3±0.3 km/s. The directivity at long periods is not sensitive to variations in the rupture speed along strike as long as the average rupture speed is constant. Thus, local variations in rupture speed cannot be ruled out. The rupture speed is a key parameter for estimating the fracture energy of earthquakes. The Mw 8.1, March 25, 1998, event near the Balleny Islands on the Antarctic Plate exhibits large directivity in long-period surface waves, similar to the Kunlun event. Many slip models have been obtained from body waves for this earthquake (Kuge et al. (1999), Nettles et al. (1999), Antolik et al. (2000), Henry et al. (2000) and Tsuboi et al. (2000)). We used the slip model from Henry et al. to compute SEM waveforms for this event. The synthetic waveforms show a good fit to the data at periods from 40-200 s, but the amplitude and directivity at longer periods are significantly smaller than observed. Henry et al. suggest that this event comprised two subevents, with one triggering the other at a distance of 100 km. To explain the observed directivity, however, a significant amount of slip is required between the two subevents
Improving Earthquake-Explosion Discrimination using Attenuation Models of the Crust and Upper Mantle
Pasyanos, M E; Walter, W R; Matzel, E M; Rodgers, A J; Ford, S R; Gok, R; Sweeney, J J
2009-07-06
In the past year, we have made significant progress on developing and calibrating methodologies to improve earthquake-explosion discrimination using high-frequency regional P/S amplitude ratios. Closely-spaced earthquakes and explosions generally discriminate easily using this method, as demonstrated by recordings of explosions from test sites around the world. In relatively simple geophysical regions such as the continental parts of the Yellow Sea and Korean Peninsula (YSKP) we have successfully used a 1-D Magnitude and Distance Amplitude Correction methodology (1-D MDAC) to extend the regional P/S technique over large areas. However in tectonically complex regions such as the Middle East, or the mixed oceanic-continental paths for the YSKP the lateral variations in amplitudes are not well predicted by 1-D corrections and 1-D MDAC P/S discrimination over broad areas can perform poorly. We have developed a new technique to map 2-D attenuation structure in the crust and upper mantle. We retain the MDAC source model and geometrical spreading formulation and use the amplitudes of the four primary regional phases (Pn, Pg, Sn, Lg), to develop a simultaneous multi-phase approach to determine the P-wave and S-wave attenuation of the lithosphere. The methodology allows solving for attenuation structure in different depth layers. Here we show results for the P and S-wave attenuation in crust and upper mantle layers. When applied to the Middle East, we find variations in the attenuation quality factor Q that are consistent with the complex tectonics of the region. For example, provinces along the tectonically-active Tethys collision zone (e.g. Turkish Plateau, Zagros) have high attenuation in both the crust and upper mantle, while the stable outlying regions like the Indian Shield generally have low attenuation. In the Arabian Shield, however, we find that the low attenuation in this Precambrian crust is underlain by a high-attenuation upper mantle similar to the nearby Red
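The amplitude-correction step behind this kind of P/S discrimination can be caricatured as removing a geometrical-spreading plus anelastic-attenuation trend from log amplitudes before forming the ratio. The quality factors, frequency, velocity, and spreading exponent below are invented for illustration and are not the calibrated MDAC parameters:

```python
import numpy as np

def corrected_log_amp(log_amp, dist_km, q=400.0, f=6.0, v=3.5, spread=1.1):
    """log10 amplitude with spreading and attenuation trends added back.
    All coefficients are illustrative assumptions."""
    spreading = spread * np.log10(dist_km)                 # geometrical spreading
    attenuation = np.pi * f * dist_km / (q * v) / np.log(10.0)  # exp(-pi f t*)
    return log_amp + spreading + attenuation

# Hypothetical Pn and Lg log amplitudes observed at 300 km
# (Lg typically sees a lower effective Q than Pn):
pn = corrected_log_amp(-2.1, 300.0, q=400.0)
lg = corrected_log_amp(-1.8, 300.0, q=250.0)
ratio = pn - lg   # corrected log10(P/S); explosions tend toward higher values
print("corrected log10 Pn/Lg:", round(ratio, 2))
```

The 2-D extension described in the abstract replaces the single Q per phase with a path integral through laterally varying crust and upper-mantle attenuation models.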
NASA Astrophysics Data System (ADS)
Delavaud, E.; Scherbaum, F.; Riggelsen, C.
2008-12-01
Considering the increasing number of ground motion prediction equations (GMPEs) available for seismic hazard assessment, there is a pressing need for an efficient and robust method to select and rank these models. Although the information contained in macroseismic intensities is not yet perfectly understood, we believe that this under-exploited wealth of data can be successfully used for model selection. We apply criteria based on information theory to rank GMPEs for Californian-type earthquakes using both pseudo-spectral acceleration (PSA) and macroseismic intensity (MI) data. Synthetics are computed for 10 Californian earthquakes by 22 different GMPEs for PSA and combined with the equation of Atkinson and Sonley (2000) for MI. In order to reduce uncertainty, we take the fault geometry into account to directly compute the intrinsic distance metric of each prediction equation. Site effects and inter-event variability are also incorporated. Rankings based on PSA and MI data are found to be consistent, and we explore the relative information content of intensity versus response spectral data. We test the robustness of this information-theoretic method, which is presented in more detail in a companion paper (Scherbaum et al., 2008), and also perform a sensitivity study in terms of sampling and extended-source parameters.
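As a schematic of likelihood-based model ranking (a simplification of the information-theoretic approach referenced above; the Gaussian form and all values are assumptions for illustration), each candidate GMPE can be scored by the average negative log-likelihood it assigns to the observations:

```python
import numpy as np

def llh_score(log_obs, mu, sigma):
    """Average negative log-likelihood of observed log-ground-motions under a
    model predicting mean mu and standard deviation sigma (lower = better)."""
    log_obs = np.asarray(log_obs, dtype=float)
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (log_obs - mu)**2 / (2 * sigma**2))

def rank_models(log_obs, predictions):
    """predictions: dict name -> (mu, sigma). Returns model names, best first."""
    return sorted(predictions, key=lambda m: llh_score(log_obs, *predictions[m]))
```

A model whose predicted distribution is close to the data distribution receives a lower score, which is the essence of ranking by relative information loss.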
Geodetic measurements and kinematic modeling of the 2014 Iquique-Pisagua earthquake
NASA Astrophysics Data System (ADS)
Moreno, Marcos; Bedford, Jonathan; Li, Shaoyang; Bartsch, Mitja; Schurr, Bernd; Oncken, Onno; Klotz, Jürgen; Baez, Juan Carlos; Gonzalez, Gabriel
2015-04-01
The northern portion of the Chilean margin is considered to be a large and long-standing seismic gap based on the magnitude and date of the last great earthquake (Mw=8.8 in 1877). The central part of the gap was affected by the 2014 Iquique-Pisagua earthquake (Mw=8.1), which was preceded by an unusual series of foreshocks and transient deformation. The Integrated Plate Boundary Observatory Chile (IPOC) has extensively monitored the seismic gap with various geophysical and geodetic techniques, providing excellent temporal and spatial data coverage to analyze the kinematics of the plate interface leading up to the mainshock with unprecedented resolution. We use a viscoelastic finite-element model to investigate the subduction-zone mechanisms responsible for the observed GPS deformation field during the interseismic, coseismic and early postseismic periods. Furthermore, we separate the relative contributions of aseismic and seismic slip to the transient deformation leading up to and following the mainshock. Our analyses of the foreshocks and continuous-GPS transient signals indicate that seismic slip dominated over aseismic slip, and that slow slip was not a factor in the build-up to the Mw=8.1 mainshock. Hence, the observed transient signals before the Iquique-Pisagua event can be explained by deformation due to foreshock seismicity, which was triggered after a Mw=6.7 event on a splay fault. High coseismic slip concentrated on a previously highly locked area that exhibited little seismicity before the event. Foreshocks gradually occupied the center of the locked patch, decreasing the mechanical strength of the plate contact. The first two months of aseismic postseismic deformation show cumulative displacements of up to 10 cm around the rupture area. The early postseismic afterslip accounts for only about 20% of the coseismic moment. We conclude that the foreshock activity may have decreased the effective friction on the locked patch
Optimizing the Parameters of the Rate-and-State Constitutive Law in an Earthquake Clustering Model
NASA Astrophysics Data System (ADS)
Console, R.; Murru, M.; Catalli, F.
2004-12-01
The phenomenon of earthquake clustering, i.e. the increase of occurrence probability for seismic events close in space and time to previous earthquakes, has been modeled by both statistical and physical processes. From a statistical viewpoint, the so-called epidemic-type model (ETAS) introduced by Ogata in 1988 and its variations have become fairly well known in the seismological community. Tests on real seismicity and comparison with a plain time-independent Poissonian model through likelihood-based methods have reliably proved their validity. On the other hand, in the last decade many papers have been published on the so-called Coulomb stress change principle, based on the theory of elasticity, showing qualitatively that an increase of the Coulomb stress in a given area is usually associated with an increase of seismic activity. More specifically, the rate-and-state theory developed by Dieterich in the 1990s has given a physical justification to the phenomenon known as the Omori law. According to this law, a mainshock is followed by a series of aftershocks whose frequency decreases in time as an inverse power law. In this study we give an outline of the above-mentioned stochastic and physical models, and build up an approach by which these models can be merged into a single algorithm and statistically tested. The application to the seismicity of Japan from 1970 to 2003 shows that the new model, incorporating the physical concept of the rate-and-state theory, performs even better than the purely stochastic model, with a smaller number of free parameters.
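The inverse power-law decay invoked above is usually written in its modified Omori form, n(t) = K/(c + t)^p; a minimal numerical sketch (the parameter values are arbitrary placeholders, not fitted values from the study):

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p (t in days)."""
    return K / (c + t) ** p

def expected_count(t1, t2, K=100.0, c=0.1, p=1.1):
    """Expected number of aftershocks in [t1, t2]; closed-form integral
    of the rate, valid for p != 1."""
    return K * ((c + t1) ** (1 - p) - (c + t2) ** (1 - p)) / (p - 1)
```

In the merged model described in the abstract, a rate of this form emerges from the rate-and-state response to a coseismic stress step rather than being imposed empirically.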
Crustal deformation, the earthquake cycle, and models of viscoelastic flow in the asthenosphere
NASA Technical Reports Server (NTRS)
Cohen, S. C.; Kramer, M. J.
1983-01-01
The crustal deformation patterns associated with the earthquake cycle can depend strongly on the rheological properties of subcrustal material. Substantial deviations from the simple patterns for a uniformly elastic earth are expected when viscoelastic flow of subcrustal material is considered. The detailed description of the deformation pattern, and in particular the surface displacements, displacement rates, strains, and strain rates, depends on the structure and geometry of the material near the seismogenic zone. The origins of some of these differences are resolved by analyzing several different linear viscoelastic models with a common finite element computational technique. The models involve strike-slip faulting and include a thin-channel asthenosphere model, a model with a varying-thickness lithosphere, and a model with a viscoelastic inclusion below the brittle slip plane. The calculations reveal that the surface deformation pattern is most sensitive to the rheology of the material that lies below the slip plane in a volume whose extent is a few times the fault depth. If this material is viscoelastic, the surface deformation pattern resembles that of an elastic layer lying over a viscoelastic half-space. When the thickness or breadth of the viscoelastic material is less than a few times the fault depth, the surface deformation pattern is altered and geodetic measurements are potentially useful for studying the details of subsurface geometry and structure. Distinguishing among the various models is best accomplished by making geodetic measurements not only near the fault but also out to distances of several times the fault depth. This is where the model differences are greatest; these differences will be most readily detected shortly after an earthquake, when viscoelastic effects are most pronounced.
NASA Astrophysics Data System (ADS)
Matsuzawa, T.; Shibazaki, B.; Obara, K.; Hirose, H.
2014-12-01
We numerically simulate slow slip events (SSEs) along the Eurasian-Philippine Sea plate boundary in southwestern Japan to examine the behavior of SSEs over the seismic cycles of megathrust earthquakes within a single model. In our previous study (Matsuzawa et al., 2013), long- and short-term SSEs were reproduced in the Shikoku region by considering the distribution of tremor and the configuration of the subducting plate. However, variation over a seismic cycle was not discussed, because the calculated duration was short and the modeled region was not sufficiently large to simulate seismic cycles. In this study, we model SSEs and megathrust earthquakes along the subduction zone from the Shikoku to the Tokai region in southwestern Japan. In our numerical model, the rate- and state-dependent friction law (RS-law) with cut-off velocities is adopted. We assume low effective normal stress and a negative (a-b) value in the RS-law in the long- and short-term SSE regions. We model the configuration of the plate interface with triangular elements based on Baba et al. (2006), Shiomi et al. (2008), and Ide et al. (2010). Our numerical model reproduces recurrences of long- and short-term SSEs, and the segmentation of short-term SSEs. The recurrence intervals of short-term SSEs slightly decrease at the late stage of a seismic cycle, reflecting the increase of the long-term averaged slip rate in the short-term SSE region, as found in a flat-plate model (Matsuzawa et al., 2010). This decrease is more clearly found in the Kii region than in the Shikoku region. This difference may be attributed to the width between the short-term SSE region and the locked region of megathrust earthquakes, as the stress disturbance from transient SSEs, which occur between the locked region and the short-term SSE region (e.g. Matsuzawa et al., 2010, 2013), seems to be relatively small and infrequent due to the narrow width in the Kii region. In addition, as the plate configuration is relatively flat in the Kii region
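A minimal sketch of the Dieterich-Ruina rate- and state-dependent friction law with aging-law state evolution (shown here without the cut-off velocities used in the study; all parameter values are placeholders):

```python
import numpy as np

def rs_friction(V, theta, mu0=0.6, a=0.008, b=0.012, V0=1e-6, Dc=0.02):
    """mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc). A negative (a - b) gives
    velocity weakening, the condition assumed in the SSE regions."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def theta_dot(V, theta, Dc=0.02):
    """Aging-law state evolution: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - V * theta / Dc
```

At steady state (theta = Dc/V) the law reduces to mu = mu0 + (a - b)*ln(V/V0), so with a < b the interface weakens as it accelerates, which is what permits episodic slow slip.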
Equivalent strike-slip earthquake cycles in half-space and lithosphere-asthenosphere earth models
Savage, J.C.
1990-01-01
By virtue of the images used in the dislocation solution, the deformation at the free surface produced throughout the earthquake cycle by slippage on a long strike-slip fault in an Earth model consisting of an elastic plate (lithosphere) overlying a viscoelastic half-space (asthenosphere) can be duplicated by prescribed slip on a vertical fault embedded in an elastic half-space. Inversion of 1973-1988 geodetic measurements of deformation across the segment of the San Andreas fault in the Transverse Ranges north of Los Angeles for the half-space equivalent slip distribution suggests no significant slip on the fault above 30 km and a uniform slip rate of 36 mm/yr below 30 km. One equivalent lithosphere-asthenosphere model would have a 30-km thick lithosphere and an asthenosphere relaxation time greater than 33 years, but other models are possible. -from Author
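For reference, the half-space equivalent model described above predicts, for steady slip at rate s below locking depth D, the classic arctangent interseismic velocity profile of Savage and Burford (1973); the sketch below plugs in the 36 mm/yr and 30 km values quoted in the abstract:

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate=36.0, locking_depth_km=30.0):
    """Fault-parallel surface velocity (mm/yr) at distance x_km from a long
    strike-slip fault slipping at slip_rate (mm/yr) below locking_depth_km:
    v(x) = (s/pi) * arctan(x/D)."""
    return (slip_rate / np.pi) * np.arctan(x_km / locking_depth_km)
```

Far from the fault the profile asymptotes to +/- s/2 (here 18 mm/yr), and half of that velocity is reached at one locking depth from the trace.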
On Chinese earthquake history - An attempt to model an incomplete data set by point process analysis
Lee, W.H.K.; Brillinger, D.R.
1979-01-01
Since the 1950s, the Academia Sinica in Peking, People's Republic of China, has carried out extensive research on Chinese earthquake history. With a historical record dating back some 3000 years, a wealth of information on Chinese earthquakes exists. Despite this monumental undertaking by the Academia Sinica, much work is still necessary to correct the existing earthquake data for historical changes in population, customs, modes of communication, and dynasties. In this paper we report on the status of our investigation of Chinese earthquake history and present some preliminary results. By applying point process analysis to earthquakes in 'Central China', we found suggestions of (1) lower earthquake activity at intervals of about 175 years and 375 years, and (2) higher earthquake activity at an interval of about 300 years. © 1979 Birkhäuser Verlag.
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata; Basili, Roberto; Barba, Salvatore
2016-07-01
This study presents a series of self-correcting models that are obtained by integrating information about seismicity and fault sources in Italy. Four versions of the stress release model are analyzed, in which the evolution of the system over time is represented by the level of strain, moment, seismic energy, or energy scaled by the moment. We carry out the analysis on a regional basis by subdividing the study area into eight tectonically coherent regions. In each region, we reconstruct the seismic history and statistically evaluate the completeness of the resulting seismic catalog. Following the Bayesian paradigm, we apply Markov chain Monte Carlo methods to obtain parameter estimates and a measure of their uncertainty expressed by the simulated posterior distribution. The comparison of the four models through the Bayes factor and an information criterion provides evidence (to different degrees depending on the region) in favor of the stress release model based on the energy and the scaled energy. Therefore, among the quantities considered, this turns out to be the measure of the size of an earthquake to use in stress release models. At any instant, the time to the next event turns out to follow a Gompertz distribution, with a shape parameter that depends on time through the value of the conditional intensity at that instant. In light of this result, the issue of forecasting is tackled through both retrospective and prospective approaches. Retrospectively, the forecasting procedure is carried out on the occurrence times of the events recorded in each region, to determine whether the stress release model reproduces the observations used in the estimation procedure. Prospectively, the estimates of the time to the next event are compared with the dates of the earthquakes that occurred after the end of the learning catalog, in the 2003-2012 decade.
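As an illustration of the Gompertz waiting-time result (the parameterization and numerical values below are assumptions for illustration, not the estimates obtained in the study), the distribution can be evaluated and sampled by inverting its closed-form CDF:

```python
import numpy as np

def gompertz_cdf(t, b=1.0, c=0.5):
    """F(t) = 1 - exp(-(b/c) * (exp(c*t) - 1)); c is the shape parameter."""
    return 1.0 - np.exp(-(b / c) * np.expm1(c * t))

def gompertz_sample(u, b=1.0, c=0.5):
    """Inverse-CDF sampling: map u in (0, 1) to a waiting time."""
    return np.log1p(-(c / b) * np.log1p(-u)) / c
```

In a forecasting setting like the one described above, the shape parameter would be tied to the conditional intensity of the stress release model at the evaluation instant.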
NASA Astrophysics Data System (ADS)
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has become a very active research direction, especially with the availability of very high resolution (VHR) satellite images: quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, the extraction and evaluation of textural information is generally time-consuming, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran, earthquake was used to identify the earthquake damage. Textural information was used during the classification in addition to spectral information. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), giving the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
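A minimal pure-NumPy sketch of the second-order (GLCM-based) texture computation mentioned above, for a single pixel offset; production work would typically use a library implementation such as scikit-image:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    img must contain integer gray levels in [0, levels)."""
    img = np.asarray(img)
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1.0
    return P / P.sum()

def haralick_contrast(P):
    """Haralick contrast: sum over (i, j) of (i - j)**2 * P(i, j)."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

Computing several such statistics over sliding windows and multiple offsets yields the texture feature stack from which a selection method like HDMR would pick the most damage-sensitive features.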
NASA Astrophysics Data System (ADS)
Lohman, Rowena B.; Simons, Mark; Savage, Brian
2002-06-01
We use interferometric synthetic aperture radar (InSAR) and broadband seismic waveform data to estimate source parameters of the 29 June 1992, Ms 5.4 Little Skull Mountain (LSM) earthquake. This event occurred within a geodetic network designed to measure the strain rate across the region around Yucca Mountain. The LSM earthquake complicates interpretation of the existing GPS and trilateration data, as the earthquake magnitude is sufficiently small that seismic data do not tightly constrain the epicenter, but large enough to potentially affect the geodetic observations. We model the InSAR data using a finite dislocation in a layered elastic space. We also invert regional seismic waveforms, both alone and jointly with the InSAR data. Because of limitations in the existing data set, InSAR data alone cannot determine the area of the fault plane independently of the magnitude of slip, nor the location of the fault plane independently of the earthquake mechanism. Our seismic waveform data tightly constrain the mechanism of the earthquake but not the location. Together, the two complementary data types can be used to determine the mechanism and location but cannot distinguish between the two potential conjugate fault planes. Our preferred model has a moment of ~3.2 × 10^17 N m (Mw 5.6) and predicts a line length change between the Wahomie and Mile geodetic benchmarks of ~5 mm.
Foxall, W.
1992-11-01
Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the Central Creeping Section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models, and by surface creep data, to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.
NASA Astrophysics Data System (ADS)
Croissant, Thomas; Lague, Dimitri; Davy, Philippe; Steer, Philippe
2016-04-01
In active mountain ranges, large earthquakes (Mw > 5-6) trigger numerous landslides that impact river dynamics. These landslides deliver sudden, localized piles of sediment that are then eroded and transported along the river network, causing downstream changes in river geometry, transport capacity and erosion efficiency. The progressive removal of landslide material has implications for downstream hazard management and for understanding landscape dynamics at the timescale of the seismic cycle. The export time of landslide-derived sediments after large-magnitude earthquakes has been studied from suspended-load measurements, but a full understanding of the process, including the coupling between sediment transfer and channel geometry change, is still lacking. The transport of small sediment pulses has been studied in the context of river restoration, but the magnitude of the sediment pulses generated by landslides may make the problem different. Here, we study the export of large volumes (>10^6 m^3) of sediment with the 2D hydro-morphodynamic model Eros. This model uses a new hydrodynamic module that resolves a reduced form of the Saint-Venant equations with a particle method. It is coupled with a sediment transport model and a lateral and vertical erosion model. Eros accounts for the complex feedbacks between sediment transport and fluvial geometry, with a stochastic description of the floods experienced by the river. Moreover, it is able to reproduce several features deemed necessary to study the evacuation of large sediment pulses, such as river regime modification (single-thread to multi-thread), river avulsion and aggradation, floods and bank erosion. Using a simple synthetic topography, we first present how granulometry, landslide volume and geometry, channel slope and flood frequency influence (1) the dominance of pulse advection vs. diffusion during its evacuation, (2) the pulse export time and (3) the volume of sediment remaining in the catchment
Modelling coseismic displacements during the 1997 Umbria-Marche earthquake (central Italy)
NASA Astrophysics Data System (ADS)
Hunstad, Ingrid; Anzidei, Marco; Cocco, Massimo; Baldi, Paolo; Galvani, Alessandro; Pesci, Arianna
1999-11-01
We propose a dislocation model for the two normal faulting earthquakes that struck the central Apennines (Umbria-Marche, Italy) on 1997 September 26 at 00:33 (Mw 5.7) and 09:40 GMT (Mw 6.0). We fit coseismic horizontal and vertical displacements resulting from GPS measurements at several monuments of the IGMI (Istituto Geografico Militare Italiano) by means of a dislocation model in an elastic, homogeneous, isotropic half-space. Our best-fitting model consists of two normal faults whose mechanisms and seismic moments have been taken from CMT solutions; it is consistent with other seismological and geophysical observations. The first fault, which is 6 km long and 7 km wide, ruptured during the 00:33 event with a unilateral rupture towards the SE and an average slip of 27 cm. The second fault is 12 km long and 10 km wide, and ruptured during the 09:40 event with a nearly unilateral rupture towards the NW. Slip distribution on this second fault is non-uniform and is concentrated in its SE portion (maximum slip is 65 cm), where rupture initiated. The 00:33 fault is deeper than the 09:40 one: the top of the first rupture is deeper than 1.7 km, while the top of the second is 0.6 km deep. In order to interpret the observed epicentral subsidence we have also considered the contributions of two further moderate-magnitude earthquakes that occurred on 1997 October 3 (Mw 5.2) and 6 (Mw 5.4), immediately before the GPS survey, and were located very close to the 09:40 event of September 26. We compare the pattern of vertical displacements resulting from our forward modelling of GPS data with that derived from SAR interferograms: the fit to SAR data is very good, confirming the reliability of the proposed dislocation model.
Solomon Islands 2007 Tsunami Near-Field Modeling and Source Earthquake Deformation
NASA Astrophysics Data System (ADS)
Uslu, B.; Wei, Y.; Fritz, H.; Titov, V.; Chamberlin, C.
2008-12-01
The earthquake of 1 April 2007 left behind dramatic evidence of crustal rupture and tsunami impact along the coastline of the Solomon Islands (Fritz and Kalligeris, 2008; Taylor et al., 2008; McAdoo et al., 2008; PARI, 2008), while undisturbed tsunami signals were also recorded at nearby deep-ocean tsunameters and coastal tide stations. These multi-dimensional measurements provide valuable datasets for tackling the challenging aspects of the tsunami source directly, by inversion from tsunameter records in real time (available within minutes), and of its relationship with the seismic source derived either from seismometer records (available within hours or days) or from crust-rupture measurements (available within months or years). The tsunami measurements in the near field, including the complex vertical crust motion and tsunami runup, are particularly critical for interpreting the tsunami source. This study develops high-resolution inundation models for the Solomon Islands to compute the near-field tsunami impact. Using these models, this research compares the tsunameter-derived tsunami source with the seismic-derived earthquake sources from several perspectives, including vertical uplift and subsidence, tsunami runup heights and their distribution among the islands, deep-ocean tsunameter measurements, and near- and far-field tide gauge records. The present study stresses the significance of the tsunami magnitude, source location, bathymetry and topography in accurately modeling the generation, propagation and inundation of the tsunami waves. This study highlights the accuracy and efficiency of the tsunameter-derived tsunami source in modeling the near-field tsunami impact. As the high-resolution models developed in this study will become part of NOAA's tsunami forecast system, these results also suggest expanding the system for potential applications in tsunami hazard assessment, search and rescue operations
Inverse kinematic and forward dynamic models of the 2002 Denali fault earthquake, Alaska
Oglesby, D.D.; Dreger, Douglas S.; Harris, R.A.; Ratchkovski, N.; Hansen, R.
2004-01-01
We perform inverse kinematic and forward dynamic models of the M 7.9 2002 Denali fault, Alaska, earthquake to shed light on the rupture process and dynamics of this event, which took place on a geometrically complex fault system in central Alaska. We use a combination of local seismic and Global Positioning System (GPS) data for our kinematic inversion and find that the slip distribution of this event is characterized by three major asperities on the Denali fault. The rupture nucleated on the Susitna Glacier thrust fault, and after a pause, propagated onto the strike-slip Denali fault. Approximately 216 km to the east, the rupture abandoned the Denali fault in favor of the more southeasterly directed Totschunda fault. Three-dimensional dynamic models of this event indicate that the abandonment of the Denali fault for the Totschunda fault can be explained by the Totschunda fault's more favorable orientation with respect to the local stress field. However, a uniform tectonic stress field cannot explain the complex slip pattern in this event. We also find that our dynamic models predict discontinuous rupture from the Denali to the Totschunda fault segment. Such discontinuous rupture helps to qualitatively improve our kinematic inverse models. Two principal implications of our study are (1) a combination of inverse and forward modeling can bring insight into earthquake processes that is not possible with either technique alone, and (2) the stress field on geometrically complex fault systems is most likely not due to a uniform tectonic stress field resolved onto fault segments of different orientations; rather, other forms of stress heterogeneity must be invoked to explain the observed slip patterns.
Non-conservative optical forces and Brownian vortexes
NASA Astrophysics Data System (ADS)
Sun, Bo
Optical manipulation using optical tweezers has been widely adopted in physics, chemical engineering and biology. While most applications and fundamental studies of optical trapping have focused on optical forces resulting from intensity gradients, we have also explored the role of radiation pressure, which is directed by phase gradients in beams of light. Interestingly, radiation pressure turns out to be a non-conservative force and drives trapped objects out of thermodynamic equilibrium with their surrounding media. We have demonstrated the resulting nonequilibrium effects experimentally by tracking the thermally driven motions of optically trapped colloidal spheres using holographic video microscopy. Rather than undergoing equilibrium thermal fluctuations, as has been assumed for more than a quarter century, a sphere in an optical tweezer enters into a stochastic steady-state characterized by closed loops in its probability current density. These toroidal vortexes constitute a bias in the particle's otherwise random thermal fluctuations arising at least indirectly from a solenoidal component in the optical force. This surprising effect is a particular manifestation of a more general class of noise-driven machines that we call Brownian vortexes. This previously unrecognized class of stochastic heat engines operates on qualitatively different principles from such extensively studied nonequilibrium systems as thermal ratchets and Brownian motors. Among its interesting properties, a Brownian vortex can reverse its direction with changes in temperature or equivalent control parameters.
Rodgers, A
2000-12-28
This is an informal report on preliminary efforts to investigate earthquake focal mechanisms and earth structure in the Anatolian (Turkish) Plateau. Seismic velocity structure of the crust and upper mantle and earthquake focal parameters for events in the Anatolian Plateau are estimated from complete regional waveforms. Focal mechanisms, depths and seismic moments of moderately large crustal events are inferred from long-period (40-100 seconds) waveforms and compared with focal parameters derived from global teleseismic data. Using shorter periods (10-100 seconds) we estimate the shear and compressional velocity structure of the crust and uppermost mantle. Results are broadly consistent with previous studies and imply relatively little crustal thickening beneath the central Anatolian Plateau. Crustal thickness is about 35 km in western Anatolia and greater than 40 km in eastern Anatolia; however, the long regional paths require considerable averaging and limit resolution. Crustal velocities are lower than typical continental averages, and even lower than those of typical active orogens. The mantle P-wave velocity was fixed at 7.9 km/s, in accord with tomographic models. A high sub-Moho Poisson's ratio of 0.29 was required to fit the Sn-Pn differential times. This is suggestive of high sub-Moho temperatures, high shear-wave attenuation and possibly partial melt. The combination of relatively thin crust in a region of high topography and high mantle temperatures suggests that the mantle plays a substantial role in maintaining the elevation.
Model for episodic flow of high-pressure water in fault zones before earthquakes
Byerlee, J.
1993-01-01
In this model for the evolution of large crustal faults, water originally from the country rock saturates the porous and permeable fault zone. During shearing, the fault zone compacts and water flows back into the country rock, but the flow is arrested by silicate deposition that forms low permeability seals. The fluid will be confined to seal-bounded fluid compartments of various sizes and porosity that are not hydraulically connected with each other. When the seal between two compartments is ruptured, an electrical streaming potential will be generated by the sudden movement of fluid from the high-pressure compartment to the low-pressure compartment. During an earthquake the width of the fault zone will increase by failure of the geometric irregularities on the fault. This newly created, porous and permeable, wider fault zone will fill with water, and the process described above will be repeated. Thus, the process is episodic with the water moving in and out of the fault zone, and each large earthquake should be preceded by an electrical and/or magnetic signal. -from Author
Shallow low-velocity zone of the San Jacinto fault from local earthquake waveform modelling
NASA Astrophysics Data System (ADS)
Yang, Hongfeng; Zhu, Lupei
2010-10-01
We developed a method to determine the depth extent of the low-velocity zone (LVZ) associated with a fault zone (FZ) using S-wave precursors from local earthquakes. The precursors are S waves diffracted around the edges of the LVZ, and their amplitudes relative to the direct S waves are sensitive to the LVZ depth. We applied the method to data recorded by three temporary arrays across three branches of the San Jacinto FZ. The FZ dip was constrained by differential traveltimes of P waves between stations on the two sides of the FZ. Other FZ parameters (width and velocity contrast) were determined by modelling waveforms of direct and FZ-reflected P and S waves. We found that the LVZ of the Buck Ridge fault branch has a width of ~150 m with a 30-40 per cent reduction in Vp and a 50-60 per cent reduction in Vs. The fault dips 70 +/- 5° to the southwest and its LVZ extends to only 2 +/- 1 km in depth. The LVZ of the Clark Valley fault branch has a width of ~200 m with a 40 per cent reduction in Vp and a 50 per cent reduction in Vs. The Coyote Creek branch is nearly vertical and has a LVZ ~150 m wide with a 25 per cent reduction in Vp and a 50 per cent reduction in Vs. The LVZs of these three branches are not centred on the surface fault trace but are located to its northeast, indicating asymmetric damage during earthquakes.
Thrust-type subduction-zone earthquakes and seamount asperities: A physical model for seismic rupture
Cloos, M.
1992-07-01
A thrust-type subduction-zone earthquake of Mw 7.6 ruptures an area of ~6,000 km², has a seismic slip of ~1 m, and is nucleated by the rupture of an asperity ~25 km across. A model for thrust-type subduction-zone seismicity is proposed in which basaltic seamounts jammed against the base of the overriding plate act as strong asperities that rupture by stick-slip faulting. A Mw 7.6 event would correspond to the near-basal rupture of a ~2-km-tall seamount. The base of the seamount is surrounded by a low shear-strength layer composed of subducting sediment that also deforms between seismic events by distributed strain (viscous flow). Planar faults form in this layer as the seismic rupture propagates out of the seamount at speeds of kilometers per second. The faults in the shear zone are disrupted after the event by aseismic, slow viscous flow of the subducting sediment layer. Consequently, the extent of fault rupture varies for different earthquakes nucleated at the same seamount asperity because new fault surfaces form in the surrounding subducting sediment layer during each fast seismic rupture.
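As a rough consistency check (not from the paper), the quoted rupture area and slip can be tied to the quoted magnitude through the seismic moment M0 = mu * A * D and the Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1); the rigidity mu below is an assumed value:

```python
import math

# Check the stated Mw 7.6 against rupture area ~6,000 km^2 and slip ~1 m.
# The rigidity mu (~40 GPa) is an assumption, not a value from the paper.

def moment_magnitude(area_km2, slip_m, mu_pa=4e10):
    area_m2 = area_km2 * 1e6
    m0 = mu_pa * area_m2 * slip_m          # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

mw = moment_magnitude(6000.0, 1.0)
print(f"Mw ~ {mw:.1f}")                     # close to the stated Mw 7.6
```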
Block model of western US kinematics from inversion of geodetic, fault slip, and earthquake data
NASA Astrophysics Data System (ADS)
McCaffrey, R.
2003-12-01
The active deformation of the southwestern US (30° to 41° N) is represented by a finite number of rotating, elastic spherical caps. Horizontal GPS velocities (1583), fault slip rates (94), and earthquake slip vectors (116) are inverted for block angular velocities, locking on block-bounding faults, and the rotation of individual GPS velocity fields relative to North America. GPS velocities are modeled as a combination of rigid block rotations and elastic strain rates resulting from interactions of adjacent blocks across bounding faults. The resulting Pacific - North America pole is indistinguishable from that of Beavan et al. (2001) and satisfies spreading in the Gulf of California and earthquake slip vectors in addition to GPS. The largest blocks, the Sierra Nevada - Great Valley and the eastern Basin and Range, show internal strain rates, after removal of the elastic component, of only a few nanostrain/a, demonstrating long-term, approximately rigid behavior. Most fault slip data are satisfied, except that the San Jacinto fault appears to be significantly faster than inferred from geology while the Coachella and San Bernardino segments of the San Andreas fault are slower, suggesting the San Andreas system is straightening out in Southern California. Vertical-axis rotation rates for most blocks are clockwise and closer in magnitude to the Pacific's than to North America's. One exception is the eastern Basin and Range (242° E to 248° E), which rotates slowly anticlockwise about a pole offshore Baja.
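The rigid-rotation part of such a block model reduces to the speed of a surface point about an Euler pole, v = omega * R * sin(delta), with delta the angular distance from the pole. A sketch with an illustrative pole location and rate (not the paper's Pacific - North America estimate):

```python
import math

# Horizontal speed of a point on a rigid spherical block rotating about an
# Euler pole. Pole position and rotation rate below are illustrative.

R_EARTH_KM = 6371.0

def angular_distance(lat1, lon1, lat2, lon2):
    """Great-circle angular distance (radians) between two points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    return math.acos(math.sin(p1) * math.sin(p2)
                     + math.cos(p1) * math.cos(p2) * math.cos(dlon))

def block_speed_mm_yr(omega_deg_myr, pole_lat, pole_lon, lat, lon):
    delta = angular_distance(pole_lat, pole_lon, lat, lon)
    omega_rad_yr = math.radians(omega_deg_myr) / 1e6
    return omega_rad_yr * R_EARTH_KM * 1e6 * math.sin(delta)   # mm/yr

# Illustrative pole (~0.75 deg/Myr) and a site in southern California
v = block_speed_mm_yr(0.75, 50.0, -78.0, 34.0, -117.0)
print(f"predicted speed ~ {v:.0f} mm/yr")
```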
Extending earthquakes' reach through cascading.
Marsan, David; Lengliné, Olivier
2008-02-22
Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-standing difficulty is determining which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model or parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.
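The cascade picture can be illustrated with a minimal branching process in which each event directly triggers a Poisson-distributed number of offspring with mean n < 1; the total cascade size then averages 1/(1-n), so indirect triggering dominates as n approaches 1. The parameters below are illustrative, not estimates from the paper:

```python
import math
import random

# A minimal branching-process (Galton-Watson) view of aftershock cascades.
# Each event triggers a Poisson number of direct offspring with mean n < 1.

def poisson_sample(mean, rng):
    """Knuth's method; adequate for small means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cascade_size(branching_ratio, rng, max_events=100000):
    """Total number of events descending from one mainshock (itself included)."""
    total, frontier = 1, 1
    while frontier and total < max_events:
        offspring = sum(poisson_sample(branching_ratio, rng)
                        for _ in range(frontier))
        total += offspring
        frontier = offspring
    return total

rng = random.Random(0)
sizes = [cascade_size(0.8, rng) for _ in range(2000)]
mean_size = sum(sizes) / len(sizes)
print(f"mean cascade size ~ {mean_size:.1f} (theory: 1/(1-n) = 5.0)")
```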
NASA Astrophysics Data System (ADS)
Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène
2016-04-01
The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history permits estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted by both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).
Large-scale numerical modeling of hydro-acoustic waves generated by tsunamigenic earthquakes
NASA Astrophysics Data System (ADS)
Cecioni, C.; Abdolali, A.; Bellotti, G.; Sammarco, P.
2015-03-01
Tsunamigenic fast movements of the seabed generate pressure waves in weakly compressible seawater, namely hydro-acoustic waves, which travel at the speed of sound in water (about 1500 m s-1). These waves travel much faster than the corresponding long free-surface gravity waves and contain significant information on the source. Measurement of hydro-acoustic waves can therefore anticipate the tsunami arrival and significantly improve the capability of tsunami early warning systems. In this paper a novel numerical model for reproducing hydro-acoustic waves is applied to analyze the generation and propagation, over real bathymetry, of these pressure perturbations for two historical catastrophic earthquake scenarios in the Mediterranean Sea. The model is based on the solution of a depth-integrated equation and is therefore computationally efficient in reconstructing hydro-acoustic wave propagation scenarios.
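The warning advantage follows directly from the two propagation speeds: the ~1500 m/s acoustic speed quoted above versus the shallow-water tsunami speed sqrt(g*h). A sketch with illustrative depth and distance:

```python
import math

# Lead time of hydro-acoustic waves (~1500 m/s, from the abstract) over the
# tsunami, which travels at the shallow-water speed sqrt(g*h). The basin
# depth and propagation distance below are illustrative.

G = 9.81

def lead_time_s(distance_km, depth_m, c_sound=1500.0):
    d = distance_km * 1e3
    t_acoustic = d / c_sound
    t_tsunami = d / math.sqrt(G * depth_m)
    return t_tsunami - t_acoustic

# 500 km of propagation over a 2500-m-deep basin
lead = lead_time_s(500.0, 2500.0)
print(f"warning lead time ~ {lead / 60:.0f} minutes")
```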
A multilayer model of time dependent deformation following an earthquake on a strike-slip fault
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1981-01-01
A multilayer finite element model of the Earth for calculating time-dependent deformation and stress following an earthquake on a strike-slip fault is discussed. The model comprises an elastic upper lithosphere, a standard-linear-solid viscoelastic lower lithosphere, a Maxwell viscoelastic asthenosphere, and an elastic mesosphere. Systematic variations of fault and layer depths and comparisons with simpler elastic-lithosphere-over-viscoelastic-asthenosphere calculations are analyzed. Both the creep of the lower lithosphere and that of the asthenosphere contribute to the postseismic deformation. The magnitude of the deformation is enhanced by a short distance between the bottom of the fault (slip zone) and the top of the creep region but is less sensitive to the thickness of the creeping layer. Postseismic restressing is increased as the lower lithosphere becomes more viscoelastic, but the tendency for the width of the restressed zone to grow with time is retarded.
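The timescale of such postseismic transients is set by the Maxwell relaxation time tau = eta / mu of the creeping layers. A sketch with assumed (illustrative) asthenospheric viscosity and rigidity:

```python
import math

# Characteristic Maxwell relaxation time tau = eta / mu, which sets the
# timescale of the postseismic transient in layered viscoelastic models of
# this kind. The viscosity and rigidity values below are illustrative.

def maxwell_relaxation_time_yr(eta_pa_s, mu_pa):
    seconds_per_year = 365.25 * 24 * 3600
    return eta_pa_s / mu_pa / seconds_per_year

tau = maxwell_relaxation_time_yr(1e19, 7e10)
print(f"tau ~ {tau:.1f} yr")

def relaxed_fraction(t_yr, tau_yr):
    """Fraction of the transient relaxed after t years (exponential decay)."""
    return 1.0 - math.exp(-t_yr / tau_yr)
```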
NASA Astrophysics Data System (ADS)
Harding, D. J.; Miuller, J. R.
2005-12-01
Modeling the kinematics of the 2004 Great Sumatra-Andaman earthquake is limited in the northern two-thirds of the rupture zone by a scarcity of near-rupture geodetic deformation measurements. Precisely repeated Ice, Cloud, and Land Elevation Satellite (ICESat) profiles across the Andaman and Nicobar Islands provide a means to more fully document the spatial pattern of surface vertical displacements and thus better constrain geomechanical modeling of the slip distribution. ICESat profiles totaling ~45 km in length cross Car Nicobar, Kamorta, and Katchall in the Nicobar chain. Within the Andamans, the coverage includes ~350 km on North, Central, and South Andaman Islands along two NNE- and NNW-trending profiles that provide elevations on both the east and west coasts of the island chain. Two profiles totaling ~80 km in length cross South Sentinel Island, and one profile ~10 km long crosses North Sentinel Island. With an average laser footprint spacing of 175 m, the total coverage provides over 2700 georeferenced surface elevation measurements for each operations period. Laser backscatter waveforms recorded for each footprint enable detection of forest canopy top and underlying ground elevations with decimeter vertical precision. Surface elevation change is determined from elevation profiles, acquired before and after the earthquake, that are repeated with a cross-track separation of less than 100 m by precision pointing of the ICESat spacecraft. Apparent elevation changes associated with cross-track offsets are corrected according to local slopes calculated from multiple post-earthquake repeat profiles. The surface deformation measurements recorded by ICESat are generally consistent with the spatial distribution of uplift predicted by a preliminary slip distribution model. To predict co-seismic surface deformation, we apply a slip distribution, derived from the released energy distribution computed by Ishii et al. (2005), as the displacement discontinuity
A Unification of Earthquake Cycle and Structural Evolution Models for Thrust Faults
NASA Astrophysics Data System (ADS)
Meade, B. J.
2014-12-01
Geodetic observations of interseismic deformation near dip-slip faults may be used to estimate slip rates on both isolated structures and across geometrically complex thrust systems. Interpreting these kinematic measurements requires integrating the effects of interseismic elastic strain accumulation from quasi-static earthquake cycle models. While a kinematically consistent theory for planar thick-skinned models has been widely applied, the theory for thin-skinned models has remained less satisfactory due to an inadequate treatment of vertical velocities. Here we develop a kinematically consistent model of horizontal and vertical interseismic deformation in thin-skinned thrust systems, including non-planar faults. The key aspect of this model is the integration of kinematic structural evolution models with elastic deformation models. Predictions include localized interseismic hanging-wall uplift as well as smoothly varying horizontal and vertical velocities. Additionally, this model implies slightly modified patterns of elastic coseismic deformation in the hanging wall, including coseismic folding. The interseismic deformation model described here provides a step toward a more unified interpretation of both decadal-scale geodetic observations and long-term tectonic uplift.
Modelling the ionospheric perturbations excited by large earthquakes for source characterization
NASA Astrophysics Data System (ADS)
Rolland, Lucie; Lognonné, Philippe; Occhipinti, Giovanni; Kherani, Alam; Crespon, François; Murakami, Makoto
The local state of the ionosphere is now routinely mapped just after strong and shallow earthquakes instrumented by dense Global Positioning System networks. For most of these events, ionospheric disturbances are registered in the Total Electron Content a dozen minutes after the source rupture. In some favourable configurations, which will be recalled here, an integrated "seismo-ionospheric" radiative pattern is visualized. Thus, a dozen minutes after the Hokkaïdo (25 September 2003) and Honshu (16 July 2007) earthquakes, which differ in magnitude (8.1 and 6.6, respectively) and in source mechanism, the two patterns present the same attenuation in their northern part. This directivity was first pointed out by Calais et al. (1998), and the geomagnetic field was invoked as a possible cause, posing the problem of the geometry. In our model, we treat the observed radiative pattern as a combination of concentric waves, taking into account that the electrons are redistributed under the effect of acoustic pressure waves, themselves excited by the vertical seismic ground displacements. In other words, we describe how the ionosphere interacts with an acoustic pulse propagating through the atmosphere up to the ionosphere, with special attention to the influence of the geomagnetic field. A 3-dimensional model is developed according to the ionospheric coupling model of E. A. Kherani et al. (2008). The geometry of the acoustic pulse is modelled with a ray tracing method, and the horizontal component of the propagation provides an explanation for the attenuation. A final inversion allows us to derive the parameters of the source. [E. Calais et al., 1998] Ionospheric signature of surface mine blasts from Global Positioning System measurements, Geophys. J. Int., vol. 132, pp. 191-202. [E. A. Kherani et al., submitted] "Response of the ionosphere to the seismic triggered acoustic waves: electron density and electromagnetic fluctuations," Geophys
NASA Astrophysics Data System (ADS)
Ngo, D.; Huang, Y.; Rosakis, A.; Griffith, W. A.; Pollard, D. D.
2009-12-01
Motivated by the occurrence of high-angle pseudotachylite injection veins along exhumed faults, we use optical experiments and high-speed photography to interpret the origins of tensile fractures that form during dynamic shear rupture in laboratory experiments. Sub-Rayleigh (slower than the Rayleigh wave speed) shear ruptures in Homalite-100 produce damage zones consisting of a periodic array of tensile cracks. These cracks nucleate and grow within cohesive zones behind the tips of shear ruptures that propagate dynamically along interfaces with frictional and cohesive strength. The tensile cracks are produced only along one side of the interface, where transient, fault-parallel, tensile stress perturbations are associated with the growing shear rupture tip. We use an analytical linear velocity-weakening rupture model to examine the local nature of the dynamic stress field in the vicinity of the tip of the main shear rupture, which grows along a weak plane (fault) at sub-Rayleigh speed. It is this stress field that is responsible for driving the off-fault mode-I microcracks that grow during the experiments. We show that (1) the orientation of the cracks can be explained by this analytical model; and (2) the cracks can be used to simultaneously constrain the constitutive behavior of the shear rupture tip. In addition, we propose an extension of this model to explain damage structures observed along exhumed faults. Results of this study represent an important bridge between geological observations of structures preserved along exhumed faults, laboratory experiments and theoretical models of earthquake propagation, potentially leading to diagnostic criteria for interpreting velocity, directivity, and static pre-stress state associated with past earthquakes on exhumed faults.
NASA Astrophysics Data System (ADS)
Stein, R. S.
2012-12-01
The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the 2010 M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by
NASA Astrophysics Data System (ADS)
Trofimenko, S. V.; Bykov, V. G.; Merkulova, T. V.
2016-07-01
In this paper, we investigate the statistical distributions of shallow earthquakes with 2 ≤ M ≤ 4, located in 13 rectangular areas (clusters) bounded by 120°E and 144°E along the northern boundary of the Amurian microplate. Our study determined the displacement of seismicity maxima and revealed three recurrent spatial cycles. Clusters with similar earthquake distributions are suggested to alternate, equally spaced at 7.26° (360-420 km). A comparison of results on the structure of seismicity in various segments of the Amurian microplate reveals a correspondence between the alternation pattern observed for meridional zones of large earthquakes and the distinguished spatial period. The displacement vector for seismicity in the annual cycles is determined, and the correspondence between its E-W direction and the displacement of the fronts of large earthquakes is established. A model of seismic and deformation processes is proposed in which the successive activation of clusters of weak earthquakes (2 ≤ M ≤ 4), extending from the Japanese-Sakhalin island arc to the eastern closure of the Baikal rift zone, is initiated by the displacement of a strain wave front.
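The equivalence of 7.26° and 360-420 km can be checked from the length of a longitude degree, 111.32 * cos(latitude) km; the stated kilometre range corresponds to latitudes of roughly 59-63°N, consistent with the study region's high latitude:

```python
import math

# Convert a longitudinal angular spacing to kilometres at a given latitude.
# The latitudes sampled below are illustrative.

def lon_degrees_to_km(deg, lat_deg):
    return deg * 111.32 * math.cos(math.radians(lat_deg))

for lat in (55.0, 60.0, 63.0):
    print(f"7.26 deg at {lat:.0f}N ~ {lon_degrees_to_km(7.26, lat):.0f} km")
```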
Landes, François P; Lippiello, E
2016-05-01
The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.
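For reference, the original nearest-neighbour Olami-Feder-Christensen automaton that the authors take as a starting point can be sketched in a few lines; the grid size, dissipation parameter alpha, and drive scheme below are illustrative choices:

```python
import random

# Minimal Olami-Feder-Christensen (OFC) automaton on an L x L grid with
# open (dissipative) boundaries. A toppling site passes a fraction alpha of
# its stress to each of its four neighbours; alpha < 0.25 makes the model
# non-conservative. All parameter choices here are illustrative.

def ofc_avalanche(grid, L, i0, j0, alpha, threshold=1.0):
    """Relax site (i0, j0); return the avalanche size (number of topplings)."""
    size = 0
    unstable = [(i0, j0)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < threshold:
            continue
        size += 1
        stress = grid[i][j]
        grid[i][j] = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:   # stress crossing the edge is lost
                grid[ni][nj] += alpha * stress
                if grid[ni][nj] >= threshold:
                    unstable.append((ni, nj))
    return size

def run_ofc(L=24, alpha=0.2, steps=1500, seed=0):
    rng = random.Random(seed)
    grid = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(steps):
        # uniform drive: raise all sites until the most-stressed one topples
        i, j = max(((a, b) for a in range(L) for b in range(L)),
                   key=lambda ab: grid[ab[0]][ab[1]])
        bump = 1.0 - grid[i][j]
        for row in grid:
            for k in range(L):
                row[k] += bump
        grid[i][j] = 1.0   # exact threshold, guards against round-off
        sizes.append(ofc_avalanche(grid, L, i, j, alpha))
    return sizes

sizes = run_ofc()
print(f"events: {len(sizes)}, largest avalanche: {max(sizes)} sites")
```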
Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.
2012-01-01
In order to quantify the earthquake risk of any selected region or country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through a consortium of leading institutions and by engaging domain experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM's OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.
Analysis of self-organized criticality in the Olami-Feder-Christensen model and in real earthquakes
Caruso, F.; Vinciguerra, S.
2007-05-15
We perform an analysis of the dissipative Olami-Feder-Christensen model on a small-world topology, considering avalanche size differences. We show that when criticality appears, the probability density functions (PDFs) for the avalanche size differences at different times have fat tails with a q-Gaussian shape. This behavior does not depend on the time interval adopted and is also found when considering energy differences between real earthquakes. Such a result can be analytically understood if the sizes (released energies) of the avalanches (earthquakes) have no correlations. Our findings support the hypothesis that a self-organized criticality mechanism with long-range interactions is at the origin of seismic events and indicate that it is not possible to predict the magnitude of the next earthquake knowing those of the previous ones.
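The q-Gaussian shape referred to here is P(x) ∝ [1 + beta(q-1)x²]^(-1/(q-1)), which tends to a Gaussian as q → 1 and develops power-law (fat) tails for q > 1. A sketch with illustrative parameter values:

```python
import math

# Unnormalized q-Gaussian, the fat-tailed shape discussed in the abstract.
# The values of q and beta below are illustrative, not fitted values.

def q_gaussian(x, q, beta):
    base = 1.0 + beta * (q - 1.0) * x * x
    return base ** (-1.0 / (q - 1.0))

# Fat tail: at x = 5 the q-Gaussian is many orders of magnitude above the
# ordinary Gaussian exp(-beta * x^2)
x = 5.0
print(q_gaussian(x, q=1.7, beta=1.0), math.exp(-x * x))
```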
NASA Astrophysics Data System (ADS)
McGinty, Peter; Darby, Desmond; Haines, John
2001-11-01
During the 1930s the Hawke's Bay region of New Zealand experienced four large earthquakes: Napier (MW 7.6) and Hawke Bay (MW 7.3) in 1931, Wairoa (MW 6.9) in 1932, and Pahiatua (MW 7.4) in 1934. We address the question of whether these comprise a triggered sequence of events. There are significant difficulties in dealing with earthquakes recorded 70 years ago, since fault parameters are hard to obtain. With the exception of the Pahiatua earthquake, no primary surface fault ruptures were identified, and locations for the other three events may be in error by tens of kilometers. However, geodetic data were collected before and after the Napier and Wairoa earthquakes, and regions of uplift and subsidence from the former have been mapped from low-order leveling data. This information helps to constrain the fault parameters for the first of these events through elastostatic modeling. Results from recent teleseismic body wave modeling have been used to determine fault parameters for the Hawke Bay event. Our analysis of the induced static stresses with the Coulomb failure criterion shows that the Napier earthquake triggered both the Hawke Bay and Wairoa earthquakes but that the Hawke Bay earthquake probably delayed the Wairoa earthquake. We also conclude that these three events did not trigger the Pahiatua earthquake.
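The Coulomb failure criterion used in this kind of analysis scores a stress perturbation on a receiver fault as ΔCFS = Δτ + μ'Δσn (unclamping positive); positive values promote failure and negative values (stress shadows) delay it. A sketch with illustrative stress changes:

```python
# Coulomb failure stress change on a receiver fault. The effective friction
# coefficient and the stress values (in bar) are illustrative.

def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """d_shear: shear stress change in the slip direction;
    d_normal: normal stress change (unclamping positive)."""
    return d_shear + mu_eff * d_normal

# A receiver loaded in shear and unclamped: positive dCFS -> promoted
print(coulomb_stress_change(0.5, 0.3))    # 0.62 bar

# A receiver in a stress shadow: negative dCFS -> delayed
print(coulomb_stress_change(-0.4, -0.2))  # -0.48 bar
```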
NASA Astrophysics Data System (ADS)
Moradi, M.; Delavar, M. R.; Moradi, A.
2015-12-01
As a natural disaster, an earthquake can seriously damage buildings and urban facilities and cause road blockage. Post-earthquake route planning is a problem that has been addressed in numerous studies. The main aim of this research is to present a route planning model for post-earthquake conditions. It is assumed in this research that no damage data are available. The presented model tries to find the optimum route based on a number of contributing factors which mainly indicate the length, width and safety of the road. The safety of the road is represented by criteria such as distance to faults, the percentage of non-standard buildings and the percentage of high buildings along the route. The model integrates a genetic algorithm with an ordered weighted averaging operator. The former searches the problem space among all alternatives, while the latter aggregates the scores of road segments to compute an overall score for each alternative. The ordered weighted averaging operator enables users of the system to evaluate the alternative routes based on their decision strategy. Under the proposed model, an optimistic user tries to find the shortest path between the two points, whereas a pessimistic user pays more attention to safety parameters even if that enforces a longer route. The results show that the decision strategy can considerably alter the optimum route. Moreover, post-earthquake route planning is a function not only of the length of the route but also of the probability of road blockage.
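The ordered weighted averaging step can be sketched directly: scores are sorted and then weighted by rank, so the weight vector encodes the optimistic-to-pessimistic decision strategy. The segment scores and weights below are illustrative:

```python
# Ordered weighted averaging (OWA). Weights concentrated on the best-ranked
# scores model an optimistic strategy; weights on the worst-ranked scores
# model a pessimistic one. All values below are illustrative.

def owa(scores, weights):
    """Aggregate scores with OWA: sort descending, then weight by rank."""
    if len(scores) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match scores in length and sum to 1")
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

segment_scores = [0.9, 0.4, 0.7]   # e.g. normalized safety scores of a route

optimistic = owa(segment_scores, [0.6, 0.3, 0.1])   # emphasize best segments
pessimistic = owa(segment_scores, [0.1, 0.3, 0.6])  # emphasize worst segments
print(optimistic, pessimistic)     # the same route scores differently
```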
NASA Astrophysics Data System (ADS)
Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark
2016-04-01
Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. While the scaling between earthquake magnitude and the volume of eroded sediments is well known, the geomorphic consequences, such as divide migration or valley infilling, remain poorly understood. Predicting the location of landslide sources and deposits is therefore a challenging issue. To progress on this topic, algorithms that correctly resolve the interaction between landsliding and ground shaking are needed. Peak Ground Acceleration (PGA) has been shown to control the landslide density at first order. But it can trigger landslides by two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of the two effects on slope stability is not well understood. We use SLIPOS, a bedrock landsliding algorithm based on a simple stability analysis applied at the local scale. The model is capable of reproducing the area/volume scaling and the area distribution of natural landslides. We aim to include the effects of earthquakes in SLIPOS by simulating the PGA effect via a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine Fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand fault case that simulating ground acceleration by a cohesion decrease leads to a realistic scaling between sediment volume and earthquake magnitude.
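The cohesion-decrease mechanism can be illustrated with the textbook infinite-slope factor of safety, FS = (c + γz cos²θ tanφ) / (γz sinθ cosθ); this is a generic stability analysis, not the SLIPOS formulation, and all parameter values are assumed:

```python
import math

# Infinite-slope factor of safety, illustrating how a transient coseismic
# drop in cohesion c (the mechanism used to mimic PGA in the abstract)
# destabilizes a slope. FS < 1 means failure. Values are illustrative.

def factor_of_safety(cohesion_kpa, slope_deg, phi_deg,
                     unit_weight_kn_m3=25.0, depth_m=2.0):
    t = math.radians(slope_deg)
    w = unit_weight_kn_m3 * depth_m           # kPa, weight per unit area
    resisting = cohesion_kpa + w * math.cos(t) ** 2 * math.tan(math.radians(phi_deg))
    driving = w * math.sin(t) * math.cos(t)
    return resisting / driving

fs_static = factor_of_safety(cohesion_kpa=20.0, slope_deg=40.0, phi_deg=30.0)
fs_shaken = factor_of_safety(cohesion_kpa=5.0, slope_deg=40.0, phi_deg=30.0)
print(f"static FS = {fs_static:.2f}, shaken FS = {fs_shaken:.2f}")
```

With these assumed values the slope is stable statically (FS > 1) but fails once cohesion drops (FS < 1).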
Numerical model for the evaluation of earthquake effects on a magmatic system
NASA Astrophysics Data System (ADS)
Garg, Deepak; Longo, Antonella; Papale, Paolo
2016-04-01
A finite element numerical model is presented to compute the effect of an earthquake on the dynamics of magma in reservoirs with deformable walls. The magmatic system is hit by a Mw 7.2 earthquake (Petrolia/Cape Mendocino, 1992) with hypocenter at 15 km diagonal distance. At subsequent times the seismic wave reaches the nearest side of the magmatic system boundary, travels through the magmatic fluid, and arrives at the other side of the boundary. The modelled physical system consists of the magmatic reservoir with a thin surrounding layer of rocks. Magma is treated as a homogeneous multicomponent, multiphase Newtonian mixture with exsolution and dissolution of volatiles (H2O+CO2). The magmatic reservoir is made of a small shallow magma chamber filled with degassed phonolite, connected by a vertical dike to a larger, deeper chamber filled with gas-rich shoshonite, in a condition of gravitational instability. The coupling between the earthquake and the magmatic system is computed by solving the elastostatic equation for the deformation of the magmatic reservoir walls, along with the conservation equations for the mass of components and the momentum of the magmatic mixture. The characteristic elastic parameters of rocks are assigned to the computational domain at the boundary of the magmatic system. Physically consistent Dirichlet and Neumann boundary conditions are assigned according to the evolution of the seismic signal. Seismic forced displacements and velocities are set on the part of the boundary that is hit by the wave. On the other part of the boundary, motion is governed by the action of fluid pressure and deviatoric stress forces due to fluid dynamics. The constitutive equations for the magma are solved monolithically by a space-time discontinuous-in-time finite element method. To attain additional stability, least-squares and discontinuity-capturing operators are included in the formulation. A partitioned algorithm is used to couple the magma and the thin layer of rocks. The
GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake
NASA Astrophysics Data System (ADS)
Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.
2011-10-01
The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models, including sea level feedback in a new self-consistent way and designed to compute gravitational perturbations due to volume changes separately, we are able to prove that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient, which is given by the ratio of the negative gravity anomaly over the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided
The 1999 Izmit, Turkey, earthquake: A 3D dynamic stress transfer model of intraearthquake triggering
Harris, R.A.; Dolan, J.F.; Hartleb, R.; Day, S.M.
2002-01-01
Before the August 1999 Izmit (Kocaeli), Turkey, earthquake, theoretical studies of earthquake ruptures and geological observations had provided estimates of how far an earthquake might jump to get to a neighboring fault. Both numerical simulations and geological observations suggested that 5 km might be the upper limit if there were no transfer faults. The Izmit earthquake appears to have followed these expectations. It did not jump across any step-over wider than 5 km and was instead stopped by a narrower step-over at its eastern end and possibly by a stress shadow caused by a historic large earthquake at its western end. Our 3D spontaneous rupture simulations of the 1999 Izmit earthquake provide two new insights: (1) the west- to east-striking fault segments of this part of the North Anatolian fault are oriented so as to be low-stress faults and (2) the easternmost segment involved in the August 1999 rupture may be dipping. An interesting feature of the Izmit earthquake is that a 5-km-long gap in surface rupture and an adjacent 25° restraining bend in the fault zone did not stop the earthquake. The latter observation is a warning that significant fault bends in strike-slip faults may not arrest future earthquakes.
Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region
NASA Astrophysics Data System (ADS)
Gülen, L.; Wp2 Team
2010-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project under the umbrella of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey forms a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use a PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from previous ones is its dynamic character, accomplished by the design of a flexible and scalable database that permits continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance: if one or more fault parameters change, a new fault section is defined along the fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the PDF files of the relevant papers and reports is also being prepared. Another task of the WP-2 of the EMME project is to prepare
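A fault-section record of the kind described can be sketched as a small data structure. The field names below are hypothetical, chosen only to mirror the parameters the abstract lists (geometry, rate of movement, capability threshold); they are not the actual EMME schema:

```python
from dataclasses import dataclass

@dataclass
class FaultSection:
    """One 'Fault Section': parameters held constant along the section.
    Field names are illustrative, not the EMME database schema."""
    name: str
    length_km: float
    dip_deg: float
    rake_deg: float
    slip_rate_mm_yr: float
    max_mw: float  # magnitude the section is deemed capable of generating

def capable(sections, threshold_mw=5.5):
    """Keep only sections above the project's capability threshold (Mw >= 5.5)."""
    return [s for s in sections if s.max_mw >= threshold_mw]

sections = [
    FaultSection("NAF-example-1", 42.0, 90.0, 180.0, 20.0, 7.4),
    FaultSection("minor-splay", 6.0, 60.0, 180.0, 0.5, 5.0),
]
active = capable(sections)  # only the first section passes the threshold
```

Splitting a fault zone into a new section whenever any parameter changes, as the abstract describes, then amounts to creating a new record of this type.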
NASA Astrophysics Data System (ADS)
Stramondo, S.; Tesauro, M.; Briole, P.; Sansosti, E.; Salvi, S.; Lanari, R.; Anzidei, M.; Baldi, P.; Fornaro, G.; Avallone, A.; Buongiorno, M. F.; Franceschetti, G.; Boschi, E.
The largest events of the 1997 Umbria-Marche seismic sequence were the two September 26 earthquakes of Mw = 5.7 (00:33 GMT) and Mw = 6.0 (09:40 GMT), which caused severe damage and ground cracks in a wide area around the epicenters. We created an ERS-SAR differential interferogram, where nine fringes are visible in and around the Colfiorito basin, corresponding to 25 cm of coseismic surface displacement. GPS data show a maximum horizontal displacement of (14±1.8) cm and a maximum subsidence of (24±3) cm. We used these geodetic data and the seismological parameters to estimate geometry and slip distribution on the fault planes. Modeled fault depths and maximum slip amplitudes are 6.5 km and 47 cm for the first event and 7 km and 72 cm for the second one, in good agreement with those derived from the seismological data.
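The 25 cm of line-of-sight displacement quoted above follows directly from the fringe count: each fringe of an ERS differential interferogram corresponds to half the C-band radar wavelength. A minimal check, where the 5.66 cm wavelength is the only assumed input:

```python
# Convert a fringe count in an ERS differential interferogram to
# line-of-sight (LOS) displacement. One fringe = half a wavelength.
ERS_WAVELENGTH_CM = 5.66  # ERS-1/2 C-band radar wavelength (assumed value)

def fringes_to_los_cm(n_fringes, wavelength_cm=ERS_WAVELENGTH_CM):
    return n_fringes * wavelength_cm / 2.0

los_cm = fringes_to_los_cm(9)  # nine fringes around the Colfiorito basin
```

Nine fringes give roughly 25 cm, consistent with the coseismic surface displacement cited in the abstract.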
NASA Astrophysics Data System (ADS)
Grzemba, B.; Popov, V. L.; Starcevic, J.; Popov, M.
2012-04-01
Shallow earthquakes can be considered a result of tribological instabilities, so-called stick-slip behaviour [1,2], meaning that sudden slip occurs at already existing rupture zones. From a contact mechanics point of view it is clear that no motion can arise completely suddenly: the material will always creep in an existing contact in the load direction before breaking loose. If there is measurable creep before the instability, it could serve as a precursor. To examine this theory in detail, we built an elementary laboratory model with pronounced stick-slip behaviour. Different material pairings, such as steel-steel, steel-glass and marble-granite, were analysed at different driving force rates. The displacement was measured with a resolution of 8 nm. We were able to show that measurable accelerated creep precedes the instability. Near the instability, this creep is sufficiently regular to serve as a basis for a highly accurate prediction of the onset of macroscopic slip [3]. In our model a prediction is possible within the last few percent of the preceding stick time. We hope to extend this period. Furthermore, we showed that both the slow creep and the fast slip can be described very well by the Dieterich-Ruina friction law if we include the contribution of local contact rigidity. The simulation matches the experimental curves over five orders of magnitude. This friction law was originally formulated for rocks [4,5] and takes into account the dependence of the coefficient of friction on the sliding velocity and on the contact history. The simulations using the Dieterich-Ruina friction law back up the observation of a universal behaviour of the creep's acceleration. We are working on several extensions of our model to more dimensions in order to move closer towards representing a full three-dimensional continuum. The first step will be an extension to two degrees of freedom to analyse the interdependencies of the instabilities. We also plan
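The accelerating pre-slip described above can be reproduced with a minimal quasi-static spring-slider obeying Dieterich-Ruina rate-and-state friction with the aging-law state evolution. All parameter values below are illustrative, not the paper's; the sketch only demonstrates that a velocity-weakening slider, loaded through a spring softer than the critical stiffness, creeps at an accelerating rate before macroscopic slip:

```python
import math

# Illustrative rate-and-state parameters (not taken from the paper)
a, b = 0.010, 0.015        # rate and state sensitivities (b > a: weakening)
dc = 1e-5                  # state evolution distance [m]
mu0, v0 = 0.6, 1e-6        # reference friction coefficient and slip rate
sigma = 1e6                # normal stress [Pa]
k = 1e8                    # spring stiffness [Pa/m]; k < (b - a) * sigma / dc
v_load = 1e-6              # driving velocity [m/s]

# Start slightly "weaker" than steady state so the instability can develop
theta = 0.2 * dc / v_load  # state variable [s]
tau = sigma * mu0          # shear stress [Pa]
velocities = []

for _ in range(200_000):
    # Quasi-static force balance solved for the slip velocity
    v = v0 * math.exp((tau / sigma - mu0 - b * math.log(v0 * theta / dc)) / a)
    velocities.append(v)
    if v >= 1e-2:          # treat this as the onset of macroscopic slip
        break
    dt = min(0.1, 0.02 * dc / v)          # adaptive step keeps Euler stable
    theta += dt * (1.0 - v * theta / dc)  # aging law
    tau += dt * k * (v_load - v)          # spring loading minus slider slip

# `velocities` traces the accelerating creep that precedes the instability
```

The run ends when the slip rate has grown by several orders of magnitude, the same accelerated-creep signature the experiments use as a precursor.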
Tsunami Modeling to Validate Slip Models of the 2007 Mw 8.0 Pisco Earthquake, Central Peru
NASA Astrophysics Data System (ADS)
Ioualalen, M.; Perfettini, H.; Condo, S. Yauri; Jimenez, C.; Tavera, H.
2013-03-01
Following the August 15th, 2007, Mw 8.0 Pisco earthquake in central Peru, Sladen et al. (J Geophys Res 115: B02405, 2010) derived several slip models of this event. They inverted teleseismic data together with geodetic (InSAR) measurements to look for the co-seismic slip distribution on the fault plane, considering those data sets separately or jointly. But how close to the real slip distribution are those inverted slip models? To answer this crucial question, the authors generated tsunami records based on their slip models and compared them to DART buoy tsunami records and available runup data. Such an approach requires a robust and accurate tsunami model (non-linear, dispersive, accurate bathymetry and topography, etc.); otherwise the differences between the data and the model may be attributed to the slip models themselves even though they actually arise from an incomplete tsunami simulation. The accuracy of a numerical tsunami simulation strongly depends, among other things, on two important constraints: (i) a fine computational grid (and thus the bathymetry and topography data sets used), which is unfortunately not always available, and (ii) a realistic tsunami propagation model including dispersion. Here, we extend Sladen's work using newly available data, namely a tide gauge record at Callao (Lima harbor) and the Chilean DART buoy record, while considering a complete set of runup data, a more realistic tsunami numerical model that accounts for dispersion, and a fine-resolution computational grid, which is essential. Through these accurate numerical simulations we infer that the InSAR-based model is in better agreement with the tsunami data; the case of the Pisco earthquake thus indicates that geodetic data are essential to recover the final co-seismic slip distribution on the rupture plane. Slip models based on teleseismic data alone are unable to describe the observed tsunami, suggesting that a significant amount of co-seismic slip may have
NASA Astrophysics Data System (ADS)
Kedar, S.; Bock, Y.; Moore, A. W.; Argus, D. F.; Fang, P.; Liu, Z.; Haase, J. S.; Su, L.; Owen, S. E.; Goldberg, D.; Squibb, M. B.; Geng, J.
2015-12-01
Postseismic deformation indicates a viscoelastic response of the lithosphere. It is critical, then, to identify and estimate the extent of postseismic deformation in both space and time, not only for its inherent information on crustal rheology and earthquake physics, but also because it must be considered in plate motion models that are derived geodetically from "steady-state" interseismic velocities, in models of the earthquake cycle that provide interseismic strain accumulation and earthquake probability forecasts, and in the terrestrial reference frame definition that is the basis for space geodetic positioning. As part of the Solid Earth Science ESDR System (SESES) project under a NASA MEaSUREs grant, JPL and SIO estimate combined daily position time series for over 1800 GNSS stations, both globally and at plate boundaries, independently using the GIPSY and GAMIT software packages, but with a consistent set of a priori epoch-date coordinates and metadata. The longest time series began in 1992, and many of them contain postseismic signals. For example, about 90 of the more than 400 global GNSS stations that define the ITRF have experienced one or more major earthquakes, and 36 have had multiple earthquakes; as expected, most plate boundary stations have as well. We quantify the spatial (distance from rupture) and temporal (decay time) extent of postseismic deformation. We examine parametric models (logarithmic, exponential) and a physical model (rate- and state-dependent friction) to fit the time series. Using principal component analysis (PCA), we determine whether or not a particular earthquake can be uniformly fit by a single underlying postseismic process; otherwise we fit individual stations. We then investigate whether the estimated time series velocities can be used directly as input to plate motion models, rather than arbitrarily removing the apparent postseismic portion of a time series and/or eliminating the stations closest to earthquake epicenters.
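The parametric fitting step mentioned above can be sketched as follows. This is a generic illustration with synthetic data, not the SESES processing: it fits logarithmic and exponential decay models to a postseismic displacement time series with `scipy.optimize.curve_fit` and compares their misfits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two common parametric postseismic decay models
def log_model(t, amp, tau):
    return amp * np.log1p(t / tau)

def exp_model(t, amp, tau):
    return amp * (1.0 - np.exp(-t / tau))

# Synthetic daily positions: logarithmic decay plus noise (illustrative values)
rng = np.random.default_rng(42)
t = np.arange(1.0, 1001.0)             # days since the earthquake
truth_amp, truth_tau = 50.0, 25.0      # mm, days
obs = log_model(t, truth_amp, truth_tau) + rng.normal(0.0, 0.5, t.size)

p_log, _ = curve_fit(log_model, t, obs, p0=(100.0, 100.0), maxfev=20000)
p_exp, _ = curve_fit(exp_model, t, obs, p0=(150.0, 200.0), maxfev=20000)

rms_log = np.sqrt(np.mean((obs - log_model(t, *p_log)) ** 2))
rms_exp = np.sqrt(np.mean((obs - exp_model(t, *p_exp)) ** 2))
# By construction the log model recovers the truth and fits better here
```

In practice the winning functional form (or the rate-state physical model) varies by earthquake, which is why the study compares candidates per event.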
Geomechanical modeling of the nucleation process of Australia's 1989 M5.6 Newcastle earthquake
NASA Astrophysics Data System (ADS)
Klose, Christian D.
2007-04-01
Black-coal mining in New South Wales (Australia) since 1801, and the associated discharge of ground water, may have triggered the M5.6 Newcastle earthquake in 1989. 4-dimensional geomechanical model simulations reveal that widespread removal of water and of coal down to 500 m depth unloaded the Earth's crust. This unloading destabilized the pre-existing Newcastle fault in the interior of the crust beneath the Newcastle coal field: an increase in shear stress acting in tandem with a decrease in normal stress may have reactivated this reverse fault. Lithostatic stress alterations have accelerated over the course of the last fifty years. Modeling of the crust's elastostatic response to the unloading indicates that by 1991 a minimal critical shear stress change of 0.01 MPa (0.1 bar) had reached the Newcastle fault at the depth where the 1989 mainshock nucleated. Hence, other faults in the region might also be critically stressed, for two reasons. First, the area (volume) affected by the induced stress changes is larger than the ruptured area of the Newcastle fault. Second, following McGarr's mass-moment relationship, the seismic moment magnitude of the 1989 M5.6 Newcastle earthquake accounts for only a fraction (1 of 55) of the removed mass. Lastly, these findings are consistent with the ongoing seismicity observed in the Newcastle region since the beginning of the 19th century, after a dormant period of some 10,000 years.
Rupture model of the 2011 Mineral, Virginia, earthquake from teleseismic and regional waveforms
Hartzell, Stephen; Mendoza, Carlos; Zeng, Yuehua
2013-01-01
We independently invert teleseismic P waveforms and regional crustal phases to examine the finite fault slip model for the 2011 Mw 5.8 Mineral, Virginia, earthquake. Theoretical and empirical Green's functions are used for the teleseismic and regional models, respectively. Both solutions show two distinct sources each about 2 km across and separated by 2.5 km. The source at the hypocenter is more localized in the regional model leading to a higher peak slip of 130 cm and higher average stress drop of 250 bars compared with 86 cm and 150 bars for the same source in the teleseismic model. Both sources are centered at approximately 8 km depth in the regional model, largely below the aftershock distribution. In the teleseismic model, the sources extend updip to approximately 6 km depth, into the depth range of the aftershocks. The rupture velocity is not well resolved but appears to be near 2.7 km/s.
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Gomba, Giorgio; Eineder, Michael
2016-04-01
The use of L-band InSAR data for observing the surface displacements caused by earthquakes can be very beneficial. The retrieved signal is generally more stable against temporal phase decorrelation than C-band and X-band InSAR data, so that fault movements can be observed even in vegetated areas. Also, due to the longer wavelength, the larger displacement gradients that occur close to the ruptures can be measured. A serious drawback of L-band data, on the other hand, is that it reacts more strongly to heterogeneities in the ionosphere. The spatial variability of the electron content causes spatially long-wavelength trends in the interferometric phase, which distort the surface deformation signal and therefore impact the earthquake source analysis. A well-known example of the long-wavelength distortions are the ALOS-1 InSAR observations of the 2008 Wenchuan earthquake. To mitigate the effect of ionospheric phase in the geodetic modelling of earthquake sources, a common procedure is to remove any obvious linear or quadratic trend in the surface displacement data that may have been caused by ionospheric phase delays. Additionally, remaining trends may be accounted for by including so-called ambiguity (or nuisance) parameters in the modelling. The introduced ionospheric distortion, however, is only approximated arbitrarily by such simple ramp functions, with the true ionospheric phase screen unknown. As a consequence, either a remaining ionospheric signal may be mistaken for surface displacement or, the other way around, long-wavelength surface displacement may be attributed to ionospheric distortion and removed. The bias introduced to the source modelling results by the assumption of linear or quadratic ionospheric effects is therefore unknown as well. We present a more informed and physics-based correction of the surface displacement data in earthquake source modelling by using a split-spectrum method to estimate the ionospheric phase screen superimposed on the
NASA Astrophysics Data System (ADS)
Noda, H.; Nakatani, M.; Hori, T.
2012-12-01
Seismological observations [e.g., Abercrombie and Rice, 2005] suggest that larger earthquakes have larger fracture energy Gc. One way to realize such scaling is to assume a hierarchical patchy distribution of Gc on a fault: there are patches of different sizes with different Gc, such that a larger patch has larger Gc. Ide and Aochi [2005] conducted dynamic rupture simulations with such a distribution of the weakening distance Dc in a linear slip-weakening law, initiating ruptures on the smallest patch, which sometimes grow by cascading into a larger scale. They suggested that the initial phase of a large earthquake is indistinguishable from that of a small earthquake. In the present study we simulate a similar multi-scale asperity model, but following rate and state friction (RSF), where the stress and strength distribution resulting from the history of coseismic and aseismic slip influences how a forthcoming earthquake initiates, grows, and arrests. Multi-scale asperities are represented by a distribution of the state evolution distance dc in the aging version of the RSF evolution law. The numerical scheme adopted [Noda and Lapusta, 2010] is fully dynamic and 3D. We model a circular rate-weakening patch, Patch L (radius R), which contains a smaller patch, Patch S (radius r), near its rim. The ratio of the radii, α = R/r, measures the gap between the two scales. Patch L and Patch S have nucleation sizes Rc and rc, respectively. The same brittleness, β = R/Rc = r/rc, is assumed for simplicity. We call an earthquake that ruptures only Patch S an S-event, and one that ruptures Patch L an L-event. We conducted a series of simulations with α from 2 to 5, keeping β = 3, until the end of the 20th L-event. If Patch S was relatively large (α = 2 and 2.5), only L-events occurred, and they always cascaded up dynamically from a Patch S rupture following small quasi-static nucleation there. If Patch S was small enough (α = 5), in
Atomic parity nonconservation, neutron radii, and effective field theories of nuclei
Sil, Tapas; Centelles, M.; Vinas, X.; Piekarewicz, J.
2005-04-01
Accurately calibrated effective field theories are used to compute atomic parity nonconserving (APNC) observables. Although accurately calibrated, these effective field theories predict a large spread in the neutron skin of heavy nuclei. Whereas the neutron skin is strongly correlated with numerous physical observables, in this contribution we focus on its impact on new physics through APNC observables. The addition of an isoscalar-isovector coupling constant to the effective Lagrangian generates a wide range of values for the neutron skin of heavy nuclei without compromising the success of the model in reproducing well-constrained nuclear observables. Earlier studies have suggested that the use of isotopic ratios of APNC observables may eliminate their sensitivity to atomic structure. This leaves nuclear structure uncertainties as the main impediment for identifying physics beyond the standard model. We establish that uncertainties in the neutron skin of heavy nuclei are at present too large to measure isotopic ratios to better than the 0.1% accuracy required to test the standard model. However, we argue that such uncertainties will be significantly reduced by the upcoming measurement of the neutron radius in {sup 208}Pb at the Jefferson Laboratory.
Assessing the nonconservative fluvial fluxes of dissolved organic carbon in North America
NASA Astrophysics Data System (ADS)
Lauerwald, Ronny; Hartmann, Jens; Ludwig, Wolfgang; Moosdorf, Nils
2012-03-01
Fluvial transport of dissolved organic carbon (DOC) is an important link in the global carbon cycle. Previous studies largely increased our knowledge of fluvial exports of carbon to the marine system, but considerable uncertainty remains about in-stream/in-river losses of organic carbon. This study presents an empirical method to assess the nonconservative behavior of fluvial DOC at continental scale. An empirical DOC flux model was trained on two different subsets of training catchments, one with catchments smaller than 2,000 km² (n = 246, avg. 494 km²) and one with catchments larger than 2,000 km² (n = 207, avg. 26,525 km²). A variety of potential predictors and controlling factors of fluvial DOC fluxes is discussed. The predictors retained for the final DOC flux models are runoff, slope gradient, land cover, and areal proportions of wetlands. According to the spatially explicit extrapolation of the models, in North America south of 60°N the total fluvial DOC flux from small catchments (25.8 Mt C a⁻¹, std. err. 12%) is higher than that from large catchments (19.9 Mt C a⁻¹, std. err. 10%), giving a total DOC loss of 5.9 Mt C a⁻¹ (std. err. 78%). As DOC losses in headwaters are not represented in this budget, the estimated DOC loss is a minimum value for the total DOC loss within the fluvial network.
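The kind of empirical flux model described above can be sketched as a multiple regression of catchment DOC yields on the retained predictors. Everything below is synthetic and illustrative (coefficients, predictor ranges, and noise are invented); it only shows the train-then-extrapolate pattern:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200  # synthetic training catchments

# Illustrative predictors: runoff [mm/yr], slope gradient, wetland fraction
runoff = rng.uniform(100.0, 1500.0, n)
slope = rng.uniform(0.0, 0.3, n)
wetland = rng.uniform(0.0, 0.4, n)

# Invented "true" relationship: yield rises with runoff and wetlands,
# falls with slope; the noise mimics observational scatter
true_coef = np.array([0.004, -8.0, 12.0, 1.0])  # runoff, slope, wetland, intercept
X = np.column_stack([runoff, slope, wetland, np.ones(n)])
doc_yield = X @ true_coef + rng.normal(0.0, 0.3, n)  # t C km^-2 yr^-1

# Ordinary least squares, as a stand-in for the trained flux model
coef, *_ = np.linalg.lstsq(X, doc_yield, rcond=None)

# "Extrapolate": predict the yield of an unseen catchment
x_new = np.array([800.0, 0.05, 0.2, 1.0])
predicted_yield = x_new @ coef
```

The study's continental DOC-loss estimate then comes from comparing the summed predictions of the small-catchment and large-catchment models over the same area.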
Demand surge following earthquakes
Olsen, Anna H.
2012-01-01
Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.
NASA Astrophysics Data System (ADS)
Attanayake, Januka; Fonseca, João F. B. D.
2016-05-01
The February 22nd 2006 Mw = 7 Machaze earthquake is one of the largest, if not the largest, earthquakes reported within continental Africa since 1900. This large continental intraplate event has important implications for our understanding of tectonics and for strong ground motion prediction, both locally and in the global context; accurate estimates of its source parameters are therefore important. In this study, we inverted the complete, azimuthally distributed high frequency (0.05-2 Hz) P waveform dataset available for a best-fitting point source model, and obtained stress drop estimates from spectral fitting assuming different theoretical rupture models. Our best-fitting point source model confirms steep normal faulting, with strike = 173° (309°), dip = 73° (23°), and rake = -72° (-132°), and shows a 12%-4% improvement in waveform fit compared to previous models. We attribute this improvement to the higher order reverberations near the source region that we took into account and to the excellent azimuthal coverage of the dataset. Preferred stress drop estimates assuming a rupture velocity of 0.9 x the shear wave velocity (Vs) are between 11 and 15 MPa, though even higher stress drops are possible for rupture velocities lower than 0.9Vs. The estimated stress drop is significantly higher than the global average for intraplate earthquakes, but is consistent with stress drops estimated for some intra-continental earthquakes elsewhere. The detection of a new active structure that appears to terminate at Machaze, its step-like geometry, and the lithospheric strength all favor a hypothesis of stress concentration in the source region, which is likely the cause of this event and of its higher than average stress drop.
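For scale, the quoted stress drops can be related to a source dimension through a circular-crack model. The sketch below assumes the Eshelby/Brune relation Δσ = (7/16)·M0/R³ and the standard Hanks-Kanamori moment-magnitude relation; neither is stated in the abstract, so this is only an order-of-magnitude illustration:

```python
def moment_from_mw(mw):
    """Hanks-Kanamori relation (assumed): seismic moment M0 in N*m."""
    return 10.0 ** (1.5 * mw + 9.1)

def crack_radius_m(m0, stress_drop_pa):
    """Eshelby circular crack (assumed): delta_sigma = (7/16) * M0 / R^3."""
    return (7.0 * m0 / (16.0 * stress_drop_pa)) ** (1.0 / 3.0)

m0 = moment_from_mw(7.0)                # ~4e19 N*m for an Mw 7 event
r_low_km = crack_radius_m(m0, 15e6) / 1e3   # radius at 15 MPa stress drop
r_high_km = crack_radius_m(m0, 11e6) / 1e3  # radius at 11 MPa stress drop
# An Mw 7 event with an 11-15 MPa stress drop implies a source radius
# of roughly 10 km under these assumptions
```

This shows why a higher-than-average stress drop corresponds to a compact rupture for the given moment.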
NASA Astrophysics Data System (ADS)
Levin, Shoshana Z.; Sammis, Charles G.; Bowman, David D.
2006-02-01
We test the Stress Accumulation model of Bowman and King [Bowman, D.D., King, G.C.P., 2001a. Accelerating seismicity and stress accumulation before large earthquakes. Geophys. Res. Lett., 28 (21), 4039-4042; Bowman, D.D., King, G.C.P., 2001b. Stress transfer and seismicity changes before large earthquakes. C. R. Acad. Sci. Paris, 333, 591-599] by examining the evolution of seismicity rates prior to the 1992 Landers, California earthquake. The Stress Accumulation (SA) model was developed to explain observations of accelerating seismicity preceding large earthquakes. The model proposes that accelerating seismicity sequences result from the tectonic loading of large fault structures through aseismic slip in the elasto-plastic lower crust. This loading progressively increases the stress on smaller faults within a critical region around the main structure, thereby causing the observed acceleration of precursory activity. A secondary prediction of the SA model is that the precursory seismicity rates should increase first at the edges of the critical region, with the rates gradually rising over time at distances closer to the main fault. We test this prediction by examining year-long seismicity rates between 1960 and 2004 as a function of distance from the Landers rupture. To quantify the significance of trends in the seismicity rates, we auto-correlate the data using a range of spatial and temporal lags. We find weak evidence for increased seismicity rates propagating towards the Landers rupture, but cannot conclusively distinguish these results from those obtained for a random earthquake catalog. However, we find a strong indication of periodicity in the rate fluctuations, as well as a high correlation between activity 130-170 km from Landers and seismicity rates within 50 km of the Landers rupture temporally offset by 1.5-2 years. The implications of this spatio-temporal correlation will be addressed in future studies.
Ren, Junjie; Zhang, Shimin
2013-01-01
The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably experienced earlier events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake, based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data, and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and that a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate for large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
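The recurrence estimate above is essentially a moment budget: the moment of a characteristic 2008-size event divided by the accumulation rate. A back-of-envelope version, using the standard Hanks-Kanamori relation for M0 (which the abstract does not specify, so the result only approximates the paper's 3900 ± 400 yr figure):

```python
# Moment budget: time to accumulate one Wenchuan-size earthquake's moment
mw = 7.9                          # 2008 Wenchuan moment magnitude
moment_rate = 2.7e17              # N*m/yr, from the GPS/InSAR seismogenic model

m0 = 10.0 ** (1.5 * mw + 9.1)     # Hanks-Kanamori relation (assumed)
recurrence_yr = m0 / moment_rate  # ~3300 yr, same order as the paper's estimate
```

The difference from the published 3900 ± 400 yr reflects the paper's full accumulation/release model rather than this single division.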
NASA Astrophysics Data System (ADS)
Sagiya, T.
2013-12-01
Before the 2011 M9.0 Tohoku-oki earthquake, rapid subsidence of more than 5 mm/yr had been observed along the Pacific coast of the Tohoku area by leveling, tide gauges, and GPS (Kato, 1979; Kato and Tsumura, 1979; El-Fiky and Kato, 1999). On the other hand, Stage 5e (~125 ka) marine terraces are widely recognized along the same coast, implying that the area is uplifting over the long term. Ikeda (1999) hypothesized that these deformation signals reflect accumulation of elastic strain at the plate interface, raising the possibility of a giant earthquake causing coastal uplift. However, the coastal area subsided by as much as 1 m during the 2011 main shock. Though we observe significant postseismic uplift, it is not certain whether the preseismic and coseismic subsidence will be recovered. We construct a simple model of the earthquake deformation cycle to interpret the vertical movement along the Pacific coast of northeast Japan. The model consists of a 40 km thick elastic lithosphere overlying a Maxwell viscoelastic asthenosphere with a viscosity of 10^19 Pa s. The plate boundary is modeled as two rectangular faults located in the lithosphere and connected to each other. For the kinematic conditions on these faults, we represent the temporal evolution of fault slip as a sum of a steady term and a perturbation term, following Savage and Prescott (1978). The steady term corresponds to long-term plate subduction, which contributes to long-term geomorphic evolution such as the marine terraces (Hashimoto et al., 2004). The perturbation term represents earthquake cycle effects. We evaluate this effect under the assumptions that earthquake occurrence is perfectly periodic, that the plate interface is fully coupled during interseismic periods, and that the slip deficit is fully released by earthquakes. If the earthquake recurrence interval is shorter than the relaxation time of the structure, interseismic movement is in the opposite direction to the coseismic ones and changes almost linearly
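A key number behind the cycle model above is the Maxwell relaxation time of the asthenosphere, τ = η/μ. The viscosity is given in the abstract; the shear modulus below is a typical assumed value (~30 GPa), so the result is indicative only:

```python
eta = 1.0e19                 # asthenosphere viscosity [Pa s], from the model
mu = 3.0e10                  # shear modulus [Pa]; typical crustal/mantle value, assumed
seconds_per_year = 365.25 * 24 * 3600

tau_maxwell_yr = eta / mu / seconds_per_year
# ~10 yr: short compared with the centuries-scale recurrence of great
# subduction earthquakes, which controls how the interseismic and
# coseismic vertical motions compare in the model
```

Comparing this relaxation time with the recurrence interval is exactly the criterion the abstract invokes for the direction and linearity of interseismic motion.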
Irregularities in Early Seismic Rupture Propagation for Large Events in a Crustal Earthquake Model
NASA Astrophysics Data System (ADS)
Lapusta, N.; Rice, J. R.
2001-12-01
We study early seismic propagation of model earthquakes in a 2-D model of a vertical strike-slip fault with depth-variable rate and state friction properties. Our model earthquakes are obtained in fully dynamic simulations of sequences of instabilities on a fault subjected to realistically slow tectonic loading (Lapusta et al., JGR, 2000). This work is motivated by results of Ellsworth and Beroza (Science, 1995), who observe that for many earthquakes, far-field velocity seismograms during initial stages of dynamic rupture propagation have irregular fluctuations which constitute a "seismic nucleation phase". In our simulations, we find that such irregularities in velocity seismograms can be caused by two factors: (1) rupture propagation over regions of stress concentrations and (2) partial arrest of rupture in neighboring creeping regions. As rupture approaches a region of stress concentration, it sees increasing background stress and its moment acceleration (to which far-field velocity seismograms are proportional) increases. After the peak in stress concentration, the rupture sees decreasing background stress and moment acceleration decreases. Hence a fluctuation in moment acceleration is created. If rupture starts sufficiently far from a creeping region, then partial arrest of rupture in the creeping region causes a decrease in moment acceleration. As the other parts of rupture continue to develop, moment acceleration then starts to grow again, and a fluctuation again results. Other factors may cause the irregularities in moment acceleration, e.g., phenomena such as branching and/or intermittent rupture propagation (Poliakov et al., submitted to JGR, 2001) which we have not studied here. Regions of stress concentration are created in our model by arrest of previous smaller events as well as by interactions with creeping regions. One such region is deep in the fault zone, and is caused by the temperature-induced transition from seismogenic to creeping
Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR
Wright, Tim J.; Lu, Zhong; Wicks, Chuck
2003-01-01
The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.
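The 30–60 kPa figure is a change in Coulomb failure stress, which follows from the standard Coulomb criterion. A minimal sketch (the function name, the example numbers, and the effective friction value 0.4 are illustrative assumptions, not values from the paper):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress (Pa) on a receiver fault:
    dCFS = d_tau + mu_eff * d_sigma_n, where d_tau is the shear-stress
    change resolved in the slip direction and d_sigma_n is the
    normal-stress change (positive = unclamping). mu_eff is an assumed
    effective friction coefficient, a commonly used default."""
    return d_shear + mu_eff * d_normal

# Hypothetical example: 40 kPa of shear loading plus 25 kPa of unclamping
dcfs = coulomb_stress_change(40e3, 25e3)   # ~50 kPa, within the reported range
```

In practice the shear- and normal-stress changes themselves come from an elastic dislocation model (e.g., Okada-type solutions) evaluated at the receiver location.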
Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.
2006-01-01
Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2- to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports showing that Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.
Asperity characteristics of the Olami-Feder-Christensen model of earthquakes
Kawamura, Hikaru; Yamamoto, Takumi; Kotani, Takeshi; Yoshino, Hajime
2010-03-15
Properties of the Olami-Feder-Christensen (OFC) model of earthquakes are studied by numerical simulations. The previous study indicated that the model exhibited 'asperity'-like phenomena, i.e., the same region ruptures many times near periodically [T. Kotani et al., Phys. Rev. E 77, 010102(R) (2008)]. Such periodic or characteristic features apparently coexist with power-law-like critical features, e.g., the Gutenberg-Richter law observed in the size distribution. In order to clarify the origin and the nature of the asperity-like phenomena, we investigate here the properties of the OFC model with emphasis on its stress distribution. It is found that the asperity formation is accompanied by self-organization of the highly concentrated stress state. Such stress organization naturally provides the mechanism underlying our observation that a series of asperity events repeat with a common epicenter site and with a common period solely determined by the transmission parameter of the model. Asperity events tend to cluster both in time and in space.
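The toppling dynamics behind these observations can be sketched in a few lines; the lattice size, uniform drive scheme, and parameter values below are illustrative choices, not those of the paper (which studies larger lattices as a function of the transmission parameter):

```python
import numpy as np

def ofc_avalanche_sizes(L=32, alpha=0.2, n_events=2000, seed=0):
    """Minimal Olami-Feder-Christensen model on an L x L open-boundary grid.

    alpha is the transmission (conservation) parameter: each toppling site
    passes alpha times its stress to each of its 4 neighbours, so alpha < 0.25
    makes the model non-conservative. Returns the avalanche size (number of
    topplings) of each driven event.
    """
    rng = np.random.default_rng(seed)
    f = rng.random((L, L))              # initial stress field in [0, 1)
    sizes = []
    for _ in range(n_events):
        f += 1.0 - f.max()              # uniform drive to the threshold
        size = 0
        unstable = np.argwhere(f >= 1.0)
        while unstable.size:
            for i, j in unstable:
                df = f[i, j]
                f[i, j] = 0.0           # relax the site
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:   # open boundaries dissipate
                        f[ni, nj] += alpha * df
                size += 1
            unstable = np.argwhere(f >= 1.0)
        sizes.append(size)
    return sizes
```

Tracking which site initiates each avalanche (the epicenter) on top of this skeleton is what reveals the asperity-like repetition discussed in the abstract.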
NASA Astrophysics Data System (ADS)
Shibazaki, B.; Tsutsumi, A.; Shimamoto, T.; Noda, H.
2012-12-01
Some observational studies [e.g. Hasegawa et al., 2011] suggested that the 2011 great Tohoku-oki Earthquake (Mw 9.0) released roughly all of the accumulated elastic strain on the plate interface owing to considerable weakening of the fault. Recent studies show that considerable weakening can occur at a high slip velocity because of thermal pressurization or thermal weakening processes [Noda and Lapusta, 2010; Di Toro et al., 2011]. Tsutsumi et al. [2011] examined the frictional properties of clay-rich fault materials under water-saturated conditions and found that velocity weakening or strengthening occurs at intermediate slip velocities and that dramatic weakening occurs at high slip velocities. This dramatic weakening at higher slip velocities is caused by pore-fluid pressurization via frictional heating or gouge weakening. In the present study, we investigate the generation mechanism of megathrust earthquakes along the Japan trench by performing 3D quasi-dynamic modeling with high-speed friction or thermal pressurization. We propose a rate- and state-dependent friction law with two state variables that exhibit weak velocity weakening or strengthening with a small critical displacement at low to intermediate velocities, but a strong velocity weakening with a large critical displacement at high slip velocities [Shibazaki et al., 2011]. We use this friction law for 3D quasi-dynamic modeling of a cycle of the great Tohoku-oki earthquake. We set several asperities where velocity weakening occurs at low to intermediate slip velocities. Outside of the asperities, velocity strengthening occurs at low to intermediate slip velocities. At high slip velocities, strong velocity weakening occurs both within and outside of the asperities. The rupture of asperities occurs at intervals of several tens of years, whereas megathrust events occur at much longer intervals (several hundred years). Megathrust slips occur even in regions where velocity strengthening occurs at low to
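The qualitative shape of such a composite friction law (mild rate dependence at low slip rates, strong weakening toward a low residual friction at high slip rates) can be illustrated with a flash-heating-style steady-state curve. The functional form and all parameter values below are placeholders for illustration, not the two-state-variable law of Shibazaki et al.:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.008, b1=0.012, v0=1e-6,
                          mu_w=0.1, v_w=0.1):
    """Illustrative steady-state friction coefficient vs slip rate v (m/s):
    logarithmic rate dependence (a - b1) around reference rate v0 at low
    speeds, crossing over to a weak value mu_w above the weakening rate v_w.
    All parameter values are assumed, order-of-magnitude placeholders."""
    mu_low = mu0 + (a - b1) * math.log(v / v0)      # low-speed branch
    return mu_w + (mu_low - mu_w) / (1.0 + v / v_w)  # crossover to mu_w
```

With these placeholder values the friction stays near 0.6 at plate-rate speeds but drops below 0.15 at 1 m/s, the kind of dramatic high-speed weakening the abstract invokes.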
Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Nyst, M.
2013-12-01
We present a Risk Management perspective on earthquake recurrence on mature faults and the ways that it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence, the fact that losses at all return periods are needed (not only at discrete values of the return period), and the set-up of financial models which sometimes requires the modeling of realizations of the order in which events may occur (i.e., simulated event dates are important, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that, with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weights given by the posterior distribution. Finally, we propose to the community: 1. A general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) of plausible models and model parameters, while also preserving the information on what we do not know; and 2. The creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether trends emerge in terms of which model dominates under which conditions.
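The proposed Bayesian combination, a posterior-weighted sum of candidate inter-event-time PDFs, can be sketched as follows. The three distribution families are common choices for recurrence modeling (lognormal, Weibull, and Brownian Passage Time, i.e. inverse Gaussian), but the specific families, parameters, and equal weights here are illustrative placeholders, not the study's posterior:

```python
import numpy as np
from scipy import stats

def mixture_hazard(t, weights, dists):
    """Mixture inter-event-time model: returns (pdf, survival, hazard) at
    times t, for posterior weights (summing to 1) and frozen scipy
    distributions."""
    t = np.asarray(t, dtype=float)
    pdf = sum(w * d.pdf(t) for w, d in zip(weights, dists))
    sf = sum(w * d.sf(t) for w, d in zip(weights, dists))
    return pdf, sf, pdf / sf            # hazard rate = pdf / survival

# Illustrative equal-weight combination with a ~150-yr mean recurrence
mu = 150.0
dists = [stats.lognorm(s=0.5, scale=mu),            # lognormal
         stats.weibull_min(c=2.0, scale=mu),        # Weibull
         stats.invgauss(mu=0.25, scale=mu / 0.25)]  # BPT (inverse Gaussian)
pdf, sf, haz = mixture_hazard([50.0, 150.0, 300.0], [1 / 3, 1 / 3, 1 / 3], dists)
```

Simulated event dates for the financial models mentioned above can then be drawn by inverting the mixture's survival function.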
NASA Astrophysics Data System (ADS)
Shi, Zheqiang
This thesis examines dynamic ruptures along frictional interfaces and seismic radiation in models of earthquake faults separating similar and dissimilar solids with the goal of advancing the understanding of earthquake physics. The dynamics of Mode-II rupture along an interface governed by slip-weakening friction between dissimilar solids are investigated. The results show that the wrinkle-like rupture along such interfaces evolves to unilateral propagation in the slip direction of the compliant side for a broad range of conditions, and the closer the initial shear stress is to the static friction the smaller degree of material contrast is needed for this evolution to occur. Transition of the wrinkle-like pulse to crack-like rupture occurs when the reduction of friction coefficient is sufficiently large. Energy partition associated with various rupture modes along an interface governed by rate- and state-dependent friction between identical solids is investigated. The rupture mode changes with varying velocity dependence of friction, strength excess parameter and length of the nucleation zone. High initial shear stress and weak velocity dependence of friction favor crack-like ruptures, while the opposite conditions favor the pulse-like mode. The rupture mode can switch from a subshear single pulse to a supershear train of pulses when the width of the nucleation zone is increased. The elastic strain energy released over the same propagation distance by the different rupture modes has the order: supershear crack, subshear crack, supershear train-of-pulses and subshear single-pulse. General considerations and observations suggest that the subshear pulse and supershear crack are, respectively, the most and least common modes of earthquake ruptures. The effect of plasticity and interface elasticity on dynamic frictional sliding along an interface induced by edge impact loading between two identical elastic-viscoplastic solids is analyzed. The material on each side is
An earthquake instability model based on faults containing high fluid-pressure compartments
Lockner, D.A.; Byerlee, J.D.
1995-01-01
results of a one-dimensional dynamic Burridge-Knopoff-type model to demonstrate various aspects of the fluid-assisted fault instability described above. In the numerical model, the fault is represented by a series of blocks and springs, with fault rheology expressed by static and dynamic friction. In addition, the fault surface of each block has associated with it pore pressure, porosity and permeability. All of these variables are allowed to evolve with time, resulting in a wide range of phenomena related to fluid diffusion, dilatancy, compaction and heating. These phenomena include creep events, diffusion-controlled precursors, triggered earthquakes, foreshocks, aftershocks, and multiple earthquakes. While the simulations have limitations inherent to 1-D fault models, they demonstrate that the fluid compartment model can, in principle, provide the rich assortment of phenomena that have been associated with earthquakes. © 1995 Birkhäuser Verlag.
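The block-and-spring skeleton of such a Burridge-Knopoff-type model, without the pore-pressure, porosity and permeability evolution that is the paper's main addition, can be sketched quasi-statically as follows; all parameter values are illustrative:

```python
import numpy as np

def bk_quasistatic(n=50, kc=1.0, kp=0.5, f_s=1.0, f_d=0.6,
                   n_steps=200, seed=1):
    """Quasi-static 1-D Burridge-Knopoff chain: n blocks coupled by springs
    kc, driven through plate springs kp, with static (f_s) and dynamic (f_d)
    friction thresholds. Returns the event size (number of blocks that
    slipped) at each driving step. Fluid variables are omitted."""
    rng = np.random.default_rng(seed)
    x = rng.random(n) * 0.1                  # initial block positions
    sizes = []
    for step in range(n_steps):
        load = 0.1 * (step + 1)              # slow plate displacement
        slipped = np.zeros(n, dtype=bool)
        for _ in range(100 * n):             # safety cap on cascade iterations
            left, right = np.roll(x, 1), np.roll(x, -1)
            left[0], right[-1] = x[0], x[-1]          # free chain ends
            force = kc * (left - 2 * x + right) + kp * (load - x)
            over = np.abs(force) > f_s
            if not over.any():
                break
            # unstable blocks slip until their force drops to the dynamic level
            dx = np.sign(force[over]) * (np.abs(force[over]) - f_d) / (2 * kc + kp)
            x[over] += dx
            slipped[over] = True
        sizes.append(int(slipped.sum()))
    return sizes
```

Attaching pore pressure, porosity and permeability to each block and letting them evolve between and during events is what turns this skeleton into the fluid-compartment model of the paper.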
Teamwork tools and activities within the hazard component of the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis, ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we will provide a tour of the various completed activities, such as the new ISC-GEM Global Instrumental Catalogue, and of ongoing initiatives such as the creation of a suite of tools for building PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
NASA Astrophysics Data System (ADS)
Shiraishi, H.; Sasaka, K.; Hamamoto, H.; Hachinohe, S.; Ishiyama, T.
2011-12-01
Most Japanese local governments estimate the whole picture of earthquake damage under scenario earthquakes in order to reduce both casualties and physical damage. The Saitama prefectural government, adjacent to the north of Tokyo, has already made such estimates four times since the 1970s. This estimation requires precise mathematical models of subsurface structures for calculating ground-surface accelerations during massive quakes, and the models have been updated with every new survey. In the early models, the shallow layers had been created by applying 241 typical geological layer types to the whole prefecture. In the current models, by contrast, the shallow layers are created from the results of drilling surveys conducted for public works. This update allows us to estimate quake damage more precisely in every 250-m square throughout the prefecture. However, even the current models are not yet complete, because drilling surveys are sparser in rural areas than in urban areas. The models of shallow layers in rural areas have therefore been created by interpolating among the drilling-survey locations with terrain taken into account, so the accuracy of the models depends on that of the interpolations. Against this background, the authors have examined the accuracy of the models by comparing phase-velocity dispersions between velocities observed with the spatial autocorrelation (SPAC) technique and velocities calculated from the models. Two SPAC arrays with radii of 3 m and 30 m were deployed, with a data acquisition time of 30 min for each array. The results show that the subsurface structures of urban areas are well modeled: the two dispersion curves agree closely, and the amplitude responses of the models are in good agreement with the responses determined by the microtremor survey method (MSM). In contrast, the subsurface structures of rural areas include cases that have not been modeled with
Short-term earthquake forecasting based on an epidemic clustering model
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2016-04-01
The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or the model, characterizing the concerned anomaly or precursor, so that it can be objectively recognized in any circumstance and by any observer. This is mandatory in order to move beyond the old-fashioned approach consisting only of retrospective, anecdotal study of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total time-space volume within which the probability of occurrence of strong earthquakes is higher than usual. The test of such a hypothesis needs the observation of a sufficient number of past cases upon which a statistical analysis is possible. This analysis should be aimed at determining the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, or the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the parameters of the model that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose may include the definition of the Probability Gain or the R-Score, as well as the application of popular plots such as the Molchan error diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the concept of the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses. Whatever method is chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations
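The binary-table bookkeeping described above reduces to a few ratios. A minimal sketch (the a/b/c/d naming convention is an assumption for illustration, not the paper's notation):

```python
def alarm_scores(a, b, c, d):
    """Scores from a binary alarm/event contingency table:
    a = alarms followed by a target event (successes),
    b = alarms not followed by an event (false alarms),
    c = events not preceded by an alarm (failures/misses),
    d = correct negatives (no alarm, no event).
    Returns success rate, false-alarm rate, miss rate, and the probability
    gain of an alarm relative to the unconditional event rate."""
    success_rate = a / (a + b)               # fraction of alarms that verify
    false_alarm_rate = b / (a + b)
    miss_rate = c / (a + c)                  # fraction of events missed
    base_rate = (a + c) / (a + b + c + d)    # unconditional event rate
    prob_gain = success_rate / base_rate
    return success_rate, false_alarm_rate, miss_rate, prob_gain
```

The Molchan error diagram then plots the miss rate against the fraction of time-space occupied by alarms as the model's free parameters are varied.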
Aagaard, Brad T.; Graves, Robert W.; Rodgers, Arthur; Brocher, Thomas M.; Simpson, Robert W.; Dreger, Douglas; Petersson, N. Anders; Larsen, Shawn C.; Ma, Shuo; Jachens, Robert C.
2010-01-01
We simulate long-period (T>1.0–2.0 s) and broadband (T>0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7–7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault, we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions, compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area, with about 50% of the urban area experiencing modified Mercalli intensity VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland earthquake and the 2007 Mw 5.45 Alum Rock earthquake show that the U.S. Geological Survey’s Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area for Hayward fault earthquakes, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions for the suite of scenarios exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute much of this difference to the seismic velocity structure in the San Francisco Bay area and how the NGA models account for basin amplification; the NGA relations may underpredict amplification in shallow sedimentary basins. The simulations also suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by increasing the areal extent of rupture directivity with period.
Dewey, J.W.
1991-01-01
Joint epicenter determination of earthquakes that occurred in northern Algeria near Ech Cheliff (named Orleansville in 1954 and El Asnam in 1980) shows that the earthquake of 9 September 1954 (M=6.5) occurred at nearly the same location as the earthquake of 10 October 1980 (M=7.3). The 1954 main shock and earliest aftershocks were concentrated close to the boundaries of segment B (nomenclature of Deschamps et al., 1982; King and Yielding, 1984) of the 1980 fault system, which was to experience approximately 8 m of slip in the 1980 earthquake. Later aftershocks of the 1954 earthquake were spread over a broad area, notably in a region north of the 1980 fault system that also experienced many aftershocks to the 1980 earthquake. The closeness of the 1954 main shock and earliest aftershocks to the 1980 segment B implies that the 1954 earthquake involved either 1) rupture of segment B proper, or 2) rupture of a distinct fault in the hanging-wall or footwall block of segment B. -from Author
Simpson, Robert W.
1994-01-01
If there is a single theme that unifies the diverse papers in this chapter, it is the attempt to understand the role of the Loma Prieta earthquake in the context of the earthquake 'machine' in northern California: as the latest event in a long history of shocks in the San Francisco Bay region, as an incremental contributor to the regional deformation pattern, and as a possible harbinger of future large earthquakes. One of the surprises generated by the earthquake was the rather large amount of uplift that occurred as a result of the reverse component of slip on the southwest-dipping fault plane. Preearthquake conventional wisdom had been that large earthquakes in the region would probably be caused by horizontal, right-lateral, strike-slip motion on vertical fault planes. In retrospect, the high topography of the Santa Cruz Mountains and the elevated marine terraces along the coast should have provided some clues. With the observed ocean retreat and the obvious uplift of the coast near Santa Cruz that accompanied the earthquake, Mother Nature was finally caught in the act. Several investigators quickly saw the connection between the earthquake uplift and the long-term evolution of the Santa Cruz Mountains and realized that important insights were to be gained by attempting to quantify the process of crustal deformation in terms of Loma Prieta-type increments of northward transport and fault-normal shortening.
PAGER--Rapid assessment of an earthquake's impact
Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.
2010-01-01
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.
Interseismic Coupling Models and their interactions with the Sources of Large and Great Earthquakes
NASA Astrophysics Data System (ADS)
Chlieh, M.; Perfettini, H.; Avouac, J. P.
2009-04-01
Recent observations of heterogeneous strain build-up reported from subduction zones, together with the seismic sources of large and great interplate earthquakes, indicate that seismic asperities are probably persistent features of the megathrust. The Peru megathrust recurrently produces large seismic events such as the 2001 Mw 8.4 Arequipa earthquake or the 2007 Mw 8.0 Pisco earthquake. The Peruvian subduction zone provides an exceptional opportunity to understand the possible relationship between interseismic coupling, large megathrust ruptures and the frictional properties of the megathrust. An emerging concept is a megathrust with strong locked fault patches surrounded by aseismic slip. The 2001 Mw 8.4 Arequipa earthquake ruptured only the northern portion of the patch that had already ruptured during the great 1868 Mw~8.8 earthquake and that had remained locked in the interseismic period. The 2007 Mw 8.0 Pisco earthquake ruptured the southern portion of the rupture area of the 1746 Mw~8.5 event. The moment released in 2007 amounts to only a small fraction of the deficit of moment that had accumulated since the 1746 great earthquake. The potential for future large megathrust events in the Central and Southern Peru area therefore remains large. These recent earthquakes indicate that the same portion of a megathrust can rupture in different ways depending on whether asperities break as isolated events or jointly to produce a larger rupture. The spatial distribution of the frictional properties of the megathrust could be the cause of a more complex earthquake sequence from one seismic cycle to another. The subduction of geomorphologic structures such as the Nazca Ridge could be the cause of lower coupling there.
Rupture to the Trench in Dynamic Models of the Tohoku-Oki Earthquake
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Dunham, E. M.
2011-12-01
The devastating tsunami caused by the 11 March 2011 Tohoku-Oki earthquake was much larger than conventional thinking would suggest. One of the primary reasons for the large tsunami was the ~5-10 m of seafloor uplift resulting from the inferred ~60 m of slip that extended to the trench axis along the base of the accretionary prism. However, the prism is believed to store negligible strain energy and the fault along its base is thought to be frictionally stable. Both factors suggest this region is not capable of sustaining rupture, and that earthquakes should stop well before the trench. Dynamic rupture simulations, which solve for rupture history and elastodynamic response in a fully consistent manner, provide a powerful tool for probing this seeming inconsistency. We have developed a 2-D model based on the detailed structure of the Japan trench. The model is comprised of multiple, irregularly shaped material blocks including the accretionary prism, oceanic crust layers, and mantle and crustal blocks. The fault response is modeled using rate-and-state friction. Inelastic deformation of the off-fault material, particularly in the low-strength prism, is captured using Drucker-Prager plasticity. The governing equations are solved with a provably stable, high-order finite difference method that handles complex geometries through the use of coordinate transforms. Both the initial stresses and frictional parameters vary with depth. Preliminary simulations suggest that the rupture can reach the trench even if the fault interface along the base of the accretionary prism is frictionally stable. Seismic waves released during updip rupture propagation on the shallowly dipping fault reflect off the seafloor back to the fault. These waves carry stress perturbations that unclamp the fault and transiently reduce fault strength. We also highlight the important role of material contrast across the fault. At sufficient depth, the oceanic crust becomes more compliant than the
Parity nonconservation in Fr-like actinide and Cs-like rare-earth-metal ions
NASA Astrophysics Data System (ADS)
Roberts, B. M.; Dzuba, V. A.; Flambaum, V. V.
2013-07-01
Parity-nonconservation (PNC) amplitudes are calculated for the 7s-6d3/2 transitions of the francium isoelectronic sequence (Fr, Ra+, Ac2+, Th3+, Pa4+, U5+, and Np6+) and for the 6s-5d3/2 transitions of the cesium isoelectronic sequence (Cs, Ba+, La2+, Ce3+, and Pr4+). We show in particular that isotopes of La2+, Ac2+, and Th3+ ions have strong potential in the search for new physics beyond the standard model: The PNC amplitudes are large, the calculations are accurate, and the nuclei are practically stable. In addition, 232Th3+ ions have recently been trapped and cooled [Campbell et al., Phys. Rev. Lett. 102, 233004 (2009)]. We also extend previous works by calculating the s-s PNC transitions in Ra+ and Ba+ and provide calculations of several energy levels, and electric dipole and quadrupole transition amplitudes for the Fr-like actinide ions.
Nonconservative extension of Keplerian integrals and a new class of integrable system
NASA Astrophysics Data System (ADS)
Roa, Javier
2016-09-01
The invariance of the Lagrangian under time translations and rotations in Kepler's problem yields the conservation laws related to the energy and angular momentum. Noether's theorem reveals that these same symmetries furnish generalized forms of the first integrals in a special nonconservative case, which approximates various physical models. The system is perturbed by a biparametric acceleration with components along the tangential and normal directions. A similarity transformation reduces the biparametric disturbance to a simpler uniparametric forcing along the velocity vector. The solvability conditions of this new problem are discussed, and closed-form solutions for the integrable cases are provided. Thanks to the conservation of a generalized energy, the orbits are classified as elliptic, parabolic, and hyperbolic. Keplerian orbits appear naturally as particular solutions to the problem. After characterizing the orbits independently, a unified form of the solution is built based on the Weierstrass elliptic functions. The new trajectories involve fundamental curves such as cardioids and logarithmic, sinusoidal, and Cotes' spirals. These orbits can represent the motion of particles perturbed by solar radiation pressure, of spacecraft with continuous thrust propulsion, and some instances of Schwarzschild geodesics. Finally, the problem is connected with other known integrable systems in celestial mechanics.
Non-conservation of global charges in the Brane Universe and baryogenesis
NASA Astrophysics Data System (ADS)
Dvali, Gia; Gabadadze, Gregory
1999-08-01
We argue that global charges, such as baryon or lepton number, are not conserved in theories with the Standard Model fields localized on a brane which propagates in higher-dimensional space-time. The global-charge non-conservation is due to quantum fluctuations of the brane surface. These fluctuations create "baby branes" that can capture some global charges and carry them away into the bulk of higher-dimensional space. Such processes are exponentially suppressed at low energies, but can be significant at high enough temperatures or energies. These effects can lead to a new, intrinsically high-dimensional mechanism of baryogenesis. Baryon asymmetry might be produced due either to "evaporation" into the baby branes, or to creation of a baryon number excess in collisions of two Brane Universes. As an example we discuss a possible cosmological scenario within the recently proposed "Brane Inflation" framework. Inflation is driven by displaced branes which slowly fall on top of each other. When the branes collide, inflation stops and the Brane Universe reheats. During this non-equilibrium collision, baryon number can be transported from one brane to another. This results in a baryon number excess in our Universe which exactly equals the hidden "baryon number" deficit in the other Brane Universe.
Tripathi, Swarnendu; Garcìa, Angel E; Makhatadze, George I
2015-10-15
Charge-charge interactions play an important role in thermal stability of proteins. We employed an all-atom, native-topology-based model with non-native electrostatics to explore the interplay between folding dynamics and stability of TNfn3 (the third fibronectin type III domain from tenascin-C). Our study elucidates the role of charge-charge interactions in modulating the folding energy landscape. In particular, we found that incorporation of explicit charge-charge interactions in the WT TNfn3 induces energetic frustration due to the presence of residual structure in the unfolded state. Moreover, optimization of the surface charge-charge interactions by altering the evolutionarily nonconserved residues not only increases the thermal stability (in agreement with previous experimental study) but also reduces the formation of residual structure and hence minimizes the energetic frustration along the folding route. We concluded that charge-charge interaction in the rationally designed TNfn3 plays an important role not only in enhancing the stability but also in assisting folding. PMID:26413861
Sepinsky, J. F.; Willems, B.; Kalogera, V.; Rasio, F. A.
2009-09-10
We investigate the secular evolution of the orbital semimajor axis and eccentricity due to mass transfer in eccentric binaries, allowing for both mass and angular momentum loss from the system. Adopting a delta function mass transfer rate at the periastron of the binary orbit, we find that, depending on the initial binary properties at the onset of mass transfer, the orbital semimajor axis and eccentricity can either increase or decrease at a rate linearly proportional to the magnitude of the mass transfer rate at periastron. The range of initial binary mass ratios and eccentricities that lead to increasing orbital semimajor axes and eccentricities broadens with increasing degrees of mass loss from the system and narrows with increasing orbital angular momentum loss from the binary. Comparison with tidal evolution timescales shows that the usual assumption of rapid circularization at the onset of mass transfer in eccentric binaries is not justified, irrespective of the degree of systemic mass and angular momentum loss. This work extends our previous results for conservative mass transfer in eccentric binaries and can be incorporated into binary evolution and population synthesis codes to model non-conservative mass transfer in eccentric binaries.
Self-organized criticality occurs in non-conservative neuronal networks during `up' states
NASA Astrophysics Data System (ADS)
Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst
2010-10-01
During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states (such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex) is self-organized criticality (SOC). SOC is characterized by activity `avalanches' with a branching parameter near unity and a size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic `leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states, and that up states are critical while down states are subcritical.
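The quoted statistics (branching parameter near unity, avalanche sizes obeying a power law with exponent about -3/2) can be illustrated with a toy critical branching process. This is a generic sketch of SOC avalanche statistics, not the authors' integrate-and-fire network model:

```python
import random

def avalanche_size(branching_ratio=1.0, max_size=10_000, rng=random):
    """Total number of activations in one avalanche of a branching
    process in which each active unit triggers 0 or 2 new units with
    mean `branching_ratio`; the critical point is at 1.0."""
    active, size = 1, 0
    while active and size < max_size:
        size += 1
        active -= 1
        if rng.random() < branching_ratio / 2.0:
            active += 2
    return size

rng = random.Random(0)
sizes = [avalanche_size(1.0, rng=rng) for _ in range(5000)]
# At criticality the size distribution is heavy-tailed (P(S = s) ~ s**-1.5):
# most avalanches are tiny, but a non-negligible fraction are very large.
```

Running the same sampler with `branching_ratio` below 1.0 reproduces the subcritical (down-state) regime, where large avalanches become exponentially rare.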
Rolling vs. Sliding: The inclusion of non-conservative work in the classic comparison
NASA Astrophysics Data System (ADS)
Chediak, Alex; Buehler, Terry
2010-03-01
A semester-long mechanics course typically covers moment of inertia, angular velocity, and rolling. A classic comparison is made between rolling without slipping and sliding without friction. In either case, no non-conservative work is performed: all the gravitational potential energy that the rolling or sliding object possesses at the top of the inclined plane is converted into kinetic energy. In the case of the sliding object, the kinetic energy term is simply (1/2)mv^2. In the case of the rolling object, the kinetic energy term is (1/2)mv^2 + (1/2)Iω^2. The friction here is static, not kinetic, so it does no mechanical work. Since the sliding object has no angular velocity, its linear velocity is greater than that of the rolling object, and it reaches the bottom of the track faster. But if a rolling and a sliding object, each of the same material, were to race down an inclined plane, which would win? The answer depends on the effective coefficient of friction, C. If C > μs, which will occur at angles approaching 90°, the rolling object slips. And if C < μk, the rolling object has a greater linear acceleration and wins the race to the bottom. Experimental results to verify a theoretical model (including the dependency on incline angle) will also be presented.
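The energy bookkeeping above translates directly into the standard constant-acceleration results. A minimal sketch (the shape factor c = I/mR² and the 30° angle are illustrative inputs, not taken from the abstract):

```python
import math

g = 9.81  # m/s^2

def accel_sliding(theta_rad):
    """Frictionless sliding: a = g*sin(theta)."""
    return g * math.sin(theta_rad)

def accel_rolling(theta_rad, c):
    """Rolling without slipping with I = c*m*R^2
    (c = 1/2 for a solid cylinder, 2/5 for a solid sphere):
    a = g*sin(theta) / (1 + c)."""
    return g * math.sin(theta_rad) / (1.0 + c)

theta = math.radians(30.0)
a_slide = accel_sliding(theta)       # ~4.905 m/s^2
a_roll = accel_rolling(theta, 0.5)   # ~3.27 m/s^2 for a solid cylinder
# The slider wins this idealized race: part of the roller's potential
# energy goes into rotational kinetic energy (1/2)*I*omega^2.
```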
NASA Astrophysics Data System (ADS)
Kim, M. J.; Segall, P.; Johnson, K. M.
2012-12-01
Most recent models of interseismic deformation in Cascadia have been restricted to elastic half-spaces. In this study, we investigate interseismic deformation in the Cascadia subduction zone using a viscoelastic earthquake-cycle model in order to constrain the extent of plate coupling, the elastic plate thickness, and the viscoelastic relaxation time. Our model of the plate interface consists of an elastic layer overlying a Maxwell viscoelastic half-space. The fault in the elastic layer is composed of a fully locked zone that slips during megathrust events at cycle time T = 500 years, and a transition zone where the interseismic slip rate changes from zero (fully coupled) to the plate velocity (zero coupling). Slip deficit within the transition zone is accommodated by either coseismic or rapid post-seismic slip. We model the slip rate in the transition zone using the analytic solution for slip at constant resistive stress in an elastic full space. We explore ranges of the four model parameters (the elastic plate thickness, the relaxation time, and the upper and lower bounds of the transition zone) that minimize the residual between the predicted surface velocities and the observed GPS data. GPS position solutions were provided by PANGA, and our dataset consists of 29 GPS station velocities from 2002 to 2010 in the Olympic Peninsula - southern Vancouver Island region, since this region is least affected by forearc rotation. Our preliminary result suggests a shallow fully locked zone (< 15 km depth) with a short relaxation time (< 100 years) compared to the recurrence interval (~ 500 years). For a given degree of misfit to the data, accounting for the viscoelastic effect allows a deeper locking depth compared to the fully elastic model.
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.
2014-12-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-NET and KiK-net) revealed complex ground-motion patterns attributed to source effects, allowing detailed information on the rupture process to be captured. The seismic stations surrounding the Miyagi region (MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong-motion data performed by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid-pressurization effects. Kanamori & Heaton (2002) argued that during faulting of large earthquakes the temperature rises high enough to create melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip-reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on a slip-weakening friction law with two sudden sequential stress drops. Our model starts like an M7-8 earthquake that only weakly breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea-floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea-floor displacement reaches 8-10 meters of
Irrera, Alessia; Magazzù, Alessandro; Artoni, Pietro; Simpson, Stephen H; Hanna, Simon; Jones, Philip H; Priolo, Francesco; Gucciardi, Pietro Giuseppe; Maragò, Onofrio M
2016-07-13
We measure, by photonic torque microscopy, the nonconservative rotational motion arising from the transverse components of the radiation pressure on optically trapped, ultrathin silicon nanowires. Unlike spherical particles, we find that nonconservative effects have a significant influence on the nanowire dynamics in the trap. We show that the extreme shape of the trapped nanowires yields a transverse component of the radiation pressure that results in an orbital rotation of the nanowire about the trap axis. We study the resulting motion as a function of optical power and nanowire length, discussing its size-scaling behavior. These shape-dependent nonconservative effects have implications for optical force calibration and optomechanics with levitated nonspherical particles. PMID:27280642
Gospodinov, D. K.; Marekova, E. G.; Marinov, A. T.
2007-04-23
A RETAS model is used to analyze aftershock rate decay after the Denali Fault earthquake, with a main shock magnitude MS = 7.9. We test different variants of the RETAS model, ranging from the limiting case Mth = Mmain (main shock) to the case Mth = Mo (lower magnitude cut-off). We first test the model on simulated data following the MOF (modified Omori formula) model. The results for the Denali Fault sequence reveal the best-fit model to be RETAS with a triggering threshold Mth = 3.2.
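The MOF referenced above is the modified Omori formula for aftershock rate decay, n(t) = K / (c + t)^p. A minimal sketch with illustrative parameter values (K, c, p here are not taken from the study):

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori formula: aftershock rate (events/day) at time t
    (days after the main shock). K, c, p are illustrative values."""
    return K / (c + t) ** p

def expected_count(t1, t2, K=100.0, c=0.1, p=1.1, steps=10_000):
    """Expected number of aftershocks in [t1, t2], by trapezoidal
    integration of the rate."""
    dt = (t2 - t1) / steps
    ts = [t1 + i * dt for i in range(steps + 1)]
    rates = [omori_rate(t, K, c, p) for t in ts]
    return dt * (sum(rates) - 0.5 * (rates[0] + rates[-1]))
```

Fitting K, c and p (and, in RETAS, the triggering threshold Mth) to an observed sequence is what selects the best-fit variant.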
Post Test Analysis of a PCCV Model Dynamically Tested Under Simulated Design-Basis Earthquakes
Cherry, J.; Chokshi, N.; James, R.J.; Rashid, Y.R.; Tsurumaki, S.; Zhang, L.
1998-11-09
In a collaborative program between the United States Nuclear Regulatory Commission (USNRC) and the Nuclear Power Engineering Corporation (NUPEC) of Japan, under sponsorship of the Ministry of International Trade and Industry, the seismic behavior of Prestressed Concrete Containment Vessels (PCCV) is being investigated. A 1:10 scale PCCV model has been constructed by NUPEC and subjected to seismic simulation tests using the high-performance shaking table at the Tadotsu Engineering Laboratory. A primary objective of the testing program is to demonstrate the capability of the PCCV to withstand design-basis earthquakes with a significant safety margin against major damage or failure. As part of the collaborative program, Sandia National Laboratories (SNL) is conducting research in state-of-the-art analytical methods for predicting the seismic behavior of PCCV structures, with the eventual goal of understanding, validating, and improving calculations related to containment structure performance under design and severe seismic events. With the increased emphasis on risk-informed regulatory focus, more accurate characterization (less uncertainty) of containment structural and functional integrity is desirable. This paper presents results of post-test calculations conducted at ANATECH to simulate the design-level scale model tests.
Probabilistic earthquake early warning in complex earth models using prior sampling
NASA Astrophysics Data System (ADS)
Valentine, Andrew; Käufl, Paul; Trampert, Jeannot
2016-04-01
In an earthquake early warning (EEW) context, we must draw inferences from small, noisy seismic datasets within an extremely limited time frame. Ideally, a probabilistic framework would be used, to recognise that available observations may be compatible with a range of outcomes, and analysis would be conducted in a theoretically complete physical framework. However, implementing these requirements has been challenging, as they tend to increase computational demands beyond what is feasible on EEW timescales. We present a new approach, based on 'prior sampling', which implements probabilistic inversion as a two-stage process and can be used for EEW monitoring within a given region. First, a large set of synthetic data is computed for randomly distributed seismic sources within the region. A learning algorithm is used to infer details of the probability distribution linking observations and model parameters (including location, magnitude, and focal mechanism). This procedure is computationally expensive, but can be conducted entirely before monitoring commences. In the second stage, as observations are obtained, the algorithm can be evaluated within milliseconds to output a probabilistic representation of the corresponding source model. We demonstrate that this gives robust results, and can be implemented using state-of-the-art 3D wave propagation simulations and complex crustal structures.
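A minimal sketch of the two-stage prior-sampling idea, with a toy one-parameter source and a linear stand-in for the expensive wave simulation (the actual study uses a learned mapping and full 3D simulations; everything below is an illustrative assumption):

```python
import math
import random

def stage1_build_bank(n, forward, rng):
    """Stage 1 (offline, expensive): draw source models from the prior
    and compute their synthetic data once, before monitoring starts."""
    models = [rng.uniform(0.0, 10.0) for _ in range(n)]  # e.g. source depth, km
    return [(m, forward(m)) for m in models]

def stage2_posterior_mean(bank, d_obs, sigma):
    """Stage 2 (online, fast): weight the precomputed samples by a
    Gaussian data likelihood and return the posterior mean."""
    weights = [math.exp(-0.5 * ((d - d_obs) / sigma) ** 2) for _, d in bank]
    total = sum(weights)
    return sum(w * m for w, (m, _) in zip(weights, bank)) / total

forward = lambda m: 2.0 * m + 1.0  # toy stand-in for a 3D wave simulation
bank = stage1_build_bank(5000, forward, random.Random(0))
estimate = stage2_posterior_mean(bank, d_obs=forward(4.0), sigma=0.5)
```

All forward computation lives in stage 1; stage 2 is a cheap weighted average, which is what makes millisecond-scale evaluation plausible.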
NASA Astrophysics Data System (ADS)
Grevemeyer, I.; Arroyo, I. G.
2015-12-01
Earthquake source locations are routinely constrained using a global 1-D Earth model. However, the source location can then be associated with large uncertainties. This is certainly the case for earthquakes occurring at active continental margins, where thin oceanic crust subducts below thick continental crust and hence large lateral changes in crustal thickness occur as a function of distance to the deep-sea trench. Here, we conducted a case study of the 2002 Mw 6.4 Osa thrust earthquake in Costa Rica, which was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred fairly trenchward of most large earthquakes along the Middle America Trench off central Costa Rica. The earthquake sequence occurred while a temporary network of ocean-bottom hydrophones and land stations was deployed 80 km to the northwest. By adding readings from permanent Costa Rican stations, we obtain uncommon P-wave coverage of a large subduction zone earthquake. We relocated this catalog with a nonlinear probabilistic approach, using a 1-D and two 3-D P-wave velocity models. The 3-D models were derived either from 3-D tomography based on onshore stations or from an a priori model based on seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data, the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity model derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interplate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.
The Aleutian Tsunami of 1946: the Compound Earthquake-Landslide Source and Near-Field Modeling
NASA Astrophysics Data System (ADS)
Fryer, G. J.; Yamazaki, Y.; McMurtry, G. M.
2015-12-01
The tsunami of April 1, 1946, spread death and destruction throughout the Pacific from the Aleutians to Antarctica, and produced exceptional runup, 42 m, at Scotch Cap on Unimak Island in the near field. López & Okal (2006) showed that the triggering earthquake was at least MW = 8.6, large enough to explain the far-field tsunami but still requiring a landslide or other secondary source to achieve the local runup. No convincing landslide was found until von Huene et al. (2014) merged all available multibeam data and reprocessed an old multichannel line to show that a feature on the Aleutian Terrace they call Lone Knoll (LK) is the displaced block of a translational slide. From ²¹⁰Pb dating of push cores taken near the summit of LK, we find that a disruption in sedimentation occurred in 1946 at one site, but sedimentation was not disrupted at another site nearby. We infer that the slide block moved coherently at a speed close to the threshold for erosion of the hemipelagic clays. From GLORIA sidescan, Fryer et al. (2004) had earlier tentatively identified LK as a landslide deposit, but if the tsunami crossed the shallow Aleutian Shelf at the long-wave speed, that landslide had to extend up to the shelf edge to satisfy the known 48-min travel time to Scotch Cap. The resulting landslide was enormous, and a multibeam survey later in 2004 showed that it could not exist. The slide imaged by von Huene et al. is far smaller, with a headwall 30 km downslope at a depth of 3 km. The greater distance demands that the tsunami travel much faster across the shelf. The huge runup, however, suggests that wave height was a significant fraction of the water depth (only 80 m), so the tsunami probably crossed the Aleutian Shelf as a bore. From modeling the landslide-generated tsunami with a shock-capturing dispersive code we infer that it did indeed cross the shelf as a bore traveling at roughly twice the long-wave speed. We are still exploring the dependence of the tsunami on slide
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Nakamura, T.; Miyoshi, T.
2015-12-01
The May 30, 2015 Bonin Islands, Japan, earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique of Kikuchi & Kanamori (1991) to obtain the slip distribution on the source fault of this earthquake, in the same manner as our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations at epicentral distances between 30 and 90 degrees. The broadband original data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with side length 50 km. We obtain source rupture models for both nodal planes, with high dip angle (74 degrees) and low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source propagation models using the spectral-element method (Komatitsch & Tromp, 2001). We use the new Earth Simulator system at JAMSTEC to compute the synthetic seismograms. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation, accurate at periods of 3.8 seconds and longer, requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake really did occur below the 660 km discontinuity. In our present forward simulations, the source rupture model with the low-angle fault dip appears to better explain the observations.
NASA Astrophysics Data System (ADS)
Chan, Chung-Han
2016-09-01
This study provides some new insights into earthquake forecasting models applied to regions with subduction systems, including the depth component of the forecasting grids and time-dependent factors. To demonstrate the importance of the depth component, a forecasting approach that incorporates three-dimensional grids is compared with an approach using two-dimensional cells. Through application to the two subduction regions, Ryukyu and Kanto, it is shown that the approaches with three-dimensional grids always demonstrate better forecasting ability. I thus confirm the importance of depth dependency for forecasting, especially for applications to a subduction environment or a region with non-vertical seismogenic structures. In addition, this study discusses the role of time-dependent factors in forecasting models and concludes that time dependency only becomes crucial during the period of significant seismicity-rate change that follows a large earthquake.
NASA Astrophysics Data System (ADS)
Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.
2004-12-01
The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
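The closing classroom point, that interevent times of a Poisson process follow an exponential distribution, can be demonstrated in a few lines. This is a generic illustration, not the authors' earthquake or blockquake data:

```python
import random

def interevent_times(rate_per_year, n, seed=0):
    """Simulate n interevent times of a Poisson process: the waiting
    times are exponentially distributed with mean 1/rate."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_per_year) for _ in range(n)]

times = interevent_times(rate_per_year=2.0, n=20_000)
mean_wait = sum(times) / len(times)  # close to 1/2 year for rate 2/year
# A histogram of `times` decays exponentially, which is the curve the
# students compare against their interevent-time histograms.
```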
NASA Astrophysics Data System (ADS)
Gulen, L.; EMME WP2 Team*
2011-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project covers Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey forms a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project consists of three main modules: hazard, risk, and socio-economic modules. The EMME project uses a PSHA approach for earthquake hazard, and the existing source models have been revised or modified by the incorporation of newly acquired data. The aspect that most distinguishes the EMME project from previous ones is its dynamic character, accomplished by the design of a flexible and scalable database that permits continuous update, refinement, and analysis. An up-to-date earthquake catalog of the Middle East region has been prepared and declustered by the WP1 team. The EMME WP2 team has prepared a digital active fault map of the Middle East region in ArcGIS format. We have constructed a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw ≥ 5.5. The EMME project database includes information on the geometry and rates of movement of faults in a "Fault Section Database", which contains 36 entries for each fault section. The "Fault Section" concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far 6,991 fault sections have been defined, and 83,402 km of faults are fully parameterized in the Middle East region. A separate "Paleo-Sites Database" includes information on the timing and amounts of fault
NASA Astrophysics Data System (ADS)
Pollock, S. J.; Welliver, M. C.
2000-01-01
Parity-violating electron scattering (PVES) could provide a unique means to determine spatial neutron distributions and their moments in heavy nuclei. Knowledge of the neutron distribution is of fundamental interest for nuclear structure models, and the first moment is of special interest for atomic parity experiments. We have examined what could be learned from a hypothetical measurement of the parity-violating asymmetry in elastic electron scattering on barium and lead nuclei (both spin-0 and N≠Z). We find that a single measurement of this quantity could determine the rms neutron radius to within a couple of percent, to be compared with the 5-10% existing uncertainties. We also compute the quantitative connection to atomic parity nonconservation, and the resulting limits on possible low energy Standard Model tests which could be achieved.
Stein, Ross S.
2007-01-01
Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M = 6.65 (A = 537 km²).
NASA Astrophysics Data System (ADS)
Gunawan, Endra; Meilano, Irwan; Abidin, Hasanuddin Z.; Hanifa, Nuraini Rahma; Susilo
2016-03-01
We investigate three available coseismic fault models of the 2006 M7.8 Java tsunami earthquake, as reported by Fujii and Satake (2006), Bilek and Engdahl (2007), and Yagi and Fukahata (2011), in order to find the best coseismic model based on mechanisms of postseismic deformation associated with viscoelastic relaxation and afterslip. We construct a preliminary rheological model using vertical data, obtaining a final rheological model after we include the horizontal and vertical components of afterslip in the further process. Our analysis indicates that the coseismic fault model of Fujii and Satake (2006) provides a better and more realistic result for a rheological model than the others. The best-fit rheological model calculated using the coseismic fault model of Fujii and Satake (2006) comprises a 60 ± 5 km elastic layer thickness with a viscosity of 2.0 ± 1.0 × 10¹⁷ Pa s in the asthenosphere. Also, we find that afterslip is dominant in the horizontal displacements, while viscoelastic relaxation is dominant in the vertical displacement. Additionally, in comparison to the coseismic displacement found through GPS data taken at BAKO station, our calculation indicates that Fujii and Satake (2006) modeled coseismic displacements with less GPS data misfit than the other examined models. Finally, we emphasize that our methodology for evaluating the best coseismic fault model can satisfactorily explain the postseismic deformation of the 2006 Java tsunami earthquake.
Aagaard, Brad T.; Graves, Robert W.; Schwartz, David P.; Ponce, David A.; Graymer, Russell W.
2010-01-01
We construct kinematic earthquake rupture models for a suite of 39 Mw 6.6-7.2 scenario earthquakes involving the Hayward, Calaveras, and Rodgers Creek faults. We use these rupture models in 3D ground-motion simulations as discussed in Part II (Aagaard et al., 2010) to provide detailed estimates of the shaking for each scenario. We employ both geophysical constraints and empirical relations to provide realistic variation in the rupture dimensions, slip heterogeneity, hypocenters, rupture speeds, and rise times. The five rupture lengths include portions of the Hayward fault as well as combined rupture of the Hayward and Rodgers Creek faults and the Hayward and Calaveras faults. We vary rupture directivity using multiple hypocenters, typically three per rupture length, yielding north-to-south rupture, bilateral rupture, and south-to-north rupture. For each rupture length and hypocenter, we consider multiple random distributions of slip. We use two approaches to account for how aseismic creep might reduce coseismic slip. For one subset of scenarios, we follow the slip-predictable approach and reduce the nominal slip in creeping regions according to the creep rate and time since the most recent earthquake, whereas for another subset of scenarios we apply a vertical gradient to the nominal slip in creeping regions. The rupture models include local variations in rupture speed and use a ray-tracing algorithm to propagate the rupture front. Although we are not attempting to simulate the 1868 Hayward fault earthquake in detail, a few of the scenarios are designed to have source parameters that might be similar to this historical event.
Gorkha earthquake-induced landslides and dammed lakes: Evolution and outburst modeling
NASA Astrophysics Data System (ADS)
Shugar, D. H.; Immerzeel, W.; Wanders, N.; Kargel, J. S.; Leonard, G. J.; Haritashya, U. K.; Collins, B. D.
2015-12-01
On 25 April 2015, the Gorkha Earthquake (Mw 7.8) struck Nepal, generating thousands of landslides in Nepal, Tibet (China), and India. While the majority of these hazards were triggered co-seismically, many are considered secondary effects occurring during the weeks following the main shock, based on high-resolution WorldView satellite imagery. Here we report on a series of shallow, post-seismic landslides into the upper Marsyangdi River in the Annapurna region of the central Nepal Himalayas. These landslides constricted and blocked the river, causing impoundments that presented acute flood risks to communities downstream. On April 27, two days following the main shock, ~4.7 × 10⁴ m³ of water was impounded behind a series of small constrictions. By May 28, the total volume of impounded water had increased to ~6.4 × 10⁵ m³. The downstream flood risk was especially significant in the event of a domino-like cascade of dam breaches. We examine the timing, distribution and evolution of the landslide-dammed lakes and quantify the risk of inundation scenarios to downstream communities with a hydrological model. The model uses a fully kinematic wave simulation at 30 m spatial and 2 s temporal resolution to resolve the height, timing and volume of a possible outburst flood wave. Our modeling shows that a rapid dam burst involving only the lowest, largest lake would increase water levels at the nearest village of Lower Pisang, ~2 km downstream, by >7 m in a matter of minutes. Approximately 70 km downstream, the flood wave would be mostly attenuated, raising water levels only tens of centimeters. Fortunately, at the time of writing, no flood had occurred.
Finite-Source Modeling to Constrain Complex Fault Geometry of the South Napa Earthquake
NASA Astrophysics Data System (ADS)
Wooddell, K. E.; Dreger, D. S.; Huang, M. H.
2015-12-01
The August 24, 2014 South Napa Mw 6.0 earthquake ruptured to the north along the West Napa fault, producing strong shaking in the deep sediments (~1000 m) of the Napa Valley. Aftershock locations define the roughly north-northwest striking fault plane, but details regarding the dip of the fault and the strike of its northernmost section remain less certain. Preliminary inversions of broadband data from the BDSN, PBO GPS, and InSAR observations show a 13 km long rupture initiating at a depth of 11 km and propagating unilaterally to the northwest and up-dip (Dreger et al., 2015); however, this kinematic finite-source model assumes a single planar fault. Field reconnaissance and LiDAR imaging reveal a complex system of sub-parallel faults at the surface. The main western branch, which had the largest surface offsets, shows a marked change in strike approximately 9 km north of the epicenter (Bray et al., 2014). In addition, there is evidence of surface offsets on eastern branches of the West Napa fault system (Bray et al., 2014). Whether this complexity persists at depth, or how the faults may link there, is an open question. However, based on the limited preliminary dataset of Dreger et al. (2015), we conclude that the fault dip is nearly vertical. In this study we investigate complex fault models that account for the changing strike, and en echelon west-dipping faults that may join at hypocentral depth, by means of finite-source inversion of regional broadband, local strong-motion, and GPS and InSAR geodetic data sets. We build on the data set used in the preliminary model of Dreger et al. (2015) by incorporating local strong-motion waveform data as well as the static displacement dataset of Funning (2015). For the strong-motion data, we use stations located on "rock" (Vs30 > 700 m/s). Green's functions for the "rock" stations are computed with a modified Gil7 model to account for the lower velocity in the upper 30 meters. Preliminary
NASA Astrophysics Data System (ADS)
McBride, S.; Tilley, E. N.; Johnston, D. M.; Becker, J.; Orchiston, C.
2015-12-01
This research evaluates the public earthquake education information produced prior to the Canterbury earthquake sequence (2010-present) and examines communication lessons to create recommendations for improving the implementation of such campaigns in future. The research comes from the practitioner perspective of someone who worked on these campaigns in Canterbury prior to the earthquake sequence and who was also the Public Information Manager Second in Command during the earthquake response in February 2011. Documents addressing seismic risk that were created prior to the earthquake sequence were analyzed, using a "best practice matrix" created by the researcher, for how closely they aligned with best-practice academic research. Readability tests and word counts were also employed to assist with triangulation of the data, as was practitioner involvement. This research also outlines the lessons learned by practitioners and explores their experiences regarding the creation of these materials and how they perceive them now, given all that has happened since the inception of the booklets. The findings showed that these documents lacked many of the attributes of best practice. The overly long, jargon-filled text contained few positive outcome-expectancy messages and probably failed to persuade readers that earthquakes were a real threat in Canterbury. Paradoxically, the booklets may even have fostered fatalism among those who read them. While the overall intention was positive, namely for scientists to explain earthquakes, tsunami, landslides and other risks so as to encourage the public to prepare for these events, the implementation could be greatly improved. The final component of the research highlights points of improvement for more successful campaigns in future. The importance of preparedness and science information campaigns can be not only in preparing the population but also into development of
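The readability testing mentioned above can be illustrated with one widely used metric, the Flesch Reading Ease score. This is a generic sketch, not the study's actual instrument: the syllable counter is a rough vowel-group heuristic, and both example texts are invented.

```python
import re

def count_syllables(word):
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Higher scores mean easier text; long, jargon-heavy prose scores low."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

# Invented examples: a terse preparedness message vs. jargon-heavy prose.
plain = "Drop. Cover. Hold on."
jargon = ("Comprehensive seismological preparedness documentation necessitates "
          "extraordinarily meticulous interdisciplinary stakeholder collaboration.")
print(round(flesch_reading_ease(plain), 1), round(flesch_reading_ease(jargon), 1))
```

Scores like these make the "overly long, jargon-filled" finding quantifiable: short common words in short sentences score high, polysyllabic jargon scores very low.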
Phase Transformations and Earthquakes
NASA Astrophysics Data System (ADS)
Green, H. W.
2011-12-01
Phase transformations have been cited as responsible for, or at least involved in, "deep" earthquakes for many decades (although the concept of "deep" has varied). In 1945, PW Bridgman laid out in detail the string of events/conditions that would have to be achieved for a solid/solid transformation to lead to a faulting instability, although he expressed pessimism that the full set of requirements would be simultaneously achieved in nature. Raleigh and Paterson (1965) demonstrated faulting during dehydration of serpentine under stress and suggested dehydration embrittlement as the cause of intermediate-depth earthquakes. Griggs and Baker (1969) produced a thermal runaway model of a shear zone under constant stress, culminating in melting, and proposed such a runaway as the origin of deep earthquakes. The discovery of Plate Tectonics in the late 1960s established the conditions (subduction) under which Bridgman's requirements for earthquake runaway in a polymorphic transformation could be met in nature, and Green and Burnley (1989) found such an instability during the transformation of metastable olivine to spinel. Recent seismic correlation of intermediate-depth-earthquake hypocenters with predicted conditions of dehydration of antigorite serpentine, together with the discovery of metastable olivine in four subduction zones, suggests strongly that dehydration embrittlement and transformation-induced faulting are the underlying mechanisms of intermediate and deep earthquakes, respectively. The results of recent high-speed friction experiments and analysis of natural fault zones suggest that similar processes likely occur during many shallow earthquakes after initiation by frictional failure.
NASA Astrophysics Data System (ADS)
Böse, Maren; Graves, Robert W.; Gill, David; Callaghan, Scott; Maechling, Philip J.
2014-09-01
Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0-10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a `proof of concept', being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (˜20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least `moderate', `strong' or `very strong' shaking in the Los Angeles (LA) basin
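The regression step above can be illustrated with a tiny kernel-based, data-driven predictor. The sketch below uses RBF kernel ridge regression as a simpler stand-in for the paper's support vector regression; the training pairs, feature choice, and hyperparameters are invented for illustration and are not CyberShake values.

```python
import math

def rbf(x, y, gamma=0.5):
    # Gaussian (RBF) kernel between two feature vectors.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_krr(X, y, lam=1e-3):
    # Kernel ridge: alpha = (K + lam*I)^-1 y; predict f(x) = sum_i alpha_i k(x, x_i).
    n = len(X)
    K = [[rbf(X[i], X[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, X))

# Invented training set: (magnitude, normalized rupture-ratio proxy) -> MMI.
X = [(6.5, 0.2), (7.0, 0.5), (7.5, 1.0), (8.0, 0.3), (8.5, 1.5)]
y = [6.0, 6.5, 6.0, 8.5, 7.0]
predict = fit_krr(X, y)
print(round(predict((8.0, 0.3)), 1))
```

Like the SVR models in the abstract, nothing here enforces a pre-defined functional form: the mapping from source parameters to intensity is learned entirely from the (here synthetic) training pairs.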
Seismotectonic Models of the Three Recent Devastating SCR Earthquakes in India
NASA Astrophysics Data System (ADS)
Mooney, W. D.; Kayal, J.
2007-12-01
During the last decade, three devastating earthquakes, the Killari 1993 (Mb 6.3), Jabalpur 1997 (Mb 6.0) and the Bhuj 2001 (Mw 7.7), occurred in the Stable Continental Region (SCR) of Peninsular India. First, the September 30, 1993 Killari earthquake (Mb 6.3) occurred in the Deccan province of central India, in the Latur district of Maharashtra state. The local geology in the area is obscured by the late Cretaceous-Eocene basalt flows referred to as the Deccan traps. This makes it difficult to recognize geological surface faults that could be associated with the Killari earthquake. The epicentre was reported at 18.09°N and 76.62°E, and the focal depth of 7 +/- 1 km was precisely estimated by waveform inversion (Chen and Kao, 1995). The maximum intensity reached VIII, and the earthquake caused a loss of about 10,000 lives and severe damage to property. The May 22, 1997 Jabalpur earthquake (Mb 6.0), with epicentre at 23.08°N and 80.06°E, is a well studied earthquake in the Son-Narmada-Tapti (SONATA) seismic zone. A notable aspect of this earthquake is that it was the first significant event in India to be recorded by the 10 broadband seismic stations established in 1996 by the India Meteorological Department (IMD). The focal depth, well estimated using the "converted phases" on the broadband seismograms, was placed in the lower crust at 35 +/- 1 km, similar to moderate earthquakes reported from the ancient Amazonas rift system in the SCR of South America. The maximum intensity of the Jabalpur earthquake reached VIII on the MSK scale, and this earthquake killed about 50 people in the Jabalpur area. Finally, the Bhuj earthquake (Mw 7.7) of January 26, 2001 in the Gujarat state, northwestern India, was felt across the whole country and killed about 20,000 people. The maximum intensity level reached X. The epicenter is reported at 23.40°N and 70.28°E, with a well estimated focal depth of 25 km. A total of about
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
NASA Astrophysics Data System (ADS)
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models, to accelerate rock physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, which include data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in
NASA Astrophysics Data System (ADS)
Richter, Tom; Sens-Schönfelder, Christoph; Kind, Rainer; Asch, Günter
2014-06-01
We report on earthquake- and temperature-related velocity changes in high-frequency autocorrelations of ambient noise data from seismic stations of the Integrated Plate Boundary Observatory Chile project in northern Chile. Daily autocorrelation functions are analyzed over a period of 5 years with passive image interferometry. A short-term velocity drop recovering after several days to weeks is observed for the Mw 7.7 Tocopilla earthquake at most stations. At the two stations PB05 and PATCX, we observe a long-term velocity decrease recovering over the course of around 2 years. While station PB05 is located in the rupture area of the Tocopilla earthquake, this is not the case for station PATCX. Station PATCX is situated in an area influenced by salt sediments in the vicinity of Salar Grande and shows a heightened sensitivity to ground acceleration and to periodic surface-induced changes. Owing to this high sensitivity, we observe a velocity response to several regional earthquakes at PATCX, and we can show for the first time a linear relationship between the amplitude of velocity drops and peak ground acceleration for data from a single station. This relationship does not hold when comparing different stations, due to the different sensitivities of the station environments. Furthermore, we observe periodic annual velocity changes at PATCX. Analyzing data at a temporal resolution below 1 day, we are able to identify changes with a period of 24 h as well. The characteristics of the seismic velocity variations with annual and daily periods indicate an atmospheric origin, which we confirm with a model based on thermally induced stress. This comprehensive model explains the lapse-time dependence of the temperature-related seismic velocity changes, involving the distribution of temperature fluctuations; the relationship between temperature, stress and velocity change; and autocorrelation sensitivity kernels.
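The velocity-change measurement in passive image interferometry can be sketched as a stretching analysis: a small homogeneous velocity change dv/v rescales the lapse-time axis of the autocorrelation, so grid-searching the stretch factor that best maps a reference trace onto the current trace recovers dv/v. The decaying oscillation below is a synthetic stand-in for a real autocorrelation coda; all values are invented.

```python
import math

def corrcoef(a, b):
    # Zero-lag normalized correlation between two equal-length traces.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def ref(t):
    # Synthetic decaying oscillation standing in for an autocorrelation coda.
    return math.exp(-0.05 * t) * math.cos(2.0 * math.pi * 1.5 * t)

n, dt = 400, 0.05
true_dvdv = 0.002  # imposed 0.2% velocity increase, to be recovered below
current = [ref(i * dt * (1.0 + true_dvdv)) for i in range(n)]

# Grid-search the stretch factor eps that best maps the reference onto the
# current trace; the best-fitting eps is the dv/v estimate.
candidates = [e / 10000.0 for e in range(-50, 51)]
best = max(candidates,
           key=lambda eps: corrcoef(current,
                                    [ref(i * dt * (1.0 + eps)) for i in range(n)]))
print(best)
```

The same search applied day by day yields the dv/v time series in which coseismic drops and periodic daily/annual variations appear.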
NASA Astrophysics Data System (ADS)
Okumura, K.; Rockwell, T. K.; Akciz, S. O.; Wechsler, N.; Aksoy, E. M.; Ishimura, D.
2009-12-01
the past five surface ruptures on a 6th century channel. Together, these data argue for fairly characteristic slip for the past five earthquakes. However, the timing between events ranges from less than two centuries to more than six centuries, and there is no apparent correspondence between the lapse time and the amount of ensuing displacement. These observations argue against both the time- and slip-predictable models of earthquake recurrence.
NASA Astrophysics Data System (ADS)
Domke, H.
2016-11-01
The F- and K-integrals are used to transform the zeroth azimuthal Fourier component of the radiative transfer equation for conservative multiple scattering of polarized light in vertically inhomogeneous plane atmospheres into an equivalent transfer equation with a modified phase matrix corresponding to non-conservative pseudo-scattering. With symmetry properties of the original phase matrix to be retained, the modification generally includes two arbitrary scalar functions depending on optical depth. It is shown that the surface Green's function matrices for conservative scattering can be expressed in terms of surface Green's function matrices for non-conservative pseudo-scattering. Linear constraints are obtained for surface Green's functions for conservative scattering as well as for particular forms of non-conservative pseudo-scattering. Explicit formulae are derived for retrieving the solutions of standard problems like diffuse reflection and transmission, and also for Milne's problem, for conservative inhomogeneous atmospheres by means of appropriate solutions for non-conservative multiple pseudo-scattering. Numerical experiments performed by solving the nonlinear integral equations for the diffuse reflection functions of homogeneous semi-infinite atmospheres demonstrate that an acceleration of iterations by orders of magnitude can be achieved when the transformation to equivalent multiple pseudo-scattering is applied.
Noether symmetries of the nonconservative and nonholonomic systems on time scales
NASA Astrophysics Data System (ADS)
Cai, PingPing; Fu, JingLi; Guo, YongXin
2013-05-01
In this paper we give a new method to investigate Noether symmetries and conservation laws of nonconservative and nonholonomic mechanical systems on time scales T, which unifies the Noether theories for the continuous and the discrete nonconservative and nonholonomic systems. Firstly, the exchange relationships between the isochronous variation and the delta derivatives, as well as the relationships between the isochronous variation and the total variation on time scales, are obtained. Secondly, using these exchange relationships, Hamilton's principle is presented for nonconservative systems with delta derivatives, and the Lagrange equations of such systems are obtained. Thirdly, based on the quasi-invariance of the Hamiltonian action of the systems under infinitesimal transformations of the time and generalized coordinates, Noether's theorem and the conservation laws for nonconservative systems on time scales are given. Fourthly, the d'Alembert-Lagrange principle with delta derivatives is presented, and the Lagrange equations of nonholonomic systems with delta derivatives are obtained. In addition, Noether's theorems and the conservation laws for nonholonomic systems on time scales are also obtained. Lastly, we present a new version of Noether's theorems for discrete systems. Several examples are given to illustrate the application of our results.
Stein, R.S.; Yeats, R.S.
1989-06-01
Seismologists generally look for earthquakes to happen along visible fault lines, e.g., the San Andreas fault. The authors maintain that another source of dangerous quakes has been overlooked: the release of stress along a fault that is hidden under a fold in the earth's crust. The paper describes the differences between an earthquake that occurs on a visible fault and one that occurs under an anticline, and warns that Los Angeles' greatest earthquake threat may come from a small quake originating under downtown Los Angeles rather than from a larger earthquake occurring 50 miles away on the San Andreas fault.
NASA Astrophysics Data System (ADS)
Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard
2016-04-01
Over the days following the 2015 Nepal earthquake, rapid loss estimates of deaths, economic loss and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This modelling relied on historic losses from other Nepal earthquakes as well as detailed socioeconomic data and earthquake loss information via CATDAT. The modelled results were very close to the final figures for the 2015 earthquake of around 9000 deaths and a direct building loss of ca. 3 billion (a). The process undertaken to produce these loss estimates is described, along with its potential for rapid post-event analysis of reconstruction costs from future Nepal earthquakes. The reconstruction cost and death toll model is then used as the base model for examining the effect of spending money on earthquake retrofitting of buildings versus complete reconstruction of buildings. This is undertaken for future events using empirical statistics from past events along with further analytical modelling. The effect of investment versus the timing of a future event is also explored. Preliminary low-cost options (b), along the lines of retrofitting studies for other countries (ca. 100), are examined versus the option of different building typologies in Nepal, as well as investment in various sectors of construction. The effect of public vs. private capital expenditure post-earthquake is also explored as part of this analysis, as well as spending on other components outside of earthquakes. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf
NASA Astrophysics Data System (ADS)
Abe, S.; Place, D.; Mora, P.
2001-12-01
The particle-based lattice solid model has been used successfully as a virtual laboratory to simulate the dynamics of faults, earthquakes and gouge processes. The phenomena investigated with the lattice solid model range from the stick-slip behavior of faults, localization phenomena in gouge and the evolution of stress correlation in multi-fault systems, to the influence of rate- and state-dependent friction laws on the macroscopic behavior of faults. However, the results from those simulations also show that in order to take the next step towards more realistic simulations it will be necessary to use three-dimensional models containing a large number of particles with a range of sizes, thus requiring a significantly increased amount of computing resources. Whereas the computing power provided by a single processor can be expected to double every 18 to 24 months, parallel computers which provide hundreds of times that computing power are available today, and there are several efforts underway to construct dedicated parallel computers and associated simulation software systems for large-scale earth science simulation (e.g. the Australian Computational Earth Systems Simulator [1] and the Japanese Earth Simulator [2]). In order to use the computing power made available by these large parallel computers, a parallel version of the lattice solid model has been implemented. To guarantee portability over a wide range of computer architectures, a message passing approach based on MPI has been used in the implementation. Particular care has been taken to eliminate serial bottlenecks in the program, thus ensuring high scalability on systems with a large number of CPUs. Measures taken to achieve this objective include the use of asynchronous communication between the parallel processes and the minimization of communication with, and work done by, a central ``master'' process. Benchmarks using models with up to 6 million particles on a parallel computer with 128 CPUs show that the
NASA Astrophysics Data System (ADS)
Leonard, L. J.; Hyndman, R. D.; Mazzotti, S.
2002-12-01
Coastal estuaries from N. California to central Vancouver Island preserve evidence of the subsidence that has occurred in Holocene megathrust earthquakes at the Cascadia subduction zone (CSZ). Seismic hazard assessments in Cascadia are primarily based on the rupture area of 3-D dislocation models constrained by geodetic data. It is important to test the model by comparing predicted coseismic subsidence with that estimated in coastal marsh studies. Coseismic subsidence causes the burial of soils that are preserved as peat layers in the tidal-marsh stratigraphy. The most recent (1700) event is commonly marked by a peat layer overlain by intertidal mud, often with an intervening sand layer inferred as a tsunami deposit. Estimates of the amount of coseismic subsidence are made using two methods. (1) Contrasts in lithology, macrofossil content, and microfossil assemblages allow elevation changes to be deduced via modern marsh calibrations. (2) Measurements of the subsurface depth of the buried soil, corrected for eustatic sea level rise and interseismic uplift (assessed using a geodetically-constrained elastic dislocation model), provide independent estimates. Further corrections may include postglacial rebound and local tectonics. An elastic dislocation model is used to predict the expected coseismic subsidence, for a magnitude 9 earthquake (assuming 16 m uniform rupture), at the locations of geological subsidence estimates for the 1700 event. From preliminary comparisons, the correlation is remarkably good, corroborating the dislocation model rupture. The model produces a similar N-S trend of coastal subsidence, and for parts of the margin, e.g. N. Oregon and S. Washington, subsidence of similar magnitude (+/- ~ 0.25 m). A significant discrepancy (up to ~ 1.0 m) exists elsewhere, e.g. N. California, S. Oregon, and central Vancouver Island. The discrepancy may arise from measurement uncertainty, uncertainty in the elastic model, the assumption of elastic rather than
Detection and Modeling of the Tsunami Generated by 2013 Okhotsk Deep Focus Earthquake
NASA Astrophysics Data System (ADS)
Williamson, A.; Newman, A. V.; Okal, E. A.
2015-12-01
The May 24, 2013 moment magnitude (MW) 8.3 Sea of Okhotsk deep earthquake is the largest deep-focus earthquake on record. This event is the only great (Mw > 8.0) earthquake in the past two decades to rupture at a depth greater than 300 km, and the only deep earthquake to be detected by modern geodetic tools such as DART pressure sensors, continuous GPS, and GRACE. Continuous GPS stations along the Kamchatka Peninsula and Kuril Islands recorded sub-centimeter static displacements, including subsidence across the peninsula, implying a spatially extensive region of uplift within the Sea of Okhotsk that likely generated a low-amplitude but long-wavelength tsunami wave. We use water-column height changes recorded at 10 regional DART pressure sensors to evaluate the detectability and usability of these tsunami sensors in identifying tsunami waves from such deep earthquakes. Of the 10 sites, only 2 were triggered by the event, enabling capture of high-rate (one sample per minute) pressure data. The remaining sites reported at the background rate of one sample per 15 minutes. From these sensors we observed sub-centimeter tsunami waves, in general agreement with synthetics computed using normal mode theory, following the framework of Ward (1980). Additionally, despite remaining at the low-frequency background rate, we were able to identify wave periods around 9000 s across multiple DART stations. We will report on our observations of the long-period tsunami waves from these sensors, and our analysis of how they compare to both the reported and our best-fit determinations of the Okhotsk earthquake focal mechanism and location.
NASA Astrophysics Data System (ADS)
Hall, L.; Robinson, T.; Duffy, B. G.; Hampton, S.; Gravley, D. M.
2014-12-01
Coseismic landslide modeling of the Fiordland region of New Zealand explores potential triggers for the Green Lake rock avalanche (GLRA). The GLRA, which occurred post-deglaciation ~14,000 years ago, contains 27 km3 of debris, making it the largest identified landslide in New Zealand and one of the largest on Earth. Due to its large volume, the GLRA was most likely coseismically triggered. The only work to date suggests MM IX-X shaking from an Alpine Fault event initiated collapse. However, as the Alpine Fault is >80 km from the GLRA, such high shaking intensities seem improbable. Coseismic landslide susceptibility was thus modeled using fuzzy logic and GIS for a number of potential earthquake scenarios to identify a more likely trigger. Existing coseismic landslide inventories for the 2003 and 2009 Fiordland earthquakes were used to determine relationships between landslide occurrence, slope angle, proximity to faults and streams, slope position, and shaking intensity. Slope position and proximity to streams were not found to correlate with the formation of landslides, leaving shaking intensity, slope angle, and proximity to faults to be used in the final models. Modeled earthquake scenarios include an M8.0 southern Alpine Fault rupture, an M8.0 Puysegur Trench earthquake, and an M7.0 earthquake on the nearby Hauroko Fault. Coseismic landslide susceptibility is highest at Green Lake for the Hauroko Fault earthquake, reaching values of >0.9, compared to ~0.5 and ~0.6 for the Alpine Fault and Puysegur Trench earthquakes. Consequently, we infer that the GLRA was potentially initiated by a large (M~7) earthquake on the Hauroko Fault and not an M8 Alpine Fault earthquake. This suggests that seismic hazard in the Southern Alps is not limited to the plate boundary.
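The fuzzy-logic overlay used in this kind of GIS susceptibility modeling can be sketched with the standard fuzzy-gamma operator, which blends fuzzy AND (algebraic product) and fuzzy OR (algebraic sum). The membership values below are invented for illustration and are not taken from the study.

```python
def fuzzy_and(memberships):
    # Algebraic product: all factors must be favorable.
    p = 1.0
    for m in memberships:
        p *= m
    return p

def fuzzy_or(memberships):
    # Algebraic sum: any favorable factor raises the result.
    p = 1.0
    for m in memberships:
        p *= (1.0 - m)
    return 1.0 - p

def fuzzy_gamma(memberships, gamma=0.9):
    # Standard GIS fuzzy-gamma overlay: OR^gamma * AND^(1 - gamma).
    return fuzzy_or(memberships) ** gamma * fuzzy_and(memberships) ** (1.0 - gamma)

# Hypothetical memberships at one grid cell (slope, shaking, fault proximity)
# for a nearby-fault scenario versus a distant-fault scenario:
near = fuzzy_gamma([0.9, 0.95, 0.9])   # steep, strong shaking, close to fault
far = fuzzy_gamma([0.9, 0.4, 0.2])     # same slope, weaker shaking, far away
print(round(near, 2), round(far, 2))
```

Evaluating such a combination cell by cell for each scenario is what yields scenario-dependent susceptibility values like the >0.9 versus ~0.5-0.6 contrast reported above.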
Source Model of the 2007 Bengkulu Earthquake Determined from Tsunami Waveform Analysis
NASA Astrophysics Data System (ADS)
Gusman, A.; Tanioka, Y.
2008-12-01
On September 12, 2007 at 11:10:26 UTC, an earthquake with a moment magnitude of 8.4 occurred off the west coast of Sumatra. The epicenter of the earthquake was located at 4.52°S, 101.374°E, about 130 km southwest of Bengkulu. This earthquake occurred in the Sumatra subduction zone, where at least two previous major events, in 1833 (M8.5-9) and 1797, ruptured the same plate interface. In this study, we estimate the slip distribution of the 2007 earthquake using tsunami waveforms. By comparing the result with the rupture areas of the previous two large earthquakes, the recurrence pattern of large earthquakes in this area can be understood in order to identify the source area of a future tsunamigenic earthquake. The tsunami waves generated by the earthquake were recorded by tide gauge stations around the Indian Ocean and by one DART buoy (Thailand Meteorological Department) deployed in the deep ocean northwest of Sumatra. We select tsunami waveforms recorded at Padang, at Cocos Islands, and on the DART buoy. The synthetic tsunami waveforms at those three locations are calculated by solving the nonlinear shallow water equations. With the observed and synthetic waveforms we calculate the slip distribution using a nonlinear inversion method with an iterative process. Over the rupture area, we define a fault segment 100 km wide by 250 km long and divide it into 10 subfaults. We use the single focal mechanism (strike = 327°, dip = 12°, rake = 144°) determined by the Global CMT solution for each subfault. The tsunami waveform records can be well explained by a slip distribution with the largest slip, of 9.4 m, located southwest of Pagai Selatan Island on the deeper part of the fault. Assuming a rigidity of 4 × 10^10 N m^-2, the total seismic moment obtained from the slip distribution is 3.65 × 10^21 N m (Mw = 8.3), which is consistent with the Global CMT seismic moment determination of 5.05 × 10^21 N m.
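The moment-to-magnitude conversion quoted at the end can be checked directly with the standard Hanks-Kanamori relation; the average-slip figure in the example is back-calculated from the quoted rigidity, fault dimensions, and moment purely for illustration.

```python
import math

def moment_magnitude(m0_newton_meters):
    # Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), with M0 in N m.
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def seismic_moment(rigidity_pa, area_m2, average_slip_m):
    # M0 = mu * A * D
    return rigidity_pa * area_m2 * average_slip_m

# The quoted rigidity (4e10 N/m^2) and 100 km x 250 km fault imply that the
# inverted moment of 3.65e21 N m corresponds to ~3.65 m of average slip:
m0 = seismic_moment(4.0e10, 100e3 * 250e3, 3.65)
print(moment_magnitude(m0))        # magnitude of the inverted moment
print(moment_magnitude(5.05e21))   # magnitude of the Global CMT moment
```

To one decimal place the two moments give Mw 8.3 and 8.4, consistent with the values reported above.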
NASA Astrophysics Data System (ADS)
Sasorova, Elena; Levin, Boris
2014-05-01
Over the course of the last century, cyclic increases and decreases of the Earth's seismic activity (SA) have been observed. Variations of SA for events with M>=7.0 from 1900 to the present were studied. Two subsets of the worldwide NEIC (USGS) catalog were used: USGS/NEIC from 1973 to 2012, and the catalog of significant worldwide earthquakes (2150 B.C. - 1994 A.D.) compiled by USGS/NEIC from the NOAA agency. A preliminary standardization of magnitudes and elimination of aftershocks from the list of events was performed. The entire period of observations was subdivided into 5-year intervals. The temporal distributions of earthquake (EQ) density and released energy density were calculated separately for the Southern Hemisphere (SH), for the Northern Hemisphere (NH), and for eighteen latitudinal belts: 90°-80°N, 80°-70°N, 70°-60°N, 60°-50°N and so on (the size of each belt is equal to 10°). The periods of SA were compared across the different latitudinal belts of the Earth. The peaks and decays of seismicity do not coincide in time for different latitudinal belts, especially for belts located in the NH and SH. The peaks and decays of SA for events with M>=8 were marked in the temporal distributions of the EQs for all studied latitudinal belts. The two-dimensional distributions (over latitude and time) of EQ density and released energy density show that the periods of amplification of SA are approximately 30-35 years. Next, we check for the existence of a nonrandom component in EQ occurrence between the NH and the SH. All events were placed on the time axis according to their origin time. We treat the EQs in the studied catalog as a sequence of events in which each event has one of two possible outcomes (occurrence in the NH or in the SH). A nonparametric run test was used to test the hypothesis that a nonrandom component exists in the examined sequence of
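The nonparametric run test referred to above is commonly the Wald-Wolfowitz runs test, which compares the observed number of runs in a two-outcome sequence with its expectation under randomness. The hemisphere sequences below are synthetic, constructed only to show the two extremes.

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test z-score for a two-outcome sequence
    (e.g. 'N'/'S' for events in the Northern/Southern Hemisphere, in
    origin-time order). Too few runs (clustering) gives z << 0, too many
    (over-alternation) gives z >> 0; |z| > 1.96 suggests a nonrandom
    component at the 5% level."""
    symbols = sorted(set(seq))
    assert len(symbols) == 2, "sequence must contain exactly two outcomes"
    n1 = sum(1 for s in seq if s == symbols[0])
    n2 = len(seq) - n1
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 1.0 + 2.0 * n1 * n2 / n
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

clustered = "N" * 20 + "S" * 20   # hemispheres active in long separate episodes
alternating = "NS" * 20           # strict event-by-event alternation
print(runs_test_z(clustered), runs_test_z(alternating))
```

A truly random interleaving of hemispheres would give |z| well below 1.96; either extreme above is rejected decisively.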
Finite-fault slip model of the 2011 Mw 5.6 Prague, Oklahoma earthquake from regional waveforms
Sun, Xiaodan; Hartzell, Stephen
2014-01-01
The slip model for the 2011 Mw 5.6 Prague, Oklahoma, earthquake is inferred using a linear least squares methodology. Waveforms of six aftershocks recorded at 21 regional stations are used as empirical Green's functions (EGFs). The solution indicates two large slip patches: one located around the hypocenter with a depth range of 3–5.5 km; the other located to the southwest of the epicenter with a depth range from 7.5 to 9.5 km. The total moment of the solution is estimated at 3.37 × 10^24 dyne cm (Mw 5.65). The peak slip and average stress drop for the source at the hypocenter are 70 cm and 90 bars, respectively, approximately one half the values for the Mw 5.8 2011 Mineral, Virginia, earthquake. The stress drop averaged over all areas of slip is 16 bars. The relatively low peak slip and stress drop may indicate an induced component in the origin of the Prague earthquake from deep fluid injection.
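The quoted Mw 5.65 can be reproduced from the moment in dyne cm with the Hanks-Kanamori relation in its original CGS form, a worked check that also makes the unit convention explicit.

```python
import math

def mw_from_dyne_cm(m0_dyne_cm):
    # Hanks & Kanamori (1979) with M0 in dyne cm (1 N m = 1e7 dyne cm):
    # Mw = (2/3) * log10(M0) - 10.7
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

m0 = 3.37e24              # dyne cm, the total moment from the inversion above
mw = mw_from_dyne_cm(m0)
print(round(mw, 2))
```

Mixing the dyne-cm moment with the SI-constant form of the relation (which subtracts 9.1 from log10 of the moment in N m) is a common source of ~0.03-unit discrepancies in reported magnitudes.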
Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt
2011-01-01
Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.
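The curvature constraint (encouraging the Laplacian of coseismic slip to equal the negative Laplacian of seismicity density) can be sketched as a regularized least-squares problem; all arrays below are invented 1-D stand-ins, not the Parkfield data:

```python
import numpy as np

n = 20
# 1-D discrete Laplacian (second-difference) operator as a curvature proxy
L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

G = np.random.default_rng(2).normal(size=(40, n))    # stand-in Green's functions
q = np.exp(-0.5 * ((np.arange(n) - 12) / 2.0) ** 2)  # smoothed seismicity density
d = G @ np.roll(q, -2)                               # synthetic geodetic data

# Augmented system: data misfit plus || L m + L q ||^2, so that
# curvature(slip) is encouraged to equal -curvature(seismicity density)
mu = 1.0                                             # constraint weight
A = np.vstack([G, mu * L])
b = np.concatenate([d, -mu * (L @ q)])
m, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(m, 2))                                # curvature-constrained slip
```

The sign convention makes slip peak between concentrations of seismicity, mirroring the observation that aftershocks outline the edges of slip patches.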
NASA Astrophysics Data System (ADS)
Sasmal, Sudipta; Chakrabarti, Sandip Kumar; Palit, Sourav; Chakraborty, Suman; Ghosh, Soujan; Ray, Suman
2016-07-01
We present the nature of perturbations in the propagation characteristics of Very Low Frequency (VLF) signals received at the Ionospheric & Earthquake Research Centre (IERC) (Lat. 22.50 ^{o}N, Long. 87.48 ^{o}E) during and prior to the strong earthquake in Nepal on 12 May 2015 at 12:50 pm local time (07:05 UTC), with a magnitude of 7.3 and a depth of 18 km, southeast of Kodari. The VLF signal emitted from the JJI transmitter (22.2 kHz) in Japan (Lat. 32.08 ^{o}N, Long. 130.83 ^{o}E) shows strong shifts of the sunrise and sunset terminator times towards nighttime beginning three to four days prior to the earthquake. The shift in terminator times is numerically simulated using the Long Wavelength Propagation Capability (LWPC) code. Electron density variation as a function of height is calculated for seismically quiet days using Wait's exponential profile, and it matches the IRI model. The perturbed electron density is calculated using the effective reflection height (h') and sharpness parameter (β), and the rate of ionization due to the earthquake is obtained from the equation of continuity for the ionospheric D-layer. We compute the ion production and recombination profiles during seismic and non-seismic conditions incorporating D-region ion chemistry processes, and calculate the unperturbed and perturbed electron density profiles and ionization rates at different heights, which match the exponential profile. During the seismic condition, in both cases, the rate of ionization and the electron density profile differ significantly from their normal values. We interpret this as a result of seismo-ionospheric coupling processes.
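The quiet-day profile referred to above is Wait's two-parameter exponential model of D-region electron density; a minimal sketch, with typical quiet daytime values of h' and β assumed for illustration:

```python
import math

def wait_electron_density(h_km, h_prime_km, beta_per_km):
    """Wait's exponential profile: electron density in m^-3 at height h_km
    for effective reflection height h' (km) and sharpness beta (km^-1)."""
    return (1.43e13
            * math.exp(-0.15 * h_prime_km)
            * math.exp((beta_per_km - 0.15) * (h_km - h_prime_km)))

# Typical quiet daytime parameters: h' ~ 74 km, beta ~ 0.3 km^-1 (assumed values)
for h in (70.0, 74.0, 80.0):
    print(f"h = {h:5.1f} km  Ne = {wait_electron_density(h, 74.0, 0.3):.3e} m^-3")
```

Seismically perturbed conditions are then represented by shifting h' and β, and the implied ionization rate follows from the D-layer continuity equation, as the abstract describes.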
NASA Astrophysics Data System (ADS)
Suryaputra, I. G. N. A.; Santos, I. R.; Huettel, M.; Burnett, W. C.; Dittmar, T.
2015-11-01
The role of submarine groundwater discharge (SGD) in releasing fluorescent dissolved organic matter (FDOM) to the coastal ocean, and the possibility of using FDOM as a proxy for dissolved organic carbon (DOC), was investigated in a subterranean estuary in the northeastern Gulf of Mexico (Turkey Point, Florida). FDOM was continuously monitored for three weeks in shallow beach groundwater and in the adjacent coastal ocean. Radon (^{222}Rn) was used as a natural groundwater tracer. FDOM and DOC correlated in groundwater and seawater samples, implying that FDOM may be a proxy for DOC in waters influenced by SGD. A mixing model using salinity as a seawater tracer revealed FDOM production in the high-salinity region of the subterranean estuary. This production was probably a result of infiltration and transformation of labile marine organic matter in the beach sediments. The non-conservative FDOM behavior in this subterranean estuary differs from most surface estuaries, where FDOM typically behaves conservatively. At the study site, fresh and saline SGD delivered about 1800 mg d^{-1} of FDOM (quinine equivalents) to the coastal ocean per meter of shoreline. About 11% of this input was related to fresh SGD, while 89% was related to saline SGD resulting from FDOM production within the shallow aquifer. If these fluxes are representative of the Florida Gulf Coast, SGD-derived FDOM fluxes would be equivalent to at least 18% of the potential regional riverine FDOM inputs. To reduce uncertainties related to the scarcity of FDOM data, further investigations of river and groundwater FDOM inputs in Florida and elsewhere are necessary.
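The salinity-based mixing model reduces to a two-endmember conservative-mixing line; measured FDOM above the line indicates in-situ production. A minimal sketch, with endmember concentrations and observations invented for illustration:

```python
def conservative_mix(salinity, s_fresh=0.0, s_sea=36.0, c_fresh=50.0, c_sea=5.0):
    """Expected tracer concentration if it mixed conservatively with salinity
    between a fresh endmember and a seawater endmember (illustrative values)."""
    f_sea = (salinity - s_fresh) / (s_sea - s_fresh)   # seawater fraction
    return (1.0 - f_sea) * c_fresh + f_sea * c_sea

# Hypothetical observations: salinity -> measured FDOM (quinine-sulfate units)
observed = {10.0: 40.0, 25.0: 30.0, 34.0: 18.0}

for sal, meas in observed.items():
    pred = conservative_mix(sal)
    # Positive excess = non-conservative production within the aquifer
    print(f"S = {sal:4.1f}  predicted = {pred:5.1f}  excess = {meas - pred:+5.1f}")
```

With these invented numbers the largest positive excess appears at high salinity, the same qualitative signature the study reports for the saline part of the subterranean estuary.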
ERIC Educational Resources Information Center
Donovan, Neville
1979-01-01
Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1991-01-01
The state of the art in earthquake prediction is discussed. Short-term prediction based on seismic precursors, changes in the ratio of compressional velocity to shear velocity, tilt and strain precursors, electromagnetic precursors, hydrologic phenomena, chemical monitors, and animal behavior is examined. Seismic hazard assessment is addressed, and the applications of dynamical systems to earthquake prediction are discussed.
Hofmann, R.B.
1995-09-01
Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.
Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map
NASA Astrophysics Data System (ADS)
Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.
2013-12-01
Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. The other forecast component is the smoothed-seismicity ('S') forecast model of Kagan & Jackson [1994, JGR; 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of three methods: (a) taking the greater of S or T; (b) a simple weighted average of S and T; or (c) taking the log of the forecast rate as a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
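The three hybrid-combination rules (a)-(c) can be sketched on toy rate maps; the per-cell rates and the weight w below are invented, and each hybrid is renormalized to the same global rate as described:

```python
import numpy as np

S = np.array([1.0, 4.0, 0.5, 2.0])   # smoothed-seismicity rates per cell (toy)
T = np.array([2.0, 1.0, 1.5, 3.0])   # tectonic (SHIFT) rates per cell (toy)
w = 0.6                               # fractional weight on S (the free parameter)

def normalize(rates, total):
    """Rescale a forecast so its cells sum to the fixed global rate."""
    return rates * (total / rates.sum())

total = S.sum()                                               # common global rate
h_max = normalize(np.maximum(S, T), total)                    # (a) greater of S or T
h_lin = normalize(w * S + (1.0 - w) * T, total)               # (b) linear weighted average
h_log = normalize(np.exp(w * np.log(S) + (1.0 - w) * np.log(T)), total)  # (c) log-linear

for name, h in [("max", h_max), ("linear", h_lin), ("log-linear", h_log)]:
    print(f"{name:10s}", np.round(h, 3))
```

The log-linear rule (c) is multiplicative, so a cell scores high only when both parents assign it non-negligible rate, whereas rule (a) inherits the more optimistic parent everywhere.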
NASA Astrophysics Data System (ADS)
Dahmen, K.; Ben-Zion, Y.; Uhl, J.
2011-12-01
Slowly sheared solids or densely packed granular materials often deform in an intermittent way, with slip avalanches. The distribution of sizes often follows a power law over a broad range of sizes. In these cases, universal (i.e., detail-independent) scaling behavior governs the statistics of the slip avalanches. Under some conditions, there are also "characteristic" statistics associated with enhanced occurrence of system-size events, and long-term mode switching between power-law and characteristic behavior. These dynamic regimes can be understood with a basic micromechanical model for deformation of solids with only two tuning parameters: weakening and dissipation of elastic stress transfer. For granular materials the packing fraction plays the role of the dissipation parameter, and it sets the size of the largest slip avalanche. The model can reproduce observed stress-strain curves, power spectra of acoustic emissions, statistics of slip avalanches, and geometrical properties of slip, with a continuous phase transition from brittle to ductile behavior. Exact universal predictions for the power-law exponents of the avalanche size distributions, durations, power spectra of acoustic emissions, and scaling functions are extracted using an analytical mean-field theory and renormalization-group tools. For granular materials, a dynamic phase diagram with solid-like behavior and large slip avalanches at large packing fractions, and fluid-like behavior at lower packing fractions, is obtained. The results agree with recent experimental observations and simulations of the statistics of dislocation dynamics in sheared crystals such as ice [1], slip avalanches in sheared granular materials [2], and avalanches in magnetic and fault systems [3,4]. [1] K. A. Dahmen, Y. Ben-Zion, and J.T. Uhl, "A micromechanical model for deformation in solids with universal predictions for stress strain curves and slip avalanches", Physical Review Letters 102, 175501/1-4 (2009). [2] K. A. Dahmen, Y
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During the last decades, a few space experiments have revealed anomalous bursts of charged particles, mainly electrons with energies larger than a few MeV. A possible source of these bursts is low-frequency seismo-electromagnetic emission, which can cause the precipitation of electrons from the lower boundary of the inner belt. Studies of these bursts have also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high-energy physics, we developed a dedicated application, SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 toolkit and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between an earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of testing different models of interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction and shaping the area in which it takes place, assessing the effects of perturbations in the magnetic field on the particle paths, performing back-tracking analysis, and modelling the interaction with electric fields. SEPS is at an advanced development stage, so that it can already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing from the position of the earthquake to see whether there were paths compatible with the detected burst) and with a back-tracking analysis (tracing from the burst detection point and checking compatibility with the position of the associated earthquake).
Multi-scale coarse-graining of non-conservative interactions in molecular liquids
Izvekov, Sergei; Rice, Betsy M.
2014-03-14
A new bottom-up procedure for constructing non-conservative (dissipative and stochastic) interactions for dissipative particle dynamics (DPD) models is described and applied to perform hierarchical coarse-graining of a polar molecular liquid (nitromethane). The distance-dependent radial and shear frictions in functional-free form are derived consistently with a chosen form for the conservative interactions by matching two-body force-velocity and three-body velocity-velocity correlations along the microscopic trajectories of the centroids of Voronoi cells (clusters), which represent the dissipative particles within the DPD description. The Voronoi tessellation is achieved by application of the K-means clustering algorithm at regular time intervals. Consistent with the notion of many-body DPD, the conservative interactions are determined through the multi-scale coarse-graining (MS-CG) method, which naturally implements a pairwise decomposition of the microscopic free energy. A hierarchy of MS-CG/DPD models starting with one molecule per Voronoi cell and going up to 64 molecules per cell is derived. The radial contribution to the friction appears to be dominant for all models. As the Voronoi cell sizes increase, the dissipative forces rapidly become confined to the first coordination shell. For Voronoi cells of two or more molecules, the time dependence of the velocity autocorrelation function becomes monotonic and is well reproduced by the respective MS-CG/DPD models. A comparative analysis of force and velocity correlations in the atomistic and CG ensembles indicates Markovian behavior with as few as two molecules per dissipative particle. The models with one and two molecules per Voronoi cell yield transport properties (diffusion and shear viscosity) that are in good agreement with the atomistic data. The coarser models produce slower dynamics that can be appreciably attributed to unaccounted dissipation introduced by regular Voronoi re-partitioning as well as by larger
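The K-means step that partitions molecular centroids into Voronoi cells (dissipative particles) can be sketched as plain Lloyd iterations; the coordinates below are random stand-ins, and periodic boundaries, which a real simulation cell would require, are ignored:

```python
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 10.0, size=(64, 3))   # 64 "molecule" centroids (toy)
k = 8                                              # target number of Voronoi cells

# Initialize cluster centers from k distinct molecules, then iterate Lloyd's algorithm
centers = positions[rng.choice(len(positions), size=k, replace=False)].copy()
for _ in range(50):
    # Assign each molecule to its nearest center (the Voronoi/K-means labeling)
    dist2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = np.argmin(dist2, axis=1)
    # Move each center to the mean of its members
    for j in range(k):
        members = positions[labels == j]
        if len(members):
            centers[j] = members.mean(axis=0)

print(np.bincount(labels, minlength=k))            # molecules per dissipative particle
```

In the procedure described above this re-partitioning is repeated at regular time intervals, and the cell centroids then serve as the dissipative-particle trajectories against which the frictions are matched.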
Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake
NASA Astrophysics Data System (ADS)
Durukal, E.; Sesetyan, K.; Erdik, M.
2009-04-01
The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, which has an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings