Science.gov

Sample records for earthquake generation process

  1. Laboratory generated M -6 earthquakes

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
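    The stated consistency with earthquake scaling laws can be sketched numerically using the standard Hanks-Kanamori moment-magnitude relation and the Eshelby circular-crack stress-drop formula; the patch radii below are illustrative assumptions, not values from the study.

```python
def moment_from_magnitude(mw):
    """Seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def stress_drop_circular(m0, radius):
    """Eshelby stress drop in Pa for a circular crack of radius in meters."""
    return (7.0 / 16.0) * m0 / radius ** 3

m0 = moment_from_magnitude(-6.0)     # ~1.26 N*m for an M -6 event
for a_mm in (3.0, 5.0, 10.0):        # hypothetical mm-scale patch radii
    ds = stress_drop_circular(m0, a_mm / 1000.0)
    print(f"radius {a_mm:.0f} mm -> stress drop {ds / 1e6:.1f} MPa")
```

    For a patch radius of about 5 mm this gives a stress drop of a few MPa, in line with the 1-10 MPa range quoted in the abstract.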

  2. Earthquake-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Kamogawa, Masashi

    an ionospheric phenomenon attributed to tsunamis, termed the tsunamigenic ionospheric hole (TIH) [Kakinami and Kamogawa et al., GRL, 2012]. After the TEC depression, accompanied by a monoperiodic variation with an approximately 4-minute period due to acoustic resonance between the ionosphere and the solid earth, the TIH gradually recovered. In addition, geomagnetic pulsations with periods of 150, 180 and 210 seconds were observed on the ground in Japan approximately 5 minutes after the mainshock. Since the variation with the period of 180 seconds was simultaneously detected at the magnetic conjugate points of Japan, namely in Australia, field-aligned currents along the magnetic field line were excited. The field-aligned currents might be excited by E- and F-region dynamo currents caused by acoustic waves originating from the tsunami. This result implies that a large earthquake generates seismogenic field-aligned currents. Furthermore, a monoperiodic geomagnetic oscillation pointing to the epicenter, with a propagation velocity corresponding to that of Rayleigh waves, occurs. This may be due to a seismogenic arc-current in the E region. After removing such magnetic oscillations from the observed data, a clear tsunami dynamo effect was found. These results imply that a large EQ generates seismogenic field-aligned currents, a seismogenic arc-current, and a tsunami dynamo current, all of which disturb the geomagnetic field. Thus, from the results of the Tohoku EQ, we found a complex coupling process between a large EQ and the ionosphere.

  3. Earthquake mechanism and seafloor deformation for tsunami generation

    Geist, Eric L.; Oglesby, David D.; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Tsunamis are generated in the ocean by rapidly displacing the entire water column over a significant area. The potential energy resulting from this disturbance is balanced with the kinetic energy of the waves during propagation. Only a handful of submarine geologic phenomena can generate tsunamis: large-magnitude earthquakes, large landslides, and volcanic processes. Asteroid and subaerial landslide impacts can generate tsunami waves from above the water. Earthquakes are by far the most common generator of tsunamis. Generally, earthquakes greater than magnitude (M) 6.5–7 can generate tsunamis if they occur beneath an ocean and if they result in predominantly vertical displacement. One of the greatest uncertainties in both deterministic and probabilistic hazard assessments of tsunamis is computing seafloor deformation for earthquakes of a given magnitude.
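    The potential-energy balance described above can be illustrated with a short numerical sketch: for a sea-surface displacement field eta, the initial potential energy is E = (1/2) rho g times the integral of eta^2 over area. The Gaussian source parameters below are hypothetical, not a modeled event.

```python
import numpy as np

# Hypothetical Gaussian seafloor uplift transferred to the sea surface
rho, g = 1025.0, 9.81          # seawater density (kg/m^3), gravity (m/s^2)
L, n = 200e3, 512              # 200 km square domain
x = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(x, x)
eta0, sigma = 2.0, 30e3        # peak uplift (m) and source width (m), assumed
eta = eta0 * np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))

# E = 1/2 * rho * g * integral of eta^2 over area
dA = (x[1] - x[0]) ** 2
energy = 0.5 * rho * g * np.sum(eta ** 2) * dA
print(f"initial potential energy {energy:.2e} J")
```

    For this Gaussian the integral evaluates analytically to eta0^2 * pi * sigma^2, so the assumed 2 m, 30 km source stores roughly 6e13 J, which is the energy budget available to the propagating waves.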

  4. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
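    The interevent-time exercise described above can be sketched in a few lines: for a Poisson process, interevent times follow an exponential distribution, which is what the histogram comparison in the lab tests. The rate and sample size here are illustrative stand-ins for the earthquake and "blockquake" catalogs.

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 0.5                 # assumed events per unit time
n_events = 5000

# For a Poisson process, interevent times are exponentially distributed
interevent = rng.exponential(1.0 / rate, size=n_events)
times = np.cumsum(interevent)          # simulated event-occurrence times

# Compare the interevent-time histogram against the exponential density
density, edges = np.histogram(interevent, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = rate * np.exp(-rate * centers)

print(f"mean interevent time {interevent.mean():.2f} (theory {1 / rate:.2f})")
```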

  5. Detailed source process of the 2007 Tocopilla earthquake.

    NASA Astrophysics Data System (ADS)

    Peyrat, S.; Madariaga, R.; Campos, J.; Asch, G.; Favreau, P.; Bernard, P.; Vilotte, J.

    2008-05-01

    We investigated the detailed rupture process of the Tocopilla earthquake (Mw 7.7) of 14 November 2007 and of the main aftershocks that occurred in the southern part of the North Chile seismic gap, using strong motion data. The earthquake happened in the middle of the permanent broadband and strong motion network IPOC, newly installed by GFZ and IPGP, and of a digital strong-motion network operated by the University of Chile. The Tocopilla earthquake is the latest large thrust subduction earthquake to occur in this gap since the major 1877 Iquique earthquake, which produced a destructive tsunami. The Arequipa (2001) and Antofagasta (1995) earthquakes already ruptured the northern and southern parts of the gap, and the intraplate intermediate-depth Tarapaca earthquake (2005) may have changed the tectonic loading of this part of the Peru-Chile subduction zone. For large earthquakes, the depth of the seismic rupture is bounded by the depth of the seismogenic zone. What controls the horizontal extent of the rupture for large earthquakes is less clear. Factors that influence the extent of the rupture include fault geometry, variations of material properties, and stress heterogeneities inherited from previous rupture history. For subduction zones where structures are not well known, what may have stopped the rupture is not obvious. One crucial problem raised by the Tocopilla earthquake is to understand why this earthquake did not extend further north and, to the south, what role is played by the Mejillones Peninsula, which seems to act as a barrier. The focal mechanism was determined using teleseismic waveform inversion and a geodetic analysis (cf. Campos et al.; Bejarpi et al., in the same session). We studied the detailed source process using the strong motion data available. This earthquake ruptured the interplate seismic zone over more than 150 km and generated several large aftershocks, mainly located south of the rupture area. The strong-motion data show clearly two S

  6. Physical bases of the generation of short-term earthquake precursors: A complex model of ionization-induced geophysical processes in the lithosphere-atmosphere-ionosphere-magnetosphere system

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.

    2015-07-01

    This paper describes the current understanding of the interaction between geospheres from a complex set of physical and chemical processes under the influence of ionization. The sources of ionization involve the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents in nuclear power plants and radioactive waste storage, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.

  7. Strong ground motions generated by earthquakes on creeping faults

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas locked faults are expected to produce noticeable ground shaking when they finally do slip, it is uncertain how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  8. Simulation of Earthquake-Generated Sea-Surface Deformation

    NASA Astrophysics Data System (ADS)

    Vogl, Chris; Leveque, Randy

    2016-11-01

    Earthquake-generated tsunamis can carry with them a powerful, destructive force. One of the best-known recent examples is the tsunami generated by the Tohoku earthquake, which was responsible for the nuclear disaster in Fukushima. Tsunami simulation and forecasting, a necessary element of emergency procedure planning and execution, is typically done using the shallow-water equations. A typical initial condition is given by the Okada solution for a homogeneous, elastic half-space. This work focuses on simulating earthquake-generated sea-surface deformations that are more true to the physics of the materials involved. In particular, a water layer is added on top of the half-space that models the seabed. Sea-surface deformations are then simulated using the Clawpack hyperbolic PDE package. Results from considering the water layer both as linearly elastic and as "nearly incompressible" are compared to that of the Okada solution.
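    As a rough illustration of the shallow-water modeling mentioned above (not the Clawpack solver itself), a minimal 1-D linear shallow-water scheme shows an initial sea-surface hump splitting into two tsunami-like pulses traveling at sqrt(gH); the depth and source parameters are assumed.

```python
import numpy as np

# 1-D linear shallow-water equations on a staggered grid:
#   d(eta)/dt = -H du/dx,  du/dt = -g d(eta)/dx
g, H = 9.81, 4000.0            # gravity (m/s^2); ocean depth (m), assumed
c = np.sqrt(g * H)             # long-wave speed, about 198 m/s
L, n = 1000e3, 1000            # 1000 km domain, 1 km grid spacing
dx = L / n
dt = 0.5 * dx / c              # CFL-stable time step

x = (np.arange(n) + 0.5) * dx
eta = np.exp(-((x - L / 2) / 20e3) ** 2)   # initial Gaussian sea-surface hump
u = np.zeros(n + 1)                        # velocities at cell interfaces

for _ in range(500):
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx   # momentum update
    eta -= dt * H * (u[1:] - u[:-1]) / dx           # continuity update

# The initial hump splits into two pulses of half amplitude moving at +-c
print(f"wave speed {c:.0f} m/s, max surface height now {eta.max():.2f} m")
```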

  9. Role of H2O in Generating Subduction Zone Earthquakes

    NASA Astrophysics Data System (ADS)

    Hasegawa, A.

    2017-03-01

    A dense nationwide seismic network and high seismic activity in Japan have provided a large volume of high-quality data, enabling high-resolution imaging of the seismic structures defining the Japanese subduction zones. Here, the role of H2O in generating earthquakes in subduction zones is discussed based mainly on recent seismic studies in Japan using these high-quality data. Locations of intermediate-depth intraslab earthquakes and seismic velocity and attenuation structures within the subducted slab provide evidence that strongly supports a role for slab-derived fluids in generating intermediate-depth intraslab earthquakes, although the details leading to the earthquake rupture are still poorly understood. Coseismic rotations of the principal stress axes observed after great megathrust earthquakes demonstrate that the plate interface is very weak, which is probably caused by overpressured fluids. Detailed tomographic imaging of the seismic velocity structure in and around plate boundary zones suggests that interplate coupling is affected by local fluid overpressure. Seismic tomography studies also show the presence of inclined sheet-like seismic low-velocity, high-attenuation zones in the mantle wedge. These may correspond to the upwelling flow portion of subduction-induced secondary convection in the mantle wedge. The upwelling flows reach the arc Moho directly beneath the volcanic areas, suggesting a direct relationship. H2O originally liberated from the subducted slab is transported by this upwelling flow to the arc crust. The H2O that reaches the crust is overpressured above hydrostatic values, weakening the surrounding crustal rocks and decreasing the shear strength of faults, thereby inducing shallow inland earthquakes. These observations suggest that H2O expelled from the subducting slab plays an important role in generating subduction zone earthquakes both within the subduction zone itself and within the magmatic arc occupying its hanging wall.

  10. Sediment gravity flows triggered by remotely generated earthquake waves

    NASA Astrophysics Data System (ADS)

    Johnson, H. Paul; Gomberg, Joan S.; Hautala, Susan L.; Salmi, Marie S.

    2017-06-01

    Recent great earthquakes and tsunamis around the world have heightened awareness of the inevitability of similar events occurring within the Cascadia Subduction Zone of the Pacific Northwest. We analyzed seafloor temperature, pressure, and seismic signals, and video stills of sediment-enveloped instruments recorded during the 2011-2015 Cascadia Initiative experiment, and seafloor morphology. Our results led us to suggest that thick accretionary prism sediments amplified and extended seismic wave durations from the 11 April 2012 Mw8.6 Indian Ocean earthquake, located more than 13,500 km away. These waves triggered a sequence of small slope failures on the Cascadia margin that led to sediment gravity flows culminating in turbidity currents. Previous studies have related the triggering of sediment-laden gravity flows and turbidite deposition to local earthquakes, but this is the first study in which the originating seismic event is extremely distant (> 10,000 km). The possibility of remotely triggered slope failures that generate sediment-laden gravity flows should be considered in inferences of recurrence intervals of past great Cascadia earthquakes from turbidite sequences. Future similar studies may provide new understanding of submarine slope failures and turbidity currents and the hazards they pose to seafloor infrastructure and tsunami generation in regions both with and without local earthquakes.

  12. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  13. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
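    The Poisson-gamma mixture representation of the NBD mentioned above can be sketched directly: drawing Poisson counts whose rates are gamma-distributed yields negative binomial counts, and the clustering parameter can be recovered from the overdispersion by the method of moments. The parameter values are illustrative, not catalogue fits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mixture representation of the negative binomial: Poisson counts whose
# rates are gamma-distributed (illustrative parameters, not catalogue fits)
k, mu = 2.0, 10.0                  # clustering parameter and mean count
rates = rng.gamma(shape=k, scale=mu / k, size=20000)
counts = rng.poisson(rates)

# Method of moments: var = mu + mu^2 / k  =>  k = mu^2 / (var - mu)
m, v = counts.mean(), counts.var()
k_hat = m ** 2 / (v - m)
print(f"mean {m:.2f}, variance {v:.2f}, recovered clustering k {k_hat:.2f}")
```

    The variance greatly exceeds the mean here, which is the overdispersion signature that the one-parameter Poisson distribution cannot capture.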

  14. Monitoring the Earthquake source process in North America

    Herrmann, Robert B.; Benz, H.; Ammon, C.J.

    2011-01-01

    With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response system (PAGER), rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can usually determine, with few exceptions, moment tensor solutions for earthquakes with Mw as small as 3.7. The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes that are Mw 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the direction of maximum compressive stress and the predominant style of faulting is spatially coherent across large regions of the continent.
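    The 0.02-0.10 Hz passband mentioned above can be illustrated with a toy filtering step; a simple FFT mask stands in for the production filter, and the synthetic waveform is an assumed mix of in-band signal and out-of-band noise.

```python
import numpy as np

fs = 1.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2000, 1 / fs)
wave = (np.sin(2 * np.pi * 0.05 * t)       # in-band signal at 0.05 Hz
        + np.sin(2 * np.pi * 0.4 * t))     # out-of-band noise at 0.4 Hz

# Zero all Fourier components outside the 0.02-0.10 Hz passband
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
spectrum = np.fft.rfft(wave)
spectrum[(freqs < 0.02) | (freqs > 0.10)] = 0.0
filtered = np.fft.irfft(spectrum, n=t.size)

print(f"raw peak {np.abs(wave).max():.2f}, filtered peak {np.abs(filtered).max():.2f}")
```

    The 0.4 Hz component is removed while the 0.05 Hz component survives essentially unchanged, which is the point of restricting the inversion to the long-period band where regional velocity models are most reliable.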

  15. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  16. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  17. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  18. An interdisciplinary approach to study Pre-Earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, either for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and analyses of seismic records (foreshocks/aftershocks), geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could advance our understanding of the physics of earthquakes and the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. This book, part of the Geophysical Monograph series, is intended to show the variety of parameters (seismic, atmospheric, geochemical, and historical) involved in this important field of research and to bring this knowledge and awareness to a broader geosciences community.

  19. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  20. Frequency-Dependent Rupture Processes for the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Miyake, H.

    2012-12-01

    The 2011 Tohoku earthquake is characterized by a frequency-dependent rupture process [e.g., Ide et al., 2011; Wang and Mori, 2011; Yao et al., 2011]. For understanding the rupture dynamics of this earthquake, it is extremely important to investigate wave-based source inversions for various frequency bands. The above frequency-dependent characteristics have been derived from teleseismic analyses. This study instead attempts to infer frequency-dependent rupture processes from strong motion waveforms of K-NET and KiK-net stations. The observations suggest three or more S-wave phases, and ground velocities at several near-source stations showed different arrivals of their long- and short-period components. We performed complex source spectral inversions with the frequency-dependent phase weighting developed by Miyake et al. [2002]. The technique idealizes both the coherent and stochastic summation of waveforms using empirical Green's functions. Due to the limited signal-to-noise ratio of the empirical Green's functions, the analyzed frequency bands were set within 0.05-10 Hz. We assumed a fault plane 480 km long by 180 km wide with a single time window for rupture, following Koketsu et al. [2011] and Asano and Iwata [2012]. The inversion revealed source ruptures expanding from the hypocenter, which generated sharp slip-velocity intensities at the down-dip edge. In addition to testing the effects of empirical/hybrid Green's functions and of with/without rupture-front constraints on the inverted solutions, we will discuss distributions of slip-velocity intensity and the progression of wave generation with increasing frequency.

  1. Chapter F. The Loma Prieta, California, Earthquake of October 17, 1989 - Tectonic Processes and Models

    Simpson, Robert W.

    1994-01-01

    If there is a single theme that unifies the diverse papers in this chapter, it is the attempt to understand the role of the Loma Prieta earthquake in the context of the earthquake 'machine' in northern California: as the latest event in a long history of shocks in the San Francisco Bay region, as an incremental contributor to the regional deformation pattern, and as a possible harbinger of future large earthquakes. One of the surprises generated by the earthquake was the rather large amount of uplift that occurred as a result of the reverse component of slip on the southwest-dipping fault plane. Preearthquake conventional wisdom had been that large earthquakes in the region would probably be caused by horizontal, right-lateral, strike-slip motion on vertical fault planes. In retrospect, the high topography of the Santa Cruz Mountains and the elevated marine terraces along the coast should have provided some clues. With the observed ocean retreat and the obvious uplift of the coast near Santa Cruz that accompanied the earthquake, Mother Nature was finally caught in the act. Several investigators quickly saw the connection between the earthquake uplift and the long-term evolution of the Santa Cruz Mountains and realized that important insights were to be gained by attempting to quantify the process of crustal deformation in terms of Loma Prieta-type increments of northward transport and fault-normal shortening.

  2. Frictional heating processes during laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Aubry, J.; Passelegue, F. X.; Deldicque, D.; Lahfid, A.; Girault, F.; Pinquier, Y.; Escartin, J.; Schubnel, A.

    2017-12-01

    Frictional heating during seismic slip plays a crucial role in the dynamics of earthquakes because it controls fault weakening. This study proposes (i) to image frictional heating by combining an in-situ carbon thermometer and Raman microspectrometric mapping, (ii) to combine these observations with fault surface roughness and heat production, and (iii) to estimate the mechanical energy dissipated during laboratory earthquakes. Laboratory earthquakes were performed in a triaxial oil-loading press, at 45, 90 and 180 MPa of confining pressure, using saw-cut samples of Westerly granite. The initial topography of the fault surface was +/- 30 microns. We use a carbon layer as a local temperature tracer on the fault plane and a type K thermocouple to measure temperature approximately 6 mm away from the fault surface. The thermocouple measures the bulk temperature of the fault plane, while the in-situ carbon thermometer images the heterogeneity of temperature production at the micro-scale. Raman microspectrometry on amorphous carbon patches allowed mapping of the temperature heterogeneities on the fault surface after sliding, superimposed at the scale of a few micrometers on the final fault roughness. The maximum temperature achieved during laboratory earthquakes remains high for all experiments and generally increases with confining pressure. In addition, the melted area of the fault surface during seismic slip increases drastically with confining pressure. While melting is systematically observed, the strength drop increases with confining pressure. These results suggest that the dynamic friction coefficient is a function of the area of the fault melted during stick-slip. Using the thermocouple, we inverted for the heat dissipated during each event. We show that for rough faults under low confining pressure, less than 20% of the total mechanical work is dissipated into heat. The ratio of frictional heating to total mechanical work decreases with cumulated slip (i.e. number of events), and decreases with
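    The link between frictional work and temperature rise can be illustrated with a standard adiabatic order-of-magnitude estimate, dT = tau * d / (rho * c * w); all numbers below are assumed lab-scale values, not measurements from this study.

```python
# Order-of-magnitude adiabatic estimate of the frictional temperature rise
# on a slipping fault patch: dT = tau * d / (rho * c * w); values assumed.
tau = 100e6              # shear stress during slip (Pa)
d = 1e-4                 # slip (m), lab-scale stick-slip event
rho, c = 2700.0, 800.0   # granite density (kg/m^3), heat capacity (J/kg/K)
w = 10e-6                # slip-zone thickness (m)

dT = tau * d / (rho * c * w)
print(f"adiabatic temperature rise ~ {dT:.0f} K")
```

    Even these modest assumed values give a temperature rise of several hundred kelvin, which is why micrometer-scale melting on the fault surface is plausible at laboratory slip amounts.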

  3. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is given of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle for a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within a framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy under tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and the individual faults within it are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step toward establishing a methodology for forecasting large earthquakes.

  4. Infrasonic waves in the ionosphere generated by a weak earthquake

    NASA Astrophysics Data System (ADS)

    Krasnov, V. M.; Drobzheva, Ya. V.; Chum, J.

    2011-08-01

    A computer code has been developed to simulate the generation of infrasonic waves (frequencies considered ≤80 Hz) by a weak earthquake (magnitude ˜3.6), their propagation through the atmosphere and their effects in the ionosphere. We provide estimates of the perturbations in the ionosphere at the height (˜160 km) where waves at the sounding frequency (3.59 MHz) of a continuous Doppler radar reflect. We have found that the pressure perturbation is 5.79×10⁻⁷ Pa (0.26% of the ambient value), the temperature perturbation is 0.088 K (0.015% of the ambient value) and the electron density perturbation is 2×10⁸ m⁻³ (0.12% of the ambient value). The characteristic perturbation is found to be a bipolar pulse lasting ˜25 s, and the maximum Doppler shift is found to be ˜0.08 Hz, which is too small to be detected by the Doppler radar at the time of the earthquake.
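As a rough consistency check on the quoted numbers, the ambient values implied by each perturbation and its stated fractional change can be recovered by simple division. This sketch is illustrative only; the percentages from the abstract are converted to fractions:

```python
# Recover the implied ambient values at ~160 km altitude from each quoted
# perturbation and its stated fractional change (ambient = perturbation / fraction).
perturbations = {
    "pressure (Pa)": (5.79e-7, 0.0026),        # 0.26% of ambient
    "temperature (K)": (0.088, 0.00015),       # 0.015% of ambient
    "electron density (m^-3)": (2e8, 0.0012),  # 0.12% of ambient
}
for name, (delta, fraction) in perturbations.items():
    ambient = delta / fraction
    print(f"{name}: ambient ~ {ambient:.3g}")
```

The recovered magnitudes are broadly consistent with typical thermospheric values at that height.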

  5. Seismo-Acoustic Generation by Earthquakes and Explosions and Near-Regional Propagation

    DTIC Science & Technology

    2009-09-30

    earthquakes generate infrasound. Three infrasonic arrays in Utah (BGU, EPU, and NOQ), one in Nevada (NVIAR), and one in Wyoming (PDIAR) recorded... Katz, and C. Hayward (2009b). The F-detector Revisited: An Improved Strategy for Signal Detection at Seismic and Infrasound Arrays, Bull. Seism. Soc... sources. RESEARCH ACCOMPLISHED Infrasound Observations of the Wells Earthquake Most studies documenting earthquake-generated infrasound are based

  6. Low magnitude earthquakes generating significant subsidence: the Lunigiana case study

    NASA Astrophysics Data System (ADS)

    Samsonov, S. V.; Polcari, M.; Melini, D.; Cannelli, V.; Moro, M.; Bignami, C.; Saroli, M.; Vannoli, P.; Stramondo, S.

    2013-12-01

    We applied the Differential Interferometric Synthetic Aperture Radar (DInSAR) technique to investigate and measure surface displacements due to the ML 5.2, June 21, 2013, earthquake that occurred in the Apuan Alps (NW Italy) at a depth of about 5 km. The Centroid Moment Tensor (CMT) solution from INGV indicates an almost pure normal fault mechanism. Two differential interferograms showing the coseismic displacement were generated using X-band and C-band data, respectively. The X-band interferogram was obtained from a Cosmo-SkyMed ascending pair (azimuth -7.9° and incidence angle 40°) with a time interval of one day (June 21 - June 22) and a 139 m spatial baseline, covering an area of about 40x40 km around the epicenter. The topographic phase component was removed using the 90 m SRTM DEM. The C-band interferogram was computed from two RADARSAT-2 Standard-3 (S3) images, characterized by a 24-day temporal baseline and a 69 m spatial baseline, acquired on June 18 and July 12, 2013 on an ascending orbit (azimuth -10.8°) with an incidence angle of 34°, covering a 100x100 km area around the epicenter. The topographic phase component was removed using the 30 m ASTER DEM. Adaptive filtering, phase unwrapping with the Minimum Cost Flow (MCF) algorithm and orbital refinement were also applied to both interferograms. We modeled the observed SAR deformation fields using the Okada analytical formulation within a nonlinear inversion scheme, and found them to be consistent with a fault plane dipping towards the NW at an angle of about 45°. In spite of the small magnitude, this earthquake produced a surface subsidence of about 1.5 cm in the Line-Of-Sight (LOS) direction, corresponding to about 3 cm along the vertical axis, which can be observed in both interferograms and is consistent with the normal fault mechanism.
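For orientation, the standard first-order projection from LOS to vertical displacement divides by the cosine of the incidence angle, assuming purely vertical motion; the ~3 cm quoted above comes from the full Okada inversion, which also accounts for horizontal motion and fault geometry. A minimal sketch of the simple projection:

```python
import math

def los_to_vertical(d_los_cm, incidence_deg):
    """First-order LOS-to-vertical conversion, assuming purely vertical motion."""
    return d_los_cm / math.cos(math.radians(incidence_deg))

# 1.5 cm LOS subsidence at the two incidence angles used in this record
for angle in (34.0, 40.0):
    print(f"incidence {angle} deg: {los_to_vertical(1.5, angle):.2f} cm vertical")
```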

  7. Discovering Coseismic Traveling Ionospheric Disturbances Generated by the 2016 Kaikoura Earthquake

    NASA Astrophysics Data System (ADS)

    Li, J. D.; Rude, C. M.; Gowanlock, M.; Pankratius, V.

    2017-12-01

    Geophysical events and hazards, such as earthquakes, tsunamis, and volcanoes, have been shown to generate traveling ionospheric disturbances (TIDs). These disturbances can be measured by means of Total Electron Content fluctuations obtained from a network of multifrequency GPS receivers in the MIT Haystack Observatory Madrigal database. Analyzing the response of the ionosphere to such hazards enhances our understanding of natural phenomena and augments our large-scale monitoring capabilities in conjunction with other ground-based sensors. However, it is currently challenging for human investigators to spot and characterize such signatures, or even to determine whether a geophysical event has actually occurred, because the ionosphere can be noisy, with multiple phenomena taking place at the same time. This work therefore explores a systematic pipeline for the ex-post discovery and characterization of TIDs. Our technique starts by geolocating the event and gathering the corresponding data, then checks for potentially conflicting TID sources, and processes the raw total electron content data to generate differential measurements. A Kolmogorov-Smirnov test is applied to evaluate the statistical significance of detected deviations in the differential measurements. We present results from our successful application of this pipeline to the Mw 7.8 Kaikoura earthquake that occurred in New Zealand on November 13, 2016. We detect a coseismic TID occurring 8 minutes after the earthquake and propagating towards the equator at 1050 m/s, with a 0.22 TECu peak-to-peak amplitude. Furthermore, the observed waveform exhibits more complex behavior than the N-wave expected for a coseismic TID, which potentially results from the complex multi-fault structure of the earthquake. We acknowledge support from NSF ACI1442997 (PI Pankratius), NASA AISTNNX15AG84G (PI Pankratius), NSF AGS-1343967 (PI Pankratius), and NSF AGS-1242204 (PI Erickson).
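A hand-rolled illustration of the kind of Kolmogorov-Smirnov check described above; the data values here are made up, and in practice one would compare differential TEC in a candidate window against a quiet-time baseline:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance between the
    empirical CDFs of the two samples."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        while i < len(a) and a[i] <= x:
            i += 1
        while j < len(b) and b[j] <= x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Hypothetical differential TEC samples (TECu): quiet baseline vs. TID-like window
baseline = [0.0, 0.01, -0.02, 0.015, -0.01, 0.005, -0.005, 0.02]
candidate = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09, 0.13, 0.14]
print(ks_statistic(baseline, candidate))  # near 1: distributions differ strongly
```

A large statistic relative to the KS critical value for the sample sizes would flag the window as a significant deviation.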

  8. Ionospheric Method of Detecting Tsunami-Generating Earthquakes.

    ERIC Educational Resources Information Center

    Najita, Kazutoshi; Yuen, Paul C.

    1978-01-01

    Reviews the earthquake phenomenon and its possible relation to ionospheric disturbances. Discusses the basic physical principles involved and the methods upon which instrumentation is being developed for possible use in a tsunami disaster warning system. (GA)

  9. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  10. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information in recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the Earth's interior. The synthetic data are simulated by solving the wave equation for the entire globe using a spectral-element method. To ensure the accuracy and stability of the inversion, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and a very large volume of data must be both read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
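As a toy illustration of the window-classification idea (not the authors' implementation), a usable/unusable decision might threshold simple features such as observed-synthetic cross-correlation and amplitude ratio; the feature choices and thresholds below are assumptions:

```python
import math

def window_features(obs, syn):
    """Illustrative features: normalized cross-correlation and amplitude ratio."""
    n = len(obs)
    mo = sum(obs) / n
    ms = sum(syn) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, syn))
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs))
    vs = math.sqrt(sum((s - ms) ** 2 for s in syn))
    cc = cov / (vo * vs) if vo and vs else 0.0
    amp = vo / vs if vs else float("inf")
    return cc, amp

def usable(obs, syn, cc_min=0.7, amp_band=(0.5, 2.0)):
    """Accept a misfit window only if waveforms match well enough in shape and size."""
    cc, amp = window_features(obs, syn)
    return cc >= cc_min and amp_band[0] <= amp <= amp_band[1]

syn = [math.sin(0.1 * t) for t in range(100)]          # synthetic seismogram window
good = [1.1 * s for s in syn]                          # well-matched observation
bad = [math.sin(0.1 * t + 2.0) for t in range(100)]    # badly misaligned observation
print(usable(good, syn), usable(bad, syn))
```

In the ML setting described above, such features would feed a trained classifier rather than fixed thresholds.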

  11. The rupture process of the Manjil, Iran earthquake of 20 June 1990 and implications for intraplate strike-slip earthquakes

    Choy, G.L.; Zednik, J.

    1997-01-01

    In terms of seismically radiated energy or moment release, the earthquake of 20 June 1990 in the Manjil Basin-Alborz Mountain region of Iran is the second largest strike-slip earthquake to have occurred in an intracontinental setting in the past decade. It caused enormous loss of life and the virtual destruction of several cities. Despite a very large meizoseismal area, the identification of the causative faults has been hampered by the lack of reliable earthquake locations and conflicting field reports of surface displacement. Using broadband data from global networks of digitally recording seismographs, we analyse broadband seismic waveforms to derive characteristics of the rupture process. Complexities in waveforms generated by the earthquake indicate that the main shock consisted of a tiny precursory subevent followed in the next 20 seconds by a series of four major subevents with depths ranging from 10 to 15 km. The focal mechanisms of the major subevents, which are predominantly strike-slip, have a common nodal plane striking about 285°-295°. Based on the coincidence of this strike with the dominant tectonic fabric of the region, we presume that the EW-striking planes are the fault planes. The first major subevent nucleated slightly south of the initial precursor. The second subevent occurred northwest of the initial precursor. The last two subevents moved progressively southeastward of the first subevent in a direction collinear with the predominant strike of the fault planes. The offsets in the relative locations and the temporal delays of the rupture subevents indicate a heterogeneous distribution of fracture strength and the involvement of multiple faults. The spatial distribution of teleseismic aftershocks, which at first appears uncorrelated with meizoseismal contours, can be decomposed into stages. The initial activity, being within and on the periphery of the rupture zone, correlates in shape and length with meizoseismal lines.
In the second stage

  12. Characterization of intermittency in renewal processes: Application to earthquakes

    SciT

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework for a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
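The Weibull tail behavior reported above can be illustrated with synthetic interevent times; the shape and scale parameters below are arbitrary choices for illustration, not the fitted catalog values:

```python
import math, random

random.seed(1)
k, lam = 0.7, 1.0  # illustrative Weibull shape and scale for interevent times
# random.weibullvariate(alpha, beta): alpha is the scale, beta is the shape
times = [random.weibullvariate(lam, k) for _ in range(100000)]

# Compare the empirical tail P(T > x) with the Weibull survival function exp(-(x/lam)^k)
for x in (1.0, 2.0, 4.0):
    empirical = sum(t > x for t in times) / len(times)
    model = math.exp(-((x / lam) ** k))
    print(f"x={x}: empirical={empirical:.3f}, Weibull={model:.3f}")
```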

  13. Relating triggering processes in lab experiments with earthquakes.

    NASA Astrophysics Data System (ADS)

    Baro Urbea, J.; Davidsen, J.; Kwiatek, G.; Charalampidou, E. M.; Goebel, T.; Stanchits, S. A.; Vives, E.; Dresen, G.

    2016-12-01

    Statistical relations such as the Gutenberg-Richter law, the Omori-Utsu law and the aftershock productivity law were first observed in seismology, but are also common to other physical phenomena exhibiting avalanche dynamics, such as solar flares, rock fracture, structural phase transitions and even stock market transactions. All these examples exhibit spatio-temporal correlations that can be explained as triggering processes: instead of being activated as a response to external driving or fluctuations, some events are a consequence of previous activity. Although different plausible explanations have been suggested for each system, the origin of the ubiquity of such statistical laws remains unknown. However, the case of rock fracture may exhibit a physical connection with seismology. It has been suggested that some features of seismology have a microscopic origin and are reproducible over a vast range of scales. This hypothesis has motivated mechanical experiments to generate artificial catalogues of earthquakes at a laboratory scale (so-called labquakes) under controlled conditions. Microscopic fractures in lab tests release elastic waves that are recorded as ultrasonic (kHz-MHz) acoustic emission (AE) events by means of piezoelectric transducers. Here, we analyse the statistics of labquakes recorded during the failure of small samples of natural rocks and artificial porous materials under different controlled compression regimes. Temporal and spatio-temporal correlations are identified in certain cases. Specifically, we distinguish between background and triggered events, revealing some differences in their statistical properties. We fit the data to statistical models of seismicity. As a particular case, we explore the branching process approach simplified in the Epidemic Type Aftershock Sequence (ETAS) model. We evaluate the empirical spatio-temporal kernel of the model and investigate the physical origins of triggering. 
Our analysis of the focal mechanisms implies that the occurrence
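A toy version of the Omori-Utsu delay kernel and productivity law underlying the ETAS approach mentioned above; it simulates a single generation of triggering only, and all parameter values are illustrative assumptions:

```python
import math, random

# Toy ETAS-flavored simulation. Each background event of magnitude m triggers
# on average K * 10**(a*(m - m0)) aftershocks (productivity law), with delays
# drawn from an Omori-Utsu kernel with density proportional to (t + c)**(-p).
random.seed(0)
K, a, m0, c, p = 0.5, 0.8, 2.0, 0.01, 1.2

def omori_delay(c, p):
    """Sample a delay via inverse-CDF of f(t) ~ (t + c)**(-p), t >= 0, p > 1."""
    u = random.random()
    return c * ((1 - u) ** (1 / (1 - p)) - 1)

# Background catalog: (occurrence time, magnitude)
catalog = [(random.uniform(0, 100), random.uniform(2.0, 5.0)) for _ in range(50)]
aftershocks = []
for t, m in catalog:
    n = K * 10 ** (a * (m - m0))   # expected number of triggered events
    for _ in range(int(n)):
        aftershocks.append(t + omori_delay(c, p))
print(len(aftershocks), "triggered events")
```

A full ETAS simulation would also let triggered events trigger their own offspring and draw the offspring counts from a Poisson distribution.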

  14. Generative Processes: Thick Drawing

    ERIC Educational Resources Information Center

    Wallick, Karl

    2012-01-01

    This article presents techniques and theories of generative drawing as a means for developing complex content in architecture design studios. Appending the word "generative" to drawing adds specificity to the most common representation tool and clarifies that such drawings are not singularly about communication or documentation but are…

  15. Dilational processes accompanying earthquakes in the Long Valley Caldera

    Dreger, Douglas S.; Tkalcic, Hrvoje; Johnston, M.

    2000-01-01

    Regional distance seismic moment tensor determinations and broadband waveforms of moment magnitude 4.6 to 4.9 earthquakes from a November 1997 Long Valley Caldera swarm, during an inflation episode, display evidence of anomalous seismic radiation characterized by non-double couple (NDC) moment tensors with significant volumetric components. Observed coseismic dilation suggests that hydrothermal or magmatic processes are directly triggering some of the seismicity in the region. Similarity in the NDC solutions implies a common source process, and the anomalous events may have been triggered by net fault-normal stress reduction due to high-pressure fluid injection or pressurization of fluid-saturated faults due to magmatic heating.

  16. Driving Processes of Earthquake Swarms: Evidence from High Resolution Seismicity

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.; Shelly, D. R.; Hill, D. P.; Hardebeck, J.; Hsieh, P. A.

    2017-12-01

    Earthquake swarms are transient increases in seismicity deviating from a typical mainshock-aftershock pattern. Swarms are most prevalent in volcanic and hydrothermal areas, yet also occur in other environments, such as extensional fault stepovers. Swarms provide a valuable opportunity to investigate source zone physics, including the causes of their swarm-like behavior. To gain insight into this behavior, we have used waveform-based methods to greatly enhance standard seismic catalogs. Depending on the application, we detect and precisely relocate 2-10x as many events as included in the initial catalog. Recently, we have added characterization of focal mechanisms (applied to a 2014 swarm in Long Valley Caldera, California), addressing a common shortcoming in microseismicity analyses (Shelly et al., JGR, 2016). In analysis of multiple swarms (both within and outside volcanic areas), several features stand out, including: (1) dramatic expansion of the active source region with time, (2) tendency for events to occur on the immediate fringe of prior activity, (3) overall upward migration, and (4) complex faulting structure. Some swarms also show an apparent mismatch between seismicity orientations (as defined by patterns in hypocentral locations) and slip orientations (as inferred from focal mechanisms). These features are largely distinct from those observed in mainshock-aftershock sequences. In combination, these swarm behaviors point to an important role for fluid pressure diffusion. Swarms may in fact be generated by a cascade of fluid pressure diffusion and stress transfer: in cases where faults are critically stressed, an increase in fluid pressure will trigger faulting. Faulting will in turn dramatically increase permeability in the faulted area, allowing rapid equilibration of fluid pressure to the fringe of the rupture zone. This process may perpetuate until fluid pressure perturbations drop and/or stresses become further from failure, such that any
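The fluid-pressure-diffusion interpretation above is often tested against the parabolic triggering-front envelope r(t) = sqrt(4*pi*D*t) of Shapiro and co-workers; a minimal sketch, with an assumed hydraulic diffusivity:

```python
import math

def triggering_front_km(D_m2s, t_days):
    """Parabolic triggering front r = sqrt(4*pi*D*t) (Shapiro-style envelope)."""
    t_s = t_days * 86400.0
    return math.sqrt(4.0 * math.pi * D_m2s * t_s) / 1000.0

# D = 0.5 m^2/s is an illustrative assumption, not a value from this record
for d in (1, 5, 10):
    print(f"day {d}: front ~ {triggering_front_km(0.5, d):.1f} km")
```

Comparing hypocenter migration against this envelope is one way to diagnose diffusion-driven swarm expansion.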

  17. Waveform Generator Signal Processing Software

    DOT National Transportation Integrated Search

    1988-09-01

    This report describes the software that was developed to process test waveforms that were recorded by crash test data acquisition systems. The test waveforms are generated by an electronic waveform generator developed by MGA Research Corporation unde...

  18. Strong Ground Motion Generation during the 2011 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2011-12-01

    Strong ground motions during the 2011 Tohoku-Oki earthquake (Mw 9.0) were densely observed by the strong motion observation networks all over Japan. In the acceleration and velocity waveforms observed at strong motion stations in northeast Japan along the source region, the ground motions are characterized by multiple wave packets, each with a duration of about twenty seconds. In particular, two wave packets separated by about fifty seconds can be found on the records in the northern part of the damaged area, whereas only one significant wave packet can be recognized on the records in the southern part. The record section shows four isolated wave packets propagating from different locations to the north and south, which hints at a strong motion generation process on the source fault related to the heterogeneous rupture process on the scale of tens of kilometers. To resolve this, we assume that each isolated wave packet is contributed by a corresponding strong motion generation area (SMGA), a source patch whose slip velocity is larger than in the surrounding area (Miyake et al., 2003). That is, the source model of the 2011 Tohoku-Oki earthquake consists of four SMGAs. The SMGA source model has succeeded in reproducing broadband strong ground motions for past subduction-zone events (e.g., Suzuki and Iwata, 2007). The target frequency range is set to 0.1-10 Hz in this study, as this range is significantly related to seismic damage to typical man-made structures. First, we identified the rupture starting points of each SMGA by picking the onsets of the individual packets. The source fault plane is set following the GCMT solution. The first two SMGAs were located approximately 70 km and 30 km west of the hypocenter. The third and fourth SMGAs were located approximately 160 km and 230 km southwest of the hypocenter. 
Then, the model parameters (size, rise time, stress drop, rupture velocity, rupture propagation pattern) of these

  19. Insights in Low Frequency Earthquake Source Processes from Observations of Their Size-Duration Scaling

    NASA Astrophysics Data System (ADS)

    Farge, G.; Shapiro, N.; Frank, W.; Mercury, N.; Vilotte, J. P.

    2017-12-01

    Low frequency earthquakes (LFE) are detected in association with volcanic and tectonic tremor signals as impulsive, repeated, low frequency (1-5 Hz) events originating from localized sources. While the mechanism causing this depletion of the high frequency content of their signal is still unknown, this feature may indicate that the source processes at the origin of LFE differ from those of regular earthquakes. Tectonic LFE are often associated with slip instabilities in the brittle-ductile transition zones of active faults, and volcanic LFE with fluid transport in magmatic and hydrothermal systems. Key constraints on the LFE-generating physical mechanisms can be obtained by establishing scaling laws between their sizes and durations. We apply a simple spectral analysis method to the S-waveforms of each LFE to retrieve its seismic moment and corner frequency. The former characterizes the earthquake's size while the latter is inversely proportional to its duration. First, we analyze a selection of tectonic LFE from the Mexican "Sweet Spot" (Guerrero, Mexico). We find characteristic values of M ˜ 10^13 N·m (Mw ˜ 2.6) and fc ˜ 2 Hz. The moment-corner frequency distribution, compared to values reported in previous studies in tectonic contexts, is consistent with the scaling law suggested by Bostock et al. (2015): fc ˜ M^(-1/10). We then apply the same source-parameter determination method to deep volcanic LFE detected in the Klyuchevskoy volcanic group in Kamchatka, Russia. While the seismic moments for these earthquakes are slightly smaller, they still approximately follow the fc ˜ M^(-1/10) scaling. This size-duration scaling observed for LFE is very different from the one established for regular earthquakes (fc ˜ M^(-1/3)) and from the scaling more recently suggested by Ide et al. (2007) for the broad class of "slow earthquakes". The scaling observed for LFE suggests that they are generated by sources of nearly constant size with strongly varying intensities
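The contrast between the two scaling laws can be made concrete by anchoring both at the characteristic values quoted above (fc ~ 2 Hz at M ~ 10^13 N·m) and extrapolating across two decades of moment; this is an illustration, not a fit:

```python
# Compare fc ~ M^(-1/10) (LFE-like) with fc ~ M^(-1/3) (regular-earthquake-like),
# both pinned to the reference point fc = 2 Hz at M0 = 1e13 N·m from the abstract.
def corner_frequency(M0, exponent, fc_ref=2.0, M0_ref=1e13):
    return fc_ref * (M0 / M0_ref) ** exponent

for M0 in (1e12, 1e13, 1e14):
    lfe = corner_frequency(M0, -1 / 10)
    regular = corner_frequency(M0, -1 / 3)
    print(f"M0={M0:.0e}: LFE-like fc={lfe:.2f} Hz, regular-like fc={regular:.2f} Hz")
```

The weak M^(-1/10) dependence keeps the LFE corner frequency (hence duration) nearly constant across the moment range, while the M^(-1/3) law changes it by a factor of ~4.6 over the same range.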

  20. Large earthquake rupture process variations on the Middle America megathrust

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  1. Attenuation of Slab determined from T-wave generation by deep earthquakes

    NASA Astrophysics Data System (ADS)

    Huang, J.; Ni, S.

    2006-05-01

    T-waves are seismically generated acoustic waves that propagate over great distances in the ocean sound channel (SOFAR). Because of the high attenuation in both the upper mantle and the ocean crust, T-waves are rarely observed for earthquakes deeper than 80 km. However, some earthquakes deeper than 80 km do generate apparent T-waves if the subducted slab is continuous (Okal et al., 1997). We studied deep earthquakes in the Fiji/Tonga region, where the subducted lithosphere is old and thus has small attenuation. After analyzing 33 earthquakes with depths from 10 km to 650 km in Fiji/Tonga, we observed and modeled clear T-phases from these earthquakes at station RAR. We used the T-waves generated by deep earthquakes to compute the quality factor of the Fiji/Tonga slab. The method used in this study follows equation (1) of de Groot-Hedlin and Orcutt (2001) [1]: A = A0 / (1 + (Ω0/Ω)^2) × exp(−LΩ/Qv) × Ω^n, where A is the amplitude measured from the data, which depends on the earthquake, A0 is the source amplitude, Ω0 is a corner frequency related to the earthquake's half duration, L is the length of the ray path that the P or S wave travels in the slab, and v is the P-wave velocity. In this study, we fix n = 2, assuming that the T-wave scattering points in the Fiji/Tonga island arc have the same properties as the continental shelf. After computation and careful analysis, we determined the quality factor of the Fiji/Tonga slab to be around 1000. This result is consistent with results from traditional P- and S-wave data [Roth & Wiens, 1999][2]. Okal et al. (1997) pointed out that the slab beneath part of central South America is also continuous, by modeling apparent T-waves from the great 1994 Bolivian deep earthquake in relation to the channeling of S-wave energy propagating upward through the slab [3]. [1] C. D. de Groot-Hedlin and J. A. Orcutt, Excitation of T-phases by seafloor scattering, J. Acoust. Soc. Am., 109, 1944-1954, 2001. [2] Erich G. Roth and
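The amplitude model quoted in this record can be evaluated numerically; in this sketch only Q ~ 1000 and n = 2 come from the record, while the values of A0, Ω0, L, and v are placeholder assumptions:

```python
import math

# Evaluate A = A0 / (1 + (w0/w)^2) * exp(-L*w/(Q*v)) * w^n for a few frequencies.
# A0, w0 (corner frequency), L (slab path length) and v (P velocity) are
# illustrative placeholders, not values from the study.
def t_wave_amplitude(w, A0=1.0, w0=2 * math.pi * 1.0, L=600e3, Q=1000.0,
                     v=8000.0, n=2):
    return A0 / (1 + (w0 / w) ** 2) * math.exp(-L * w / (Q * v)) * w ** n

for f in (1.0, 3.0, 10.0):
    w = 2 * math.pi * f
    print(f"f={f} Hz: relative amplitude {t_wave_amplitude(w):.3g}")
```

The exponential term shows why Q matters: halving Q doubles the exponent of the attenuation factor along the same slab path.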

  2. Sibling earthquakes generated within a persistent rupture barrier on the Sunda megathrust under Simeulue Island

    NASA Astrophysics Data System (ADS)

    Morgan, Paul M.; Feng, Lujia; Meltzner, Aron J.; Lindsey, Eric O.; Tsang, Louisa L. H.; Hill, Emma M.

    2017-03-01

    A section of the Sunda megathrust underneath Simeulue is known to persistently halt rupture propagation of great earthquakes, including those in 2004 (Mw 9.2) and 2005 (Mw 8.6). Yet the same section generated large earthquakes in 2002 (Mw 7.3) and 2008 (Mw 7.4). To date, few studies have investigated the 2002 and 2008 events, and none have satisfactorily located or explained them. Using near-field InSAR, GPS, and coral geodetic data, we find that the slip distributions of the two events are not identical but do show a close resemblance and largely overlap. We thus consider these earthquakes "siblings" that were generated by an anomalous "parent" feature of the megathrust. We suggest that this parent feature is a locked asperity surrounded by the otherwise partially creeping Simeulue section, perhaps structurally controlled by a broad morphological high on the megathrust.

  3. Duration of Tsunami Generation Longer than Duration of Seismic Wave Generation in the 2011 Mw 9.0 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.

    2013-12-01

    We compare and evaluate the nature of tsunami generation and seismic wave generation in the occurrence of the 2011 Tohoku-Oki earthquake (hereafter referred to as TOH11), in terms of two types of moment rate functions inferred from finite source imaging of tsunami waveforms and seismic waveforms. Since the 1970's, the nature of "tsunami earthquakes" has been discussed in many studies (e.g. Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, TOH11 is one of the historical earthquakes that simultaneously generated large seismic waves and a large tsunami. Also, TOH11 is one of the earthquakes observed by both the seismic observation network and the tsunami observation network around the Japanese islands. Therefore, for the purpose of analyzing the nature of tsunami generation, we utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a; Fujihara et al., 2012b), we inverted tsunami waveforms at GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The "temporal" nature of our tsunami source model is generally consistent with the other tsunami source models (e.g., Satake et al., 2013). For seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations based on the teleseismic body-wave inversion scheme (Kikuchi and Kanamori, 2003). Also, for seismic waveform inversion considering the inhomogeneous internal structure, we inverted strong motion seismograms at K-NET and KiK-net stations, based on 3-D Green's functions (Fujihara et al., 2013a; Fujihara et al., 2013b). The gross "temporal" nature of our seismic source models is generally consistent with the other seismic source models (e.g., Yoshida et al

  4. Nonlinear ionospheric responses to large-amplitude infrasonic-acoustic waves generated by undersea earthquakes

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-02-01

    Numerical models of ionospheric coupling with the neutral atmosphere are used to investigate perturbations of plasma density, vertically integrated total electron content (TEC), neutral velocity, and neutral temperature associated with large-amplitude acoustic waves generated by the initial ocean surface displacements from strong undersea earthquakes. A simplified source model for the 2011 Tohoku earthquake is constructed from estimates of initial ocean surface responses to approximate the vertical motions over realistic spatial and temporal scales. Resulting TEC perturbations from modeling case studies appear consistent with observational data, reproducing pronounced TEC depletions which are shown to be a consequence of the impacts of nonlinear, dissipating acoustic waves. Thermospheric acoustic compressional velocities are ~±250-300 m/s, superposed with downward flows of similar amplitudes, and temperature perturbations are ~300 K, while the dominant wave periodicity in the thermosphere is ~3-4 min. Results capture acoustic wave processes including reflection, onset of resonance, and nonlinear steepening and dissipation, ultimately leading to the formation of ionospheric TEC depletions ("holes"), that are consistent with reported observations. Three additional simulations illustrate the dependence of atmospheric acoustic waves and the subsequent ionospheric responses on the surface displacement amplitude, which is varied from the Tohoku case study by factors of 1/100, 1/10, and 2. Collectively, results suggest that TEC depletions may only accompany the very-large-amplitude thermospheric acoustic waves necessary to induce a nonlinear response, here with saturated compressional velocities of ~200-250 m/s generated by sea surface displacements exceeding ~1 m occurring over a 3 min time period.

  5. Effect of Sediments on Rupture Dynamics of Shallow Subduction Zone Earthquakes and Tsunami Generation

    NASA Astrophysics Data System (ADS)

    Ma, S.

    2011-12-01

    Low-velocity fault zones have long been recognized for crustal earthquakes by using fault-zone trapped waves and geodetic observations on land. However, the most pronounced low-velocity fault zones are probably in subduction zones, where sediments on the seafloor are being continuously subducted. In this study I focus on shallow subduction zone earthquakes; these earthquakes pose a serious threat to human society through their ability to generate large tsunamis. Numerous observations indicate that these earthquakes have unusually long rupture durations, low rupture velocities, and/or small stress drops near the trench. However, the underlying physics is unclear. I will use dynamic rupture simulations with a finite-element method to investigate the dynamic stress evolution on faults induced by both sediments and the free surface, and its relation to rupture velocity and slip. I will also explore the effect of off-fault yielding of sediments on rupture characteristics and seafloor deformation. As shown in Ma and Beroza (2008), the more compliant hanging wall combined with the free surface greatly increases the strength drop and slip near the trench. Sediments in the subduction zone likely play a significant role in the rupture dynamics of shallow subduction zone earthquakes and tsunami generation.

  6. Numerical simulation of faulting in the Sunda Trench shows that seamounts may generate megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Jiao, L.; Chan, C. H.; Tapponnier, P.

    2017-12-01

    The role of seamounts in generating earthquakes has been debated: some studies suggest that seamounts could be truncated to generate megathrust events, while others indicate that the maximum size of megathrust earthquakes could be reduced because subducting seamounts lead to segmentation. The debate is highly relevant for the seamounts discovered along the Mentawai patch of the Sunda Trench, where previous studies have suggested that a megathrust earthquake will likely occur within decades. In order to model the dynamic behavior of the Mentawai patch, we simulated forearc faulting caused by seamount subduction using the Discrete Element Method. Our models show that rupture behavior in the subduction system is dominated by the stiffness of the overriding plate. When stiffness is low, a seamount can be a barrier to rupture propagation, resulting in several smaller (M≤8.0) events. If, however, stiffness is high, a seamount can cause a megathrust (M8-class) earthquake. In addition, we show that a splay fault in the subduction environment could only develop when a seamount is present, and a larger offset along the splay fault is expected when the stiffness of the overriding plate is higher. Our dynamic models are not only consistent with previous findings from seismic profiles and earthquake activity, but also better constrain the rupture behavior of the Mentawai patch, thus contributing to subsequent seismic hazard assessment.

  8. Differences in tsunami generation between the December 26, 2004 and March 28, 2005 Sumatra earthquakes

    Geist, E.L.; Bilek, S.L.; Arcas, D.; Titov, V.V.

    2006-01-01

    Source parameters affecting tsunami generation and propagation for the Mw > 9.0 December 26, 2004 and the Mw = 8.6 March 28, 2005 earthquakes are examined to explain the dramatic difference in tsunami observations. We evaluate both scalar measures (seismic moment, maximum slip, potential energy) and finite-source representations (distributed slip and far-field beaming from finite source dimensions) of tsunami generation potential. There exists significant variability in local tsunami runup with respect to the most readily available measure, seismic moment. The local tsunami intensity for the December 2004 earthquake is similar to other tsunamigenic earthquakes of comparable magnitude. In contrast, the March 2005 local tsunami was deficient relative to its earthquake magnitude. Tsunami potential energy calculations more accurately reflect the difference in tsunami severity, although these calculations are dependent on knowledge of the slip distribution and therefore difficult to implement in a real-time system. A significant factor affecting tsunami generation unaccounted for in these scalar measures is the location of regions of seafloor displacement relative to the overlying water depth. The deficiency of the March 2005 tsunami seems to be related to concentration of slip in the down-dip part of the rupture zone and the fact that a substantial portion of the vertical displacement field occurred in shallow water or on land. The comparison of the December 2004 and March 2005 Sumatra earthquakes presented in this study is analogous to previous studies comparing the 1952 and 2003 Tokachi-Oki earthquakes and tsunamis, in terms of the effect slip distribution has on local tsunamis. Results from these studies indicate the difficulty in rapidly assessing local tsunami runup from magnitude and epicentral location information alone.
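The tsunami potential energy measure evaluated above can be computed from a gridded initial sea-surface displacement field as E = (1/2) rho g ∫ eta² dA. A minimal sketch with a hypothetical uniform uplift patch (the displacement field and grid spacing are illustrative, not the Sumatra source models):

```python
import numpy as np

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def tsunami_potential_energy(eta, dx, dy):
    """Potential energy (J) of an initial sea-surface displacement field.

    E = 0.5 * rho * g * sum(eta^2) * dA, where eta is the vertical
    sea-surface displacement (m) on a regular grid of cell size dx*dy (m^2).
    """
    return 0.5 * RHO * G * np.sum(eta**2) * dx * dy

# Hypothetical example: a 100 km x 100 km patch of 1 m uplift on a 1 km grid
eta = np.ones((100, 100))  # 1 m uniform uplift
E = tsunami_potential_energy(eta, 1000.0, 1000.0)
```

For a fixed seismic moment, concentrating the same displacement into a smaller, higher-amplitude patch raises E (it scales with eta squared), which is one reason this measure tracks tsunami severity better than moment alone.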

  9. Identification of earthquakes that generate tsunamis in Java and Nusa Tenggara using rupture duration analysis

    SciT

    Pribadi, S., E-mail: sugengpribadimsc@gmail.com; Puspito, N. T.; Yudistira, T.

    Java and Nusa Tenggara lie along the tectonically active Sunda arc. This study discusses rupture duration as a manifestation of the tsunami-generating power of an earthquake. We use teleseismic (30° - 90°) body waves with high-frequency energy recorded by 206 broadband seismometers of the IRIS network, to which we applied a Butterworth high band-pass (1 - 2 Hz) filter. Arrival and travel times, starting from the P - PP phase window, were based on the Jeffreys-Bullen tables using the TauP program. The results show that the June 2, 1994 Banyuwangi and the July 17, 2006 Pangandaran earthquakes are identified as tsunami earthquakes, with long rupture durations (To > 100 s), medium magnitudes (7.6 < Mw < 7.9), and locations near the trench. The other events are four tsunamigenic earthquakes and three inland earthquakes with short rupture durations (starting from To > 50 s, depending on magnitude); those events are located far from the trench.
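The duration measurement described above can be sketched as follows, on synthetic data. The 90%-energy window used here as the duration estimate To is an illustrative choice, not necessarily the authors' exact criterion:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, fs, f1=1.0, f2=2.0, order=4):
    """Zero-phase Butterworth band-pass filter (1-2 Hz, as in the study)."""
    b, a = butter(order, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)

def rupture_duration(trace, fs, energy_frac=0.9):
    """Duration (s) containing the central `energy_frac` of the squared-
    amplitude energy -- a simple proxy for high-frequency duration To."""
    e = np.cumsum(trace**2)
    e /= e[-1]
    lo = (1.0 - energy_frac) / 2.0
    i0 = np.searchsorted(e, lo)
    i1 = np.searchsorted(e, 1.0 - lo)
    return (i1 - i0) / fs

# Synthetic record: a 120 s burst of band-limited energy in a 300 s trace
fs = 20.0
t = np.arange(0, 300.0, 1.0 / fs)
rng = np.random.default_rng(0)
sig = np.zeros_like(t)
mask = (t > 50) & (t < 170)
sig[mask] = rng.standard_normal(np.count_nonzero(mask))
filtered = bandpass(sig, fs)
To = rupture_duration(filtered, fs)  # roughly the 120 s burst length
```

A long To for a moderate Mw, combined with a near-trench location, is the signature the study uses to flag tsunami earthquakes.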

  10. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, and where and when it will strike, remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
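The Gutenberg-Richter law mentioned above, log10 N(≥M) = a − bM, is commonly fit with Aki's maximum-likelihood b-value estimator. A minimal sketch on a synthetic catalog (the completeness magnitude and bin width are illustrative, not taken from the Taiwan catalog):

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's binning correction.

    b = log10(e) / (mean(M) - (Mc - dm/2)) for events with M >= Mc,
    where dm is the magnitude bin width of the catalog.
    """
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic catalog drawn from a G-R distribution with b = 1.0 above Mc = 2.0
rng = np.random.default_rng(42)
beta = 1.0 * np.log(10.0)                   # b = 1 in natural-log form
mags = 1.95 + rng.exponential(1.0 / beta, size=50000)
mags = np.round(mags / 0.1) * 0.1           # bin to 0.1 magnitude units
b = b_value_mle(mags, mc=2.0)               # recovers b close to 1.0
```

Spatio-temporal changes in b (and in inter-event times), rather than its equilibrium value, are the kind of out-of-equilibrium signature the paper exploits.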

  11. Frictional melt generated by the 2008 Mw 7.9 Wenchuan earthquake and its faulting mechanisms

    NASA Astrophysics Data System (ADS)

    Wang, H.; Li, H.; Si, J.; Sun, Z.; Zhang, L.; He, X.

    2017-12-01

    Fault-related pseudotachylytes are considered fossil earthquakes, conveying information that provides improved insight into fault behavior and mechanical properties. The WFSD project was carried out right after the 2008 Wenchuan earthquake, and detailed research was conducted on the drilling cores. A 2 mm rigid black layer with fresh slickenlines was observed at 732.6 m in WFSD-1 cores drilled at the southern Yingxiu-Beichuan fault (YBF). Evidence from optical microscopy, FESEM and FIB-TEM shows it is frictional melt (pseudotachylyte). In the northern part of the YBF, 4 mm of fresh melt with similar structures was found at 1084 m in WFSD-4S cores. The melts contain numerous microcracks. Considering (1) the highly unstable nature of frictional melt (easily altered or devitrified) under geological conditions, (2) the unfilled microcracks, (3) the fresh slickenlines and (4) the recent large earthquake in this area, we believe the 2-4 mm melt was produced by the 2008 Wenchuan earthquake. This is the first report of fresh pseudotachylyte with slickenlines in a natural fault generated by a modern earthquake. Geochemical analyses show that fault rocks at 732.6 m are enriched in CaO, Fe2O3, FeO, H2O+ and LOI, and depleted in SiO2. XRF results show that Ca and Fe are obviously enriched in the 2.5 cm fine-grained fault rocks and Ba is enriched in the slip surface. The melt has a higher magnetic susceptibility value, which may be due to neoformed magnetite and metallic iron formed in the frictional melt. Frictional melt visible in both the southern and northern parts of the YBF reveals that melt lubrication played a major role in the Wenchuan earthquake. Instead of vesicles and microlites, the melt contains numerous randomly oriented microcracks, exhibiting a quenching texture. The quenching texture suggests the frictional melt was generated under rapid heat-dissipation conditions, implying vigorous fluid circulation during the earthquake. We surmise that during

  12. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    ERIC Educational Resources Information Center

    Haddad, David Elias

    2014-01-01

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that…

  13. The Physics of Earthquakes: In the Quest for a Unified Theory (or Model) That Quantitatively Describes the Entire Process of an Earthquake Rupture, From its Nucleation to the Dynamic Regime and to its Arrest

    NASA Astrophysics Data System (ADS)

    Ohnaka, M.

    2004-12-01

    For the past four decades, great progress has been made in understanding earthquake source processes. In particular, recent progress in the physics of earthquakes has contributed substantially to unraveling the earthquake generation process in quantitative terms. Yet a fundamental problem remains unresolved in this field. The constitutive law that governs the behavior of earthquake ruptures is the basis of earthquake physics, and this governing law plays a fundamental role in accounting for the entire process of an earthquake rupture, from its nucleation to its dynamic propagation to its arrest, quantitatively, in a unified and consistent manner. Without establishing a rational constitutive law, therefore, the physics of earthquakes cannot be a quantitative science in a true sense, and hence it is urgent to establish such a law. However, what the constitutive law for earthquake ruptures ought to be, and how it should be formulated, has been controversial over the past two decades and remains so. Resolving this controversy is a necessary step towards a more complete, unified theory of earthquake physics, and the time is now ripe to do so. Because of its fundamental importance, we have to discuss thoroughly and rigorously what the constitutive law ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid evidence. There are prerequisites for the constitutive formulation. The brittle, seismogenic layer and individual faults therein are characterized by inhomogeneity, and fault inhomogeneity has profound implications for earthquake ruptures. In addition, rupture phenomena including earthquakes are inherently scale dependent; indeed, some of the physical quantities inherent in rupture exhibit scale dependence. To treat scale-dependent physical quantities inherent in the rupture over a broad scale range quantitatively, in a unified and consistent manner, it is critical to

  14. Learning as a Generative Process

    ERIC Educational Resources Information Center

    Wittrock, M. C.

    2010-01-01

    A cognitive model of human learning with understanding is introduced. Empirical research supporting the model, which is called the generative model, is summarized. The model is used to suggest a way to integrate some of the research in cognitive development, human learning, human abilities, information processing, and aptitude-treatment…

  15. The Quanzhou large earthquake: environment impact and deep process

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Gao*, R.; Ye, Z.; Wang, C.

    2017-12-01

    The Quanzhou earthquake is the largest earthquake in the history of China's southeast coast. The ancient city of Quanzhou and its adjacent areas suffered serious damage. Analysis of the impact of the Quanzhou earthquake on human activities, ecological environment and social development will provide an example for research on environment-human interaction. According to historical records, on the night of December 29, 1604, a Ms 8.0 earthquake occurred in the sea area east of Quanzhou (25.0°N, 119.5°E) with a focal depth of 25 kilometers. Its effects extended as far as 220 kilometers from the epicenter and caused serious damage. Quanzhou, known as one of the world's largest trade ports during the Song and Yuan periods, was heavily damaged by this earthquake. The destruction of the ancient city was serious and widespread. The city walls collapsed in Putian, Nanan, Tongan and other places. The East and West Towers of Kaiyuan Temple, famous in history for their magnificent architecture, were seriously damaged. An enormous earthquake can thus exert devastating effects on human activities and social development. It is estimated that an earthquake greater than Ms 5.0 in the economically developed coastal areas of China can directly cause economic losses of more than one hundred million yuan. This devastating large earthquake that severely destroyed the city of Quanzhou was triggered under a tectonic-extensional circumstance. In this coastal area of Fujian Province, the crust gradually thins eastward from inland to coast (less than 29 km thick beneath the coast), the lithosphere is also rather thin (60-70 km), and the Poisson's ratio of the crust here appears relatively high. The historical Quanzhou earthquake was probably correlated with the NE-striking Littoral Fault Zone, which is characterized by right-lateral slip and exhibits the most active seismicity in the coastal area of Fujian. Meanwhile, tectonic

  16. Characteristics of strong ground motion generation areas by fully dynamic earthquake cycles

    NASA Astrophysics Data System (ADS)

    Galvez, P.; Somerville, P.; Ampuero, J. P.; Petukhin, A.; Yindi, L.

    2016-12-01

    During recent subduction zone earthquakes (2010 Mw 8.8 Maule and 2011 Mw 9.0 Tohoku), high-frequency ground motion radiation has been detected in deep regions of seismogenic zones. By semblance analysis of wave packets, Kurahashi & Irikura (2013) found strong ground motion generation areas (SMGAs) located in the down-dip region of the 2011 Tohoku rupture. To reproduce the rupture sequence of SMGAs and replicate their rupture times and ground motions, we extended previous work on dynamic rupture simulations with slip reactivation (Galvez et al., 2016). We adjusted stresses on the most southern SMGAs of the Kurahashi & Irikura (2013) model to reproduce the observed peak ground velocity recorded at seismic stations along Japan for periods up to 5 seconds. To generate higher frequency ground motions we input the rupture time, final slip and slip velocity of the dynamic model into the stochastic ground motion generator of Graves & Pitarka (2010). Our results are in agreement with the ground motions recorded at the KiK-net and K-NET stations. While we reproduced the recorded ground motions of the 2011 Tohoku event, it is unknown whether the characteristics and locations of SMGAs will persist in future large earthquakes in this region. Although the SMGAs have large peak slip velocities, the areas of largest final slip are located elsewhere. To elucidate whether this anti-correlation persists in time, we conducted earthquake cycle simulations and analysed the spatial correlation of peak slip velocities, stress drops and final slip of main events. We also investigated whether or not the SMGAs migrate to other regions of the seismic zone. To perform this study, we coupled the quasi-dynamic boundary element solver QDYN (Luo & Ampuero, 2015) and the dynamic spectral element solver SPECFEM3D (Galvez et al., 2014; 2016). The workflow alternates between inter-seismic periods solved with QDYN and coseismic periods solved with SPECFEM3D, with an automated switch based on slip rate

  17. Mapping the rupture process of moderate earthquakes by inverting accelerograms

    Hellweg, M.; Boatwright, J.

    1999-01-01

    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints. Copyright 1999 by the American Geophysical Union.
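Once candidate subevent rupture times are discretized, the Green's-function fitting described above reduces to linear least squares: the mainshock record is modeled as a sum of delayed, scaled copies of the small-event waveform. A toy single-station sketch (all waveforms, lags and amplitudes are synthetic, not the Parkfield data):

```python
import numpy as np

def build_design_matrix(green, n_samples, lags):
    """Each column is the Green's function delayed by one candidate lag."""
    A = np.zeros((n_samples, len(lags)))
    for j, lag in enumerate(lags):
        n = min(len(green), n_samples - lag)
        A[lag:lag + n, j] = green[:n]
    return A

# Synthetic Green's function (small-event waveform) and "mainshock" record
rng = np.random.default_rng(1)
green = np.exp(-np.arange(50) / 10.0) * np.sin(np.arange(50) * 0.6)
true_lags, true_amps = [10, 40, 90], [1.0, 2.5, 1.5]
n = 200
data = np.zeros(n)
for lag, amp in zip(true_lags, true_amps):
    data[lag:lag + 50] += amp * green
data += 0.01 * rng.standard_normal(n)   # small observational noise

# Least-squares subevent amplitudes on a grid of candidate rupture times
lags = list(range(0, 150, 10))
A = build_design_matrix(green, n, lags)
amps, *_ = np.linalg.lstsq(A, data, rcond=None)
# amps peaks near the true lags (indices 1, 4 and 9), other entries near 0
```

The actual method iterates this fit over many stations simultaneously and solves for subevent rupture times as well; the sketch keeps the times on a fixed grid so the problem stays linear.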

  18. Diverse rupture processes in the 2015 Peru deep earthquake doublet.

    PubMed

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo; Zhan, Zhongwen; Duputel, Zacharie

    2016-06-01

    Earthquakes in deeply subducted oceanic lithosphere can involve either brittle or dissipative ruptures. On 24 November 2015, two deep (606 and 622 km) magnitude 7.5 and 7.6 earthquakes occurred 316 s and 55 km apart. The first event (E1) was a brittle rupture with a sequence of comparable-size subevents extending unilaterally ~50 km southward with a rupture speed of ~4.5 km/s. This earthquake triggered several aftershocks to the north along with the other major event (E2), which had 40% larger seismic moment and the same duration (~20 s), but much smaller rupture area and lower rupture speed than E1, indicating a more dissipative rupture. A minor energy release ~12 s after E1 near the E2 hypocenter, possibly initiated by the S wave from E1, and a clear aftershock ~165 s after E1 also near the E2 hypocenter, suggest that E2 was likely dynamically triggered. Differences in deep earthquake rupture behavior are commonly attributed to variations in thermal state between subduction zones. However, the marked difference in rupture behavior of the nearby Peru doublet events suggests that local variations of stress state and material properties significantly contribute to diverse behavior of deep earthquakes.

  19. The Implications of Strike-Slip Earthquake Source Properties on the Transform Boundary Development Process

    NASA Astrophysics Data System (ADS)

    Neely, J. S.; Huang, Y.; Furlong, K.

    2017-12-01

    Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behavior. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80 - 100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on empirical Green's functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again towards the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process (see figure). The increasing complexity possibly stems from the earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes. When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in
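The eGf spectral ratio method used in this abstract divides the large event's spectrum by that of a co-located small event, cancelling path and site terms; corner frequencies can then be estimated by fitting an omega-square (Brune) source model to the ratio. A hedged sketch on a synthetic ratio (the moment ratio is assumed known from the low-frequency plateau, and all parameter values are hypothetical):

```python
import numpy as np

def brune_ratio(f, moment_ratio, fc_1, fc_2):
    """Spectral ratio of two omega-square (Brune) sources sharing the same
    path and site terms: R(f) = (M1/M2) * (1+(f/fc2)^2) / (1+(f/fc1)^2)."""
    return moment_ratio * (1 + (f / fc_2) ** 2) / (1 + (f / fc_1) ** 2)

# Synthetic observed ratio with known corner frequencies 0.5 Hz and 5 Hz
f = np.logspace(-1, 1.3, 200)          # 0.1 - 20 Hz
obs = brune_ratio(f, 100.0, 0.5, 5.0)

# Grid search for the two corner frequencies (moment ratio held fixed)
best = None
for fc1 in np.arange(0.1, 2.0, 0.01):       # corner freq of the larger event
    for fc2 in np.arange(2.0, 10.0, 0.1):   # corner freq of the eGf event
        pred = brune_ratio(f, 100.0, fc1, fc2)
        misfit = np.sum((np.log(obs) - np.log(pred)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, fc1, fc2)
_, fc_big, fc_small = best                  # recovers ~0.5 Hz and ~5 Hz
```

Stress drop then follows from the corner frequency and seismic moment through a circular-crack relation, which is how equal corner frequencies across a sequence constrain relative source dimensions.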

  20. Keeping focus on earthquakes at school for seismic risk mitigation of the next generations

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Barnaba, Carla; Peruzza, Laura

    2013-04-01

    Knowledge of the seismic history of one's own territory, understanding of the physical phenomena in response to an earthquake, the changes to cultural heritage following a strong earthquake, and learning the actions to be taken during and after an earthquake all contribute to keeping focus on seismic hazard and to implementing strategies for seismic risk mitigation. The training of new generations, today more than ever prone to rapidly forgetting past events, therefore becomes a key element in increasing the perception that earthquakes have happened and can happen at any time, and that mitigation actions are the only means to ensure safety and to reduce damage and human losses. For several years our institute (OGS) has been involved in activities to raise awareness of earthquake education. We aim to implement education programs with the goal of promoting a critical approach to seismic hazard reduction, differentiating the types of activities according to the age of the students. However, since this kind of activity is unfunded, we can at present act only on a very limited number of schools per year. To be effective, the inclusion of seismic risk issues in school curricula requires specific time and appropriate approaches when planning activities. For this reason, we also involve the teachers as proponents of activities, and we encourage them to keep memories and discussion of earthquakes alive in their classes. During the past years we acted mainly in the schools of the Friuli Venezia Giulia area (NE Italy), an earthquake-prone area struck in 1976 by a destructive seismic event (Ms=6.5). We organized short training courses for teachers, lectured to classes, and led laboratory activities with students.
Indeed, since it is well known that students enjoy classes more when visual and active learning are combined, we propose a program composed of seminars, demonstrations and hands-on activities in the classrooms; for high school students

  1. Rupture processes of the 2010 Canterbury earthquake and the 2011 Christchurch earthquake inferred from InSAR, strong motion and teleseismic datasets

    NASA Astrophysics Data System (ADS)

    Yun, S.; Koketsu, K.; Aoki, Y.

    2014-12-01

    The September 4, 2010, Canterbury earthquake, with a moment magnitude (Mw) of 7.1, was a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw=6.3) was the biggest aftershock of the 2010 Canterbury earthquake, located about 50 km east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, with a length of about 30 km, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong motion seismograms, also suggest a very complex rupture process. For these reasons it is useful to investigate the complex rupture process using multiple datasets with different sensitivities to the rupture process. While previously published source models are based on one or two datasets, here we infer the rupture process from three datasets: InSAR, strong-motion, and teleseismic data. We first performed point source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip. This reveals that the observed large vertical ground motion around the source region is due to the rupture process rather than the local subsurface structure.
To investigate the effects of the 3-D velocity structure on the characteristic strong motion seismograms of the two earthquakes, we plan to perform the inversion taking 3-D velocity

  2. The Cascadia Subduction Zone and related subduction systems: seismic structure, intraslab earthquakes and processes, and earthquake hazards

    Kirby, Stephen H.; Wang, Kelin; Dunlop, Susan

    2002-01-01

    The following report is the principal product of an international workshop titled "Intraslab Earthquakes in the Cascadia Subduction System: Science and Hazards," sponsored by the U.S. Geological Survey, the Geological Survey of Canada and the University of Victoria. The meeting was held at the University of Victoria's Dunsmuir Lodge, Vancouver Island, British Columbia, Canada on September 18-21, 2000 and brought together 46 participants from the U.S., Canada, Latin America and Japan. The gathering was organized to bring together active research investigators in the science of subduction and intraslab earthquake hazards. Special emphasis was given to "warm-slab" subduction systems, i.e., systems involving young oceanic lithosphere subducting at moderate to slow rates, such as the Cascadia system in the U.S. and Canada and the Nankai system in Japan. All the speakers and poster presenters provided abstracts of their presentations, which were made available in an abstract volume at the workshop. Most of the authors subsequently provided full articles or extended abstracts for this volume on the topics they discussed at the workshop. Where updated versions were not provided, the original workshop abstracts have been included. By organizing this workshop and assembling this volume, our aim is to provide a global perspective on the science of warm-slab subduction, to thereby advance our understanding of internal slab processes, and to use this understanding to improve appraisals of the hazards associated with large intraslab earthquakes in the Cascadia system. These events have been the most frequent and damaging earthquakes in western Washington State over the last century. As if to underscore this fact, just six months after the workshop was held, the magnitude 6.8 Nisqually earthquake occurred on February 28th, 2001 at a depth of about 55 km in the Juan de Fuca slab beneath the southern Puget Sound region of western Washington. The Governor

  3. Precise Relative Earthquake Depth Determination Using Array Processing Techniques

    NASA Astrophysics Data System (ADS)

    Florez, M. A.; Prieto, G. A.

    2014-12-01

    The mechanism for intermediate-depth and deep earthquakes is still under debate: the temperatures and pressures are above the point where ordinary fractures ought to occur. Key to constraining this mechanism is the precise determination of hypocentral depth. It is well known that using depth phases allows significant improvement in event depth determination; however, routinely and systematically picking such phases in teleseismic or regional arrivals is problematic due to poor signal-to-noise ratios around the pP and sP phases. To overcome this limitation we have taken advantage of the additional information carried by seismic arrays. We have used beamforming and velocity spectral analysis techniques to precisely measure pP-P and sP-P differential travel times. These techniques are further extended to achieve subsample accuracy and to allow for events where the signal-to-noise ratio is close to or even less than 1.0. The individual estimates obtained at different subarrays for a pair of earthquakes can be combined using a double-difference technique to precisely map seismicity in regions where it is tightly clustered. We illustrate these methods using data from the recent M 7.9 Alaska earthquake and its aftershocks, as well as data from the Bucaramanga nest in northern South America, arguably the densest and most active intermediate-depth earthquake nest in the world.
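The delay-and-sum beamforming used here to enhance weak depth phases can be illustrated on synthetic data: traces are aligned on their moveout delays and stacked, suppressing incoherent noise so the pP-P differential time can be read from the beam. In practice the moveouts come from a slowness grid search; in this sketch they are prescribed, and all numbers are hypothetical:

```python
import numpy as np

def delay_and_sum(traces, delays_samp):
    """Align each trace by its (integer-sample) moveout delay and stack."""
    beam = np.zeros(traces.shape[1])
    for tr, d in zip(traces, delays_samp):
        beam += np.roll(tr, -d)
    return beam / len(traces)

fs = 20.0
n = int(60 * fs)                        # 60 s records at 20 Hz
rng = np.random.default_rng(7)
wavelet = np.exp(-np.arange(20) / 5.0)  # simple decaying pulse

# Hypothetical 10-element array: P at 10 s, pP 12 s later, per-station moveout
p_onset = int(10.0 * fs)
pp_minus_p = int(12.0 * fs)             # true pP-P differential time: 12 s
moveouts = rng.integers(0, 10, size=10)
traces = []
for d in moveouts:
    tr = 0.3 * rng.standard_normal(n)   # noise comparable to the weak pP
    tr[p_onset + d:p_onset + d + 20] += wavelet
    tr[p_onset + d + pp_minus_p:p_onset + d + pp_minus_p + 20] += 0.7 * wavelet
    traces.append(tr)
beam = delay_and_sum(np.array(traces), moveouts)

# Read pP-P from the beam: largest envelope peaks in the P and pP windows
env = np.abs(beam)
i_p = np.argmax(env[:p_onset + 100])
i_pp = p_onset + 150 + np.argmax(env[p_onset + 150:])
measured = (i_pp - i_p) / fs            # close to the true 12 s
```

Stacking N traces reduces incoherent noise by roughly sqrt(N), which is what makes pP measurable even when single-trace signal-to-noise is near 1.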

  4. Application of GPS Technologies to study Pre-earthquake processes. A review and future prospects

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Liu, J. Y. G.; Ouzounov, D.; Hernandez-Pajares, M.; Hattori, K.; Krankowski, A.; Zakharenkova, I.; Cherniak, I.

    2016-12-01

    We present the progress achieved with GPS TEC technologies in the study of pre-seismic ionospheric anomalies appearing a few days before strong earthquakes. Starting from early case studies such as the 17 August 1999 M7.6 Izmit earthquake in Turkey, the technology has developed into global near real-time monitoring of seismo-ionospheric effects, which is now used in multiparameter nowcasts and forecasts of strong earthquakes. Techniques for identifying seismo-ionospheric anomalies were developed in parallel with a physical mechanism explaining the generation of these anomalies. It was established that seismo-ionospheric anomalies have a self-similarity property, depend on local time, and persist for at least 4 hours; the deviation from the undisturbed level can be either positive or negative, depending on the lead time (in days) to the impending earthquake and on the longitude of the anomaly relative to the epicenter. Low-latitude and near-equatorial earthquakes demonstrate a magnetically conjugated effect, while middle- and high-latitude earthquakes demonstrate a single anomaly over the earthquake preparation zone. From the anomaly morphology a physical mechanism was derived within the framework of the more complex Lithosphere-Atmosphere-Ionosphere-Magnetosphere Coupling concept. In addition to multifactor analysis of GPS TEC time series, the GIM MAP technology was also applied, clearly showing the locality of seismo-ionospheric anomalies and the correspondence of their spatial size to the Dobrovolsky estimate of the earthquake preparation zone radius. Application of ionospheric tomography techniques made it possible to study not only total electron content variations but also the modification of the vertical distribution of electron concentration in the ionosphere before earthquakes. The statistical check of the ionospheric precursors passed the
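    A common screening approach in the TEC-precursor literature (an illustrative sketch, not necessarily the authors' exact procedure) flags TEC values that fall outside interquartile bounds built from the preceding days at the same epoch:

```python
import numpy as np

def tec_anomaly_mask(tec, ndays=15, k=1.5):
    """tec: 2D array (days x epochs per day) of vertical TEC.
    For each day, the reference at each epoch is the median over the
    preceding `ndays` days; bounds are median +/- k * interquartile
    range. Returns a boolean mask of anomalous samples (the first
    `ndays` days lack a reference and stay False)."""
    mask = np.zeros(tec.shape, dtype=bool)
    for d in range(ndays, tec.shape[0]):
        ref = tec[d - ndays:d]
        med = np.median(ref, axis=0)
        q1, q3 = np.percentile(ref, [25, 75], axis=0)
        bound = k * (q3 - q1)
        mask[d] = (tec[d] > med + bound) | (tec[d] < med - bound)
    return mask

# Synthetic check: flat background with a single positive TEC enhancement
tec = np.ones((20, 24))
tec[18, 5] = 5.0
mask = tec_anomaly_mask(tec)   # only the enhanced sample is flagged
```

    Comparing anomalies at the same epoch rather than across epochs accounts for the strong local-time dependence of TEC noted in the abstract.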

  5. Possible Mechanisms for Generation of Anomalously High PGA During the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Pavlenko, O. V.

    2017-08-01

    Mechanisms are suggested that could explain anomalously high PGAs (peak ground accelerations) exceeding 1 g recorded during the 2011 Tohoku earthquake (Mw = 9.0). In my previous research, I studied soil behavior during the Tohoku earthquake based on KiK-net vertical array records and revealed its 'atypical' pattern: instead of being reduced in the near-source zones as usually observed during strong earthquakes, shear moduli in soil layers increased, indicating soil hardening, and reached their maxima at the moments of the highest intensity of strong motion, then reduced. We could explain this by assuming that the soils experienced some additional compression. The observed changes in the shapes of acceleration time histories with distance from the source, such as a decrease of the duration and an increase of the intensity of strong motion, indicate phenomena similar to overlapping of seismic waves and shock wave generation, which led to the compression of soils. The phenomena reach their maximum in the vicinity of stations FKSH10, TCGH16, and IBRH11, where the highest PGAs were recorded; at larger epicentral distances, PGAs sharply fall. Thus, the occurrence of anomalously high PGAs on the surface can result from the combination of the overlapping of seismic waves at the bottoms of soil layers and their increased amplification by the pre-compressed soils.

  6. Understanding continental megathrust earthquake potential through geological mountain building processes: an example in Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Zhang, Zhen; Wang, Liangshu; Leroy, Yves; shi, Yaolin

    2017-04-01

    How to reconcile the characteristics of continental megathrust earthquakes with geological mountain building processes, for instance by mapping large-great earthquake sequences onto the orogenic record, or by partitioning seismic and aseismic slip, is a fundamental and unresolved question. Here, we address these issues for a typical continental collisional belt, the great Nepal Himalaya. We first show that refined Nepal Himalaya thrusting sequences, with the scale of the large-earthquake cycle accurately defined, provide new geodynamical hints on long-term earthquake potential, in association with either the seismic-aseismic slip partition, up to an interpretation of the binary interseismic coupling pattern on the Main Himalayan Thrust (MHT), or the large-great earthquake classification via seismic cycle patterns on the MHT. Subsequently, sequential limit analysis is adopted to retrieve the detailed thrusting sequences of the Nepal Himalaya mountain wedge. Our model results exhibit an apparent thrusting concentration phenomenon, with four thrusting clusters, termed thrusting 'families', that facilitate the development of distinct sub-structural regions. Within the hinterland thrusting family, the total aseismic shortening and the corresponding spatio-temporal release pattern are revealed by mapping projection. In the other three families, mapping projection yields long-term large (M<8) to great (M>8) earthquake recurrence information, including total lifespans, frequencies, and large-great earthquake alternation, by identifying rupture distances along the MHT. This partition appears to be universal in continental-continental collisional orogenic belts with an identified interseismic coupling pattern, but is not applicable in the continental-oceanic megathrust context.

  7. Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view

    NASA Astrophysics Data System (ADS)

    Locati, Mario; Rovida, Andrea; Albini, Paola

    2014-05-01

    Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to perform their daily work using general-purpose tools and/or by coding their own to address specific tasks. The lack of out-of-the-box tools expressly conceived for historical data leads to a huge amount of time lost in tedious tasks: searching for the data and manually reformatting it to jump from one tool to another, sometimes causing a loss of the original data. This reality is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.

  8. Acceleration and volumetric strain generated by the Parkfield 2004 earthquake on the GEOS strong-motion array near Parkfield, California

    Borcherdt, Rodger D.; Johnston, Malcolm J.S.; Dietel, Christopher; Glassmoyer, Gary; Myren, Doug; Stephens, Christopher

    2004-01-01

    An integrated array of 11 General Earthquake Observation System (GEOS) stations installed near Parkfield, CA provided on-scale, broad-band, wide-dynamic-range measurements of acceleration and volumetric strain of the Parkfield earthquake (M 6.0) of September 28, 2004. Three-component measurements of acceleration were obtained at each of the stations. Measurements of collocated acceleration and volumetric strain were obtained at four of the stations. Measurements of velocity at most sites were on scale only for the initial P-wave arrival. When considered in the context of the extensive set of strong-motion recordings obtained on more than 40 analog stations by the California Strong-Motion Instrumentation Program (Shakal et al., 2004, http://www.quake.ca.gov/cisn-edc) and those on the dense array of Spudich et al. (1988), these recordings provide an unprecedented document of the nature of the near-source strong motion generated by a M 6.0 earthquake. The data set reported herein provides the most extensive set of near-field, broad-band, wide-dynamic-range measurements of acceleration and volumetric strain for an earthquake as large as M 6 of which the authors are aware. As a result, considerable interest has been expressed in these data. This report is intended to describe the data and facilitate its use to resolve a number of scientific and engineering questions concerning earthquake rupture processes and resultant near-field motions and strains. This report provides a description of the array, its scientific objectives and the strong-motion recordings obtained of the main shock. The report provides copies of the uncorrected and corrected data. Copies of the inferred velocities, displacements, and pseudo-velocity response spectra are provided.
Digital versions of these recordings are accessible with information available through the internet at several locations: the National Strong-Motion Program web site (http://agram.wr.usgs.gov/), the COSMOS Virtual Data Center Web site

  9. Real-time GPS integration for prototype earthquake early warning and near-field imaging of the earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.

    2011-12-01

    Over the past several years, USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components, and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high-quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data. Data from these same stations will also serve the very

  10. Earthquake rupture process recreated from a natural fault surface

    Parsons, Thomas E.; Minasian, Diane L.

    2015-01-01

    What exactly happens on the rupture surface as an earthquake nucleates, spreads, and stops? We cannot observe this directly, and models depend on assumptions about physical conditions and geometry at depth. We thus measure a natural fault surface and use its 3D coordinates to construct a replica at 0.1 m resolution to obviate geometry uncertainty. We can recreate stick-slip behavior on the resulting finite element model that depends solely on observed fault geometry. We clamp the fault together and apply steady state tectonic stress until seismic slip initiates and terminates. Our recreated M~1 earthquake initiates at contact points where there are steep surface gradients because infinitesimal lateral displacements reduce clamping stress most efficiently there. Unclamping enables accelerating slip to spread across the surface, but the fault soon jams up because its uneven, anisotropic shape begins to juxtapose new high-relief sticking points. These contacts would ultimately need to be sheared off or strongly deformed before another similar earthquake could occur. Our model shows that an important role is played by fault-wall geometry, though we do not include effects of varying fluid pressure or exotic rheologies on the fault surfaces. We extrapolate our results to large fault systems using observed self-similarity properties, and suggest that larger ruptures might begin and end in a similar way, though the scale of geometrical variation in fault shape that can arrest a rupture necessarily scales with magnitude. In other words, fault segmentation may be a magnitude dependent phenomenon and could vary with each subsequent rupture.

  11. Thermodynamic method for generating random stress distributions on an earthquake fault

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
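    The spectral-synthesis idea underlying such a generator can be sketched as follows (an illustrative reconstruction, not the authors' algorithm; the power-law exponent here is an assumption): filter white Gaussian noise in the wavenumber domain so that the field's power spectral density follows a prescribed power law:

```python
import numpy as np

def random_stress(n, beta=3.0, seed=0):
    """n x n random stress perturbation whose power spectral density
    falls off as |k|**-beta: filter white Gaussian noise in the
    wavenumber domain with amplitude |k|**(-beta/2), zero the mean,
    and normalize to unit RMS."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                       # suppresses the k = 0 (mean) term
    amp = k ** (-beta / 2.0)
    field = np.fft.ifft2(np.fft.fft2(noise) * amp).real
    return field / field.std()

stress = random_stress(128)   # zero-mean, unit-RMS, red-spectrum field
```

    The report's technique of generating a field much larger than the fault and selecting a favorable sub-region would then operate on a large `n` with a window extracted from `stress`.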

  12. Earthquake Forecasting Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B. M.; Nava Pichardo, F. A.; Glowacka, E.; Gomez-Trevino, E.

    2015-12-01

    Large earthquakes show semi-periodic behavior as a result of self-organized critical processes of stress accumulation and release in a seismogenic region. Thus, large earthquakes in a region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. Nava et al. (2013) and Quinteros et al. (2013) realized that not all earthquakes in a given region need to belong to the same sequence, since there can be more than one process of stress accumulation and release in it; they also proposed a method to identify semi-periodic sequences through analytic Fourier analysis. This work presents improvements on the above-mentioned method: the influence of earthquake size on the spectral analysis and its importance in identifying semi-periodic events, which means that earthquake occurrence times are treated as a labeled point process; the estimation of appropriate upper-limit uncertainties to use in forecasts; and the use of Bayesian analysis to evaluate forecast performance. The improved method is applied to specific regions: the southwestern coast of Mexico, the northeastern Japan Arc, the San Andreas Fault zone at Parkfield, and northeastern Venezuela.
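    The idea of treating occurrence times as a labeled point process can be sketched with a magnitude-weighted Schuster-type spectrum (a minimal illustration; the moment-proxy weighting is an assumption, not the authors' exact labeling):

```python
import numpy as np

def labeled_spectrum(times, mags, periods):
    """Normalized amplitude of a magnitude-weighted Schuster-type
    spectrum: for each trial period T, sum unit phasors exp(2*pi*i*t/T)
    weighted by a seismic-moment proxy 10**(1.5*M). An amplitude near 1
    marks a candidate semi-periodic sequence."""
    w = 10.0 ** (1.5 * np.asarray(mags, dtype=float))
    w /= w.sum()
    t = np.asarray(times, dtype=float)
    return np.array([abs(np.sum(w * np.exp(2j * np.pi * t / T)))
                     for T in periods])

# Synthetic check: ten events exactly 7 years apart peak at T = 7
times = 7.0 * np.arange(10)
mags = np.full(10, 6.0)
amps = labeled_spectrum(times, mags, [5.0, 6.0, 7.0, 8.0])   # peak at index 2
```

    Weighting by event size keeps a few large events from being drowned out by many small ones when searching for semi-periodic sequences.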

  13. The 40th anniversary of the 1976 Friuli earthquake: a look back to empower the next generation for seismic risk reduction

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Barnaba, Carla; Peruzza, Laura

    2016-04-01

    On 6 May 1976 an Ms=6.5 earthquake struck the Friuli area (NE Italy), causing about 1,000 casualties and widespread destruction. This event is the largest so far recorded in Northern Italy. After 40 years, the memory of the devastating earthquake remains in the urbanization and in the people who lived through that dreadful experience. However, the memories tend to vanish as the quake survivors pass away, and the celebration of the anniversary becomes a good opportunity to refresh the earthquake's history and the awareness of living in an earthquake-prone area. As seismologists, we believe that seismic risk reduction starts with the education of the next generation. For this reason, we decided to celebrate the 40th anniversary by planning a special educational campaign, mainly devoted to schools and young people, which will also give us the opportunity to check and, if necessary, to raise the level of seismic awareness of the local communities. The activities started in May 2015, with labs and lessons held in some schools, and the creation of a blog (https://versoi40anni.wordpress.com) to collect news, photos, videos and all the materials related to the campaign. From February to May 2016, one day per week, we will open our seismological lab to school visits, so that students can meet the seismologists, and we will cooperate with local science museums to enlarge the training offer on earthquake topics. Continuing the efforts of our previous educational projects, the students of a school located in Gemona del Friuli, one of the small towns destroyed by the 1976 earthquake, will be deeply involved in experimental activities, like seismic noise measurements for microzonation studies, so as to be an active part of the seismic mitigation process. This and other activities developed for the celebration of the 40th anniversary of the Friuli earthquake will be illustrated in this presentation.

  14. Southern California Earthquake Center/Undergraduate Studies in Earthquake Information Technology (SCEC/UseIT): Towards the Next Generation of Internship

    NASA Astrophysics Data System (ADS)

    Perry, S.; Benthien, M.; Jordan, T. H.

    2005-12-01

    The SCEC/UseIT internship program is training the next generation of earthquake scientists, with methods that can be adapted to other disciplines. UseIT interns work collaboratively, in multi-disciplinary teams, conducting computer science research that is needed by earthquake scientists. Since 2002, the UseIT program has welcomed 64 students, in some two dozen majors, at all class levels, from schools around the nation. Each summer's work is posed as a "Grand Challenge." The students then organize themselves into project teams, decide how to proceed, and pool their diverse talents and backgrounds. They have traditional mentors, who provide advice and encouragement, but they also mentor one another, and this has proved to be a powerful relationship. Most begin with fear that their Grand Challenge is impossible, and end with excitement and pride about what they have accomplished. The 22 UseIT interns in summer 2005 were primarily computer science and engineering majors, with others in geology, mathematics, English, digital media design, physics, history, and cinema. The 2005 Grand Challenge was to "build an earthquake monitoring system" to aid scientists who must visualize rapidly evolving earthquake sequences and convey information to emergency personnel and the public. Most UseIT interns were engaged in software engineering, bringing new datasets and functionality to SCEC-VDO (Virtual Display of Objects), a 3D visualization software prototyped by interns the previous year, using Java3D and an extensible, plug-in architecture based on the Eclipse Integrated Development Environment. Other UseIT interns used SCEC-VDO to make animated movies, and experimented with imagery in order to communicate concepts and events in earthquake science. One movie-making project included the creation of an assessment to test the effectiveness of the movie's educational message. Finally, one intern created an interactive, multimedia presentation of the UseIT program.

  15. Analogue modelling of the rupture process of vulnerable stalagmites in an earthquake simulator

    NASA Astrophysics Data System (ADS)

    Gribovszki, Katalin; Bokelmann, Götz; Kovács, Károly; Hegymegi, Erika; Esterhazy, Sofi; Mónus, Péter

    2017-04-01

    Earthquakes hit urban centers in Europe infrequently, but occasionally with disastrous effects. Obtaining an unbiased view of seismic hazard is therefore very important. In principle, the best way to test Probabilistic Seismic Hazard Assessments (PSHA) is to compare them with observations that are entirely independent of the procedure used to produce the PSHA models. Arguably, the most valuable information in this context is information on long-term hazard, namely maximum intensities (or magnitudes) occurring over time intervals that are at least as long as a seismic cycle. Long-term information can in principle be gained from intact but vulnerable stalagmites in natural caves. These formations have survived all earthquakes that occurred over thousands of years, depending on the age of the stalagmite. Their "survival" requires that the horizontal ground acceleration has never exceeded a certain critical value within that time period. To determine this critical value of horizontal ground acceleration more precisely, we need to understand the failure process of these intact, vulnerable stalagmites. More detailed information on stalagmite rupture is required, and we have to know how much it depends on the shape and the substance of the investigated stalagmite. Predicting stalagmite failure limits using numerical modelling involves a number of approximations, e.g. in generating a manageable digital model, so it seemed reasonable to investigate the problem by analogue modelling as well. Among other advantages, analogue modelling can reproduce nearly realistic conditions with simple and quick laboratory methods. The model sample bodies were made from different types of concrete or were cut from real broken stalagmites originating from the investigated caves. These bodies were reduced-scale replicas with shapes similar to the original, investigated stalagmites. During the measurements we could change both the shape and
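    A common first-order estimate behind such failure limits (an illustrative textbook-style cantilever calculation, not the authors' model) treats the stalagmite as a uniform elastic cylindrical cantilever that fails when lateral inertial loading brings the bending stress at its base to the tensile strength:

```python
def critical_acceleration(sigma_t, rho, height, diameter):
    """Critical horizontal acceleration (m/s^2) for an idealized uniform
    cylindrical stalagmite treated as an elastic cantilever.
    A uniform lateral inertial load rho*A*a gives a base bending moment
    M = rho*A*a*H**2/2; with cross-section A = pi*D**2/4 and section
    modulus W = pi*D**3/32, the base stress is sigma = M/W =
    4*rho*a*H**2/D, so failure (sigma = sigma_t) occurs at
    a_crit = sigma_t*D/(4*rho*H**2)."""
    return sigma_t * diameter / (4.0 * rho * height ** 2)

# Example: tensile strength 1 MPa, density 2700 kg/m^3, H = 1 m, D = 5 cm
a_crit = critical_acceleration(1.0e6, 2700.0, 1.0, 0.05)   # ~4.6 m/s^2
```

    The strong H**2 dependence is why tall, slender stalagmites are the most informative: they constrain the lowest ground accelerations.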

  16. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compress sensing methods

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th, 2013 a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia; it is a rare mega-earthquake to have occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega-earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method, which more accurately locates subevents (energy bursts) during earthquake rupture and determines rupture speeds. The total rupture duration of this earthquake is about 35 s with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega-earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and the rupture process. The results from both methods are generally similar. In the next step, we will use data from dense arrays in southwest China as well as global stations for further analysis, in order to study the rupture process of this deep mega-earthquake more comprehensively. Reference [1] Yao H, Shearer P M, Gerstoft P.
Subevent location and rupture imaging using iterative backprojection for
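    The core stacking step shared by backprojection methods can be sketched as follows (a toy time-domain illustration with made-up travel times, not the IBP algorithm itself): shift each station trace by the predicted travel time from a candidate source node and measure the stack power.

```python
import numpy as np

def backproject_power(traces, dt, tt):
    """Stack power for each candidate source node: shift each station
    trace by its predicted travel time tt[node, station] and stack.
    traces: (nsta, nsamp); tt in seconds; returns (nnode,) peak power."""
    nsta, nsamp = traces.shape
    power = np.zeros(tt.shape[0])
    for i, row in enumerate(tt):
        shifts = np.round(row / dt).astype(int)
        n = nsamp - shifts.max()
        stack = np.zeros(n)
        for s, tr in zip(shifts, traces):
            stack += tr[s:s + n]
        power[i] = np.max(stack ** 2)
    return power

# Two candidate nodes, three stations; a pulse radiated from node 0
dt = 0.1
tt = np.array([[1.0, 2.0, 1.5],
               [2.0, 1.0, 2.5]])
traces = np.zeros((3, 100))
for s in range(3):
    traces[s, int((2.0 + tt[0, s]) / dt)] = 1.0   # origin time 2.0 s
power = backproject_power(traces, dt, tt)          # power[0] > power[1]
```

    Tracking which node maximizes the stack power in successive time windows yields the subevent locations and, from their spacing in time and distance, the rupture speed.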

  17. Temporal and spatial heterogeneity of rupture process application in shakemaps of Yushu Ms7.1 earthquake, China

    NASA Astrophysics Data System (ADS)

    Kun, C.

    2015-12-01

    Studies have shown that estimates of ground-motion parameters from ground-motion attenuation relationships are often greater than the observed values, mainly because the multiple ruptures of a big earthquake reduce the pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to introduce constraints from the source to improve the accuracy of shakemaps. The causative fault of the Yushu Ms 7.1 earthquake is nearly vertical (dip 83°), and the source process was distinctly dispersed in time and space. The main shock of the Yushu Ms 7.1 earthquake can be divided into several sub-events based on its source process. The magnitude of each sub-event is derived from the area under its pulse in the source time function, and its location from the source process. We use the ShakeMap method, taking site effects into account, to generate a ShakeMap for each sub-event separately. Finally, ShakeMaps of the mainshock can be acquired by superposing the shakemaps of all the sub-events in space. ShakeMaps based on the surface rupture of the causative fault from field surveys can also be derived for the mainshock with a single magnitude. We compare the ShakeMaps of both methods with the investigated intensity. Comparisons show that the decomposition method of the main shock more accurately reflects the shaking of the earthquake in the near field, but in the far field, where the shaking is controlled by the weakening influence of the source, the estimated intensity VI area was smaller than that of the actual investigation. Seismic intensity in the far field may be related to the increased shaking duration of the two events. In general, the decomposition method of the main shock based on the source process, considering the shakemap of each sub-event, is feasible for disaster emergency response, decision-making and rapid disaster assessment after an earthquake.
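    The spatial superposition step can be sketched as follows (a toy illustration with a hypothetical attenuation function and made-up sub-event parameters, not the paper's relations): compute a shaking grid per sub-event and keep, at each node, the strongest value.

```python
import numpy as np

# Hypothetical sub-events from a decomposed source time function:
# (x_km, y_km, magnitude) - made-up numbers for illustration
subevents = [(0.0, 0.0, 6.8), (30.0, 5.0, 6.5), (55.0, 8.0, 6.3)]

x, y = np.meshgrid(np.linspace(-20, 80, 101), np.linspace(-40, 40, 81))

def toy_pga(x0, y0, mag):
    """Toy attenuation: amplitude grows with magnitude, decays with
    distance. Purely illustrative, not a calibrated GMPE."""
    r = np.hypot(x - x0, y - y0)
    return 10.0 ** (0.5 * mag) / (r + 10.0)

# Superposition: each grid node keeps the strongest sub-event shaking
shakemap = np.maximum.reduce([toy_pga(*s) for s in subevents])
```

    Taking the envelope over sub-events is one simple combination rule; it preserves the elongated near-field pattern produced by a spatially dispersed source, unlike a single point-source map.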

  18. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming and conventional beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use a Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding window. Based on our tests, we find that CS, which is built on the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images.
Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
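    The conventional (Bartlett) beamforming baseline among the compared methods can be sketched as follows (an illustrative synthetic example, not the authors' code): steer the array cross-spectral matrix over a slowness grid and pick the power maximum.

```python
import numpy as np

def beam_power(R, pos, f, sx, sy):
    """Conventional (Bartlett) beam power at frequency f (Hz) for a trial
    horizontal slowness (sx, sy) in s/km. R is the array cross-spectral
    matrix; pos holds station coordinates (n, 2) in km."""
    a = np.exp(-2j * np.pi * f * (pos @ np.array([sx, sy])))
    a /= np.sqrt(len(a))
    return float((a.conj() @ R @ a).real)

# Synthetic plane wave crossing an 8-station array at (0.05, 0.0) s/km
rng = np.random.default_rng(1)
pos = rng.uniform(-50.0, 50.0, (8, 2))
f = 0.5
d = np.exp(-2j * np.pi * f * (pos @ np.array([0.05, 0.0])))
R = np.outer(d, d.conj())            # rank-1 cross-spectral matrix

grid = np.linspace(-0.1, 0.1, 41)
P = np.array([[beam_power(R, pos, f, sx, sy) for sx in grid] for sy in grid])
iy, ix = np.unravel_index(P.argmax(), P.shape)
# (grid[ix], grid[iy]) recovers approximately (0.05, 0.0)
```

    MVDR and MUSIC replace the quadratic form a^H R a with a^H R^-1 a (inverted) or a projection onto the noise subspace of R, which is what sharpens their resolution relative to this baseline.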

  19. Tsunami waves generated by dynamically triggered aftershocks of the 2010 Haiti earthquake

    NASA Astrophysics Data System (ADS)

    Ten Brink, U. S.; Wei, Y.; Fan, W.; Miller, N. C.; Granja, J. L.

    2017-12-01

    Dynamically triggered aftershocks, thought to be set off by the passage of surface waves, are currently not considered in tsunami warnings, yet they may produce enough seafloor deformation to generate tsunamis on their own, judging from new findings about the January 12, 2010 Haiti earthquake tsunami in the Caribbean Sea. This tsunami followed the Mw 7.0 Haiti mainshock, which resulted from a complex rupture along the north shore of the Tiburon Peninsula, not beneath the Caribbean Sea. The mainshock, moreover, had a mixed strike-slip and thrust focal mechanism. There were no recorded aftershocks in the Caribbean Sea, only small coastal landslides and rock falls on the south shore of the Tiburon Peninsula. Nevertheless, a tsunami was recorded on deep-sea DART buoy 42407 south of the Dominican Republic and on the Santo Domingo tide gauge, and run-ups of ≤3 m were observed along a 90-km-long stretch of the SE Haiti coast. Three dynamically triggered aftershocks south of Haiti have recently been identified within the coda of the mainshock (<200 s) by analyzing P-wave arrivals recorded by dense seismic arrays, parsing the arrivals into 20-s-long stacks, and back-projecting the arrivals to the vicinity of the main shock (50-300 km). Two of the aftershocks, occurring 20-40 s and 40-60 s after the mainshock, plot along NW-SE-trending submarine ridges in the Caribbean Sea south of Haiti. The third event, at 120-140 s, was located along the steep eastern slope of the Bahoruco Peninsula, which is delineated by a normal fault. Forward tsunami models show that the DART buoy and tide gauge arrival times are best fit by the earliest of the three aftershocks, with a Caribbean source 60 km SW of the mainshock rupture zone. Preliminary inversion of the DART buoy time series for fault locations and orientations confirms the location of the first source, but requires an additional unidentified source closer to shore, 40 km SW of the mainshock rupture zone. This overall agreement between

  20. Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Hansen, R. A.

    2007-05-01

    The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, with the largest event of M6.6 in the Andreanof Islands. The real-time earthquake detection and data processing systems at AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform provides an integrated system, from data acquisition through catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit the particular needs of the AEIC. Real-time earthquake locations and magnitudes are determined within 2-5 minutes of event occurrence. AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of occurrence, and information releases are issued. This information is disseminated immediately via the AEIC website, the ANSS website (via QDDS submissions), e-mail, cell phone and pager notifications, fax broadcasts and recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0 and posted on the public website. ShakeMaps are calculated in real time, with the information currently accessible via a password-protected website. AEIC is designing an alarm system targeted at critical lifeline operations in Alaska. AEIC maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, AEIC operates two identical, interoperable computer systems in parallel.

  1. Slow Unlocking Processes Preceding the 2015 Mw 8.4 Illapel, Chile, Earthquake

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Meng, Lingsen

    2018-05-01

    On 16 September 2015, the Mw 8.4 Illapel earthquake occurred in central Chile with no intense foreshock sequences documented in the regional earthquake catalog. Here we employ the matched-filter technique based on an enhanced template data set of previously catalogued events. We perform a continuous search over a 4-year period before the Illapel mainshock to recover uncatalogued small events and repeating earthquakes. Repeating earthquakes are found both to the north and south of the mainshock rupture zone. To the south of the rupture zone, the seismicity and repeater-inferred aseismic slip progressively accelerate around the Illapel epicenter starting from 140 days before the mainshock. This may indicate an unlocking process involving the interplay of seismic and aseismic slip. The acceleration culminates in an M 5.3 event with a low-angle thrust mechanism, which occurred 36 days before the Mw 8.4 mainshock and is followed by a relative quiescence in seismicity until the mainshock occurred. This quiescence might correspond to an intermediate period of stable slip before rupture initiation. In addition, to the north of the mainshock rupture area, the last aseismic-slip episode occurs within 175-95 days before the mainshock and accumulates the largest amount of slip in the observation period. The simultaneous occurrence of aseismic-slip transients over a large area is consistent with large-scale slow unlocking processes preceding the Illapel mainshock.
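The matched-filter technique used here can be illustrated by its core operation: sliding a template event along the continuous record and flagging windows whose normalized cross-correlation exceeds a threshold. This is a minimal single-channel sketch under assumed names; production detectors scan many stations and channels and stack the correlation traces.

```python
import numpy as np

def matched_filter(trace, template, threshold=0.8):
    """Slide a normalized cross-correlation template along a continuous
    trace; samples where the correlation coefficient exceeds `threshold`
    are candidate detections of uncatalogued events."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.zeros(len(trace) - m + 1)
    for i in range(len(cc)):
        w = trace[i:i + m]
        s = w.std()
        if s > 0:  # skip flat (zero-variance) windows
            cc[i] = np.dot(t, (w - w.mean()) / s) / m
    return np.flatnonzero(cc > threshold), cc
```

Detections found this way inherit the template's location, which is how repeating earthquakes (near-identical waveforms at the same asperity) are then identified.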

  2. Contribution of Satellite Gravimetry to Understanding Seismic Source Processes of the 2011 Tohoku-Oki Earthquake

    NASA Technical Reports Server (NTRS)

    Han, Shin-Chan; Sauber, Jeanne; Riva, Riccardo

    2011-01-01

    The 2011 great Tohoku-Oki earthquake, apart from shaking the ground, perturbed the motions of satellites such as GRACE, orbiting some hundreds of kilometers above the ground, owing to the coseismic change in the gravity field. Significant changes in inter-satellite distance were observed after the earthquake. These unconventional satellite measurements were inverted to examine the earthquake source processes from a radically different perspective that complements analyses of seismic and geodetic ground recordings. We found the average slip located up-dip of the hypocenter but within the lower crust, as characterized by a limited range of bulk and shear moduli. The GRACE data constrained a group of earthquake source parameters that yield increasing dip (7-16 degrees plus or minus 2 degrees) and, simultaneously, decreasing moment magnitude (9.17-9.02 plus or minus 0.04) with increasing source depth (15-24 kilometers). The GRACE solution includes the cumulative moment released over a month and provides a unique view of the long-wavelength gravimetric response to all mass redistribution processes associated with the dynamic rupture and short-term postseismic mechanisms, improving our understanding of the physics of megathrusts.

  3. Earthquakes in Fiordland, Southern Chile: Initiation and Development of a Magmatic Process

    NASA Astrophysics Data System (ADS)

    Barrientos, S.; Service, N. S.

    2007-05-01

    Several efforts in Chile are being conducted in relation to geophysical monitoring with the objective of disaster mitigation. A long-term, permanent monitoring effort along the country has resulted in the recognition and delineation of new seismogenic sources. Here we report on the seismo-volcanic crisis currently taking place in the region close to the triple junction (Nazca, Antarctica and South America) in southern Chile, at around latitude 45°S. On January 22, 2007, an intensity V-VI (MMI) earthquake shook the cities of Puerto Aysén, Puerto Chacabuco and Coyhaique. This magnitude 5 event was the first of a series of earthquakes that have taken place in the region for nearly a month and a half (until the end of February, when this abstract was written). The closest station to the source area - part of the GEOSCOPE network, located in Coyhaique, about 80 km away from the epicenters - reveals seismic activity about 3 hours before the first event. Immediately after the first event, more than 20 events per hour were detected and recorded by this station, a rate that decreased with time except in the intervals following larger events. More than six events of magnitude 5 or greater have been recorded. Five seismic stations were installed surrounding the epicentral area between 27 and 29 January and are currently operational. After processing some of the recorded events, a sixth station was installed at the closest possible site to the source of the seismic activity. Preliminary analysis of the recorded seismic activity reveals a concentration of hypocenters (5 to 10 km depth) along an eight-km NNE-SSW vertical plane crossing the Aysén fiord. Harmonic tremor has also been detected. This seismic activity is interpreted as the result of a magmatic process in progress, which will most likely culminate in the generation of a new underwater volcanic edifice. Because the seismic activity fully extends across the Ays

  4. HF radar detection of infrasonic waves generated in the ionosphere by the 28 March 2005 Sumatra earthquake

    NASA Astrophysics Data System (ADS)

    Bourdillon, Alain; Occhipinti, Giovanni; Molinié, Jean-Philippe; Rannou, Véronique

    2014-03-01

    Surface waves generated by earthquakes create atmospheric waves detectable in the ionosphere using radio-wave techniques: HF Doppler sounding, GPS and altimeter TEC measurements, and radar measurements. We present observations performed with the over-the-horizon (OTH) radar NOSTRADAMUS after the very strong earthquake (M=8.6) that occurred in Sumatra on March 28, 2005. An original method based on the analysis of the RTD (Range-Time-Doppler) image is suggested to identify the multi-chromatic ionospheric signature of the Rayleigh wave. The proposed method has the advantage of preserving information on range variation and time evolution, and provides comprehensive results as well as easy identification of the waves. A Burg algorithm of order 1 is proposed to compute the Doppler shift of the radar signal, yielding sensitivity as good as that obtained with higher orders. The multi-chromatic observation of the ionospheric signature of the Rayleigh wave allows us to extract information consistent with the dispersion curve of Rayleigh waves: we observe two components of the Rayleigh waves with estimated group velocities of 3.8 km/s and 3.6 km/s, associated with 28 mHz (T~36 s) and 6.1 mHz (T~164 s) waves, respectively. Spectral analysis of the RTD image also reveals the presence of several oscillations at frequencies between 3 and 8 mHz, clearly associated with the transfer of energy from the solid Earth to the atmosphere and nominally described by normal-mode theory for a complete planet with atmosphere. Oscillations at frequencies larger than 8 mHz are also observed in the spectrum, but with smaller amplitudes. Particular attention is paid to normal modes 0S29 and 0S37, which are strongly involved in the coupling process. 
As the proposed method is frequency-free, it could be used not only for the detection of ionospheric perturbations induced by earthquakes, but also of those induced by other natural phenomena, such as volcanic explosions and
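For a complex radar time series, an order-1 Burg estimate of the Doppler shift reduces to the phase of the lag-1 autocorrelation (the classical pulse-pair estimator). The sketch below is a generic illustration of that reduction, not the NOSTRADAMUS processing chain.

```python
import numpy as np

def doppler_shift(x, fs):
    """Order-1 (pulse-pair) Doppler estimate for a complex signal x
    sampled at fs Hz: the mean Doppler is the phase of the lag-1
    autocorrelation, unambiguous within +/- fs/2."""
    r1 = np.sum(x[1:] * np.conj(x[:-1]))  # lag-1 autocorrelation
    return fs * np.angle(r1) / (2.0 * np.pi)
```

Applied in sliding windows over each range gate, such an estimator produces the Range-Time-Doppler image analyzed in the abstract.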

  5. Automated Sequence Generation Process and Software

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    "Automated sequence generation" (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences.

  6. Source processes of strong earthquakes in the North Tien-Shan region

    NASA Astrophysics Data System (ADS)

    Kulikova, G.; Krueger, F.

    2013-12-01

    The Tien-Shan region attracts the attention of scientists worldwide due to its complexity and tectonic uniqueness. A series of very strong, destructive earthquakes occurred in Tien-Shan at the turn of the 19th and 20th centuries. Such large intraplate earthquakes are rare in seismology, which increases the interest in the Tien-Shan region. The present study focuses on the source processes of large earthquakes in Tien-Shan. Seismic data from those early times are limited. In 1889, when a major earthquake occurred in Tien-Shan, seismic instruments were installed in very few locations in the world, and those analog records did not survive to the present day. Although around a hundred seismic stations were operating worldwide at the beginning of the 20th century, it is not always possible to obtain high-quality analog seismograms. Digitizing seismograms is a very important step in working with analog seismic records. When working with historical seismic records, one has to take into account all the aspects and uncertainties of manual digitizing and the lack of accurate timing and instrument characteristics. In this study, we develop an easy-to-handle, fast digitization program on the basis of existing software, which speeds up the digitizing process and accounts for the uncertainties of the recording system. Owing to the lack of absolute timing for the historical earthquakes (due to the absence of a universal clock at that time), we used time differences between P and S phases to relocate the earthquakes in North Tien-Shan and the body-wave amplitudes to estimate their magnitudes. Combining our results with geological data, five earthquakes in North Tien-Shan were precisely relocated. The digitizing of records can introduce steps into the seismograms, which makes restitution (removal of the instrument response) undesirable. 
To avoid restitution, we simulated historical seismograph recordings with given values for the damping and free period of the respective instrument and
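The S-P relocation step rests on a simple relation: for assumed constant velocities, the S-P time difference fixes the source distance. A minimal sketch, with hypothetical crustal velocity values:

```python
def sp_distance(ts_minus_tp, vp=6.0, vs=3.5):
    """Hypocentral distance (km) implied by an S-P time difference (s),
    assuming straight rays and constant velocities vp, vs (km/s):
    d/vs - d/vp = dt  =>  d = dt * vp * vs / (vp - vs)."""
    return ts_minus_tp * vp * vs / (vp - vs)
```

With distances from three or more stations, the epicenter follows by triangulation; the velocity values above are illustrative defaults, not those used in the study.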

  7. Impact of Earthquake Preparation Process On Hydrodeformation Field Evolution In The Caucasus

    NASA Astrophysics Data System (ADS)

    Melikadze, G.; Aliev, A.; Bendukidze, G.; Biagi, P. F.; Garalov, B.; Mirianashvili, V.

    The paper studies the relation between variations in the regime of underground water observed in boreholes and deformation processes in the Earth's crust associated with the formation of earthquakes of M=3 and higher. Monitoring of the hydrogeodeformation field (HGDF) has been carried out thanks to the on-purpose general network of Armenia, Azerbaijan, Georgia and Russia. The wells are uniformly distributed throughout the Caucasus and cover all principal geological blocks of the region. The paper deals with results associated with several earthquakes that occurred in Georgia and one in Azerbaijan. As the network comprises boreholes of different depths, varying from 250 m down to 3,500 m, preliminary calibration of the boreholes involved was carried out, based on evaluation of the water-level variation due to the known Earth-tide effect. This was necessary for sensitivity evaluation and normalization of hydrodynamic signals. The obtained data were processed by means of spectral analysis to separate the background field of disturbances from the valid signal. The processed data covered the period 1991-1993, comprising the following four strong earthquakes of the Caucasus: Racha (1991, M=6.9), Java (1991, M=6.2), Barisakho (1992, M=6.5) and Talish (1993, M=5.6). Formation of a compression zone in the east Caucasus and an extension zone in western Georgia and the north Caucasus was observed 7 months prior to the Racha quake. The boundary between the above two zones passed along the known submeridional fault. The area where the maximal gradient was observed coincided with the junction of deep faults and appeared to be the place of origination of the earthquake. After the quake occurred, the zone of maximal gradient started to migrate eastward, and residual deformations in the HGDF outlined the source first of the Java quake (on 15.06.1991), then those of the Barisakho (on 23.10.1992) and Talish (on 2.10.1993) quakes. 
Thus, HGDF indicated migration of the deformation field along the slope of
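The Earth-tide calibration mentioned above can be illustrated by least-squares fitting a single tidal constituent to a water-level record and using its amplitude to normalize well sensitivity. This is a sketch under a strong simplifying assumption (a single hypothetical M2-only fit; the study's actual calibration procedure is not specified):

```python
import numpy as np

def tidal_amplitude(level, dt_hours, period_hours=12.42):
    """Least-squares amplitude of one tidal constituent (default: the
    M2 period, 12.42 h) in a water-level record, for normalizing the
    relative sensitivity of different boreholes."""
    t = np.arange(len(level)) * dt_hours
    w = 2.0 * np.pi / period_hours
    # design matrix: in-phase, quadrature, and constant offset
    G = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(G, level, rcond=None)
    return float(np.hypot(coef[0], coef[1]))
```

Dividing each well's observed tidal amplitude by the theoretical tidal strain gives a per-well sensitivity factor, so hydrodynamic signals from wells of different depths can be compared on a common scale.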

  8. Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.

    PubMed

    Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M

    2017-01-01

    The purpose of this study is to explore the grief experiences of mothers who lost their children in the 2008 China earthquake. Informed by the dual process model, this study used in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered intense grief and primarily coped with loss-oriented stressors. As time passed, they began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process in which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight into extending the existing empirical evidence for the dual process model.

  10. Geological process of the slow earthquakes -A hypothesis from an ancient plate boundary fault rock

    NASA Astrophysics Data System (ADS)

    Kitamura, Y.; Kimura, G.; Kawabata, K.

    2012-12-01

    We present an integrated model of deformation along the subduction plate boundary from the trench to the seismogenic zone. Years of field-based research in the Shimanto Belt accretionary complex, southwest Japan, have yielded breakthrough discoveries on plate-boundary processes, for example, the first finding of pseudotachylyte in an accretionary prism (Ikesawa et al., 2003). Our aim here is to unveil the geological aspects of slow earthquakes and the related plate-boundary processes. The studied tectonic mélanges in the Shimanto Belt are regarded as fossils of a plate-boundary fault zone in a subduction zone. We traced material from different depths along the subduction channel using samples from on-land outcrops and ocean drilling cores. As a result, a series of progressive deformations down to the down-dip limit of the seismogenic zone was revealed. Detailed geological survey and structural analyses enabled us to separate superimposed deformation events during subduction. The material involved in the plate-boundary deformation is mainly an alternation of sand and mud. As sandstones and mudstones differ in competency and are subjected to a simple-shear stress field, sandstones break apart in flowing mudstones. We distinguished several stages of these deformations in sandstones and recognized an increase in deformation intensity with increasing underthrusting. The studied Mugi mélange is also known to bear pseudotachylyte in its upper bounding fault. Our conclusion illustrates that the subduction channel around the depth of the seismogenic zone forms a thick plate-boundary fault zone with a clear segregation in deformation style: fast, episodic slip at the upper boundary fault and slow, continuous deformation within the zone. The former corresponds to plate-boundary earthquakes and the latter to slow earthquakes. 
We further examined numerically whether this plate boundary fault rock is capable of releasing seismic moment enough to

  11. Preliminary Analysis of Remote Triggered Seismicity in Northern Baja California Generated by the 2011, Tohoku-Oki, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Wong-Ortega, V.; Castro, R. R.; Gonzalez-Huizar, H.; Velasco, A. A.

    2013-05-01

    We analyze possible variations of seismicity in northern Baja California due to the passage of seismic waves from the 2011 M9.0 Tohoku-Oki, Japan, earthquake. The northwestern area of Baja California is characterized by a mountain range composed of crystalline rocks. These Peninsular Ranges of Baja California exhibit high microseismic activity and moderate-size earthquakes. In the eastern region of Baja California, shearing between the Pacific and North American plates takes place, and the Imperial and Cerro Prieto faults generate most of the seismicity. The seismicity in these regions is monitored by the seismic network RESNOM, operated by the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE). This network consists of 13 three-component seismic stations. We use the seismic catalog of RESNOM to search for changes in local seismicity rates after the passage of surface waves generated by the Tohoku-Oki earthquake. When we compare one month of seismicity before and after the M9.0 earthquake, the preliminary analysis shows an absence of triggered seismicity in the northern Peninsular Ranges and an increase of seismicity south of the Mexicali valley, where the Imperial fault jumps southwest and the Cerro Prieto fault continues.
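Before/after rate comparisons of this kind are often quantified with the beta statistic of Matthews and Reasenberg, which measures how far the post-event count departs from the long-term rate. A minimal sketch (the RESNOM analysis may use a different test):

```python
import numpy as np

def beta_statistic(n_after, n_total, t_after, t_total):
    """Beta statistic for a seismicity-rate change: compares the count
    observed in a window of length t_after against the count expected
    from the long-term rate over t_total. |beta| > ~2 is commonly taken
    as a significant rate change (positive: increase; negative: decrease)."""
    p = t_after / t_total
    expected = n_total * p
    return (n_after - expected) / np.sqrt(n_total * p * (1.0 - p))
```

For the one-month windows in the abstract, t_after/t_total = 0.5, so beta simply standardizes the difference between the post-mainshock count and half the two-month total.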

  12. Rupture process of large earthquakes in the northern Mexico subduction zone

    NASA Astrophysics Data System (ADS)

    Ruff, Larry J.; Miller, Angus D.

    1994-03-01

    The Cocos plate subducts beneath North America at the Mexico trench. The northernmost segment of this trench, between the Orozco and Rivera fracture zones, has ruptured in a sequence of five large earthquakes from 1973 to 1985: the Jan. 30, 1973 Colima event (Ms 7.5) at the northern end of the segment near the Rivera fracture zone; the Mar. 14, 1979 Petatlan event (Ms 7.6) at the southern end of the segment on the Orozco fracture zone; the Oct. 25, 1981 Playa Azul event (Ms 7.3) in the middle of the Michoacan “gap”; the Sept. 19, 1985 Michoacan mainshock (Ms 8.1); and the Sept. 21, 1985 Michoacan aftershock (Ms 7.6) that reruptured part of the Petatlan zone. Body-wave inversion for the rupture process of these earthquakes finds the best earthquake depth, focal mechanism, overall source time function, and seismic moment for each earthquake. In addition, we have determined spatial concentrations of seismic moment release for the Colima earthquake and the Michoacan mainshock and aftershock. These spatial concentrations of slip are interpreted as asperities, and the resulting asperity distribution for Mexico is compared to other subduction zones. The body-wave inversion technique also determines the Moment Tensor Rate Functions, but there is no evidence for statistically significant changes in the moment tensor during rupture for any of the five earthquakes. An appendix describes the Moment Tensor Rate Functions methodology in detail. The systematic bias between global and regional determinations of epicentral locations in Mexico must be resolved to enable plotting of asperities with aftershocks and geographic features. We have spatially “shifted” all of our results to regional determinations of epicenters. The best point-source depths for the five earthquakes are all above 30 km, consistent with the idea that the down-dip edge of the seismogenic plate interface in Mexico is shallow compared to other subduction zones. Consideration of uncertainties in

  13. Source process of the Sikkim earthquake 18th September, 2011, inferred from teleseismic body-wave inversion.

    NASA Astrophysics Data System (ADS)

    Earnest, A.; Sunil, T. C.

    2014-12-01

    The recent earthquake of Mw 6.9 occurred on September 18, 2011 in the Sikkim-Nepal border region. The hypocenter parameters determined by the India Meteorological Department show an epicenter at 27.7°N, 88.2°E and a focal depth of 58 km, located close to the north-western terminus of the Tista lineament. The reported aftershocks are linearly distributed between the Tista and Golapara lineaments. Microscopic and geomorphologic studies infer dextral strike-slip faulting, possibly along a NW-SE oriented fault. Landslides caused by this earthquake are distributed along the Tista lineament. On the basis of the aftershock distribution, Kumar et al. (2012) suggested a possible NW orientation of the causative fault plane. The epicentral region of Sikkim, bordered by Nepal, Bhutan and Tibet, comprises a segment of relatively lower seismicity in the 2500-km stretch of the active Himalayan Belt. The north Sikkim earthquake was felt in most parts of Sikkim and eastern Nepal; it killed more than 100 people and caused damage to buildings, roads and communication infrastructure. Through this study we focus on the earthquake source parameters and the kinematic rupture process of this particular event. We used teleseismic body waveforms to determine the rupture pattern of the earthquake. Seismic rupture patterns are generally complex, and the result can be interpreted in terms of a distribution of asperities and barriers on the fault plane (Kikuchi and Kanamori, 1991). The methodology we adopted is based on the teleseismic body-wave inversion of Kikuchi and Kanamori (1982, 1986 and 1991). We used teleseismic P-wave records observed at distances between 50° and 90° with a good signal-to-noise ratio, in order to avoid upper-mantle and core triplications and to limit the path length within the crust. Synthetic waveforms were generated, giving a better fit with triangular

  14. Radiated energy and the rupture process of the Denali fault earthquake sequence of 2002 from broadband teleseismic body waves

    Choy, G.L.; Boatwright, J.

    2004-01-01

    Displacement, velocity, and velocity-squared records of P and SH body waves recorded at teleseismic distances are analyzed to determine the rupture characteristics of the Denali fault, Alaska, earthquake of 3 November 2002 (Mw 7.9, Me 8.1). Three episodes of rupture can be identified from broadband (~0.1-5.0 Hz) waveforms. The Denali fault earthquake started as a Mw 7.3 thrust event. Subsequent right-lateral strike-slip rupture events with centroid depths of 9 km occurred about 22 and 49 sec later. The teleseismic P waves are dominated by energy at intermediate frequencies (0.1-1 Hz) radiated by the thrust event, while the SH waves are dominated by energy at lower frequencies (0.05-0.2 Hz) radiated by the strike-slip events. The strike-slip events exhibit strong directivity in the teleseismic SH waves. Correcting the recorded P-wave acceleration spectra for the effect of the free surface yields an estimate of 2.8 × 10^15 N m for the energy radiated by the thrust event. Correcting the recorded SH-wave acceleration spectra similarly yields an estimate of 3.3 × 10^16 N m for the energy radiated by the two strike-slip events. The average rupture velocity for the strike-slip rupture process is 1.1β-1.2β. The strike-slip events were located 90 and 188 km east of the epicenter. The rupture length over which significant or resolvable energy is radiated is, thus, far shorter than the 340-km fault length over which surface displacements were observed. However, the seismic moment released by these three events, 4 × 10^20 N m, was approximately half the seismic moment determined from very low-frequency analyses of the earthquake. The difference in seismic moment can be reasonably attributed to slip on fault segments that did not radiate significant or coherent seismic energy. These results suggest that very large and great strike-slip earthquakes can generate stress pulses that rapidly produce substantial slip with negligible stress drop and little discernible radiated
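Setting aside the free-surface and spectral corrections described in the abstract, the core of a radiated-energy estimate from velocity-squared records is an energy-flux integral. The sketch below uses generic, assumed values for density and wave speed and omits the geometric-spreading and radiation-pattern corrections needed to scale a station measurement to the source; it is an illustration, not the authors' procedure.

```python
import numpy as np

def energy_flux(velocity, dt, rho=3300.0, c=4500.0):
    """Seismic energy-flux density (J/m^2) carried past a receiver:
    rho * c * integral(v^2 dt), with assumed density rho (kg/m^3)
    and wave speed c (m/s). Scaling to total radiated energy requires
    geometric-spreading and radiation-pattern corrections."""
    return rho * c * float(np.sum(np.square(velocity)) * dt)
```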

  15. Large-scale unloading processes preceding the 2015 Mw 8.4 Illapel, Chile earthquake

    NASA Astrophysics Data System (ADS)

    Huang, H.; Meng, L.

    2017-12-01

    Foreshocks and/or slow slip are observed to accelerate before some recent large earthquakes. However, it is still controversial regarding the universality of precursory signals and their value in hazard assessment or mitigation. On 16 September 2015, the Mw 8.4 Illapel earthquake ruptured a section of the subduction thrust on the west coast of central Chile. Small earthquakes are important in resolving possible precursors but are often incomplete in routine catalogs. Here, we employ the matched-filter technique to recover the undocumented small events in a 4-year period before the Illapel mainshock. We augment the template data set from the Chilean Seismological Center (CSN) with previously found new repeating aftershocks in the study area. We detect a total of 17,658 events in the 4-year period before the mainshock, 6.3 times more than the CSN catalog. The magnitudes of detected events are determined according to magnitude-amplitude relations estimated at different stations. Among the enhanced catalog, 183 repeating earthquakes are identified before the mainshock. Repeating earthquakes are located on both the northern and southern sides of the principal coseismic slip zone. The seismicity and aseismic slip progressively accelerate in a small low-coupling area around the epicenter starting from 140 days before the mainshock. The acceleration leads to an M 5.3 event 36 days before the mainshock, followed by a relative quiescence in both seismicity and slow slip until the mainshock. This may correspond to a slow aseismic nucleation phase after the slow-slip transient ends. In addition, to the north of the mainshock rupture area, the last aseismic-slip episode occurs within 175-95 days before the mainshock and accumulates the largest amount of slip in the observation period. The simultaneous occurrence of slow slip over a large area indicates a large-scale unloading process preceding the mainshock. In contrast, in a region 70-150 km south of the mainshock
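Magnitudes for matched-filter detections are commonly assigned from amplitude ratios relative to the template event (one magnitude unit per factor of 10 in amplitude). A hedged sketch of that relation; the station-specific relations estimated in the study may differ:

```python
import numpy as np

def detection_magnitude(m_template, amp_detected, amp_template):
    """Magnitude of a matched-filter detection from the median
    station-by-station amplitude ratio to its template event,
    assuming one magnitude unit per tenfold amplitude change."""
    ratios = np.asarray(amp_detected) / np.asarray(amp_template)
    return m_template + float(np.median(np.log10(ratios)))
```

Taking the median across stations makes the estimate robust to a single noisy channel.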

  16. Intermediate-depth earthquake generation: what hydrous minerals can tell us

    NASA Astrophysics Data System (ADS)

    Deseta, N.; Ashwal, L.; Andersen, T. B.

    2012-04-01

    Subduction zone seismicity has commonly been causally related to the dehydration of minerals within the subducting slab (Hacker et al. 2004, Jung et al. 2004, Dobson et al. 2002, Rondenay et al. 2008). Other models for the generation of intermediate-depth and deep earthquakes include spontaneous reactions affecting large rock bodies along overstepped phase boundaries (e.g. Green and Houston, 1995) and various shear heating-localization models (e.g. Kelemen and Hirth 2007, John et al. 2009). These concepts rely principally on seismic and thermo-petrological modeling, both of which are indirect methods of analysis. Recent discoveries of pseudotachylytes (PST) formed under high-pressure conditions (Ivrea-Verbano Zone, Italy; Western Gneiss Region, Norway; and Corsica) provide the first tangible opportunity to evaluate these models (Austrheim and Andersen, 2004, Lund and Austrheim, 2003, Obata and Karato, 1995, Jin et al., 1998). This case study focuses on observations of ultramafic and mafic PST within the Ligurian ophiolite of the high-pressure, low-temperature metamorphic (HP-LT) 'Schistes Lustrés' complex at Cima di Gratera, Corsica (Andersen et al. 2008). These PST have been preserved in pristine lenses of peridotite and gabbro surrounded by schistose serpentinites. The PST range in thickness from 1 mm to 25 cm (Andersen and Austrheim, 2006). Petrography and geochemistry of PST from the peridotite and gabbro samples indicate that total or near-total fusion of the local host-rock mineral assemblage occurred, raising the temperature of the shear zone from 350 °C to 1400-1700 °C, depending on the host rock (Andersen and Austrheim, 2006). The composition of the PST is highly variable, even at the thin-section scale, and this has been attributed to the coarse-grained nature of the host rock, its small-scale inhomogeneity and poor mixing of the fusion melt. Almost all the bulk analyses of the PST are hydrous; the peridotitic PST is always hydrous (H2O content from 3

  17. Continuity of the West Napa–Franklin fault zone inferred from guided waves generated by earthquakes following the 24 August 2014 Mw 6.0 South Napa earthquake

    Catchings, Rufus D.; Goldman, Mark R.; Li, Yong-Gang; Chan, Joanne

    2016-01-01

    We measure peak ground velocities from fault‐zone guided waves (FZGWs), generated by on‐fault earthquakes associated with the 24 August 2014 Mw 6.0 South Napa earthquake. The data were recorded on three arrays deployed across the fault zone, north and south of the 2014 surface rupture. The observed FZGWs indicate that the West Napa fault zone (WNFZ) and the Franklin fault (FF) are continuous in the subsurface for at least 75 km. Previously published potential‐field data indicate that the WNFZ extends northward to the Maacama fault (MF), and previous geologic mapping indicates that the FF extends southward to the Calaveras fault (CF); this suggests a total length of at least 110 km for the WNFZ–FF. Because the WNFZ–FF appears contiguous with the MF and CF, these faults apparently form a continuous Calaveras–Franklin–WNFZ–Maacama (CFWM) fault that is second only in length (∼300 km) to the San Andreas fault in the San Francisco Bay area. The long distances over which we observe FZGWs, coupled with their high amplitudes (2–10 times the S waves), suggest that strong shaking from large earthquakes on any part of the CFWM fault may cause far‐field amplified fault‐zone shaking. We interpret guided waves and seismicity cross sections to indicate multiple upper-crustal splays of the WNFZ–FF, including a northward extension of the Southampton fault, which may cause strong shaking in the Napa Valley and the Vallejo area. Based on travel times from each earthquake to each recording array, we estimate average P‐, S‐, and guided‐wave velocities within the WNFZ–FF (4.8–5.7, 2.2–3.2, and 1.1–2.8 km/s, respectively), with FZGW velocities ranging from 58% to 93% of the average S‐wave velocities.

  18. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection.

    PubMed

    Kasagi, M; Fujita, K; Tsuji, M; Takewaki, I

    2016-02-01

    A base-isolated building may exhibit an undesirably large response to long-duration, long-period earthquake ground motion, while a connected building system without base isolation may show a large response to near-fault (rather high-frequency) ground motion. To overcome both deficiencies, a new hybrid control system combining base isolation and building connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. Preliminary research has demonstrated that the proposed hybrid system is effective for both near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness over a broad range of earthquake ground motions. An automatic generation algorithm for this class of smart base-isolation and building-connection hybrid structures is presented in this paper. It is shown that, while the proposed algorithm does not work well for a building without the connecting-damper system, it works well for the proposed smart hybrid system with the connecting-damper system.

  19. The Puerto Rico Seismic Network Broadcast System: A user friendly GUI to broadcast earthquake messages, to generate shakemaps and to update catalogues

    NASA Astrophysics Data System (ADS)

    Velez, J.; Huerfano, V.; von Hillebrandt, C.

    2007-12-01

    The Puerto Rico Seismic Network (PRSN) has historically provided locations and magnitudes for earthquakes in the Puerto Rico and Virgin Islands (PRVI) region. PRSN is the reporting authority for the region bounded by latitudes 17.0N to 20.0N and longitudes 63.5W to 69.0W. The main objective of the PRSN is to record, process, analyze, and conduct research on local, regional, and teleseismic earthquakes, providing high-quality data and information to meet the needs of the emergency management, academic, and research communities, as well as the general public. The PRSN runs Earthworm software (Johnson et al, 1995) to acquire and write waveforms to disk for permanent archival. Automatic locations and alerts are generated for events in Puerto Rico, the Intra America Seas, and the Atlantic by the EarlyBird system (Whitmore and Sokolowski, 2002), which monitors PRSN stations as well as some 40 additional stations run by networks operating in North, Central, and South America and other sites in the Caribbean. PRDANIS (Puerto Rico Data Analysis and Information System) software, developed by PRSN, supports manual locations and analyst review of automatic locations of events within the PRSN area of responsibility (AOR), using all the broadband, strong-motion, and short-period waveforms. Rapidly available information on the geographic distribution of ground shaking, relative to the population and infrastructure at risk, can assist emergency response communities in efficient and optimized allocation of resources following a large earthquake. The ShakeMap system developed by the USGS provides near real-time maps of instrumental ground motions and shaking intensity and has proven effective for rapid assessment of the extent of shaking and potential damage after significant earthquakes (Wald, 2004). In Northern and Southern California, the Pacific Northwest, and the states of Utah and Nevada, ShakeMaps are used for emergency planning and response, loss

  20. Constraints on the rupture process of the 17 August 1999 Izmit earthquake

    NASA Astrophysics Data System (ADS)

    Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.

    2003-04-01

    Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature differ considerably from one another. In order to extract the characteristic features of this event, we determine integral estimates of its geometry, source duration, and rupture propagation. These estimates are given by the stress-glut moments of total degree 2, obtained by inverting long-period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong-motion data and/or teleseismic body waves (Bouchon et al, 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that: (1) significant moment was released on the eastern segment of the activated fault system during the Izmit earthquake; and (2) the rupture velocity decreased on this segment. We discuss how these results help explain the scatter among the source processes published for this earthquake.

  1. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    NASA Astrophysics Data System (ADS)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrate both their complexity, and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.
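    Characterizations of slip events by stress drop and moment, as described above, typically rest on standard crack relations rather than anything specific to this study. As a hedged illustration: for a circular rupture of radius r with average slip D in a medium of rigidity mu, the Eshelby static stress drop is (7*pi/16)*mu*D/r, and the scalar moment is M0 = mu*D*A. The example numbers below are illustrative assumptions, not values from the paper.

```python
import math

def circular_crack_stress_drop(mu_pa: float, avg_slip_m: float, radius_m: float) -> float:
    """Eshelby static stress drop (Pa) for a circular crack of radius radius_m."""
    return (7.0 * math.pi / 16.0) * mu_pa * avg_slip_m / radius_m

def scalar_moment(mu_pa: float, avg_slip_m: float, area_m2: float) -> float:
    """Scalar seismic moment M0 = mu * D * A, in N*m."""
    return mu_pa * avg_slip_m * area_m2

# Illustrative event: 1 m of average slip on a 5 km-radius patch, mu = 30 GPa.
r = 5e3
print(f"{circular_crack_stress_drop(30e9, 1.0, r) / 1e6:.1f} MPa")
print(f"{scalar_moment(30e9, 1.0, math.pi * r**2):.2e} N m")
```

This kind of calculation is how a catalog of simulated ruptures can be reduced to the duration/stress-drop/area/slip relationships the abstract mentions.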

  2. Stress development in heterogenetic lithosphere: Insights into earthquake processes in the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Zhan, Yan; Hou, Guiting; Kusky, Timothy; Gregg, Patricia M.

    2016-03-01

    The New Madrid Seismic Zone (NMSZ) in the Midwestern United States was the site of several major M 6.8-8 earthquakes in 1811-1812, and remains seismically active. Although this region has been investigated extensively, the ultimate controls on earthquake initiation and the duration of the seismicity remain unclear. In this study, we develop a finite element model for the Central United States to conduct a series of numerical experiments with the goal of determining the impact of heterogeneity in the upper crust, the lower crust, and the mantle on earthquake nucleation and rupture processes. Regional seismic tomography data (CITE) are utilized to infer the viscosity structure of the lithosphere which provide an important input to the numerical models. Results indicate that when differential stresses build in the Central United States, the stresses accumulating beneath the Reelfoot Rift in the NMSZ are highly concentrated, whereas the stresses below the geologically similar Midcontinent Rift System are comparatively low. The numerical observations coincide with the observed distribution of seismicity throughout the region. By comparing the numerical results with three reference models, we argue that an extensive mantle low velocity zone beneath the NMSZ produces differential stress localization in the layers above. Furthermore, the relatively strong crust in this region, exhibited by high seismic velocities, enables the elevated stress to extend to the base of the ancient rift system, reactivating fossil rifting faults and therefore triggering earthquakes. These results show that, if boundary displacements are significant, the NMSZ is able to localize tectonic stresses, which may be released when faults close to failure are triggered by external processes such as melting of the Laurentide ice sheet or rapid river incision.

  3. Kinematic rupture process of the 2014 Chile Mw 8.1 earthquake constrained by strong-motion, GPS static offsets and teleseismic data

    NASA Astrophysics Data System (ADS)

    Liu, Chengli; Zheng, Yong; Wang, Rongjiang; Xiong, Xiong

    2015-08-01

    On 2014 April 1, a magnitude Mw 8.1 interplate thrust earthquake ruptured a densely instrumented region of the Iquique seismic gap in northern Chile. The abundant data sets near and around the rupture zone provide a unique opportunity to study the detailed source process of this megathrust earthquake. We retrieved the spatial and temporal distributions of slip during the main shock and one strong aftershock through a joint inversion of teleseismic records, GPS offsets, and strong-motion data. The main shock rupture initiated at a focal depth of about 25 km and propagated around the hypocentre. The peak slip amplitude in the model is ~6.5 m, located southeast of the hypocentre. The major slip patch is located around the hypocentre, spanning ~150 km along dip and ~160 km along strike. The associated static stress drop is ~3 MPa. Most of the seismic moment was released within 150 s. The total seismic moment of our preferred model is 1.72 × 10^21 N m, equivalent to Mw 8.1. For the strong aftershock on 2014 April 3, slip mainly occurred in a relatively compact area: the major slip area surrounds the hypocentre, with a peak amplitude of ~2.5 m, and a secondary slip patch lies downdip of the hypocentre with a peak slip of ~2.1 m. The total seismic moment is about 3.9 × 10^20 N m, equivalent to Mw 7.7. Between the rupture areas of the main shock and the 2007 November 14 Mw 7.7 Antofagasta, Chile, earthquake there is a vacant zone with a total length of about 150 km. If no large earthquake or obvious aseismic creep has occurred in this area historically, it has great potential to generate strong earthquakes with magnitudes larger than Mw 7.0 in the future.
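    The quoted magnitudes are consistent with the stated seismic moments under the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), for M0 in N·m; a quick check:

```python
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    """Hanks-Kanamori moment magnitude from scalar seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Moments quoted in the abstract:
mainshock_mw = moment_magnitude(1.72e21)   # 2014 April 1 main shock
aftershock_mw = moment_magnitude(3.9e20)   # 2014 April 3 aftershock
print(round(mainshock_mw, 1), round(aftershock_mw, 1))  # -> 8.1 7.7
```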

  4. Incorporation of experimentally derived friction laws in numerical simulations of earthquake generated tsunamis

    NASA Astrophysics Data System (ADS)

    Murphy, Shane; Spagnuolo, Elena; Lorito, Stefano; Di Toro, Giulio; Scala, Antonio; Festa, Gaetano; Nielsen, Stefan; Piatanesi, Alessio; Romano, Fabrizio; Aretusini, Stefano

    2016-04-01

    Seismological, tsunami, and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth. For example, nucleation and high-frequency radiation generally occur at depth, but low-frequency radiation and large tsunamigenic slip appear to occur at shallow crustal depths. Numerical simulations used to describe these features predominantly use standardised theoretical equations or experimental observations, often assuming that their validity extends to all slip rates, lithologies, and tectonic environments. However, recent rotary-shear experiments performed on a range of diverse materials under varied experimental conditions highlighted the large variability of the evolution of friction during slip, pointing to a more complex relationship between material type, slip rate, and normal stress. Simulating dynamic rupture with a 2D spectral element method on a Tohoku-like fault, we apply experimentally derived friction laws (i.e. the thermal slip distance friction law; Di Toro et al. 2011). The choice of parameters for the friction law is based on the expected material type (e.g. cohesive and non-cohesive clay-rich material representative of an accretionary wedge) and the normal stress, which is controlled by the interaction between the regional stress field and the fault geometry. The shear stress distribution on the fault plane is fractal, with the yield stress dependent on the static coefficient of friction and the normal stress, parameters that depend on the material type and geometry. We use metrics such as the slip distribution, ground motion, and fracture energy to explore the effects of frictional behaviour, fault geometry, and stress perturbations, and their potential role in tsunami generation. Preliminary results will be presented.
This research is funded by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction

  5. Assessment of Vegetation Destruction Due to Wenchuan Earthquake and Its Recovery Process Using MODIS Data

    NASA Astrophysics Data System (ADS)

    Zou, Z.; Xiao, X.

    2015-12-01

    With high temporal resolution and large areal coverage, MODIS data are particularly useful for assessing vegetation destruction and recovery over wide areas. In this study, MOD13Q1 data of the growing season (Mar. to Nov.) are used to calculate the maximum NDVI (NDVImax) of each year. We compute each pixel's mean and standard deviation of the NDVImax values in the 8 years before the earthquake. If a pixel's 2008 NDVImax is more than two standard deviations below that mean, the pixel is flagged as vegetation-destructed. For each vegetation-destructed pixel, similar pixels of the same vegetation type are selected within a latitude difference of 0.5 degrees, an altitude difference of 100 meters, and a slope difference of 3 degrees. The NDVImax differences between each vegetation-destructed pixel and its similar pixels are then calculated, and the 5 similar pixels with the smallest NDVImax differences over the 8 years before the earthquake are selected as reference pixels. The mean NDVImax values of these reference pixels after the earthquake serve as the criterion for assessing the vegetation recovery process.
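    The destruction-detection rule described above (flag a pixel whose 2008 NDVImax falls more than two standard deviations below its pre-earthquake mean) is simple enough to sketch directly. The array shapes, function name, and toy values below are illustrative assumptions, not the study's actual data layout.

```python
import numpy as np

def detect_destroyed_pixels(ndvimax_pre: np.ndarray, ndvimax_2008: np.ndarray) -> np.ndarray:
    """Flag pixels whose 2008 NDVImax falls more than two standard deviations
    below the pre-earthquake mean NDVImax.
    ndvimax_pre: shape (8, n_pixels), one row per pre-earthquake year.
    ndvimax_2008: shape (n_pixels,)."""
    mean = ndvimax_pre.mean(axis=0)
    std = ndvimax_pre.std(axis=0)
    return ndvimax_2008 < mean - 2.0 * std

# Toy example: 2 pixels, 8 pre-earthquake growing-season NDVImax values each.
pre = np.array([[0.78, 0.61], [0.80, 0.59], [0.82, 0.60], [0.79, 0.62],
                [0.81, 0.58], [0.80, 0.60], [0.78, 0.61], [0.82, 0.59]])
post = np.array([0.40, 0.59])  # first pixel lost most of its vegetation
print(detect_destroyed_pixels(pre, post))  # -> [ True False]
```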

  6. Source processes of industrially-induced earthquakes at the Geysers geothermal area, California

    Ross, A.; Foulger, G.R.; Julian, B.R.

    1999-01-01

    Microearthquake activity at The Geysers geothermal area, California, mirrors the steam production rate, suggesting that the earthquakes are industrially induced. A 15-station network of digital, three-component seismic stations was operated for one month in 1991, and 3,900 earthquakes were recorded. Highly accurate moment tensors were derived for 30 of the best-recorded earthquakes by tracing rays through tomographically derived 3-D VP and VP/VS structures, and inverting P- and S-wave polarities and amplitude ratios. The orientations of the P- and T-axes are very scattered, suggesting that there is no strong, systematic deviatoric stress field in the reservoir, which could explain why the earthquakes are not large. Most of the events had significant non-double-couple (non-DC) components in their source mechanisms, with volumetric components up to ~30% of the total moment. Explosive and implosive sources were observed in approximately equal numbers and must be caused by cavity creation (or expansion) and collapse. It is likely that there is a causal relationship between these processes and fluid reinjection and steam withdrawal. Compensated linear vector dipole (CLVD) components were up to 100% of the deviatoric component. Combinations of opening cracks and shear faults cannot explain all the observations, and rapid fluid flow may also be involved. The pattern of non-DC failure at The Geysers contrasts with that of the Hengill-Grensdalur area in Iceland, a largely unexploited, water-dominated field in an extensional stress regime. These differences are poorly understood but may be linked to the contrasting regional stress regimes and the industrial exploitation at The Geysers.

  7. Orientation damage in the Christchurch cemeteries generated during the Christchurch earthquakes of 2010

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel; Perez-Lopez, Raul; Rodrigez-Pascua, Miguel Angel; Martin-Velazquez, Silvia

    2014-05-01

    Intensity scales quantify the damage caused by an earthquake. A newer methodology, "Earthquake Archaeological Effects" (EAEs), takes into account not only the amount of damage but also its type and orientation (e.g. displaced masonry blocks, impact marks, conjugated fractures, fallen and oriented columns, dipping broken corners), thereby giving information about the ground motion during the earthquake. On 22 February 2011, a magnitude 6.2 earthquake struck Christchurch (New Zealand), causing 185 casualties and making it the second-deadliest natural disaster in New Zealand. Due to the magnitude of the catastrophe, the city centre (CBD) was closed, and the most damaged buildings were cordoned off and later demolished; it was therefore not possible to access the most damaged areas for sampling or observation. The cemeteries, however, were not closed, and a year later they remained largely untouched, since the available financial means were devoted to reconstructing the city's infrastructure and housing. This peculiarity of the cemeteries made measurements of the earthquake effects possible. Damage orientation was measured on the tombs, crosses, and headstones of the cemeteries (mainly on fallen objects such as fallen crosses, obelisks, and displaced tombstones). 140 measurements were taken in the most important cemeteries (Barbadoes, Addington, Pebleton, Woodston, Broomley, and Linwood), covering much of the city area. The procedure involved two main phases: (a) inventory and identification of damage, and (b) analysis of the damage orientations. The orientation was calculated for each element, plotted on a map, and summarized statistically in rose diagrams. The orientation dispersion is high in some cemeteries, but S-N and E-W damage orientations are observed. However, due to the multiple seismogenic faults responsible for earthquakes and damage in Christchurch during the year after the 2010 earthquake, a

  8. A rare moderate‐sized (Mw 4.9) earthquake in Kansas: Rupture process of the Milan, Kansas, earthquake of 12 November 2014 and its relationship to fluid injection

    Choy, George; Rubinstein, Justin L.; Yeck, William; McNamara, Daniel E.; Mueller, Charles; Boyd, Oliver

    2016-01-01

    The largest recorded earthquake in Kansas occurred northeast of Milan on 12 November 2014 (Mw 4.9) in a region previously devoid of significant seismic activity. Applying multistation processing to data from local stations, we are able to detail the rupture process and rupture geometry of the mainshock, identify the causative fault plane, and delineate the expansion and extent of the subsequent seismic activity. The earthquake followed rapid increases of fluid injection by multiple wastewater injection wells in the vicinity of the fault. The source parameters and behavior of the Milan earthquake and foreshock–aftershock sequence are similar to characteristics of other earthquakes induced by wastewater injection into permeable formations overlying crystalline basement. This earthquake also provides an opportunity to test the empirical relation that uses felt area to estimate moment magnitude for historical earthquakes for Kansas.

  9. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minute) generation of maps of instrumental ground motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by spatial interpolation of the measured ground motions with geologically based, frequency- and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will also be made available over dedicated communications for emergency response agencies and critical users.
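    The interpolation step described above can be illustrated with a toy inverse-distance-weighted scheme. The operational ShakeMap procedure uses calibrated ground-motion and intensity relations, so everything below (the function name, the weighting, the single multiplicative site-correction factor) is an illustrative assumption, not the actual algorithm.

```python
import math

def idw_pga(stations, site, site_correction=1.0, power=2.0):
    """Toy inverse-distance-weighted estimate of peak ground acceleration (g)
    at `site` from (x_km, y_km, pga_g) station tuples, scaled by a
    hypothetical geology-based site-correction factor."""
    num = den = 0.0
    for x, y, pga in stations:
        d = math.hypot(site[0] - x, site[1] - y)
        if d < 1e-9:
            return pga * site_correction  # site coincides with a station
        w = d ** -power
        num += w * pga
        den += w
    return site_correction * num / den

# Three hypothetical stations and an estimate at an unobserved site.
obs = [(0.0, 0.0, 0.30), (10.0, 0.0, 0.10), (0.0, 10.0, 0.20)]
print(round(idw_pga(obs, (2.0, 2.0), site_correction=1.4), 3))
```

The estimate is dominated by the nearest station and then amplified by the site term, which mirrors the role of the geologic corrections in the abstract.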

  10. Evidences of landslide earthquake triggering due to self-excitation process

    NASA Astrophysics Data System (ADS)

    Bozzano, F.; Lenti, L.; Martino, Salvatore; Paciello, A.; Scarascia Mugnozza, G.

    2011-06-01

    The basin-like setting of stiff bedrock combined with pre-existing landslide masses can contribute to seismic amplification over a wide frequency range (0-10 Hz) and induce a self-excitation process responsible for earthquake-triggered landsliding. Here, the self-excitation process is proposed to explain the far-field seismic triggering of the Cerda landslide (Sicily, Italy), which was reactivated by the 6 September 2002 Palermo earthquake (Ms = 5.4), about 50 km from the epicentre. The landslide caused damage to farmhouses, roads, and aqueducts close to the village of Cerda and involved about 40 × 10^6 m^3 of clay shales; the first ground cracks due to the landslide movement formed about 30 min after the main shock. Stress-strain dynamic numerical modelling, performed with the FDM code FLAC 5.0, supports the notion that the combination of local geological setting and earthquake frequency content played a fundamental role in the landslide reactivation. Since accelerometric records of the triggering event are not available, dynamically equivalent inputs were used for the numerical modelling. These inputs can be regarded as representative of the local ground shaking, having a PGA value up to 0.2 m/s^2, which is the maximum expected in 475 years according to the Italian seismic hazard maps. A 2D numerical modelling of seismic wave propagation in the Cerda landslide area was also performed; it revealed amplification effects due to both the structural setting of the stiff bedrock (at about 1 Hz) and the pre-existing landslide mass (in the range 3-6 Hz). The frequency peaks of the resulting amplification functions A(f) fit well the H/V spectral ratios from ambient noise and the H/H spectral ratios to a reference station from earthquake records, obtained by in situ velocimetric measurements. 
Moreover, the Fourier spectra of earthquake accelerometric records, whose source and magnitude are consistent with the triggering event, show a main peak at about 1 Hz

  11. An Integrated Monitoring System of Pre-earthquake Processes in Peloponnese, Greece

    NASA Astrophysics Data System (ADS)

    Karastathis, V. K.; Tsinganos, K.; Kafatos, M.; Eleftheriou, G.; Ouzounov, D.; Mouzakiotis, E.; Papadopoulos, G. A.; Voulgaris, N.; Bocchini, G. M.; Liakopoulos, S.; Aspiotis, T.; Gika, F.; Tselentis, A.; Moshou, A.; Psiloglou, B.

    2017-12-01

    One of the controversial issues in contemporary seismology is whether radon accumulation monitoring can provide reliable earthquake forecasting. Although there are many examples in the literature showing radon increases before earthquakes, skepticism arises from the instability of the measurements, false alarms, difficulties in interpretation caused by weather influences (e.g. rainfall), and the difficulty of establishing an irrefutable theoretical background for the phenomenon. We have developed and extensively tested a multi-parameter network for studying pre-earthquake processes, operating as part of an integrated monitoring system in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The prototype consists of four components: (1) a real-time radon accumulation monitoring system consisting of three gamma-radiation detectors [NaI(Tl) scintillators]; (2) a nine-station seismic array that monitors microseismicity in the offshore area of the Hellenic arc, with data processing based on F-K and beam-forming techniques; (3) real-time weather monitoring systems for air temperature, relative humidity, precipitation, and pressure; and (4) thermal radiation emission observations from the AVHRR/NOAA-18 polar-orbit satellite. The project revolves around the idea of jointly studying radon emission, which has proven in many cases to be a reliable indicator of the possible time of an event, together with accurate locations of foreshock activity detected by the seismic array, which can be a more reliable indicator of the possible location of an event. In parallel, a satellite thermal-anomaly detection technique is used for monitoring larger-magnitude events (a possible indicator for strong events, M ≥ 5.0). The first year of operations revealed a number of pre-seismic radon variation anomalies before several local earthquakes (M > 3.6); radon increases systematically before the larger events. Details about the overall performance

  12. Universal portfolios generated by weakly stationary processes

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2014-12-01

    Recently, a universal portfolio generated by a set of independent Brownian motions where a finite number of past stock prices are weighted by the moments of the multivariate normal distribution is introduced and studied. The multivariate normal moments as polynomials in time consequently lead to a constant rebalanced portfolio depending on the drift coefficients of the Brownian motions. For a weakly stationary process, a different type of universal portfolio is proposed where the weights on the stock prices depend only on the time differences of the stock prices. An empirical study is conducted on the returns achieved by the universal portfolios generated by the Ornstein-Uhlenbeck process on selected stock-price data sets. Promising results are demonstrated for increasing the wealth of the investor by using the weakly-stationary-process-generated universal portfolios.
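    The constant rebalanced portfolio (CRP) that the abstract's construction reduces to has a standard wealth definition: after each period the investor rebalances back to a fixed weight vector b, so terminal wealth is the product of per-period returns b·x_t. A minimal sketch with Cover-style toy data (the alternating two-stock example is mine, not from the paper):

```python
import numpy as np

def crp_wealth(price_relatives, b) -> float:
    """Wealth achieved by a constant rebalanced portfolio with weight vector b.
    Each row of price_relatives is x_t, whose entry j is the closing/opening
    price ratio of stock j in period t."""
    x = np.asarray(price_relatives, dtype=float)
    b = np.asarray(b, dtype=float)
    assert np.isclose(b.sum(), 1.0) and (b >= 0).all()  # valid portfolio
    return float(np.prod(x @ b))

# Two stocks alternating: one doubles while the other halves each period.
x = [[2.0, 0.5], [0.5, 2.0]] * 3  # 6 periods
print(crp_wealth(x, [1.0, 0.0]))   # buy-and-hold either stock -> 1.0
print(crp_wealth(x, [0.5, 0.5]))   # 50/50 CRP -> 1.25**6 = 3.814697265625
```

The 50/50 CRP earns a factor of 1.25 every period from volatility alone, which is the effect universal portfolios try to capture without knowing b in advance.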

  13. The Geodetic Signature of the Earthquake Cycle at Subduction Zones: Model Constraints on the Deep Processes

    NASA Astrophysics Data System (ADS)

    Govers, R.; Furlong, K. P.; van de Wiel, L.; Herman, M. W.; Broerse, T.

    2018-03-01

    Recent megathrust events in Tohoku (Japan), Maule (Chile), and Sumatra (Indonesia) were well recorded. Much has been learned about the dominant physical processes in megathrust zones: (partial) locking of the plate interface, detailed coseismic slip, relocking, afterslip, viscoelastic mantle relaxation, and interseismic loading. These and older observations show complex spatial and temporal patterns in crustal deformation and displacement, and significant differences among different margins. A key question is whether these differences reflect variations in the underlying processes, like differences in locking, or the margin geometry, or whether they are a consequence of the stage in the earthquake cycle of the margin. Quantitative models can connect these plate boundary processes to surficial and far-field observations. We use relatively simple, cyclic geodynamic models to isolate the first-order geodetic signature of the megathrust cycle. Coseismic and subsequent slip on the subduction interface is dynamically (and consistently) driven. A review of global preseismic, coseismic, and postseismic geodetic observations, and of their fit to the model predictions, indicates that similar physical processes are active at different margins. Most of the observed variability between the individual margins appears to be controlled by their different stages in the earthquake cycle. The modeling results also provide a possible explanation for observations of tensile faulting aftershocks and tensile cracking of the overriding plate, which are puzzling in the context of convergence/compression. From the inversion of our synthetic GNSS velocities we find that geodetic observations may incorrectly suggest weak locking of some margins, for example, the west Aleutian margin.

  14. Spatial and size distributions of garnets grown in a pseudotachylyte generated during a lower crust earthquake

    NASA Astrophysics Data System (ADS)

    Clerc, Adriane; Renard, François; Austrheim, Håkon; Jamtveit, Bjørn

    2018-05-01

    In the Bergen Arc, western Norway, rocks exhumed from the lower crust record earthquakes that occurred during the Caledonian collision. These earthquakes took place at about 30-50 km depth under granulite- or amphibolite-facies metamorphic conditions, and coseismic frictional heating produced pseudotachylytes in this area. We describe pseudotachylytes using field data to infer earthquake magnitude (M ≥ 6.6) and low dynamic friction during rupture propagation (μd < 0.1), and laboratory analyses to infer fast crystallization of microlites in the pseudotachylyte, within seconds of earthquake arrest. High-resolution 3D X-ray microtomography imaging reveals the microstructure of a pseudotachylyte sample, including numerous garnets and their coronae of plagioclase that we infer crystallized in the pseudotachylyte. These garnets (1) have dendritic shapes and are surrounded by plagioclase coronae almost fully depleted in iron, (2) have a log-normal volume distribution, (3) increase in volume with increasing distance away from the pseudotachylyte-host rock boundary, and (4) decrease in number with increasing distance away from the pseudotachylyte-host rock boundary. These characteristics indicate fast mineral growth, likely within seconds. We propose that these new quantitative criteria may assist in the unambiguous identification of pseudotachylytes in the field.

  15. Towards automatic planning for manufacturing generative processes

    SciT

    CALTON,TERRI L.

    2000-05-24

    Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may result from the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or omitted from the original design. As a result, process engineers are forced to create new plans. This is further complicated by the fact that the process engineer must manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.

  16. Analysis of the tsunami generated by the MW 7.8 1906 San Francisco earthquake

    Geist, E.L.; Zoback, M.L.

    1999-01-01

    We examine possible sources of a small tsunami produced by the 1906 San Francisco earthquake, recorded at a single tide gauge station situated at the opening to San Francisco Bay. Coseismic vertical displacement fields were calculated using elastic dislocation theory for geodetically constrained horizontal slip along a variety of offshore fault geometries. Propagation of the ensuing tsunami was calculated using a shallow-water hydrodynamic model that takes into account the effects of bottom friction. The observed amplitude and negative pulse of the first arrival are shown to be inconsistent with small vertical displacements (~4-6 cm) arising from pure horizontal slip along a continuous right bend in the San Andreas fault offshore. The primary source region of the tsunami was most likely a recently recognized 3 km right step in the San Andreas fault that is also the probable epicentral region for the 1906 earthquake. Tsunami models that include the 3 km right step with pure horizontal slip match the arrival time of the tsunami, but underestimate the amplitude of the negative first-arrival pulse. Both the amplitude and time of the first arrival are adequately matched by using a rupture geometry similar to that defined for the 1995 MW (moment magnitude) 6.9 Kobe earthquake: i.e., fault segments dipping toward each other within the stepover region (83° dip, intersecting at 10 km depth) and a small component of slip in the dip direction (rake = -172°). Analysis of the tsunami provides confirming evidence that the 1906 San Francisco earthquake initiated at a right step in a right-lateral fault and propagated bilaterally, suggesting a rupture initiation mechanism similar to that for the 1995 Kobe earthquake.
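The shallow-water propagation model invoked here rests on the long-wave approximation, in which tsunami speed depends only on water depth (c = √(gh)). A minimal sketch of that relation (function names are ours, not from the study):

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Long-wave (shallow-water) phase speed c = sqrt(g * h)."""
    return math.sqrt(g * depth_m)

def travel_time_minutes(distance_km, depth_m):
    """Crude straight-path travel time over water of constant depth."""
    c = tsunami_speed(depth_m)                 # m/s
    return distance_km * 1000.0 / c / 60.0

# Over a ~100 m deep shelf, a tsunami moves at roughly 31 m/s,
# so 20 km of propagation takes on the order of 10 minutes.
c_shelf = tsunami_speed(100.0)
t_20km = travel_time_minutes(20.0, 100.0)
```

Depth-dependent speed is why bottom topography (and the bottom friction included in the study's model) matters for matching the observed arrival time at the tide gauge.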

  17. A Bayesian analysis of the 2016 Pedernales (Ecuador) earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Gombert, B.; Duputel, Z.; Jolivet, R.; Rivera, L. A.; Simons, M.; Jiang, J.; Liang, C.; Fielding, E. J.

    2017-12-01

    The 2016 Mw = 7.8 Pedernales earthquake is the largest event to strike Ecuador since 1979. Long-period W-phase and Global CMT solutions suggest that slip is not perpendicular to the trench axis, in agreement with the convergence obliquity of the Ecuadorian subduction. In this study, we propose a new co-seismic kinematic slip model obtained from the joint inversion of multiple observations in an unregularized and fully Bayesian framework. We use a comprehensive static dataset composed of several InSAR scenes, GPS static offsets, and tsunami waveforms from two nearby DART stations. The kinematic component of the rupture process is constrained by an extensive network of high-rate GPS and accelerometers. Our solution includes the ensemble of all plausible models that are consistent with our prior information and fit the available observations within data and prediction uncertainties. We analyse the source process in light of the historical seismicity, in particular the Mw = 7.8 1942 earthquake, whose rupture extent overlaps with the 2016 event. In addition, we conduct a probabilistic comparison of co-seismic slip with a stochastic interseismic coupling model obtained from GPS data, shedding light on the processes at play within the Ecuadorian subduction margin.

  18. An integrated observational site for monitoring pre-earthquake processes in Peloponnese, Greece. Preliminary results.

    NASA Astrophysics Data System (ADS)

    Tsinganos, Kanaris; Karastathis, Vassilios K.; Kafatos, Menas; Ouzounov, Dimitar; Tselentis, Gerassimos; Papadopoulos, Gerassimos A.; Voulgaris, Nikolaos; Eleftheriou, Georgios; Mouzakiotis, Evangellos; Liakopoulos, Spyridon; Aspiotis, Theodoros; Gika, Fevronia; E Psiloglou, Basil

    2017-04-01

    We present the first results from the development of a new integrated observational site in Greece to study pre-earthquake processes in Peloponnese, led by the National Observatory of Athens. We have developed a prototype multiparameter network using an integrated system aimed at monitoring and thorough study of pre-earthquake processes in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The initial prototype of the new observational system consists of: (1) continuous real-time monitoring of radon accumulation in the ground through a network of radon sensors, consisting of three gamma radiation detectors [NaI(Tl) scintillators], (2) a nine-station seismic array installed to detect and locate events of low magnitude (less than 1.0 R) in the offshore area of the Hellenic arc, (3) real-time weather monitoring systems (air temperature, relative humidity, precipitation, pressure), and (4) satellite thermal radiation from the AVHRR/NOAA-18 polar-orbit sensor. The first few months of operation revealed a number of pre-seismic radon variation anomalies before several earthquakes (M>3.6). The radon increases systematically before the larger events. For example, a radon anomaly was prominent before the event of Sep 28, M 5.0 (36.73°N, 21.87°E), 18 km ESE of Methoni. The seismic array assists in the evaluation of current seismicity and may allow identification of foreshock activity. Thermal anomalies in satellite images are also examined as an additional tool for evaluating and verifying the radon increase. According to the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) concept, atmospheric thermal anomalies observed before large seismic events are associated with an increase of radon concentration in the ground. Details of the integration of ground and space observations, the overall performance of the observational sites, and future plans for advancing cooperation in observations will be discussed.

  19. Performance of Irikura Recipe Rupture Model Generator in Earthquake Ground Motion Simulations with Graves and Pitarka Hybrid Approach

    NASA Astrophysics Data System (ADS)

    Pitarka, Arben; Graves, Robert; Irikura, Kojiro; Miyake, Hiroe; Rodgers, Arthur

    2017-09-01

    We analyzed the performance of the Irikura and Miyake (Pure and Applied Geophysics 168(2011):85-104, 2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground motion simulation methodology of Graves and Pitarka (Bulletin of the Seismological Society of America 100(5A):2095-2123, 2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (Bulletin of the Seismological Society of America, 2016) (GP2016). The level of simulated ground motions for the two approaches compare favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1-3 s where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of

  20. Performance of Irikura recipe rupture model generator in earthquake ground motion simulations with Graves and Pitarka hybrid approach

    Pitarka, Arben; Graves, Robert; Irikura, Kojiro; Miyake, Hiroe; Rodgers, Arthur

    2017-01-01

    We analyzed the performance of the Irikura and Miyake (Pure and Applied Geophysics 168(2011):85–104, 2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground motion simulation methodology of Graves and Pitarka (Bulletin of the Seismological Society of America 100(5A):2095–2123, 2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0–20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (Bulletin of the Seismological Society of America, 2016) (GP2016). The level of simulated ground motions for the two approaches compare favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground motion prediction equations (GMPEs) over the frequency band 0.1–10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1–3 s where the IM2011 motions are about 20–30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1–3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is

  1. Kinematic Source Rupture Process of the 2008 Iwate-Miyagi Nairiku Earthquake, a MW6.9 thrust earthquake in northeast Japan, using Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2008-12-01

    The 2008 Iwate-Miyagi Nairiku earthquake (MJMA7.2) of June 14, 2008, was a thrust-type inland crustal earthquake that occurred in northeastern Honshu, Japan. In order to examine the strong motion generation process of this event, the source rupture process is estimated by kinematic waveform inversion using strong motion data. Strong motion data from the K-NET and KiK-net stations and Aratozawa Dam are used. These stations are located 3-94 km from the epicenter. Original acceleration time histories are integrated into velocity and band-pass filtered between 0.05 and 1 Hz. To obtain the detailed source rupture process, an appropriate velocity structure model for the Green's functions should be used. We estimated a one-dimensional velocity structure model for each strong motion station by waveform modeling of aftershock records. The elastic wave velocity, density, and Q-values for four sedimentary layers are assumed following previous studies. The thickness of each sedimentary layer depends on the station and is estimated to fit the observed aftershock waveforms by optimization using a genetic algorithm. A uniform layered structure model is assumed for the crust and upper mantle below the seismic bedrock. We succeeded in obtaining a reasonable velocity structure model for each station that gives a good fit to the main S-wave part of the aftershock observations. The source rupture process of the mainshock is estimated by linear kinematic waveform inversion using multiple time windows (Hartzell and Heaton, 1983). A fault plane model is assumed following the moment tensor solution by F-net, NIED. The strike and dip angles are 209° and 51°, respectively. The rupture starting point is fixed at the hypocenter located by the JMA. The obtained source model shows a large slip area in the shallow portion of the fault plane approximately 6 km southwest of the hypocenter. The rupture of the asperity finishes within about 9 s. This large slip area corresponds to the area with surface
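The linear multiple-time-window inversion referenced above (Hartzell and Heaton, 1983) reduces to solving d = Gm, where m collects the slip in each subfault and time window and G holds the corresponding Green's function seismograms. A toy least-squares sketch (all dimensions and values are illustrative, not from this study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 subfaults x 2 time windows = 6 slip parameters,
# 40 waveform samples; G holds the Green's function contributions.
n_data, n_params = 40, 6
G = rng.normal(size=(n_data, n_params))
m_true = np.array([1.2, 0.4, 0.0, 0.8, 0.3, 0.1])   # "true" slip per window
d = G @ m_true + 0.01 * rng.normal(size=n_data)     # observed data + noise

# Ordinary least-squares solve of d = G m
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Real implementations add smoothing and non-negativity constraints on slip; the unconstrained solve shown here is only the algebraic core of the method.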

  2. Source process of the MW7.8 2016 Kaikoura earthquake in New Zealand and the characteristics of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zang, Y.; Zhou, L.; Han, Y.

    2017-12-01

    The MW7.8 New Zealand earthquake of 2016 occurred near the Kaikoura area of the South Island, New Zealand, with an epicenter at 173.13°E, 42.78°S. The earthquake occurred on the transform boundary faults between the Pacific and Australian plates and had a thrust focal mechanism solution. The Kaikoura earthquake is a complex event because of the significant disparities among its magnitude, seismic moment, radiated energy, and casualties: only two people were killed, twenty people were injured, and fewer than twenty buildings were destroyed, a damage level that is not severe considering the large magnitude. We analyzed the rupture process from the source parameters and confirmed that the radiated energy and apparent stress of the Kaikoura earthquake are small. The results indicate frictional overshoot behavior in the dynamic source process, consistent with a thorough rupture and abundant moderate aftershocks. We also found that the observed horizontal peak ground acceleration (PGA) of the strong ground motion is generally small compared with the Next Generation Attenuation relationship. We further studied the characteristics of the observed horizontal PGAs at the 6 near-fault stations located less than 10 km from the main fault. The relatively high level of strong ground motion at these 6 stations may have been produced by the higher slip around the asperity area rather than by the initial rupture position on the main plane. Indeed, the large surface displacement at the northern end of the rupture plane indicates why aftershocks are concentrated in the north, and why there was more damage in Wellington than in Christchurch, even though the latter lies nearer, to the south of the epicenter. In conclusion, the low damage level of the Kaikoura earthquake in New Zealand is probably due to the smaller strong ground motion and the rare

  3. Rupture, waves and earthquakes.

    PubMed

    Uenishi, Koji

    2017-01-01

    Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not yet been fully understood. Instead, much former investigation in seismology has evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components over 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) the wave interaction that connects rupture (1) and failures (2), would therefore be indispensable.

  4. Rupture, waves and earthquakes

    PubMed Central

    UENISHI, Koji

    2017-01-01

    Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not yet been fully understood. Instead, much former investigation in seismology has evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components over 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but “extraordinary” phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) the wave interaction that connects rupture (1) and failures (2), would therefore be indispensable. PMID:28077808

  5. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

    Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides the conventional narrowband beamforming in the time domain, frequency-domain approaches such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011) have been proposed to improve the resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases, so a thorough study to compare and test these methods is needed. We wrote a GUI program that puts the three methods together so that users can conveniently process the same data with different methods and compare the results. We then used all the methods to process data from several earthquakes, including the 2008 Wenchuan Mw7.9 earthquake and the 2011 Tohoku-Oki Mw9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, and calculation window length. In general, back projection makes it possible to get a good result in a very short time using fewer than 20 lines of high-quality data with proper station distribution, but the swimming artifact can be significant; some remedies, for instance combining global seismic data, could help ameliorate this. MUSIC back projection needs relatively more data to obtain a better and more stable result, and thus considerably more time, since its runtime grows much faster than that of back projection as station number increases. Compressive sensing deals more effectively with multiple sources in the same time window, but costs the most time due to repeatedly solving matrix problems. The resolution of all the methods is complicated and depends on many factors. An important one is the grid size, which in turn influences
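At its core, time-domain back projection is delay-and-sum stacking: traces are shifted by the travel times predicted for a candidate source grid point and summed, and grid points that align the energy coherently light up. A toy sketch with integer sample delays (names and values are illustrative, not from the study):

```python
import numpy as np

def back_project(traces, delays, stack_len):
    """Delay-and-sum stack: shift each trace by its predicted delay
    (in samples) for a candidate source point, then average."""
    stack = np.zeros(stack_len)
    for tr, d in zip(traces, delays):
        stack += tr[d:d + stack_len]
    return stack / len(traces)

# Synthetic test: the same pulse arrives at station-specific delays.
pulse = np.exp(-0.5 * ((np.arange(40) - 20) / 3.0) ** 2)
true_delays = [10, 25, 40]
traces = []
for d in true_delays:
    tr = np.zeros(200)
    tr[d:d + 40] = pulse
    traces.append(tr)

# Stacking with the correct delays aligns the pulse coherently.
stack = back_project(traces, true_delays, 100)
```

With wrong delays the pulses add incoherently and the stack amplitude drops; that contrast is the discriminant all three imaging methods ultimately rely on.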

  6. Determination of source process and the tsunami simulation of the 2013 Santa Cruz earthquake

    NASA Astrophysics Data System (ADS)

    Park, S. C.; Lee, J. W.; Park, E.; Kim, S.

    2014-12-01

    In order to understand the characteristics of large tsunamigenic earthquakes, we analyzed the earthquake source process of the 2013 Santa Cruz earthquake and simulated the resulting tsunami. We first estimated a fault length of about 200 km using the 3-day aftershock distribution and a source duration of about 110 seconds using the duration of high-frequency energy radiation (Hara, 2007). Moment magnitude was estimated to be 8.0 using the formula of Hara (2007). From the fault length of 200 km and source duration of 110 seconds, we used an initial rupture velocity of 1.8 km/s for the teleseismic waveform inversions. Teleseismic body wave inversion was carried out using the inversion package of Kikuchi and Kanamori (1991). Teleseismic P waveform data from 14 stations were used, and a band-pass filter of 0.005 ~ 1 Hz was applied. Our best-fit solution indicated that the earthquake occurred on a northwesterly striking (strike = 305) and shallowly dipping (dip = 13) fault plane. The focal depth was determined to be 23 km, indicating a shallow event. A moment magnitude of 7.8 was obtained, somewhat smaller than the estimate above and that of a previous study (Lay et al., 2013). A large slip area was seen around the hypocenter. Using the slip distribution obtained by teleseismic waveform inversion, we calculated the surface deformations using the formulas of Okada (1985), taken as the initial displacement of the sea surface by the tsunami. The tsunami simulation was then carried out using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) code and 1 min-grid water depth data from the General Bathymetric Chart of the Oceans (GEBCO). According to the tsunami simulation, most tsunami waves propagated to the southwest and northeast, perpendicular to the fault strike. DART buoy data were used to verify our simulation. In the presentation, we will discuss more details of the results of the source process and tsunami simulation and compare them
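COMCOT solves the 2-D (nested-grid) shallow-water equations; the basic mechanics can be sketched with a 1-D linear staggered-grid analogue (a simplified illustration, not COMCOT's actual scheme, and all parameters are ours):

```python
import numpy as np

g, h = 9.81, 4000.0                  # gravity, uniform ocean depth (m)
nx, dx = 400, 2000.0                 # grid points and spacing (m)
c = np.sqrt(g * h)                   # long-wave speed, ~198 m/s
dt = 0.5 * dx / c                    # CFL-stable time step

x = np.arange(nx) * dx
eta = np.exp(-(((x - 200 * dx) / 20000.0) ** 2))   # initial sea-surface hump
u = np.zeros(nx + 1)                 # velocities on a staggered grid
eta0_sum = eta.sum()                 # for checking mass conservation

for _ in range(100):
    # momentum: du/dt = -g * d(eta)/dx   (interior faces only)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -h * du/dx
    eta -= dt * h * (u[1:] - u[:-1]) / dx
# The hump splits into two outgoing waves of roughly half amplitude,
# while the total water volume is conserved.
```

In a real simulation the initial `eta` would be the Okada (1985) seafloor displacement field rather than a synthetic hump, and depth `h` would come from bathymetry such as GEBCO.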

  7. Geologic Inheritance and Earthquake Rupture Processes: The 1905 M ≥ 8 Tsetserleg-Bulnay Strike-Slip Earthquake Sequence, Mongolia

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Hyuck; Klinger, Yann; Ferry, Matthieu; Ritz, Jean-François; Kurtz, Robin; Rizza, Magali; Bollinger, Laurent; Davaasambuu, Battogtokh; Tsend-Ayush, Nyambayar; Demberel, Sodnomsambuu

    2018-02-01

    In 1905, 14 days apart, two M 8 continental strike-slip earthquakes, the Tsetserleg and Bulnay earthquakes, occurred on the Bulnay fault system in Mongolia. Together, they ruptured four individual faults, with a total length of 676 km. Using submetric optical "Pleiades" satellite images with a ground resolution of 0.5 m, complemented by field observation, we mapped in detail the entire surface rupture associated with this earthquake sequence. The surface rupture along the main Bulnay fault is 388 km in length, striking nearly E-W. The rupture is formed by a series of fault segments that are 29 km long on average, separated by geometric discontinuities. Although there is a difference of about 2 m in average slip between the western and eastern parts of the Bulnay rupture, along-fault slip variations are overall limited, resulting in a smooth slip distribution, except for local slip deficits at segment boundaries. We show that damage associated with the rupture propagation, including short branches and secondary faulting, occurred significantly more often along the western part of the Bulnay rupture, while the eastern part of the rupture appears more localized and thus possibly structurally simpler. Ultimately, the difference in slip between the western and eastern parts of the rupture is attributed to this difference in rupture localization, associated at first order with a lateral change in the local geology. Damage associated with rupture branching appears to be located asymmetrically along the extensional side of the strike-slip rupture and shows a strong dependence on structural geologic inheritance.

  8. Seismic rupture process of the 2010 Haiti Earthquake (Mw7.0) inferred from seismic and SAR data

    NASA Astrophysics Data System (ADS)

    Santos, Rúben; Caldeira, Bento; Borges, José; Bezzeghoud, Mourad

    2013-04-01

    On January 12th 2010 at 21:53, the Port-au-Prince, Haiti, region was struck by an Mw7 earthquake, the second deadliest in history. The last significant seismic events in the region occurred in November 1751 and June 1770 [1]. Geodetic and geological studies prior to the 2010 earthquake [2] had warned of the potential for destructive seismic events in the region, and this event confirmed those warnings. Some aspects of the source of this earthquake remain debated: there is no agreement on the rupture mechanism or on which fault generated it [3]. In order to better understand the complexity of this rupture, we combined several techniques and data of different natures. We used teleseismic body-wave and Synthetic Aperture Radar (SAR) data with the following methodology: 1) analysis of the rupture process directivity [4] to determine the velocity and direction of rupture; 2) teleseismic body-wave inversion to obtain the spatiotemporal fault slip distribution and a detailed rupture model; 3) near-field surface deformation modeling using the calculated seismic rupture model, compared with the deformation field measured using SAR data from the Advanced Land Observing Satellite - Phased Array L-band SAR (ALOS-PALSAR) sensor. The combined application of seismic and geodetic data reveals a complex rupture that spread over approximately 12 s, mainly from WNW to ESE, with an average velocity of 2.5 km/s on a north-dipping fault plane. Two main asperities are obtained: the first (and largest) occurs within the first ~5 s and extends for approximately 6 km around the hypocenter; the second, in the remaining 6 s, covers a near-surface rectangular strip about 12 km long by 3 km wide. The first asperity is compatible with left-lateral strike-slip motion with a small reverse component; the mechanism of the second asperity is predominantly reverse. The obtained rupture process allows modeling a coseismic deformation

  9. Charles Darwin's earthquake reports

    NASA Astrophysics Data System (ADS)

    Galiev, Shamil

    2010-05-01

    problems which began to be discussed only recently. Earthquakes often precede volcanic eruptions. According to Darwin, the earthquake-induced shock may be a common mechanism of the simultaneous eruptions of volcanoes separated by long distances. In particular, Darwin wrote that ‘… the elevation of many hundred square miles of territory near Concepcion is part of the same phenomenon, with that splashing up, if I may so call it, of volcanic matter through the orifices in the Cordillera at the moment of the shock;…'. According to Darwin, the crust is a system in which fractured zones and zones of seismic and volcanic activity interact. Darwin formulated the task of considering together the processes now studied as seismology and volcanology. However, the difficulties are such that the study of interactions between earthquakes and volcanoes began only recently, and his works on this had relatively little impact on the development of the geosciences. In this report, we discuss how the latest data on seismic and volcanic events support Darwin's observations and ideas about the 1835 Chilean earthquake. Material from researchspace.auckland.ac.nz/handle/2292/4474 is used. We show how modern mechanical tests from impact engineering and simple experiments with weakly cohesive materials also support his observations and ideas. On the other hand, we have developed a mathematical theory of earthquake-induced catastrophic wave phenomena. This theory allows us to explain the most important aspects of Darwin's earthquake reports. This is achieved through the simplification of the fundamental governing equations of the problems considered into strongly nonlinear wave equations. Solutions of these equations are constructed with the help of analytic and numerical techniques. The solutions can model different strongly nonlinear wave phenomena that arise in a variety of physical contexts. A comparison with relevant experimental observations is also presented.

  10. Using SW4 for 3D Simulations of Earthquake Strong Ground Motions: Application to Near-Field Strong Motion, Building Response, Basin Edge Generated Waves and Earthquakes in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.

    2016-12-01

    Simulation of earthquake ground motions is becoming more widely used due to improvements in numerical methods, the development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth-order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation; and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including the Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge generated waves, and from large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure, as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of Linux clusters.
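SW4's discretization is a fourth-order, 3-D anelastic scheme; the underlying finite-difference idea can be illustrated with a far simpler second-order 1-D wave equation sketch (not SW4's actual formulation; all parameters are illustrative):

```python
import numpy as np

nx, dx, c = 300, 10.0, 2000.0         # grid points, spacing (m), wavespeed (m/s)
dt = 0.4 * dx / c                     # time step satisfying the CFL condition

u = np.exp(-(((np.arange(nx) - 150.0) / 8.0) ** 2))  # initial displacement pulse
u_prev = u.copy()                     # zero initial velocity

for _ in range(200):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]       # 3-point Laplacian
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next
# The pulse splits into two half-amplitude waves traveling in opposite
# directions; higher-order stencils (as in SW4) reduce grid dispersion.
```

The same update structure, extended to three dimensions, fourth order, topography and attenuation, is what makes codes like SW4 both accurate and demanding of HPC resources.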

  11. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites, and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian Network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Test Ban Treaty Organization (CTBTO) has supported an effort to apply machine learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by hand by highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than are the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an
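The sped-up playback described above amounts to re-labeling the sample rate: a trace recorded at 100 Hz, written to an audio file at 44.1 kHz, plays 441 times faster, moving a 1 Hz seismic signal to an audible 441 Hz. A standard-library sketch (the file name and rates are illustrative, not from the study):

```python
import math
import struct
import wave

def write_sped_up_wav(samples, playback_rate, path):
    """Write a unit-normalized trace as 16-bit mono PCM at playback_rate.
    Writing seismic-rate samples at an audio rate is the 'speed-up'."""
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)               # 16-bit PCM
        w.setframerate(playback_rate)   # faster playback raises pitch
        frames = b"".join(
            struct.pack("<h", int(32000 * s / peak)) for s in samples
        )
        w.writeframes(frames)

# A synthetic 1 Hz "seismic" oscillation: 60 s sampled at 100 Hz,
# written for 44.1 kHz playback (441x speed-up -> 441 Hz tone).
trace = [math.sin(2 * math.pi * 1.0 * n / 100.0) for n in range(6000)]
write_sped_up_wav(trace, 44100, "quake.wav")
```

The compression experiments the abstract proposes would then operate on audio files produced in exactly this way.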

  12. Rupture process of the M 7.9 Denali fault, Alaska, earthquake: Subevents, directivity, and scaling of high-frequency ground motions

    Frankel, A.

    2004-01-01

    Displacement waveforms and high-frequency acceleration envelopes from stations at distances of 3-300 km were inverted to determine the source process of the M 7.9 Denali fault earthquake. Fitting the initial portion of the displacement waveforms indicates that the earthquake started with an oblique thrust subevent (subevent #1) with an east-west-striking, north-dipping nodal plane consistent with the observed surface rupture on the Susitna Glacier fault. Inversion of the remainder of the waveforms (0.02-0.5 Hz) for moment release along the Denali and Totschunda faults shows that rupture proceeded eastward on the Denali fault, with two strike-slip subevents (numbers 2 and 3) centered about 90 and 210 km east of the hypocenter. Subevent 2 was located across from the station at PS 10 (Trans-Alaska Pipeline Pump Station #10) and was very localized in space and time. Subevent 3 extended from 160 to 230 km east of the hypocenter and had the largest moment of the subevents. Based on the timing between subevent 2 and the east end of subevent 3, an average rupture velocity of 3.5 km/sec, close to the shear wave velocity at the average rupture depth, was found. However, the portion of the rupture 130-220 km east of the epicenter appears to have an effective rupture velocity of about 5.0 km/sec, which is supershear. These two subevents correspond approximately to areas of large surface offsets observed after the earthquake. Using waveforms of the M 6.7 Nenana Mountain earthquake as empirical Green's functions, the high-frequency (1-10 Hz) envelopes of the M 7.9 earthquake were inverted to determine the location of high-frequency energy release along the faults. The initial thrust subevent produced the largest high-frequency energy release per unit fault length. The high-frequency envelopes and acceleration spectra (>0.5 Hz) of the M 7.9 earthquake can be simulated by chaining together rupture zones of the M 6.7 earthquake over distances from 30 to 180 km east of the

  13. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
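The bathtub-shaped hazard obtained from the generalized Weibull family can be illustrated with the simplest additive construction: a decreasing Weibull term (shape < 1) for aftershock-like clustering just after a leader event, plus an increasing term (shape > 1) for foreshock-like clustering before the next one. The parameter values below are made up for illustration and are not fitted values from the study.

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Standard Weibull hazard h(t) = (k/lam) * (t/lam)**(k - 1)."""
    t = np.asarray(t, dtype=float)
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t, k1=0.5, lam1=5.0, k2=3.0, lam2=50.0):
    """Additive Weibull hazard between two leader events: a decreasing
    term (k1 < 1) mimics aftershock clustering just after a leader
    event; an increasing term (k2 > 1) mimics foreshock clustering
    before the next one. Illustrative parameters only."""
    return weibull_hazard(t, k1, lam1) + weibull_hazard(t, k2, lam2)

t = np.linspace(0.1, 80.0, 800)
h = bathtub_hazard(t)
# h falls, flattens, then rises again: the bathtub shape
```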

  14. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating- system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer- based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  15. A combined source and site-effect study of ground motions generated by an earthquake in Port au Prince (Haiti)

    NASA Astrophysics Data System (ADS)

    St Fleur, Sadrac; Courboulex, Francoise; Bertrand, Etienne; Deschamps, Anne; Mercier de Lepinay, Bernard; Prepetit, Claude; Hough, Suzan

    2013-04-01

    We present the preliminary results of a study aimed at understanding how some combinations of source and site effects can generate extreme ground motions in the city of Port au Prince. For this study, we have used the recordings of several tens of earthquakes with magnitude larger than 3.0 at 3 to 14 stations from three networks: 3 stations of the Canadian broad-band network (RNCan), 2 stations of the educational French network (SaE), and 9 stations of the accelerometric network (Bureau des Mines et de l'Energie of Port au Prince and US Geological Survey). In order to estimate site effects under each station, we have applied classical spectral ratio methods: the H/V (horizontal/vertical) method was first used to select a reference station, which was itself used in a site/reference method. Because a true reference station was not available, we used successively station HCEA, then station PAPH, then an average value of 3 stations. In the frequency range studied (0.5-20 Hz), we found site-to-reference ratios of 3 to 8. However, these values present a large variability, depending on the earthquake recordings. This may indicate that the observed amplification from one station to the other depends not only on the local site effect but also on the source. We then used the same earthquake recordings as empirical Green's functions (EGF) in order to simulate the ground motions generated by a virtual earthquake. For this simulation, we have used a stochastic EGF summation method. We have worked on the simulation of a magnitude Mw=6.8 earthquake using, successively, two smaller events that occurred on the Leogane fault as EGFs. The results obtained using the two events are surprisingly different. Using the first EGF, we obtained almost the same ground motion values at each station in Port au Prince, whereas with the second EGF, the results highlight large differences. The large variability obtained in the results indicates that a particular combination of site and
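A stochastic EGF summation of the kind used above can be caricatured as summing many delayed copies of the small-event record. Real schemes also scale the sub-events and shape the delay distribution so the summed spectrum obeys omega-squared source scaling; this sketch, with its synthetic wavelet and uniform random delays, only conveys the basic summation step.

```python
import numpy as np

rng = np.random.default_rng(0)

def egf_sum(egf, n, duration_s, fs):
    """Sum n copies of a small-event record (the EGF) with random onset
    times spread over the target rupture duration. Published schemes
    also scale sub-events and shape the delay distribution to enforce
    omega-squared spectral scaling; this sketch keeps only the
    summation step."""
    n_delay = int(duration_s * fs)
    out = np.zeros(len(egf) + n_delay)
    for shift in rng.integers(0, n_delay, size=n):
        out[shift:shift + len(egf)] += egf
    return out

fs = 100.0                                     # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
egf = np.exp(-t) * np.sin(2 * np.pi * 5 * t)   # synthetic small event
big = egf_sum(egf, n=100, duration_s=8.0, fs=fs)
```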

  16. A new perspective on the generation of the 2016 M6.7 Kaohsiung earthquake, southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Zhi

    2017-04-01

    In order to investigate the likely generation mechanism of the 2016 M6.7 Kaohsiung earthquake, a large number of high-quality travel times from P- and S-wave source-receiver pairs are used jointly in this study to invert three-dimensional (3-D) seismic velocity (Vp, Vs) and Poisson's ratio structures at high resolution. We also calculated crack density, saturate fracture, and bulk-sound velocity from our inverted Vp, Vs, and σ models. In this way, multi-geophysical parameter imaging revealed that the 2016 Kaohsiung earthquake occurred along a distinctive edge portion exhibiting high-to-low variations in these parameters in both horizontal and vertical directions across the hypocenter. We consider that a slow velocity and high-σ body that has high ɛ and somewhat high ζ anomalies above the hypocenter under the Coastal Plain represents fluids contained in the young fold-and-thrust belt associated with the passive Asian continental margin in southwestern Taiwan. Intriguingly, a continuous low Vp and Vs zone with high Poisson's ratio, crack density and saturate fracture anomalies across the Laonung and Chishan faults is also clearly imaged in the northwestern upper crust beneath the Coastal Plain and Western Foothills as far as the southeastern lower crust under the Central Range. We therefore propose that this southeastern extending weakened zone was mainly the result of a fluid intrusion either from the young fold-and-thrust belt in the shallow crust or from the subducted Eurasian continental (EC) plate in the lower crust and uppermost mantle. We suggest that fluid intrusion into the upper Oligocene to Pleistocene shallow marine and clastic shelf units of the Eurasian continental crust and/or the relatively thin uppermost part of the transitional Pleistocene-Holocene foreland due to the subduction of the EC plate along the deformation front played a key role in earthquake generation in southwestern Taiwan. Such fluid penetration would reduce Vp and Vs while increasing
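The link drawn above between fluids and a high Poisson's ratio follows from the dependence of σ on the Vp/Vs ratio: fluid-filled cracks depress Vs more than Vp, which raises Vp/Vs and hence σ. A minimal numerical check (the velocities below are generic crustal values, not ones from the tomographic model):

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from the Vp/Vs ratio:
    sigma = ((Vp/Vs)**2 - 2) / (2 * ((Vp/Vs)**2 - 1))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Generic crustal velocities (km/s), not values from the tomography:
dry = poissons_ratio(6.0, 3.46)   # Vp/Vs ~ 1.73 -> sigma ~ 0.25
wet = poissons_ratio(6.0, 3.00)   # Vs lowered by fluids -> sigma = 1/3
```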

  17. A new perspective on the generation of the 2016 M6.4 Meilung earthquake, southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2017-12-01

    In order to investigate the likely generation mechanism of the 2016 M6.4 Meilung earthquake, a large number of high-quality travel times from P- and S-wave source-receiver pairs are used jointly in this study to invert three-dimensional (3-D) seismic velocity (Vp, Vs) and Poisson's ratio structures at high resolution. We also calculated crack density, saturate fracture, and bulk-sound velocity from our inverted Vp, Vs, and σ models. In this way, multi-geophysical parameter imaging revealed that the 2016 Meilung earthquake occurred along a distinctive edge portion exhibiting high-to-low variations in these parameters in both horizontal and vertical directions across the hypocenter. We consider that a slow velocity and high-Poisson ratio body that has high-crack density and somewhat high-saturate fracture anomalies above the hypocenter under the coastal plain represents fluids contained in the young fold-and-thrust belt relative to the passive Asian continental margin in southwestern Taiwan. Intriguingly, a continuous low Vp and Vs zone with high Poisson ratio, crack density and saturate fracture anomalies across the Laonung and Chishan faults is also clearly imaged in the northwestern upper crust beneath the coastal plain and western foothills as far as the southeastern lower crust under the central range. We therefore propose that this southeastern extending weakened zone was mainly the result of a fluid intrusion either from the young fold-and-thrust belt associated with the passive Asian continental margin in the shallow crust or from the subducted Eurasian continental (EC) plate in the lower crust and uppermost mantle. We suggest that fluid intrusion into the upper Oligocene to Pleistocene shallow marine and clastic shelf units of the Eurasian continental crust and/or the relatively thin uppermost part of the transitional Pleistocene-Holocene foreland due to the subduction of the EC plate along the deformation front played a key role in earthquake generation in

  18. Broadband Rupture Process of the 2001 Kunlun Fault (Mw 7.8) Earthquake

    NASA Astrophysics Data System (ADS)

    Antolik, M.; Abercrombie, R.; Ekstrom, G.

    2003-04-01

    We model the source process of the 14 November 2001 Kunlun fault earthquake using broadband body waves from the Global Digital Seismographic Network (P, SH) and both point-source and distributed-slip techniques. The point-source technique is a non-linear iterative inversion that solves for focal mechanism, moment rate function, depth, and rupture directivity. The P waves reveal a complex rupture process for the first 30 s, with smooth unilateral rupture toward the east along the Kunlun fault accounting for the remainder of the 120 s long rupture. The obtained focal mechanism for the main portion of the rupture is (strike=96°, dip=83°, rake=-8°), which is consistent with both the Harvard CMT solution and observations of the surface rupture. The seismic moment is 5.29×10^20 Nm and the average rupture velocity is ~3.5 km/s. However, the initial portion of the P waves cannot be fit at all with this mechanism. A strong pulse visible in the first 20 s can only be matched with an oblique-slip subevent (MW ~ 6.8-7.0) involving a substantial normal faulting component, but the nodal planes of this mechanism are not well constrained. The first-motion polarities of the P waves clearly require a strike-slip mechanism with a similar orientation to the Kunlun fault. Field observations of the surface rupture (Xu et al., SRL, 73, No. 6) reveal a small 26 km-long strike-slip rupture at the far western end (90.5° E) with a 45-km-long gap and extensional step-over between this rupture and the main Kunlun fault rupture. We hypothesize that the initial fault break occurred on this segment, with release of the normal faulting energy as a continuous rupture through the extensional step, enabling transfer of the slip to the main Kunlun fault. This process is similar to that which occurred during the 2002 Denali fault (MW 7.9) earthquake sequence, except that 11 days elapsed between the October 23 (MW 6.7) foreshock and the initial break of the Denali earthquake along a thrust fault.

  19. Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.

    2014-04-01

    The source process of the 20 January 2007, Mw 4.5 Odaesan earthquake in South Korea is investigated in the low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances less than 70 km from the epicenter. Synthetic Green's functions are adopted for the low-frequency band of 0.1-0.3 Hz by using the wave-number integration technique and a one-dimensional velocity model beneath the epicentral area. An iterative grid search was performed across the strike, dip, rake, and focal depth of rupture nucleation parameters to find the best-fit double-couple mechanism. To resolve the nodal plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of the grid-searched parameters. A focal depth of 10 km was determined through the grid search over depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates a vertical strike-slip faulting mechanism. The NW faulting plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. In the low-frequency band, the slip pattern indicates a simple source process, with the event acting essentially as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with various rupture velocities in the range of 2.0-4.0 km/s. Although there is a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW dipping plane gives a smaller RMS error than its auxiliary plane. In the high-frequency analysis, the slip distribution is characterized by an oblique pattern recovered around the hypocenter, indicating a complex rupture scenario for such a moderate-sized earthquake, similar to those reported
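The inversion strategy described above, a grid search over mechanism parameters with slip recovered by non-negative least squares at each node and the candidate with the smallest RMS retained, can be sketched as follows. The Green's-function matrices here are random stand-ins for the real forward models, so everything except the search-plus-NNLS structure is an assumption.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

def rms_for_candidate(G, d):
    """Recover non-negative slip for one candidate fault plane and
    return the RMS misfit of its prediction."""
    slip, resid = nnls(G, d)              # slip >= 0 on each subfault
    return resid / np.sqrt(len(d)), slip

# Hypothetical forward models: each grid node (strike, dip, rake, depth)
# would yield a Green's-function matrix G; random matrices stand in.
G_true = rng.random((200, 10))
slip_true = rng.random(10)                # non-negative "true" slip
d = G_true @ slip_true                    # noise-free synthetic data

candidates = {"NW plane": G_true, "SE plane": rng.random((200, 10))}
best = min(candidates, key=lambda k: rms_for_candidate(candidates[k], d)[0])
# the plane whose forward model explains the data wins the grid search
```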

  20. Signals in the ionosphere generated by tsunami earthquakes: observations and modeling support

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Sladen, A.; Mikesell, D.; Larmat, C. S.; Rakoto, V.; Remillieux, M.; Lee, R.; Khelfi, K.; Lognonne, P. H.; Astafyeva, E.

    2017-12-01

    Forecasting systems failed to predict the magnitude of the 2011 great tsunami in Japan due to the difficulty and cost of instrumenting the ocean with high-quality and dense networks. Melgar et al. (2013) showed that using all of the conventional data (inland seismic, geodetic, and tsunami gauges) with the best inversion method still fails to predict the correct height of the tsunami before it breaks onto a coast near the epicenter (< 500 km). On the other hand, in the last decade, scientists have gathered convincing evidence of transient signals in ionosphere Total Electron Content (TEC) observations that are associated with open-ocean tsunami waves. Even though typical tsunami waves are only a few centimeters high, they are powerful enough to create atmospheric vibrations extending all the way to the ionosphere, 300 kilometers up in the atmosphere. Therefore, we are proposing to incorporate the ionospheric signals into tsunami early-warning systems. We anticipate that the method could be decisive for mitigating "tsunami earthquakes", which trigger tsunamis larger than expected from their short-period magnitude. These events are challenging to characterize as they rupture the near-trench subduction interface, in a distant region less constrained by onshore data. As a couple of devastating tsunami earthquakes happen per decade, they represent a real threat for onshore populations and a challenge for tsunami early-warning systems. We will present the TEC observations of the recent Java 2006 and Mentawai 2010 tsunami earthquakes and base our analysis on acoustic ray tracing, normal mode summation and the simulation code SPECFEM, which solves the wave equation in coupled acoustic (ocean, atmosphere) and elastic (solid earth) domains. Rupture histories are entered as finite source models, which will allow us to evaluate the effect of a relatively slow rupture on the surrounding ocean and atmosphere.

  1. Earthquake Rupture Process Inferred from Joint Inversion of 1-Hz GPS and Strong Motion Data: The 2008 Iwate-Miyagi Nairiku, Japan, Earthquake

    NASA Astrophysics Data System (ADS)

    Yokota, Y.; Koketsu, K.; Hikima, K.; Miyazaki, S.

    2009-12-01

    1-Hz GPS data can be used as a ground displacement seismogram. The capability of high-rate GPS to record seismic wave fields for large magnitude (M8 class) earthquakes has been demonstrated [Larson et al., 2003]. Rupture models have been inferred solely or supplementarily from 1-Hz GPS data [Miyazaki et al., 2004; Ji et al., 2004; Kobayashi et al., 2006]. However, none of the previous studies have succeeded in inferring the source process of a medium-sized (M6 class) earthquake solely from 1-Hz GPS data. We first compared 1-Hz GPS data with integrated strong motion waveforms for the 2008 Iwate-Miyagi Nairiku, Japan, earthquake. We performed a waveform inversion for the rupture process using 1-Hz GPS data only [Yokota et al., 2009]. We here discuss the rupture processes inferred from the inversion of 1-Hz GPS data of GEONET only, the inversion of strong motion data of K-NET and KiK-net only, and the joint inversion of 1-Hz GPS and strong motion data. The data were inverted to infer the rupture process of the earthquake using the inversion codes by Yoshida et al. [1996] with the revisions by Hikima and Koketsu [2005]. In the 1-Hz GPS inversion result, the total seismic moment is 2.7×10^19 Nm (Mw 6.9) and the maximum slip is 5.1 m. These results are approximately equal to 2.4×10^19 Nm and 4.5 m from the inversion of strong motion data. The difference in the slip distribution on the northern fault segment may come from long-period motions possibly recorded only in 1-Hz GPS data. In the joint inversion result, the total seismic moment is 2.5×10^19 Nm and the maximum slip is 5.4 m. These values also agree well with the result of the 1-Hz GPS inversion. In all the series of snapshots that show the dynamic features of the rupture process, the rupture propagated bilaterally from the hypocenter to the south and north. The northern rupture speed is faster than the southern one. These agreements demonstrate the ability of 1-Hz GPS data to infer not only static, but also dynamic

  2. Implications of next generation attenuation ground motion prediction equations for site coefficients used in earthquake resistant design

    Borcherdt, Roger D.

    2014-01-01

    Proposals are developed to update Tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures, published as American Society of Civil Engineers Structural Engineering Institute standard 7-10 (ASCE/SEI 7-10). The updates are mean next generation attenuation (NGA) site coefficients inferred directly from the four NGA ground motion prediction equations used to derive the maximum considered earthquake response maps adopted in ASCE/SEI 7-10. Proposals include the recommendation to use straight-line interpolation to infer site coefficients at intermediate values of vS30 (average shear velocity to 30-m depth). The NGA coefficients are shown to agree well with adopted site coefficients at low levels of input motion (0.1 g) and those observed from the Loma Prieta earthquake. For higher levels of input motion, the majority of the adopted values are within the 95% epistemic-uncertainty limits implied by the NGA estimates, with the exceptions being the mid-period site coefficient, Fv, for site class D and the short-period coefficient, Fa, for site class C, both of which are slightly less than the corresponding 95% limit. The NGA database shows that the median vS30 value of 913 m/s for site class B is more typical than 760 m/s as a value to characterize firm to hard rock sites as the uniform ground condition for future maximum considered earthquake response ground motion estimates. Future updates of NGA ground motion prediction equations can be incorporated easily into future adjustments of adopted site coefficients using procedures presented herein.
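The recommended straight-line interpolation of a site coefficient at an intermediate shear-wave velocity is a one-liner with numpy. The anchor values below are placeholders for illustration, NOT the coefficients of ASCE/SEI 7-10 Tables 11.4-1 and 11.4-2.

```python
import numpy as np

# Placeholder anchor points (vs30 in m/s -> short-period coefficient Fa).
# These are invented numbers, NOT values from ASCE/SEI 7-10 Table 11.4-1.
vs30_nodes = np.array([180.0, 360.0, 760.0, 1500.0])
fa_nodes = np.array([1.6, 1.2, 1.0, 0.8])

def site_coefficient(vs30):
    """Straight-line interpolation at an intermediate vs30, as the
    proposal recommends; values outside the table are clamped."""
    return float(np.interp(vs30, vs30_nodes, fa_nodes))

fa = site_coefficient(560.0)   # midway between the 360 and 760 nodes
```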

  3. Concept Generation Process for Patient Transferring Device

    NASA Astrophysics Data System (ADS)

    Dandavate, A. L.; Sarje, S. H.

    2012-07-01

    In this paper, an attempt has been made to develop concepts for patient transferring tasks. The concept generation process of patient transferring device (PTD), which includes interviews of the customers, interpretation of the needs, organizing the needs into a hierarchy, establishing relative importance of the needs, establishing target specifications, and conceptualization has been discussed in this paper. The authors conducted the interviews of customers at Mobilink NGO, St. John's Hospital, Bangalore in order to know the needs and wants for the PTD. AHP technique was used for establishing and evaluating relative importance of needs, and based on the importance of the customer needs, concepts were developed through brainstorming.
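The AHP step used above to establish the relative importance of needs reduces to computing the principal eigenvector of a pairwise comparison matrix. The three needs and the comparison values below are invented for illustration and do not come from the study's customer interviews.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale for
# three invented PTD needs (safety, comfort, cost); A[i, j] states how
# much more important need i is than need j. Not data from the study.
A = np.array([[1.0, 3.0, 5.0],
              [1.0 / 3.0, 1.0, 3.0],
              [1.0 / 5.0, 1.0 / 3.0, 1.0]])

def ahp_weights(A):
    """AHP priorities: principal right eigenvector of the comparison
    matrix, normalized to sum to one."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

w = ahp_weights(A)    # safety > comfort > cost for this example matrix
```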

  4. Detailed observations of California foreshock sequences: Implications for the earthquake initiation process

    Dodge, D.A.; Beroza, G.C.; Ellsworth, W.L.

    1996-01-01

    We find that foreshocks provide clear evidence for an extended nucleation process before some earthquakes. In this study, we examine in detail the evolution of six California foreshock sequences: the 1986 Mount Lewis (ML = 5.5), the 1986 Chalfant (ML = 6.4), the 1986 Stone Canyon (ML = 4.7), the 1990 Upland (ML = 5.2), the 1992 Joshua Tree (MW = 6.1), and the 1992 Landers (MW = 7.3) sequences. Typically, uncertainties in hypocentral parameters are too large to establish the geometry of foreshock sequences and hence to understand their evolution. However, the similarity of location and focal mechanisms for the events in these sequences leads to similar foreshock waveforms that we cross correlate to obtain extremely accurate relative locations. We use these results to identify small-scale fault zone structures that could influence nucleation and to determine the stress evolution leading up to the mainshock. In general, these foreshock sequences are not compatible with a cascading failure nucleation model in which the foreshocks all occur on a single fault plane and trigger the mainshock by static stress transfer. Instead, the foreshocks seem to concentrate near structural discontinuities in the fault and may themselves be a product of an aseismic nucleation process. Fault zone heterogeneity may also be important in controlling the number of foreshocks, i.e., the stronger the heterogeneity, the greater the number of foreshocks. The size of the nucleation region, as measured by the extent of the foreshock sequence, appears to scale with mainshock moment in the same manner as determined independently by measurements of the seismic nucleation phase. We also find evidence for slip localization as predicted by some models of earthquake nucleation. Copyright 1996 by the American Geophysical Union.
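The relative-location idea above rests on measuring differential arrival times by cross-correlating the similar foreshock waveforms. A minimal sketch of the timing measurement, using a synthetic wavelet and integer-sample precision only (the study's relocations additionally require sub-sample interpolation):

```python
import numpy as np

def relative_delay(x, y, fs):
    """Lag (in seconds) that best aligns y with x, taken from the peak
    of the full cross-correlation. Integer-sample precision only;
    sub-sample refinement is omitted."""
    cc = np.correlate(y, x, mode="full")
    lag = int(np.argmax(cc)) - (len(x) - 1)
    return lag / fs

fs = 100.0
t = np.arange(0, 2, 1 / fs)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.005) * np.sin(2 * np.pi * 10 * t)
shifted = np.roll(wavelet, 7)      # same waveform arriving 0.07 s later
delay = relative_delay(wavelet, shifted, fs)
```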

  5. Foam generator and viscometer apparatus and process

    DOEpatents

    Reed, Troy D.; Pickell, Mark B.; Volk, Leonard J.

    2004-10-26

    An apparatus and process to generate a liquid-gas-surfactant foam, to measure its viscosity, and to enable optical and/or electronic measurements of physical properties. The process includes the steps of pumping selected and measured liquids and measured gases into a mixing cell. The mixing cell is pressurized to a desired pressure and maintained at that pressure. Liquids and gas are mixed in the mixing cell to produce a foam of desired consistency. The temperature of the foam in the mixing cell is controlled. Foam is delivered from the mixing cell through a viscometer under controlled pressure and temperature conditions where the viscous and physical properties of the foam are measured and observed.

  6. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases to reduce the processing time of nearest neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Application of the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
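The speed-up described above comes from replacing an exhaustive scan of the database with a KD-tree query. A sketch with scipy's cKDTree, using random vectors as stand-ins for the waveform filter-bank features of the Gutenberg Algorithm:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# Stand-in database: each row is a feature vector (e.g. filter-bank
# amplitudes of the first seconds of a record); the real Gutenberg
# Algorithm features would replace these random ones.
database = rng.random((100_000, 9))
query = rng.random(9)

tree = cKDTree(database)             # one-time O(n log n) build
dist, idx = tree.query(query, k=5)   # nearest 5 past observations

# Exhaustive scan returns the same neighbors at O(n) cost per query.
brute = np.argsort(np.linalg.norm(database - query, axis=1))[:5]
assert set(idx) == set(brute)
```

The tree is built once when the database is loaded; each incoming event then costs only a logarithmic-depth traversal instead of a full scan.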

  7. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    The Longmen Shan, located at the junction of the eastern margin of the Tibet plateau and the Sichuan basin, is a typical area for studying the deformation pattern of the Tibet plateau. Following the 2008 Mw 7.9 Wenchuan earthquake (WE) rupturing the Longmen Shan Fault (LSF), a great deal of observations and studies on geology, geophysics, and geodesy have been carried out for this region, with results published successively in recent years. Using a 2D viscoelastic finite element model and introducing the rate-state friction law on the fault, this work models the earthquake recurrence process and the dynamic evolutionary processes over an earthquake cycle of 10,000 years. By analyzing the displacement, velocity, stress, strain energy and strain energy increment fields, this work reaches the following conclusions: (1) The maximum coseismic displacement on the fault is at the surface, and the damage on the hanging wall is much more serious than that on the foot wall of the fault. If the detachment layer is absent, the coseismic displacement would be smaller and the relative displacement between the hanging wall and foot wall would also be smaller. (2) In every stage of the earthquake cycle, the velocities (especially the vertical velocities) on the hanging wall of the fault are larger than those on the foot wall, and the values and the distribution patterns of the velocity fields are similar. In the locking stage prior to the earthquake, however, the velocities in the crust and the relative velocities between hanging wall and foot wall decrease. For the model without the detachment layer, the velocities in the crust in the post-seismic stage are much larger than those in other stages. (3) The maximum principal stress and the maximum shear stress concentrate around the joint of the fault and the detachment layer, therefore the earthquake would nucleate and start here. (4) The strain density distribution patterns in the stages of the earthquake cycle are similar. There are two
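The rate-state friction law introduced on the fault is, in its standard Dieterich form, mu = mu0 + a ln(V/V0) + b ln(V0 theta / Dc); at steady state (theta = Dc/V) this reduces to mu_ss = mu0 + (a - b) ln(V/V0), so a - b < 0 gives the velocity weakening needed for stick-slip cycles. The parameter values below are generic laboratory numbers, not the ones used in the Longmen Shan model.

```python
import numpy as np

def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.01):
    """Dieterich-form rate-state friction:
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc). Generic lab-scale
    parameters, not the values of the Longmen Shan model."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def steady_state(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.01):
    """At steady state theta = dc/v, giving
    mu_ss = mu0 + (a - b)*ln(v/v0); a - b < 0 means velocity
    weakening, the condition for stick-slip earthquake cycles."""
    return friction(v, dc / v, mu0, a, b, v0, dc)

# Steady-state friction drops as sliding speeds up tenfold:
drop = steady_state(1e-6) - steady_state(1e-5)
```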

  8. Atmospheric processes in reaction of Northern Sumatra Earthquake sequence Dec 2004-Apr 2005

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S.; Cervone, G.; Singh, R.; Taylor, P.

    2005-05-01

    This work describes our first results in analyzing data from different and independent sources - emitted long-wavelength radiation (OLR), surface latent heat flux (SLHF) and GPS Total Electron Content (TEC) - collected from ground-based (GPS) and satellite TIR (thermal infra-red) data sources (NOAA/AVHRR, MODIS). We found atmosphere and ionosphere anomalies one week prior to both the Sumatra-Andaman Islands earthquake (Dec 26, 2004) and the M 8.7 Northern Sumatra earthquake of March 28, 2005. We analyzed 118 days of data from December 1, 2004 through April 1, 2005 for the area (0°-10° north latitude and 90°-100° east longitude), which included 125 earthquakes with M>5.5. Recent analysis of the continuous OLR from the Earth surface indicates anomalous variations (at the top of the atmosphere) prior to a number of medium to large earthquakes. In the case of the M 9.0 Sumatra-Andaman Islands event, compared to the reference fields for the months of December between 2001 and 2004, we found strongly anomalous OLR signals of +80 W/m2 (two sigma) along the epicentral area on Dec 21, 2004, five days before the event. In the case of the M 8.7 March 28, 2005 event, the anomalous signature over the epicenter appeared on March 26, was much weaker (only +20 W/m2), and had a different topology. Anomalous values of SLHF associated with the M 9.0 Sumatra-Andaman Islands event were found on Dec 22, 2004 (+280 W/m2), and with less intensity on Mar 23, 2005 (+180 W/m2). Ionospheric variations (GPS/TEC) associated with the Northern Sumatra events were determined from five regional GPS network stations (COCO, BAKO, NTUS, HYDE and BAST2). For every station, time series of the vertical TEC (VTEC) were computed together with the correlation with the Dst index. On December 22, four days prior to the M 9.0 quake, GPS/TEC data reached the monthly maximum for COCO with minor Dst activity. For the M 8.7 March 28 event, increased values of GPS/TEC were observed during four days (March 22-25) against a quiet geomagnetic background. Our results need additional

  9. Using Low-Frequency Earthquakes to Investigate Slow Slip Processes and Plate Interface Structure Beneath the Olympic Peninsula, WA

    NASA Astrophysics Data System (ADS)

    Chestler, Shelley

    This dissertation seeks to further understand the LFE source process, the role LFEs play in generating slow slip, and the utility of using LFEs to examine plate interface structure. The work involves the creation and investigation of a 2-year-long catalog of low-frequency earthquakes beneath the Olympic Peninsula, Washington. In the first chapter, we calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, WA. LFE moments range from 1.4 × 10¹⁰ to 1.9 × 10¹² N·m (Mw = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b-value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that for large LFEs the b-value is ~6, while for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10¹¹ N·m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) the slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and the geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, sub-patch diameters, stress drops, and slip rates for LFEs during ETS events. We allow for LFEs to rupture smaller sub-patches within the LFE family patch. Models with 1-10 sub-patches produce slips of 0.1-1 mm, sub-patch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one sub-patch is often assumed, we believe 3-10 sub-patches are more likely. In the second chapter, using high-resolution relative low
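    The moment-to-magnitude conversion used implicitly throughout these abstracts is the standard Hanks-Kanamori relation; a quick check that the quoted moment range reproduces the quoted Mw range:

```python
import math

# Hanks-Kanamori moment magnitude: Mw = (2/3) * (log10(M0) - 9.1), M0 in N·m.
def moment_to_mw(m0_newton_meters):
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_to_mw(1.4e10), 1))  # 0.7, lower end of the LFE catalog
print(round(moment_to_mw(1.9e12), 1))  # 2.1, upper end
```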

  10. The Salton Seismic Imaging Project (SSIP): Rift Processes and Earthquake Hazards in the Salton Trough (Invited)

    NASA Astrophysics Data System (ADS)

    Hole, J. A.; Stock, J. M.; Fuis, G. S.; Rymer, M. J.; Murphy, J. M.; Sickler, R. R.; Criley, C. J.; Goldman, M.; Catchings, R. D.; Ricketts, J. W.; Gonzalez-Fernandez, A.; Driscoll, N.; Kent, G.; Harding, A. J.; Klemperer, S. L.

    2009-12-01

    The Salton Seismic Imaging Project (SSIP) and coordinated projects will acquire seismic data in and across the Salton Trough in southern California and northern Mexico, including the Coachella, Imperial, and Mexicali Valleys. These projects address both rifting processes at the northern end of the Gulf of California extensional province and earthquake hazards at the southern end of the San Andreas Fault system. In the central Salton Trough, North American lithosphere appears to have been rifted completely apart. Based primarily on a 1979 seismic refraction project, the 20-22 km thick crust appears to be composed entirely of new crust added by magmatism from below and sedimentation from above. The new data will constrain the style of continental breakup, the role and mode of magmatism, the effects of rapid Colorado River sedimentation upon extension and magmatism, and the partitioning of oblique extension. The southernmost San Andreas Fault is considered at high risk of producing a large damaging earthquake, yet the structures of the fault and adjacent basins are poorly constrained. To improve hazard models, SSIP will image the geometry of the San Andreas and Imperial Faults, the structure of sedimentary basins in the Salton Trough, and the three-dimensional seismic velocity of the crust and uppermost mantle. SSIP and collaborating projects have been funded by several different programs at NSF and the USGS. These projects include seven lines of land refraction and low-fold reflection data, airgun and OBS data in the Salton Sea, coordinated fieldwork for onshore-offshore and 3-D data, and a densely sampled line of broadband stations across the trough. Fieldwork is tentatively scheduled for 2010. Preliminary work in 2009 included calibration shots in the Imperial Valley that quantified strong ground motion and proved lack of harm to agricultural irrigation tile drains from explosive shots. Piggyback and complementary studies are encouraged.

  11. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10¹⁰ to 1.9 × 10¹² N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that for large LFEs the b value is ~6, while for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10¹¹ N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
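    The area-limited estimate works by equating the summed LFE moment of a family to rigidity times slipping area times the geodetically observed slip. A sketch of that arithmetic; the rigidity, total slip, and summed moment below are illustrative assumptions chosen only to demonstrate the formula, not values from the paper:

```python
import math

# Area-limited model: M0_total = mu * A * D  =>  A = M0_total / (mu * D).
# Report the circular-equivalent diameter of the slipping patch.
def family_patch_diameter(total_moment_nm, rigidity_pa, total_slip_m):
    area = total_moment_nm / (rigidity_pa * total_slip_m)
    return 2.0 * math.sqrt(area / math.pi)

# Hypothetical family: summed moment 4.2e13 N·m, mu = 30 GPa, 2 cm total slip.
print(family_patch_diameter(4.2e13, 3.0e10, 0.02))  # ≈ 298.5 m
```

With these assumed inputs the patch diameter comes out near the 300 m scale quoted in the abstract.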

  12. Unsupervised Approaches for Post-Processing in Computationally Efficient Waveform-Similarity-Based Earthquake Detection

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Yoon, C. E.; O'Reilly, O. J.; Beroza, G. C.

    2015-12-01

    Recent improvements in computational efficiency for waveform correlation-based detections achieved by new methods such as Fingerprint and Similarity Thresholding (FAST) promise to allow large-scale blind search for similar waveforms in long-duration continuous seismic data. Waveform similarity search applied to datasets of months to years of continuous seismic data will identify significantly more events than traditional detection methods. With the anticipated increase in number of detections and associated increase in false positives, manual inspection of the detection results will become infeasible. This motivates the need for new approaches to process the output of similarity-based detection. We explore data mining techniques for improved detection post-processing. We approach this by considering similarity-detector output as a sparse similarity graph with candidate events as vertices and similarities as weighted edges. Image processing techniques are leveraged to define candidate events and combine results individually processed at multiple stations. Clustering and graph analysis methods are used to identify groups of similar waveforms and assign a confidence score to candidate detections. Anomaly detection and classification are applied to waveform data for additional false detection removal. A comparison of methods will be presented and their performance will be demonstrated on a suspected induced and non-induced earthquake sequence.
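    One natural post-processing step on the sparse similarity graph described above is to group candidate detections into connected components above a similarity threshold. A minimal sketch using union-find over weighted edges; the data layout and event names are assumptions for illustration, not the FAST implementation:

```python
# Group candidate detections: treat pairs with similarity >= threshold as
# graph edges and return connected components via union-find.
def group_candidates(similarities, threshold):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), weight in similarities.items():
        find(a), find(b)  # register both vertices, even if the edge is weak
        if weight >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for vertex in parent:
        groups.setdefault(find(vertex), set()).add(vertex)
    return list(groups.values())

edges = {("e1", "e2"): 0.9, ("e2", "e3"): 0.8, ("e3", "e4"): 0.2}
print(group_candidates(edges, 0.5))  # two groups: {e1, e2, e3} and {e4}
```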

  13. Two regions of seafloor deformation generated the tsunami for the 13 November 2016, Kaikoura, New Zealand earthquake

    NASA Astrophysics Data System (ADS)

    Bai, Yefei; Lay, Thorne; Cheung, Kwok Fai; Ye, Lingling

    2017-07-01

    The 13 November 2016 Kaikoura, New Zealand, Mw 7.8 earthquake ruptured multiple crustal faults in the transpressional Marlborough and North Canterbury tectonic domains of northeastern South Island. The Hikurangi trench and underthrust Pacific slab terminate in the region south of Kaikoura, as the subduction zone transitions to the Alpine fault strike-slip regime. It is difficult to establish from on-land observations whether any coseismic slip occurred on the megathrust. The rupture generated a tsunami well recorded at tide gauges along the eastern coasts and in the Chatham Islands, including a 4 m crest-to-trough signal at Kaikoura, where coastal uplift was about 1 m, and at multiple gauges in Wellington Harbor. Iterative modeling of teleseismic body waves and the regional water-level recordings establishes that two regions of seafloor motion produced the tsunami: an Mw 7.6 rupture on the megathrust below Kaikoura and comparable-size transpressional crustal faulting extending offshore near Cook Strait.

  14. Stochastic strong motion generation using slip model of 21 and 22 May 1960 mega-thrust earthquakes in the main cities of Central-South Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Ojeda, J.; DelCampo, F., Sr.; Pasten, C., Sr.; Otarola, C., Sr.; Silva, R., Sr.

    2017-12-01

    In May 1960, one of the most unusual seismic sequences ever registered instrumentally took place. The Mw 8.1 Concepción earthquake occurred on May 21, 1960. The aftershocks of this event apparently migrated to the south-east, and the Mw 9.5 Valdivia mega-earthquake occurred 33 hours later. The structural damage produced by both events was no larger than that of other earthquakes in Chile, and lower than that of crustal earthquakes of smaller magnitude. The damage was concentrated at sites with shallow soil layers of low shear-wave velocity (Vs). However, no seismological station recorded this sequence. For that reason, we generate synthetic strong-motion acceleration time histories for the main cities affected by these events. We use 155 points of vertical surface displacement compiled by Plafker and Savage in 1968 and, considering the observations of these authors and of local residents, we separate the uplift and subsidence information associated with the first Mw 8.1 earthquake and the second Mw 9.5 mega-earthquake. We consider elastic deformation propagation, assume a realistic lithosphere geometry, and apply a Bayesian method that maximizes the a posteriori probability density to obtain the slip distribution. Subsequently, we use a stochastic strong-motion generation method based on the finite-fault models obtained for both earthquakes. We consider the incidence angle of rays at the surface, the free-surface effect, the energy partition among P, SV and SH waves, the dynamic corner frequency, and the influence of site effects. The results show that the Mw 8.1 earthquake occurred down-dip on the slab, and its strong-motion records are similar to those of other Chilean earthquakes like the Mw 7.7 Tocopilla event (2007). For the Mw 9.5 earthquake we obtain synthetic acceleration time histories with PGA values around 0.8 g in cities near the maximum asperity or with low-velocity soil layers. This allows us to conclude that the strong-motion records are strongly influenced by the shallow soil deposits. These records
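    Stochastic strong-motion methods of the kind used here build synthetic records from an omega-square source spectrum with a Brune corner frequency. A minimal sketch of that spectral ingredient; the moment, stress drop, and shear-wave speed below are illustrative assumptions, not the slip-model parameters of this study:

```python
import math

# Brune source: corner frequency fc = 0.37 * beta / r, with source radius
# r = (7*M0 / (16*stress_drop))**(1/3); SI units throughout.
def corner_frequency(m0, stress_drop, beta):
    radius = (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)
    return 0.37 * beta / radius

# Far-field omega-square acceleration source spectrum (unscaled shape):
# (2*pi*f)**2 growth below fc rolling into a flat acceleration plateau above fc.
def accel_source_spectrum(f, m0, stress_drop, beta):
    fc = corner_frequency(m0, stress_drop, beta)
    return (2.0 * math.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)

fc = corner_frequency(1.0e19, 5.0e6, 3500.0)  # ~Mw 6.6, 5 MPa, beta = 3.5 km/s
print(fc)  # ≈ 0.135 Hz
```

A full stochastic simulation would further filter this shape by path attenuation and site terms before pairing it with windowed white noise.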

  15. Earthquake Testing

    NASA Technical Reports Server (NTRS)

    1979-01-01

    During NASA's Apollo program, it was necessary to subject the mammoth Saturn V launch vehicle to extremely forceful vibrations to assure the moon booster's structural integrity in flight. Marshall Space Flight Center assigned vibration testing to a contractor, the Scientific Services and Systems Group of Wyle Laboratories, Norco, California. Wyle-3S, as the group is known, built a large facility at Huntsville, Alabama, and equipped it with an enormously forceful shock and vibration system to simulate the liftoff stresses the Saturn V would encounter. Saturn V is no longer in service, but Wyle-3S has found spinoff utility for its vibration facility. It is now being used to simulate earthquake effects on various kinds of equipment, principally equipment intended for use in nuclear power generation. Government regulations require that such equipment demonstrate its ability to survive earthquake conditions. In the upper left photo, Wyle-3S is preparing to conduct an earthquake test on a 25-ton diesel generator built by Atlas Polar Company, Ltd., Toronto, Canada, for emergency use in a Canadian nuclear power plant. Being readied for test in the lower left photo is a large circuit breaker to be used by Duke Power Company, Charlotte, North Carolina. Electro-hydraulic and electro-dynamic shakers in and around the pit simulate earthquake forces.

  16. [Consumer's psychological processes of hoarding and avoidant purchasing after the Tohoku earthquake].

    PubMed

    Ohtomo, Shoji; Hirose, Yukio

    2014-02-01

    This study examined the psychological processes of consumers that determined hoarding and avoidant purchasing behaviors after the Tohoku earthquake within a dual-process model. The model hypothesized that both intentional motivation, based on reflective decisions, and reactive motivation, based on non-reflective decisions, predicted the behaviors. This study assumed that attitude, subjective norm, and descriptive norm in relation to hoarding and avoidant purchasing were determinants of the motivations. Residents in the Tokyo metropolitan area (n = 667) completed three waves of an internet longitudinal survey (April, June, and November 2011). The results indicated that both intentional and reactive motivation determined avoidant purchasing behaviors in June, while only intentional motivation determined the behaviors in November. Attitude was a main determinant of the motivations at each time point. Moreover, previous behaviors predicted future behaviors. In conclusion, purchasing behaviors were intentional rather than reactive. Furthermore, attitude and previous behaviors were important determinants in the dual-process model. Attitudes and behaviors formed in April continued to strengthen subsequent purchasing decisions.

  17. A new 1649-1884 catalog of destructive earthquakes near Tokyo and implications for the long-term seismic process

    Grunewald, E.D.; Stein, R.S.

    2006-01-01

    In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ≈ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10²⁶ dyn cm yr⁻¹ with considerable uncertainty at the 1σ level: (−0.11, +0.20) × 10²⁶ dyn cm yr⁻¹. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
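    The Poisson (time-averaged) probability quoted at the end follows directly from an exponential interevent model, P = 1 − exp(−λT). A sketch; the ~300-year mean recurrence below is an illustrative assumption chosen to land inside the quoted 7-11% band, not the catalog's estimate:

```python
import math

# Time-averaged (Poisson) probability of at least one event of the target
# size in a window of T years, given an annual occurrence rate.
def poisson_probability(rate_per_year, window_years):
    return 1.0 - math.exp(-rate_per_year * window_years)

# Hypothetical mean recurrence of ~300 yr for the largest events:
p30 = poisson_probability(1.0 / 300.0, 30.0)
print(round(p30, 3))  # 0.095, i.e. ~9.5% in 30 years
```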

  18. Source Rupture Process of the 2016 Kumamoto, Japan, Earthquake Inverted from Strong-Motion Records

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Zheng, Ao

    2017-04-01

    On 15 April 2016, a large earthquake of magnitude Mw 7.1 occurred in Kumamoto Prefecture, Japan. The focal mechanism solution released by F-net located the hypocenter at 130.7630°E, 32.7545°N, at a depth of 12.45 km, with a strike, dip, and rake angle of N226°E, 84° and −142°, respectively. The epicenter distribution and focal mechanisms of the aftershocks implied that the mechanism of the mainshock might have changed during the source rupture process, so a single focal mechanism was not enough to explain the observed data adequately. In this study, based on the inversion of GNSS and InSAR surface deformation with active structures for reference, we construct a finite fault model with focal-mechanism changes and derive the source rupture process by a multi-time-window linear waveform inversion method using the strong-motion data (0.05-1.0 Hz) obtained by K-NET and KiK-net of Japan. Our result shows that the Kumamoto earthquake was a right-lateral strike-slip rupture event along the Futagawa-Hinagu fault zone, with the seismogenic fault divided into a northern and a southern segment. The strike and dip of the northern segment are N235°E and 60°, respectively; for the southern one, they are N205°E and 72°. The depth range of the fault model is consistent with the depth distribution of aftershocks, and the slip on the fault plane is mainly concentrated on the northern segment, where the maximum slip is about 7.9 m. The rupture process of the whole fault lasts approximately 18 s, and the total seismic moment released is 5.47 × 10¹⁹ N·m (Mw 7.1). In addition, the essential features of the PGV and PGA distributions synthesized from the inversion result are similar to those of the observed PGA and seismic intensity.

  19. From Geodesy to Tectonics: Observing Earthquake Processes from Space (Augustus Love Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Parsons, Barry

    2017-04-01

    A suite of powerful satellite-based techniques has been developed over the past two decades allowing us to measure and interpret variations in the deformation around active continental faults occurring in earthquakes, before the earthquakes as strain accumulates, and immediately following them. The techniques include radar interferometry and the measurement of vertical and horizontal surface displacements using very high-resolution (VHR) satellite imagery. They provide near-field measurements of earthquake deformation facilitating the association with the corresponding active faults and their topographic expression. The techniques also enable pre- and post-seismic deformation to be determined and hence allow the response of the fault and surrounding medium to changes in stress to be investigated. The talk illustrates both the techniques and the applications with examples from recent earthquakes. These include the 2013 Balochistan earthquake, a predominantly strike-slip event, that occurred on the arcuate Hoshab fault in the eastern Makran linking an area of mainly left-lateral shear in the east to one of shortening in the west. The difficulty of reconciling predominantly strike-slip motion with this shortening has led to a wide range of unconventional kinematic and dynamic models. Using pre-and post-seismic VHR satellite imagery, we are able to determine a 3-dimensional deformation field for the earthquake; Sentinel-1 interferometry shows an increase in the rate of creep on a creeping section bounding the northern end of the rupture in response to the earthquake. In addition, we will look at the 1978 Tabas earthquake for which no measurements of deformation were possible at the time. By combining pre-seismic 'spy' satellite images with modern imagery, and pre-seismic aerial stereo images with post-seismic satellite stereo images, we can determine vertical and horizontal displacements from the earthquake and subsequent post-seismic deformation. These observations

  20. Source Rupture Process for the February 21, 2011, Mw6.1, New Zealand Earthquake and the Characteristics of Near-field Strong Ground Motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Shi, B.

    2011-12-01

    The New Zealand earthquake of February 21, 2011 (Mw 6.1) occurred in the South Island, New Zealand, with its epicenter at longitude 172.70°E and latitude 43.58°S and a depth of 5 km. The Mw 6.1 earthquake occurred on a previously unknown blind fault involving oblique-thrust faulting, 9 km south of Christchurch, the third largest city of New Zealand, with an east-west striking direction (United States Geological Survey, USGS, 2011). The earthquake killed at least 163 people and caused extensive damage to buildings in Christchurch. The peak ground acceleration (PGA) observed at station Heathcote Valley Primary School (HVSC), 1 km from the epicenter, reached almost 2.0 g. The ground-motion observations suggest that this buried earthquake source generated much higher near-fault ground motion. In this study, we have analyzed the earthquake source spectral parameters based on the strong-motion observations and estimated the near-fault ground motion based on Brune's circular fault model. The results indicate that the larger ground motion may be caused by a higher dynamic stress drop, Δσd (the effective stress of Brune), in the major source rupture region. In addition, a dynamical composite source model (DCSM) has been developed to simulate the near-fault strong ground motion, with associated fault rupture properties, from the kinematic point of view. For comparison purposes, we also conducted broadband ground-motion predictions for station HVSC; the synthetic time histories produced for this station agree well with the observations in waveform, peak values, and frequency content, which clearly indicates that the higher dynamic stress drop during the fault rupture may play an important role in the anomalous ground-motion amplification. The preliminary simulation result at station HVSC is that the synthetic seismograms have a realistic appearance in waveform and
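    The Brune circular-fault analysis mentioned here ties the corner frequency of the source spectrum to a source radius and a stress drop. A sketch of that chain; the corner frequency and shear-wave speed below are illustrative assumptions, not the values estimated for this event:

```python
import math

# Brune (1970) circular source: radius r = 2.34 * beta / (2*pi*fc),
# stress drop = 7*M0 / (16*r**3); SI units throughout.
def brune_stress_drop(m0, fc, beta):
    radius = 2.34 * beta / (2.0 * math.pi * fc)
    return 7.0 * m0 / (16.0 * radius ** 3)

# Mw 6.1 -> M0 = 10**(1.5*6.1 + 9.1) ≈ 1.78e18 N·m; assume fc = 0.3 Hz.
m0 = 10.0 ** (1.5 * 6.1 + 9.1)
print(brune_stress_drop(m0, 0.3, 3500.0))  # ≈ 9.5e6 Pa (~9.5 MPa)
```

Raising the assumed corner frequency raises the inferred stress drop as fc³, which is why a compact, high-stress-drop source can produce the anomalously strong near-fault shaking discussed in the abstract.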

  1. Effects of the Alaska earthquake of March 27, 1964, on shore processes and beach morphology: Chapter J in The Alaska earthquake, March 27, 1964: regional effects

    Stanley, Kirk W.

    1968-01-01

    Some 10,000 miles of shoreline in south-central Alaska was affected by the subsidence or uplift associated with the great Alaska earthquake of March 27, 1964. The changes in shoreline processes and beach morphology that were suddenly initiated by the earthquake were similar to those ordinarily caused by gradual changes in sea level operating over hundreds of years, while other more readily visible changes were similar to some of the effects of great but short-lived storms. Phenomena became available for observation within a few hours which would otherwise not have been available for many years. In the subsided areas—including the shorelines of the Kenai Peninsula, Kodiak Island, and Cook Inlet—beaches tended to flatten in gradient and to recede shoreward. Minor beach features were altered or destroyed on submergence but began to reappear and to stabilize in their normal shapes within a few months after the earthquake. Frontal beach ridges migrated shoreward and grew higher and wider than they were before. Along narrow beaches backed by bluffs, the relatively higher sea level led to vigorous erosion of the bluff toes. Stream mouths were drowned and some were altered by seismic sea waves, but they adjusted within a few months to the new conditions. In the uplifted areas, generally around Prince William Sound, virtually all beaches were stranded out of reach of the sea. New beaches are gradually developing to fit new sea levels, but the processes are slow, in part because the material on the lower parts of the old beaches is predominantly fine grained. Streams were lengthened in the emergent areas, and down cutting and bank erosion have increased. Except at Homer and a few small villages, where groins, bulkheads, and cobble-filled baskets were installed, there has been little attempt to protect the postearthquake shorelines. The few structures that were built have been only partially successful because there was too little time to study the habits of the new shore

  2. Rupture Speed and Dynamic Frictional Processes for the 1995 ML4.1 Shacheng, Hebei, China, Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Liu, B.; Shi, B.

    2010-12-01

    An earthquake of ML 4.1 occurred at Shacheng, Hebei, China, on July 20, 1995, followed by 28 aftershocks with 0.9 ≤ ML ≤ 4.0 (Chen et al., 2005). According to ZÚÑIGA (1993), for the 1995 ML 4.1 Shacheng earthquake sequence, the main shock corresponds to undershoot, while the aftershocks match overshoot. This suggests that the dynamic rupture processes of the overshoot aftershocks could be related to crack (sub-fault) extension inside the main fault. After the main shock, local stress concentration inside the fault may play a dominant role in sustaining crack extension. Therefore, the main energy dissipation mechanism should be the aftershock fracturing process associated with crack extension. Following the variational principle (Kanamori and Rivera, 2004), we derived a minimum radiation energy criterion (MREC), (ES/M0′)min ≥ [3M0/(επμR³)](v/β)³, where ES and M0′ are the radiated energy and seismic moment obtained from observation, μ is the rigidity modulus of the fault, ε = M0′/M0, M0 is the seismic moment, R is the rupture size on the fault, and v and β are the rupture speed and S-wave speed. From mode II and mode III crack extension models, we attempt to derive a uniform expression for the seismic radiation efficiency ηG, which can be used to restrict the upper-limit efficiency and avoid the unphysical result of a radiation efficiency larger than 1. In the ML 4.1 Shacheng earthquake sequence, the rupture speed of the main shock was about 0.86 of the S-wave speed β according to the MREC, close to the Rayleigh-wave speed, while the rupture speeds of the remaining 28 aftershocks ranged from 0.05β to 0.55β. Using the mode II and III crack extension models, the main-shock rupture speed was 0.9β, and most of the aftershocks were no more than 0.35β. In addition, for most aftershocks the seismic radiation efficiencies were less than 10%, implying low seismic efficiency, whereas the radiation efficiency
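    The MREC inequality can be inverted to bound rupture speed from the observed scaled energy: (v/β) ≤ [επμR³(ES/M0′)/(3M0)]^(1/3). A sketch of that inversion; every numeric input below is an illustrative assumption, not a measured value from this sequence:

```python
import math

# Invert the minimum-radiated-energy criterion for an upper bound on the
# rupture-speed ratio v/beta, capped at 1 (subshear by construction here).
def rupture_speed_ratio_bound(scaled_energy, m0, mu, radius, eps=1.0):
    ratio_cubed = eps * math.pi * mu * radius ** 3 * scaled_energy / (3.0 * m0)
    return min(ratio_cubed ** (1.0 / 3.0), 1.0)

# Hypothetical small event: ES/M0' = 3e-5, M0 = 1.8e15 N·m (~ML 4.1),
# mu = 30 GPa, rupture size R = 300 m.
print(rupture_speed_ratio_bound(3.0e-5, 1.8e15, 3.0e10, 300.0))  # ≈ 0.24
```

Because the bound scales with the cube root of ES/M0′, the low scaled energies of the aftershocks translate into the slow rupture speeds reported above.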

  3. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed Central

    Kanamori, H

    1996-01-01

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657

  4. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  5. Source process of the 2016 Kumamoto earthquake (Mj7.3) inferred from kinematic inversion of strong-motion records

    NASA Astrophysics Data System (ADS)

    Yoshida, Kunikazu; Miyakoshi, Ken; Somei, Kazuhiro; Irikura, Kojiro

    2017-05-01

    In this study, we estimated source process of the 2016 Kumamoto earthquake from strong-motion data by using the multiple-time window linear kinematic waveform inversion method to discuss generation of strong motions and to explain crustal deformation pattern with a seismic source inversion model. A four-segment fault model was assumed based on the aftershock distribution, active fault traces, and interferometric synthetic aperture radar data. Three western segments were set to be northwest-dipping planes, and the most eastern segment under the Aso caldera was examined to be a southeast-dipping plane. The velocity structure models used in this study were estimated by using waveform modeling of moderate earthquakes that occurred in the source region. We applied a two-step approach of the inversions of 20 strong-motion datasets observed by K-NET and KiK-net by using band-pass-filtered strong-motion data at 0.05-0.5 Hz and then at 0.05-1.0 Hz. The rupture area of the fault plane was determined by applying the criterion of Somerville et al. (Seismol Res Lett 70:59-80, 1999) to the inverted slip distribution. From the first-step inversion, the fault length was trimmed from 52 to 44 km, whereas the fault width was kept at 18 km. The trimmed rupture area was not changed in the second-step inversion. The source model obtained from the two-step approach indicated 4.7 × 1019 Nm of the total moment release and 1.8 m average slip of the entire fault with a rupture area of 792 km2. Large slip areas were estimated in the seismogenic zone and in the shallow part corresponding to the surface rupture that occurred during the Mj7.3 mainshock. The areas of the high peak moment rate correlated roughly with those of large slip; however, the moment rate functions near the Earth surface have low peak, bell shape, and long duration. These subfaults with long-duration moment release are expected to cause weak short-period ground motions. 
We confirmed that the southeast dipping of the most
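As a quick consistency check on the numbers quoted above, the total moment converts to a moment magnitude via the standard Hanks-Kanamori relation, and the moment, area, and average slip jointly imply a crustal rigidity through M0 = μAD. A minimal sketch (the relations are standard seismology, not specific to this study):

```python
import math

# Reported source parameters (from the abstract)
M0 = 4.7e19        # total seismic moment, N·m
area = 792e6       # rupture area, m^2 (792 km^2)
avg_slip = 1.8     # average slip, m

# Moment magnitude from the standard Hanks-Kanamori relation
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)

# Implied rigidity from M0 = mu * A * D
mu = M0 / (area * avg_slip)

print(f"Mw = {Mw:.2f}")                 # ~7.0, consistent with the Mj 7.3 event
print(f"rigidity = {mu / 1e9:.1f} GPa") # ~33 GPa, a typical crustal value
```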

  6. Validation of the Earthquake Archaeological Effects methodology by studying the San Clemente cemetery damages generated during the Lorca earthquake of 2011

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel; Martín-Velazquez, Silvia; Rodriguez-Pascua, Miguel Angel; Pérez-López, Raul; Silva, Pablo

    2014-05-01

    Intensity scales determine the damage caused by an earthquake. However, a new methodology takes into account not only the damage but also the type of damage, termed "Earthquake Archaeological Effects", EAE's, and its orientation (e.g. displaced masonry blocks, conjugated fractures, fallen and oriented columns, impact marks, dipping broken corners, etc.) (Rodriguez-Pascua et al., 2011; Giner-Robles et al., 2012). Its main contribution is that it focuses not only on the amount of damage but also on its orientation, giving information about the ground motion during the earthquake. These orientations and instrumental data can therefore be correlated with historical earthquakes. In 2011 an earthquake of magnitude Mw 5.2 took place in Lorca (SE Spain), causing 9 casualties and 460 million Euros in repairs. The study of the EAE's was carried out through the whole city (Giner-Robles et al., 2012). The present study aimed to a.- validate the EAE's methodology by applying it to a single small site, the cemetery of San Clemente in Lorca, and b.- constrain the range of orientations for each EAE. This cemetery was selected because its damage orientation data can be correlated with the available instrumental information, and also because this place has: a.- a wide variety of architectural styles (neogothic, neobaroque, neoarabian), b.- Cultural Interest (BIC) status, and c.- different building materials (brick, limestone, marble). The procedure involved two main phases: a.- inventory and identification of damage (EAE's) from pictures, and b.- analysis of the damage orientations. The orientation was calculated for each EAE and plotted on maps. The resulting damage orientation is NW-SE. This orientation is consistent with that recorded by the accelerometer of Lorca (N160°E) and with that obtained from the analysis of EAE's for the whole town of Lorca (N130°E) (Giner-Robles et al., 2012). Due to the existence of an accelerometer, we know the orientation of the peak ground acceleration
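Damage orientations such as N130°E are axial data (N130°E and N310°E are the same orientation), so averaging them requires circular statistics with doubled angles rather than a plain arithmetic mean. A minimal sketch with hypothetical measurements (the sample values are illustrative, not data from the cemetery survey):

```python
import math

def mean_orientation(azimuths_deg):
    """Mean of axial orientation data (0-180 deg) via the doubled-angle method."""
    # double each azimuth so that an orientation and its reciprocal coincide
    s = sum(math.sin(math.radians(2 * a)) for a in azimuths_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in azimuths_deg)
    mean = math.degrees(math.atan2(s, c)) / 2.0
    return mean % 180.0

# hypothetical damage-orientation measurements clustered around NW-SE (~N130E)
sample = [125, 128, 132, 135, 140, 122, 131]
print(f"mean orientation: N{mean_orientation(sample):.0f}E")
```

The doubled-angle step matters: a naive average of 5° and 175° gives 90°, while the axial mean correctly returns an orientation near 0°/180°.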

  7. LISP based simulation generators for modeling complex space processes

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing

    1987-01-01

    The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.

  8. Redefining Earthquakes and the Earthquake Machine

    ERIC Educational Resources Information Center

    Hubenthal, Michael; Braile, Larry; Taber, John

    2008-01-01

    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…
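The stick-slip behaviour such a mechanical model demonstrates can also be sketched numerically as a spring-slider with static and dynamic friction; the parameter values below are illustrative, not calibrated to the classroom apparatus:

```python
# Minimal 1-D spring-slider stick-slip sketch (illustrative values).
mu_s, mu_d = 0.6, 0.4      # static / dynamic friction coefficients
m, g = 1.0, 9.81           # block mass (kg), gravity (m/s^2)
k = 50.0                   # spring stiffness (N/m)
v_load = 0.01              # loading-plate velocity (m/s)
dt, steps = 1e-3, 200000

x_load = x_block = v_block = 0.0
events = []                # (time, slip) for each stick-slip event
slipping = False
slip_start_x = 0.0

for i in range(steps):
    x_load += v_load * dt
    spring = k * (x_load - x_block)
    if not slipping and abs(spring) > mu_s * m * g:
        slipping = True                  # static friction exceeded: slip begins
        slip_start_x = x_block
    if slipping:
        a = (spring - mu_d * m * g) / m  # dynamic friction resists motion
        v_block += a * dt
        x_block += v_block * dt
        if v_block <= 0.0:               # block re-sticks
            slipping = False
            v_block = 0.0
            events.append((i * dt, x_block - slip_start_x))

mean_slip = sum(s for _, s in events) / len(events)
print(f"{len(events)} slip events, mean slip {mean_slip:.3f} m")
```

Long quiet loading intervals punctuated by sudden slip events emerge directly from the two friction levels, which is the misconception-challenging point of the classroom model.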

  9. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, finally terminating near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth
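In back projection studies of this kind, the rupture velocity is commonly estimated by fitting distance-versus-time positions of the imaged high-frequency radiators. A minimal sketch with synthetic emitter picks and an assumed transition-zone shear velocity (both are illustrative, not values from this study):

```python
# Estimate rupture velocity from back-projected radiator positions
# (distance along strike vs. time), expressed as a fraction of Vs.
emitters = [(0.0, 0.0), (2.0, 6.1), (4.0, 12.3),
            (6.0, 17.8), (8.0, 24.2)]          # (t [s], d [km]), synthetic

n = len(emitters)
t_mean = sum(t for t, _ in emitters) / n
d_mean = sum(d for _, d in emitters) / n
# least-squares slope of distance vs. time = rupture velocity
vr = sum((t - t_mean) * (d - d_mean) for t, d in emitters) / \
     sum((t - t_mean) ** 2 for t, _ in emitters)

vs = 5.5  # km/s, an assumed shear velocity for the transition zone
print(f"Vr = {vr:.2f} km/s = {vr / vs:.2f} Vs")
```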

  10. Long-period spectral features of the Sumatra-Andaman 2004 earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bukchin, B.; Favreau, P.; Mostinskiy, A.; Aoudia, A.; Panza, G. F.

    2012-12-01

    The goal of this study is to investigate the spatial variability of the seismic radiation spectral content of the Sumatra-Andaman 2004 earthquake. We determine the integral estimates of source geometry, duration and rupture propagation given by the stress glut moments of total degree 2 of different source models. These models are constructed from a single or a joint use of different observations including seismology, geodesy, altimetry and tide gauge data. The comparative analysis shows coherency among the different models, and no strong contradictions are found between the integral estimates of geodetic and altimetric models and those retrieved from very long period seismic records (up to 2000-3000 s). The comparison between these results and the integral estimates derived from observed surface wave spectra in the period band from 500 to 650 s suggests that the northern part of the fault (to the north of 8°N near the Nicobar Islands) did not radiate long period seismic waves, that is, at periods shorter than 650 s at least. This conclusion is consistent with the existing composite short and long rise time tsunami model, with a short rise time of slip in the southern part of the fault and a very long rise time of slip in the northern part. This complex space-time slip evolution can be reproduced by a simple dynamic model of the rupture assuming a crude phenomenological mechanical behaviour of the rupture interface at the fault scales, combining an effective slip-controlled exponential weakening effect, related to possible friction and damage breakdown processes of the fault zone, and an effective linear viscous strengthening effect, related to possible interface lubrication processes. While the rupture front speed remains unperturbed with initial short slip duration, a slow creep wave propagates behind the rupture front in the case of viscous effects, accounting for the long slip duration and the radiation characteristics in the northern segment.

  11. Variations in rupture process with recurrence interval in a repeated small earthquake

    Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris

    1994-01-01

    In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.
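The laboratory healing trend being tested here is usually expressed as static friction growing logarithmically with hold time. A minimal sketch using a typical laboratory healing rate (β ≈ 0.01 per decade is an assumed textbook value, not one measured in this study):

```python
import math

# Logarithmic frictional healing (Dieterich-type): static friction grows
# roughly linearly with each decade of stationary-contact time.
def static_friction(t_hold, mu0=0.6, beta=0.01, t_c=1.0):
    """Static friction after a hold time t_hold (s); t_c is a cutoff time."""
    return mu0 + beta * math.log10(max(t_hold, t_c) / t_c)

for t in (1e0, 1e3, 1e6, 3e7):  # 1 s ... ~1 yr recurrence intervals
    print(f"t = {t:.0e} s -> mu_s = {static_friction(t):.3f}")
```

The same logarithmic dependence on recurrence interval is what makes the weak moment increase (~15% over days-to-years intervals) plausible.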

  12. A new source process for evolving repetitious earthquakes at Ngauruhoe volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Jolly, A. D.; Neuberg, J.; Jousset, P.; Sherburn, S.

    2012-02-01

    Since early 2005, Ngauruhoe volcano has produced repeating low-frequency earthquakes with evolving waveforms and spectral features which become progressively enriched in higher frequency energy during the period 2005 to 2009, with the trend reversing after that time. The earthquakes also show a seasonal cycle since January 2006, with peak numbers of events occurring in the spring and summer period and lower numbers of events at other times. We explain these patterns by the excitation of a shallow two-phase water/gas or water/steam cavity having temporal variations in volume fraction of bubbles. Such variations in two-phase systems are known to produce a large range of acoustic velocities (2-300 m/s) and corresponding changes in impedance contrast. We suggest that an increasing bubble volume fraction is caused by progressive heating of melt water in the resonant cavity system which, in turn, promotes the scattering excitation of higher frequencies, explaining both spectral shift and seasonal dependence. We have conducted a constrained waveform inversion and grid search for moment, position and source geometry for the onset of two example earthquakes occurring 17 and 19 January 2008, a time when events showed a frequency enrichment episode occurring over a period of a few days. The inversion and associated error analysis, in conjunction with an earthquake phase analysis show that the two earthquakes represent an excitation of a single source position and geometry. The observed spectral changes from a stationary earthquake source and geometry suggest that an evolution in both near source resonance and scattering is occurring over periods from days to months.

  13. The Early Warning System(EWS) as First Stage to Generate and Develop Shake Map for Bucharest to Deep Vrancea Earthquakes

    NASA Astrophysics Data System (ADS)

    Marmureanu, G.; Ionescu, C.; Marmureanu, A.; Grecu, B.; Cioflan, C.

    2007-12-01

    The EWS developed by NIEP is the first European system for real-time early detection and warning of seismic waves from strong deep earthquakes. The EWS uses the time interval (28-32 seconds) between the moment when an earthquake is detected by the borehole and surface local accelerometer network installed in the epicentral area (Vrancea) and the arrival time of the seismic waves in the protected area to deliver timely integrated information, enabling actions to be taken before the main destructive shaking takes place. The early warning system is viewed as part of a real-time information system that provides rapid information about an impending earthquake hazard to the public and disaster relief organizations before (early warning) and after a strong earthquake (shake map). This product fits in with another new product under development at the National Institute for Earth Physics: the shake map, a representation of the ground shaking produced by an event, which will be generated automatically following large Vrancea earthquakes. Bucharest City is located in the central part of the Moesian platform (age: Precambrian and Paleozoic) in the Romanian Plain, about 140 km from the Vrancea area. Above Cretaceous and Miocene deposits (with the bottom at roughly 1,400 m depth), a Pliocene shallow-water deposit (~700 m thick) was laid down. The surface geology consists mainly of Quaternary alluvial deposits. Later loess covered these deposits, and the two rivers crossing the city (Dambovita and Colentina) carved the present landscape. During the last century Bucharest suffered heavy damage and casualties due to the 1940 (Mw = 7.7) and 1977 (Mw = 7.4) Vrancea earthquakes. For example, 32 tall buildings collapsed and more than 1,500 people died during the 1977 event. 
The innovation compared with related systems worldwide is that NIEP will use the EWS to generate a virtual shake map for Bucharest (140 km away from the epicentre) immediately after the magnitude is estimated
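The quoted 28-32 s lead time can be roughly reproduced from the geometry alone: the P wave must first climb from the intermediate-depth hypocentre to the Vrancea network, while the damaging S waves travel the slant path to Bucharest. A back-of-envelope sketch with assumed round-number velocities and latency (not NIEP's actual parameters):

```python
import math

# Back-of-envelope check of the quoted lead time for Bucharest.
depth = 100.0          # km, typical intermediate-depth Vrancea hypocentre
epi_dist = 140.0       # km, Vrancea -> Bucharest
vp, vs = 8.0, 4.0      # km/s, assumed average P and S velocities
latency = 4.0          # s, assumed detection + processing + telemetry delay

hypo_dist = math.hypot(epi_dist, depth)   # slant distance to Bucharest
t_detect = depth / vp + latency           # P wave reaches the Vrancea network
t_s_arrival = hypo_dist / vs              # damaging S waves reach the city
print(f"lead time ~ {t_s_arrival - t_detect:.0f} s")
```

With these round numbers the estimate lands near the lower end of the 28-32 s window, which is the expected order of magnitude for a 140 km epicentral distance.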

  14. Nucleation process and dynamic inversion of the Mw 6.9 Valparaíso 2017 earthquake in Central Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Aden-Antoniow, F.; Baez, J. C., Sr.; Otarola, C., Sr.; Potin, B.; DelCampo, F., Sr.; Poli, P.; Flores, C.; Satriano, C.; Felipe, L., Sr.; Madariaga, R. I.

    2017-12-01

    The Valparaiso 2017 sequence occurred on the megathrust of Central Chile, an active zone where the last mega-earthquake occurred in 1730. Intense seismicity occurred two days before the Mw 6.9 main shock. A slow trenchward movement observed at the coastal GPS antennas accompanied the foreshock seismicity. Following the Mw 6.9 earthquake, the seismicity migrated 30 km to the southeast. This sequence was well recorded by multi-parametric stations composed of GPS, broadband, and strong-motion instruments. We built a seismic catalogue with 2329 events associated with the Valparaiso sequence, with a magnitude of completeness of Ml 2.8. We located all the seismicity using a new 3D velocity model obtained for the Valparaiso zone, computed moment tensors for events with magnitude larger than Ml 3.5, and finally studied the presence of repeating earthquakes. The main shock is studied by performing a dynamic inversion using the strong-motion records and an elliptical-patch approach to characterize the rupture process. During the two-day nucleation stage, we observe a compact zone of repeating events. In the meantime, a westward GPS movement was recorded at the coastal GPS stations. The aseismic moment estimated from GPS is larger than the cumulative moment of the foreshocks, suggesting the presence of a slow slip event, which potentially triggered the Mw 6.9 main shock. The Mw 6.9 earthquake is associated with the rupture of an elliptical asperity with semi-axes of 10 km and 5 km, a sub-shear rupture, a stress drop of 11.71 MPa, a yield stress of 17.21 MPa, a slip-weakening distance of 0.65 m, and a kappa value of 1.70. This sequence occurred close to, and shares some characteristics with, the 1985 Valparaíso Mw 8.0 earthquake. The rupture of this asperity could further stress the highly locked Central Chile zone, where a megathrust earthquake like that of 1730 is expected.

  15. Generating functions and stability study of multivariate self-excited epidemic processes

    NASA Astrophysics Data System (ADS)

    Saichev, A. I.; Sornette, D.

    2011-09-01

    We present a stability study of the class of multivariate self-excited Hawkes point processes, which can model natural and social systems, including earthquakes, epileptic seizures and the dynamics of neuron assemblies, bursts of exchanges in social communities, interactions between Internet bloggers, bank network fragility and cascading of failures, national sovereign default contagion, and so on. We present the general theory of multivariate generating functions to derive the number of events over all generations of various types that are triggered by a mother event of a given type. We obtain the stability domains of various systems, as a function of the topological structure of the mutual excitations across different event types. We find that mutual triggering tends to provide a significant extension of the stability (or subcritical) domain compared with the case where event types are decoupled, that is, when an event of a given type can only trigger events of the same type.
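For multivariate Hawkes processes, the subcritical (stable) regime is characterized by the spectral radius of the branching matrix being below 1, where entry (i, j) is the mean number of type-i events directly triggered by one type-j event. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Subcriticality check for a multivariate Hawkes process: the cascade of
# triggered events stays finite iff the spectral radius of the branching
# matrix is below 1. The matrix values below are illustrative only.
def is_subcritical(branching_matrix):
    rho = max(abs(np.linalg.eigvals(branching_matrix)))
    return rho < 1.0, rho

# two event types with strong mutual (cross) excitation, weak self-excitation
N = np.array([[0.2, 0.7],
              [0.6, 0.3]])
stable, rho = is_subcritical(N)
print(f"spectral radius = {rho:.3f}, subcritical = {stable}")
```

Note that the second event type here produces one direct offspring on average (column sum 1.0), yet the coupled system is still subcritical (ρ = 0.9), illustrating how the topology of mutual excitation reshapes the stability domain.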

  16. Poro-elastic Rebound Along the Landers 1992 Earthquake Surface Rupture

    NASA Technical Reports Server (NTRS)

    Peltzer, G.; Rosen, P.; Rogez, F.; Hudnut, K.

    1998-01-01

    Maps of post-seismic surface displacement after the 1992, Landers, California earthquake, generated by interferometric processing of ERS-1 Synthetic Aperture Radar (SAR) images, reveal effects of various deformation processes near the 1992 surface rupture.

  17. Study of the characteristics of seismic signals generated by natural and cultural phenomena. [such as earthquakes, sonic booms, and nuclear explosions

    NASA Technical Reports Server (NTRS)

    Goforth, T. T.; Rasmussen, R. K.

    1974-01-01

    Seismic data recorded at the Tonto Forest Seismological Observatory in Arizona and the Uinta Basin Seismological Observatory in Utah were used to compare the frequency of occurrence, severity, and spectral content of ground motions resulting from earthquakes, and other natural and man-made sources with the motions generated by sonic booms. A search of data recorded at the two observatories yielded a classification of over 180,000 earthquake phase arrivals on the basis of frequency of occurrence versus maximum ground velocity. The majority of the large ground velocities were produced by seismic surface waves from moderate to large earthquakes in the western United States, and particularly along the Pacific Coast of the United States and northern Mexico. A visual analysis of raw film seismogram data over a 3-year period indicates that local and regional seismic events, including quarry blasts, are frequent in occurrence, but do not produce ground motions at the observatories comparable to either the large western United States earthquakes or to sonic booms. Seismic data from the Nevada Test Site nuclear blasts were used to derive magnitude-distance-sonic boom overpressure relations.

  18. Earthquake source nucleation process in the zone of a permanently creeping deep fault

    NASA Astrophysics Data System (ADS)

    Lykov, V. I.; Mostryukov, A. O.

    2008-10-01

    The worldwide practice of earthquake prediction, whose beginning relates to the 1970s, shows that spatial manifestations of various precursors under real seismotectonic conditions are very irregular. As noted in [Kurbanov et al., 1980], zones of bending, intersection, and branching of deep faults, where conditions are favorable for increasing tangential tectonic stresses, serve as “natural amplifiers” of precursory effects. The earthquake of September 28, 2004, occurred on the Parkfield segment of the San Andreas deep fault in the area of a local bending of its plane. The fault segment about 60 km long and its vicinities are the oldest prognostic area in California. Results of observations before and after the earthquake were promptly analyzed and published in a special issue of Seismological Research Letters (2005, Vol. 76, no. 1). We have an original method enabling the monitoring of the integral rigidity of seismically active rock massifs. The integral rigidity is determined from the relative numbers of brittle and viscous failure acts during the formation of source ruptures of background earthquakes in a given massif. Fracture mechanisms are diagnosed from the steepness of the first arrival of the direct P wave. Principles underlying our method are described in [Lykov and Mostryukov, 1996, 2001, 2003]. Results of monitoring have been directly displayed at the site of the Laboratory ( http://wwwbrk.adm.yar.ru/russian/1_512/index.html ) since the mid-1990s. It seems that this information has not attracted the attention of American seismologists. This paper assesses the informativeness of the rigidity monitoring at the stage of formation of a strong earthquake source in relation to other methods.

  19. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s^-1) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s^-1. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.
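One commonly used functional form for this kind of dramatic high-velocity weakening is a flash-heating-style law, in which friction falls off inversely with slip rate above a weakening velocity. A sketch with illustrative parameters (an assumed generic law, not fits to the experiments reviewed here):

```python
# Flash-heating-style velocity weakening: friction drops from its
# low-speed value f0 toward a weakened value fw above the velocity vw.
# All parameter values are illustrative.
def friction(v, f0=0.7, fw=0.1, vw=0.1):
    """Friction coefficient vs. slip rate v (m/s); weakens above vw."""
    return f0 if v <= vw else fw + (f0 - fw) * vw / v

for v in (0.001, 0.1, 1.0, 2.6):
    print(f"v = {v:5.3f} m/s -> f = {friction(v):.2f}")
```

At the 1-2.6 m/s seismic slip rates discussed above, this form reproduces the order-of-magnitude friction drop (here from 0.7 down to ~0.12) that the review terms fault lubrication.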

  20. Evaluating a kinematic method for generating broadband ground motions for great subduction zone earthquakes: Application to the 2003 Mw 8.3 Tokachi‐Oki earthquake

    Wirth, Erin A.; Frankel, Arthur; Vidale, John E.

    2017-01-01

    We compare broadband synthetic seismograms with recordings of the 2003 Mw 8.3 Tokachi‐Oki earthquake to evaluate a compound rupture model, in which slip on the fault consists of multiple high‐stress‐drop asperities superimposed on a background slip distribution with longer rise times. Low‐frequency synthetics (<1 Hz) are calculated using deterministic, 3D finite‐difference simulations and are combined with high‐frequency (>1 Hz) stochastic synthetics using a matched filter at 1 Hz. We show that this compound rupture model and overall approach accurately reproduce waveform envelopes and observed response spectral accelerations (SAs) from the Tokachi‐Oki event. We find that sufficiently short subfault rise times (i.e., <~1–2 s) are necessary to reproduce energy at ~1 Hz. This is achieved by either (1) including distinct subevents with short rise times, as may be suggested by the Tokachi‐Oki data, or (2) imposing a fast slip velocity over the entire rupture area. We also include a systematic study on the effects of varying several kinematic rupture parameters. We find that simulated strong ground motions are sensitive to the average rupture velocity and coherence of the rupture front, with more coherent ruptures yielding higher response SAs. We also assess the effects of varying the average slip velocity and the character (i.e., area, magnitude, and location) of high‐stress‐drop subevents. Even in the absence of precise constraints on these kinematic rupture parameters, our simulations still reproduce major features in the Tokachi‐Oki earthquake data, supporting the approach's accuracy in modeling future large earthquakes.
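The matched-filter combination described above amounts to low-passing the deterministic synthetic and high-passing the stochastic synthetic at the 1 Hz crossover before summing. A minimal sketch using SciPy Butterworth filters with stand-in random traces (the filter order and zero-phase filtering are assumptions, not the paper's exact implementation):

```python
import numpy as np
from scipy import signal

# Combine a low-frequency deterministic synthetic and a high-frequency
# stochastic synthetic at a 1 Hz crossover.
fs = 50.0                      # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
deterministic = rng.standard_normal(t.size)   # placeholder LF synthetic
stochastic = rng.standard_normal(t.size)      # placeholder HF synthetic

sos_lp = signal.butter(4, 1.0, "lowpass", fs=fs, output="sos")
sos_hp = signal.butter(4, 1.0, "highpass", fs=fs, output="sos")
broadband = (signal.sosfiltfilt(sos_lp, deterministic)
             + signal.sosfiltfilt(sos_hp, stochastic))
print(broadband.shape)
```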

  1. Insight into the rupture process of a rare tsunami earthquake from near-field high-rate GPS

    NASA Astrophysics Data System (ADS)

    Macpherson, K. A.; Hill, E. M.; Elosegui, P.; Banerjee, P.; Sieh, K. E.

    2011-12-01

    We investigated the rupture duration and velocity of the October 25, 2010 Mentawai earthquake by examining high-rate GPS displacement data. This Mw=7.8 earthquake appears to have ruptured either an up-dip part of the Sumatran megathrust or a fore-arc splay fault, and produced tsunami run-ups on nearby islands that were out of proportion with its magnitude. It has been described as a so-called "slow tsunami earthquake", characterised by a dearth of high-frequency signal and long rupture duration in low-strength, near-surface media. The event was recorded by the Sumatran GPS Array (SuGAr), a network of high-rate (1 sec) GPS sensors located on the nearby islands of the Sumatran fore-arc. For this study, the 1 sec time series from 8 SuGAr stations were selected for analysis due to their proximity to the source and high-quality recordings of both static displacements and dynamic waveforms induced by surface waves. The stations are located at epicentral distances of between 50 and 210 km, providing a unique opportunity to observe the dynamic source processes of a tsunami earthquake from near-source, high-rate GPS. We estimated the rupture duration and velocity by simulating the rupture using the spectral finite-element method SPECFEM and comparing the synthetic time series to the observed surface waves. A slip model from a previous study, derived from the inversion of GPS static offsets and tsunami data, and the CRUST2.0 3D velocity model were used as inputs for the simulations. Rupture duration and velocity were varied for a suite of simulations in order to determine the parameters that produce the best-fitting waveforms.
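The final grid search over rupture duration and velocity reduces to scoring each simulated time series against the observed waveform and keeping the best fit. A minimal sketch using a normalized L2 misfit with placeholder traces (the misfit norm and candidate labels are assumptions, not necessarily the measure used in the study):

```python
import numpy as np

# Score each candidate synthetic against the observed GPS waveform.
def misfit(obs, syn):
    """Normalized L2 waveform misfit; 0 means a perfect fit."""
    return np.linalg.norm(obs - syn) / np.linalg.norm(obs)

t = np.linspace(0, 120, 121)                        # 1-Hz GPS samples (s)
obs = np.sin(2 * np.pi * t / 40) * np.exp(-t / 60)  # stand-in observed trace

candidates = {"vr=1.0 km/s": obs * 0.6,             # stand-in synthetics
              "vr=1.5 km/s": obs * 0.95,
              "vr=2.0 km/s": obs * 1.3}
best = min(candidates, key=lambda k: misfit(obs, candidates[k]))
print(best)
```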

  2. Numerical study of tsunami generated by multiple submarine slope failures in Resurrection Bay, Alaska, during the MW 9.2 1964 earthquake

    Suleimani, E.; Hansen, R.; Haeussler, Peter J.

    2009-01-01

    We use a viscous slide model of Jiang and LeBlond (1994) coupled with nonlinear shallow water equations to study tsunami waves in Resurrection Bay, in south-central Alaska. The town of Seward, located at the head of Resurrection Bay, was hit hard by both tectonic and local landslide-generated tsunami waves during the MW 9.2 1964 earthquake with an epicenter located about 150 km northeast of Seward. Recent studies have estimated the total volume of underwater slide material that moved in Resurrection Bay during the earthquake to be about 211 million m3. Resurrection Bay is a glacial fjord with large tidal ranges and sediments accumulating on steep underwater slopes at a high rate. Also, it is located in a seismically active region above the Aleutian megathrust. All these factors make the town vulnerable to locally generated waves produced by underwater slope failures. Therefore it is crucial to assess the tsunami hazard related to local landslide-generated tsunamis in Resurrection Bay in order to conduct comprehensive tsunami inundation mapping at Seward. We use numerical modeling to recreate the landslides and tsunami waves of the 1964 earthquake to test the hypothesis that the local tsunami in Resurrection Bay has been produced by a number of different slope failures. We find that numerical results are in good agreement with the observational data, and the model could be employed to evaluate landslide tsunami hazard in Alaska fjords for the purposes of tsunami hazard mitigation. © Birkhäuser Verlag, Basel 2009.
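The propagation speed underlying the nonlinear shallow water equations used here is the long-wave speed c = sqrt(g·h), which sets tsunami travel times across the fjord. A quick sketch with illustrative depths (not surveyed Resurrection Bay bathymetry):

```python
import math

# Long-wave (shallow-water) phase speed c = sqrt(g * h): deeper water
# carries the tsunami faster, so travel times across a fjord are short.
g = 9.81
for h in (50.0, 150.0, 300.0):               # water depth (m), illustrative
    c = math.sqrt(g * h)
    print(f"h = {h:4.0f} m -> c = {c:5.1f} m/s "
          f"(~{10_000 / c:4.0f} s to cross a 10 km fjord)")
```

Travel times of only a few minutes are why locally generated landslide tsunamis leave so little warning time for a town at the fjord head.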

  3. Satellite Geodetic Constraints On Earthquake Processes: Implications of the 1999 Turkish Earthquakes for Fault Mechanics and Seismic Hazards on the San Andreas Fault

    NASA Technical Reports Server (NTRS)

    Reilinger, Robert

    2005-01-01

    Our principal activities during the initial phase of this project include: 1) Continued monitoring of postseismic deformation for the 1999 Izmit and Duzce, Turkey earthquakes from repeated GPS survey measurements and expansion of the Marmara Continuous GPS Network (MAGNET), 2) Establishing three North Anatolian fault crossing profiles (10 sites/profile) at locations that experienced major surface-fault earthquakes at different times in the past to examine strain accumulation as a function of time in the earthquake cycle (2004), 3) Repeat observations of selected sites in the fault-crossing profiles (2005), 4) Repeat surveys of the Marmara GPS network to continue to monitor postseismic deformation, 5) Refining block models for the Marmara Sea seismic gap area to better understand earthquake hazards in the Greater Istanbul area, 6) Continuing development of models for afterslip and distributed viscoelastic deformation for the earthquake cycle. We are keeping close contact with MIT colleagues (Brad Hager, and Eric Hetland) who are developing models for S. California and for the earthquake cycle in general (Hetland, 2006). In addition, our Turkish partners at the Marmara Research Center have undertaken repeat, micro-gravity measurements at the MAGNET sites and have provided us estimates of gravity change during the period 2003 - 2005.

  4. Do moderate magnitude earthquakes generate seismically induced ground effects? The case study of the M w = 5.16, 29th December 2013 Matese earthquake (southern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Valente, Ettore; Ascione, A.; Ciotoli, G.; Cozzolino, M.; Porfido, S.; Sciarra, A.

    2018-03-01

    Seismically induced ground effects characterize moderate to high magnitude seismic events, whereas they are not so common during seismic sequences of low to moderate magnitude. A low to moderate magnitude seismic sequence with a M w = 5.16 ± 0.07 main event occurred from December 2013 to February 2014 in the Matese ridge area, in the southern Apennines mountain chain. In the epicentral area of the M w = 5.16 main event, which occurred on 29th December 2013 in the southeastern part of the Matese ridge, field surveys combined with information from local people and reports allowed the recognition of several earthquake-induced ground effects. Such ground effects include landslides, hydrological variations in local springs, gas flux, and a flame that was observed around the main shock epicentre. A coseismic rupture was identified in the SW fault scarp of a small-sized intermontane basin (Mt. Airola basin). To determine the nature of the coseismic rupture, detailed geological and geomorphological investigations, combined with geoelectrical and soil gas prospections, were carried out. Such a multidisciplinary study, besides allowing reconstruction of the surface and subsurface architecture of the Mt. Airola basin, and suggesting the occurrence of an active fault at the SW boundary of such basin, points to the gravitational nature of the coseismic ground rupture. Based on the typology and spatial distribution of the ground effects, an intensity I = VII-VIII on the ESI-07 scale is estimated for the M w = 5.16 earthquake, whose effects affected an area of at least 90 km2.

  5. Describing earthquakes potential through mountain building processes: an example within Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhang, Huai; Shi, Yaolin; Mary, Baptiste; Wang, Liangshu

    2016-04-01

    How to reconcile earthquake activity, for instance the distribution of large-to-great event rupture areas and the partitioning of seismic-aseismic slip on the subduction interface, with the geological mountain-building period is critical in seismotectonics. In this paper, we try to scope this issue within a typical continental collisional mountain wedge in the Himalayas across the 2015 Mw 7.8 Nepal Himalaya earthquake area. Based on the Critical Coulomb Wedge (CCW) theory, we show possible predictions of large-to-great earthquake rupture locations by retrieving refined evolutionary sequences with a clear boundary of the Coulomb wedge and the creeping path inferred from the interseismic deformation pattern along the megathrust, the Main Himalayan Thrust (MHT). Owing to the well-known thrusting architecture, with constraints on the distribution of the main exhumation zone and on the key evolutionary nodes, reasonable and refined (500 yr interval) thrusting sequences are retrieved by applying sequential limit analysis (SLA). We also use an illustration method, the 'G' gram, to localize the relative position of each fault within the tectonic wedge. Our model results show that at the early stage, during the initial wedge accumulation period, no large earthquakes occur because of the small size of the mountain wedge. In the following stage, the wedge grows outward with occasional out-of-sequence thrusting, and four thrusting clusters (thrusting 'families') are identified on the basis of their spatio-temporal distributions in the mountain wedge. Thrust family 4, located in the hinterland of the mountain wedge, absorbed the least of the total convergence, with no large earthquake occurrence in this stage, contributing to the emplacement of the Greater Himalayan Complex. The slip absorbed by the remaining three thrust families results in large-to-great earthquakes rupturing in the Sub-Himalaya, Lesser Himalaya, and the front of the Higher Himalaya. The

  6. Testimonies to the L'Aquila earthquake (2009) and to the L'Aquila process

    NASA Astrophysics Data System (ADS)

    Kalenda, Pavel; Nemec, Vaclav

    2014-05-01

    Much confusion, misinformation, false solidarity, attempts to misuse geoethics, and other unethical activities in favour of the top Italian seismologists responsible for a poor and superficial evaluation of the situation six days before the earthquake: that is a general characteristic of the five years separating us from the horrible morning of April 6, 2009 in L'Aquila, with 309 human victims. The first author of this presentation, a seismologist, had an unusual opportunity to visit the unfortunate city in April 2009. He received first-hand information that a real, scientifically based prediction had already existed for some shocks in the area on March 29 and 30, 2009. The author of the prediction, Gianpaolo Giuliani, was obliged to stop making any public information available via the internet. A new prediction was known to him on March 31, the day the "Commission of Great Risks" offered a public assurance that any immediate earthquake could be practically excluded. In reality, the members of the commission completely ignored that prediction, declaring it a false alarm by "somebody" (without even naming Giuliani). Giuliani's observations were of high quality from the scientific point of view; he predicted the L'Aquila earthquake in a professional way, for the first time in many years of observations. The anomalies that preceded the L'Aquila earthquake were detected at many places in Europe at the same time. The question is which locality would have been designated as the potential focal area had Giuliani known of the other observations in Europe. Deformation (and other) anomalies are observable before almost all global M8 earthquakes; earthquakes are preceded by deformation and are predictable. The testimony of the second author is based on many unfortunate personal experiences with representatives of the INGV Rome and their supporters from India and even Australia.
In July 2010, prosecutor Fabio Picuti charged the Commission

  7. Rupture processes of the 2013-2014 Minab earthquake sequence, Iran

    NASA Astrophysics Data System (ADS)

    Kintner, Jonas A.; Ammon, Charles J.; Cleveland, K. Michael; Herman, Matthew

    2018-06-01

    We constrain epicentroid locations, magnitudes and depths of moderate-magnitude earthquakes in the 2013-2014 Minab sequence using surface-wave cross-correlations, surface-wave spectra and teleseismic body-wave modelling. We estimate precise relative locations of 54 Mw ≥ 3.8 earthquakes using 48 409 teleseismic, intermediate-period Rayleigh and Love-wave cross-correlation measurements. To reduce significant regional biases in our relative locations, we shift the relative locations to align the Mw 6.2 main-shock centroid to a location derived from an independent InSAR fault model. Our relocations suggest that the events lie along a roughly east-west trend that is consistent with the faulting geometry in the GCMT catalogue. The results support previous studies that suggest the sequence consists of left-lateral strain release, but better define the main-shock fault length and show that most of the Mw ≥ 5.0 aftershocks occurred on one or two similarly oriented structures. We also show that aftershock activity migrated westwards along strike, away from the main shock, suggesting that Coulomb stress transfer played a role in the fault failure. We estimate the magnitudes of the relocated events using surface-wave cross-correlation amplitudes and find good agreement with the GCMT moment magnitudes for the larger events and underestimation of small-event size by catalogue MS. In addition to clarifying details of the Minab sequence, the results demonstrate that even in tectonically complex regions, relative relocation using teleseismic surface waves greatly improves the precision of relative earthquake epicentroid locations and can facilitate detailed tectonic analyses of remote earthquake sequences.

  8. Global Positioning System data collection, processing, and analysis conducted by the U.S. Geological Survey Earthquake Hazards Program

    Murray, Jessica R.; Svarc, Jerry L.

    2017-01-01

    The U.S. Geological Survey Earthquake Science Center collects and processes Global Positioning System (GPS) data throughout the western United States to measure crustal deformation related to earthquakes and tectonic processes as part of a long‐term program of research and monitoring. Here, we outline data collection procedures and present the GPS dataset built through repeated temporary deployments since 1992. This dataset consists of observations at ∼1950 locations. In addition, this article details our data processing and analysis procedures, which consist of the following. We process the raw data collected through temporary deployments, in addition to data from continuously operating western U.S. GPS stations operated by multiple agencies, using the GIPSY software package to obtain position time series. Subsequently, we align the positions to a common reference frame, determine the optimal parameters for a temporally correlated noise model, and apply this noise model when carrying out time‐series analysis to derive deformation measures, including constant interseismic velocities, coseismic offsets, and transient postseismic motion.
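The time-series analysis step described above (reference-frame alignment, then estimation of interseismic velocity, coseismic offsets, and postseismic transients) can be sketched as a trajectory-model fit. This is a minimal illustration with synthetic data and ordinary least squares under white noise; the actual USGS processing uses GIPSY-derived positions and a temporally correlated noise model, and every number below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily east-component positions (mm) over 8 years:
# intercept + interseismic velocity + coseismic step + postseismic log decay.
t = np.arange(0.0, 8.0, 1.0 / 365.25)   # decimal years
t_eq, tau = 4.0, 0.1                    # hypothetical event epoch and decay time (yr)
step = (t >= t_eq).astype(float)
log_term = step * np.log1p(np.maximum(t - t_eq, 0.0) / tau)
obs = 1.0 + 5.0 * t + 20.0 * step + 8.0 * log_term + rng.normal(0.0, 1.0, t.size)

# Design matrix columns: intercept (mm), velocity (mm/yr), coseismic offset (mm),
# postseismic amplitude (mm); solve by ordinary least squares.
G = np.column_stack([np.ones_like(t), t, step, log_term])
m, *_ = np.linalg.lstsq(G, obs, rcond=None)
print(m)   # estimates close to the true [1, 5, 20, 8]
```

A correlated-noise model changes the parameter uncertainties far more than the estimates themselves, which is why the noise-model step matters for velocity error bars.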

  9. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
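The fingerprint-and-group idea behind FAST can be conveyed in a few lines: reduce each waveform window to a compact discrete fingerprint, then hash fingerprints into buckets so similar windows are found without all-pairs correlation. This toy sketch uses synthetic data and a crude spectral-peak fingerprint in place of FAST's wavelet-based features, so it only illustrates the flavor of the algorithm.

```python
import numpy as np
from collections import defaultdict

def fingerprint(window, k=3):
    """Toy fingerprint: indices of the k largest spectral amplitudes."""
    spec = np.abs(np.fft.rfft(window))
    return tuple(np.sort(np.argsort(spec)[-k:]))

# Continuous noise record with one waveform injected twice ("repeating" events).
rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * 5 * np.arange(100) / 100) * np.hanning(100)
data = rng.normal(0.0, 0.1, 3000)
for start in (500, 1700):
    data[start:start + 100] += template

# Group overlapping windows by fingerprint: similar windows collide in the same
# bucket, so candidate event pairs emerge without computing every correlation.
buckets = defaultdict(list)
for i in range(0, len(data) - 100, 50):
    buckets[fingerprint(data[i:i + 100])].append(i)
candidates = [idx for idx in buckets.values() if len(idx) > 1]
```

Real FAST additionally uses locality-sensitive hashing so that near-identical (not just identical) fingerprints collide, which is what makes the search robust and scalable.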

  10. D.C. - ARC plasma generator for nonequilibrium plasmachemical processes

    NASA Astrophysics Data System (ADS)

    Kvaltin, J.

    1990-06-01

    An analysis of the conditions for generating nonequilibrium plasma for plasmachemical processes is made, and a design for a d.c. arc plasma generator based on an integral criterion is suggested. Measurements of potentials along the plasma column of this generator are presented.

  11. Rupture process of the 2009 L'Aquila, central Italy, earthquake, from the separate and joint inversion of Strong Motion, GPS and DInSAR data.

    NASA Astrophysics Data System (ADS)

    Cirella, A.; Piatanesi, A.; Tinti, E.; Chini, M.; Cocco, M.

    2012-04-01

    In this study, we investigate the rupture history of the April 6th 2009 (Mw 6.1) L'Aquila normal faulting earthquake using a nonlinear inversion of strong motion, GPS and DInSAR data. We use a two-stage non-linear inversion technique. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently sample the good data-fitting regions of parameter space. In the second stage the algorithm performs a statistical analysis of the ensemble, providing the best-fitting model, the average model, the associated standard deviation and coefficient of variation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. The application to the 2009 L'Aquila main-shock shows that both the separate and joint inversion solutions reveal a complex rupture process and a heterogeneous slip distribution. Slip is concentrated in two main asperities: a smaller shallow patch of slip located up-dip from the hypocenter and a second deeper and larger asperity located southeastward along the strike direction. The key feature of the source process emerging from our inverted models concerns the rupture history, which is characterized by two distinct stages. The first stage begins with rupture initiation and with a modest moment release lasting nearly 0.9 seconds, which is followed by a sharp increase in slip velocity and rupture speed located 2 km up-dip from the nucleation. During this first stage the rupture front propagated up-dip from the hypocenter at a relatively high (~4.0 km/s), but still sub-shear, rupture velocity. The second stage starts nearly 2 seconds after nucleation and is characterized by along-strike rupture propagation. The larger, deeper asperity fails during this stage of the rupture process. The rupture velocity is larger in the up
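The two-stage logic (build an ensemble of good-fitting models by annealed sampling, then derive best, mean, standard deviation, and coefficient of variation from the ensemble) can be sketched on a toy linear problem. The forward model, parameter values, and the plain Metropolis sampler here are all illustrative stand-ins; the authors' heat-bath algorithm and finite-fault parameterization are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward problem: data are a linear mix of slip on two fault patches.
G = np.array([[1.0, 0.3], [0.4, 1.0], [0.7, 0.7]])
m_true = np.array([2.0, 1.2])                      # "true" slip (m)
d_obs = G @ m_true + rng.normal(0.0, 0.05, 3)

def misfit(m):
    r = G @ m - d_obs
    return float(r @ r)

# Stage 1: annealed Metropolis sampling assembles an ensemble of models that
# populate the good data-fitting region of parameter space.
m, ensemble = np.zeros(2), []
for T in np.geomspace(1.0, 1e-3, 5000):
    trial = m + rng.normal(0.0, 0.1, 2)
    if misfit(trial) < misfit(m) or rng.random() < np.exp((misfit(m) - misfit(trial)) / T):
        m = trial
    ensemble.append(m.copy())
ensemble = np.array(ensemble[2500:])               # keep the converged half

# Stage 2: statistics of the ensemble, not just the single best model.
best = ensemble[np.argmin([misfit(x) for x in ensemble])]
mean, std = ensemble.mean(axis=0), ensemble.std(axis=0)
coeff_var = std / np.abs(mean)                     # per-parameter variability
```

The coefficient of variation flags which parameters the data actually constrain, which is the point of analyzing the ensemble rather than the single best model.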

  12. Estimation of source processes of the 2016 Kumamoto earthquakes from strong motion waveforms

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Suzuki, W.; Aoi, S.; Sekiguchi, H.

    2016-12-01

    In this study, we estimated the source processes for two large events of the 2016 Kumamoto earthquakes (the M7.3 event at 1:25 JST on April 16, 2016 and the M6.5 event at 21:26 JST on April 14, 2016) from strong motion waveforms using multiple-time-window linear waveform inversion (Hartzell and Heaton 1983; Sekiguchi et al. 2000). Based on the observations of surface ruptures, the spatial distribution of aftershocks, and the geodetic data, a realistic curved fault model was developed for the source-process analysis of the M7.3 event. The source model obtained for the M7.3 event with a seismic moment of 5.5 × 10^19 Nm (Mw 7.1) had two significant ruptures. One rupture propagated toward the northeastern shallow region at 4 s after rupture initiation, and continued with large slips to approximately 16 s. This rupture caused a large slip region with a peak slip of 3.8 m that was located 10-30 km northeast of the hypocenter and reached the caldera of Mt. Aso. The contribution of the large slip region to the seismic waveforms was large at many stations. Another rupture propagated toward the surface from the hypocenter at 2-6 s, and then propagated toward the northeast along the near surface at 6-10 s. This rupture largely contributed to the seismic waveforms at the stations south of the fault and close to the hypocenter. A comparison with the results obtained using a single fault plane model demonstrates that the use of the curved fault model led to improved waveform fit at the stations south of the fault. The extent of the large near-surface slips in this source model for the M7.3 event is roughly consistent with the extent of the observed large surface ruptures. The source model obtained for the M6.5 event with a seismic moment of 1.7 × 10^18 Nm (Mw 6.1) had large slips in the region around the hypocenter and in the shallow region north-northeast of the hypocenter, both of which had a maximum slip of 0.7 m. 
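The seismic moments reported for the two events convert to the quoted moment magnitudes via the standard relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m:

```python
import math

def moment_magnitude(m0):
    """Moment magnitude from seismic moment m0 in N·m (IASPEI convention)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

print(round(moment_magnitude(5.5e19), 1))   # 7.1, the M7.3 (JMA) event
print(round(moment_magnitude(1.7e18), 1))   # 6.1, the M6.5 (JMA) event
```

The gap between the JMA magnitudes (7.3, 6.5) and Mw (7.1, 6.1) reflects the different magnitude scales, not a disagreement in the source models.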
The rupture of the M6.5 event propagated from the former region

  13. Earthquakes and Volcanic Processes at San Miguel Volcano, El Salvador, Determined from a Small, Temporary Seismic Network

    NASA Astrophysics Data System (ADS)

    Hernandez, S.; Schiek, C. G.; Zeiler, C. P.; Velasco, A. A.; Hurtado, J. M.

    2008-12-01

    The San Miguel volcano lies within the Central American volcanic chain in eastern El Salvador. The volcano has experienced at least 29 eruptions with a Volcanic Explosivity Index (VEI) of 2. Since 1970, however, eruptions have decreased in intensity to an average of VEI 1, with the most recent eruption occurring in 2002. Eruptions at San Miguel volcano consist mostly of central vent and phreatic eruptions. A critical challenge related to the explosive nature of this volcano is to understand the relationships between precursory surface deformation, earthquake activity, and volcanic activity. In this project, we seek to determine sub-surface structures within and near the volcano, relate the local deformation to these structures, and better understand the hazard that the volcano presents in the region. To accomplish these goals, we deployed a six-station broadband seismic network around San Miguel volcano in collaboration with researchers from Servicio Nacional de Estudios Territoriales (SNET). This network operated continuously from 23 March 2007 to 15 January 2008 and had a high data recovery rate. The data were processed to determine earthquake locations, magnitudes, and, for some of the larger events, focal mechanisms. We obtained high precision locations using a double-difference approach and identified at least 25 events near the volcano. Ongoing analysis will seek to identify earthquake types (e.g., long period, tectonic, and hybrid events) that occurred in the vicinity of San Miguel volcano. These results will be combined with radar interferometric measurements of surface deformation in order to determine the relationship between surface and subsurface processes at the volcano.

  14. Exploring Thermal Shear Runaway as a triggering process for Intermediate-Depth Earthquakes: Overview of the Northern Chilean seismic nest.

    NASA Astrophysics Data System (ADS)

    Derode, B.; Riquelme, S.; Ruiz, J. A.; Leyton, F.; Campos, J. A.; Delouis, B.

    2014-12-01

    Intermediate-depth earthquakes of high moment magnitude (Mw ≥ 8) in Chile have had a relatively greater impact, in terms of damage, injuries and deaths, than thrust-type events of similar magnitude (e.g. 1939, 1950, 1965, 1997, 2003, and 2005). Some of them have been studied in detail, showing a paucity of aftershocks, down-dip tensional focal mechanisms, high stress drop and subhorizontal rupture. At present, their physical mechanism remains unclear because ambient temperatures and pressures are expected to lead to ductile, rather than brittle, deformation. We examine source characteristics of more than 100 intraslab intermediate-depth earthquakes using local and regional waveform data obtained from broadband and accelerometer stations of the IPOC network in northern Chile. With this high-quality database, we estimated the total radiated energy from the energy flux carried by P and S waves, integrating this flux in time and space, and evaluated the seismic moment directly from both spectral-amplitude and near-field waveform-inversion methods. We estimated the three parameters Ea, τa and M0 because their estimates entail no model dependence. Interestingly, the seismic nest, studied using near-field relocation and only data from stations close to the source (D < 250 km), appears not to be homogeneous in depth, displaying unusual seismic gaps along the Wadati-Benioff zone. Moreover, as confirmed by other studies of intermediate-depth earthquakes in subduction zones, very high stress drops (>> 10 MPa) and low radiation efficiency were found in this seismic nest. These unusual values of the seismic parameters can be interpreted as the expression of the loss of a large fraction of the emitted energy to heating processes during rupture. Although it remains difficult to draw conclusions about the processes of seismic nucleation, we present here results that seem to support thermal weakening behavior of the fault zones and the existence of thermal stress processes like thermal
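The three model-independent parameters mentioned (radiated energy Ea, apparent stress τa, seismic moment M0) are linked by τa = μ·Ea/M0, and a low Savage-Wood efficiency 2τa/Δσ relative to a high stress drop is what signals strong dissipation. A sketch with invented illustrative numbers (μ, Ea, M0 and Δσ below are not values from the study):

```python
def apparent_stress(ea, m0, mu=60e9):
    """tau_a = mu * Ea / M0 in Pa; mu is rigidity in Pa (assumed value)."""
    return mu * ea / m0

def savage_wood_efficiency(tau_a, stress_drop):
    """Savage-Wood radiation efficiency proxy: eta_SW = 2 * tau_a / stress_drop."""
    return 2.0 * tau_a / stress_drop

# Illustrative values only: Ea = 1e13 J, M0 = 1e17 N·m, stress drop 60 MPa.
tau_a = apparent_stress(1e13, 1e17)          # 6 MPa apparent stress
eta = savage_wood_efficiency(tau_a, 60e6)    # 0.2: most energy is not radiated
```

When Δσ is very high but τa stays modest, η drops well below 1, consistent with the interpretation that much of the energy budget goes into heating on the fault.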

  15. PAGER--Rapid assessment of an earthquake's impact

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  16. Damaging earthquakes: A scientific laboratory

    Hays, Walter W.; ,

    1996-01-01

    This paper reviews the principal lessons learned from multidisciplinary postearthquake investigations of damaging earthquakes throughout the world during the past 15 years. The unique laboratory provided by a damaging earthquake in culturally different but tectonically similar regions of the world has increased fundamental understanding of earthquake processes, added perishable scientific, technical, and socioeconomic data to the knowledge base, and led to changes in public policies and professional practices for earthquake loss reduction.

  17. Performance of Irikura's Recipe Rupture Model Generator in Earthquake Ground Motion Simulations as Implemented in the Graves and Pitarka Hybrid Approach.

    SciT

    Pitarka, A.

    We analyzed the performance of the Irikura and Miyake (2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground-motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1-3 s, where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study.

  18. Coherence-generating power of quantum dephasing processes

    NASA Astrophysics Data System (ADS)

    Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo

    2018-03-01

    We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.

  19. Toward a physics-based rate and state friction law for earthquake nucleation processes in fault zones with granular gouge

    NASA Astrophysics Data System (ADS)

    Ferdowsi, B.; Rubin, A. M.

    2017-12-01

    Numerical simulations of earthquake nucleation rely on constitutive rate and state evolution laws to model earthquake initiation and propagation processes. The response of different state evolution laws to large velocity increases is an important feature of these constitutive relations that can significantly change the style of earthquake nucleation in numerical models. However, there is currently no rigorous understanding of the physical origins of the response of bare rock or gouge-filled fault zones to large velocity increases. This in turn hinders our ability to design physics-based friction laws that can appropriately describe those responses. We here argue that most fault zones form a granular gouge after an initial shearing phase and that it is the behavior of the gouge layer that controls the fault friction. We perform numerical experiments of a confined sheared granular gouge under a range of confining stresses and driving velocities relevant to fault zones and apply velocity steps of 1-3 orders of magnitude to explore the dynamical behavior of the system from grain- to macro-scales. We compare our numerical observations with experimental data from biaxial double-direct-shear fault gouge experiments under equivalent loading and driving conditions. Our intention is to first investigate the degree to which these numerical experiments, with Hertzian normal and Coulomb friction laws at the grain-grain contact scale and without any time-dependent plasticity, can reproduce experimental fault gouge behavior. We next compare the behavior observed in numerical experiments with predictions of the Dieterich (Aging) and Ruina (Slip) friction laws. Finally, the numerical observations at the grain and meso-scales will be used to design a rate and state evolution law that takes into account recent advances in the rheology of granular systems, including local and non-local effects, for a wide range of shear rates and slow and fast deformation regimes of the fault gouge.
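The two canonical state-evolution laws compared in the study respond very differently to a velocity step, yet both relax to the same steady-state friction after enough slip. A minimal Euler integration (parameter values are laboratory-typical assumptions, not from the study) makes the comparison concrete:

```python
import numpy as np

# Rate-and-state parameters (typical laboratory values, assumed for illustration)
a, b, Dc, mu0, V0 = 0.010, 0.015, 1e-5, 0.6, 1e-6   # Dc in m, velocities in m/s

def friction(V, theta):
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def dtheta_dt(V, theta, law):
    x = V * theta / Dc
    if law == "aging":            # Dieterich: state grows even without slip
        return 1.0 - x
    return -x * np.log(x)         # Ruina slip law: state evolves only with slip

# Impose a 100x velocity step from steady state; integrate 0.5 s (5 Dc of slip).
V1, V2, dt = 1e-6, 1e-4, 1e-4
final = {}
for law in ("aging", "slip"):
    theta = Dc / V1               # steady state before the step: theta_ss = Dc/V
    for _ in range(5000):
        theta += dt * dtheta_dt(V2, theta, law)
    final[law] = friction(V2, theta)

# Both approach mu_ss = mu0 + (a - b) * ln(V2/V0) ~ 0.577, by different paths.
print(final)
```

The transient differs (the slip law decays state exponentially with slip, the aging law with time), which is exactly the behavior the velocity-step experiments discriminate between.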

  20. Pre-earthquake magnetic pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J.; Freund, F.

    2015-08-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  1. Postseismic deformation following the 2015 Mw 7.8 Gorkha earthquake and the distribution of brittle and ductile crustal processes beneath Nepal

    NASA Astrophysics Data System (ADS)

    Moore, J. D. P.; Barbot, S.; Peng, D.; Yu, H.; Qiu, Q.; Wang, T.; Masuti, S.; Dauwels, J.; Lindsey, E. O.; Tang, C. H.; Feng, L.; Wei, S.; Hsu, Y. J.; Nanjundiah, P.; Lambert, V.; Antoine, S.

    2017-12-01

    Studies of geodetic data across the earthquake cycle indicate that a wide range of mechanisms contribute to cycles of stress buildup and relaxation. Both on-fault rate and state friction and off-fault rheologies can contribute to the observed deformation; in particular, during the postseismic transient phase of the earthquake cycle. We present a novel approach to simulate on-fault and off-fault deformation simultaneously using analytical Green's functions for distributed deformation at depth [Barbot, Moore and Lambert, 2017] and surface tractions, within an integral framework [Lambert & Barbot, 2016]. This allows us to jointly explore dynamic frictional properties on the fault, and the plastic properties of the bulk rocks (including grain size and water distribution) in the lower crust with low computational cost, whilst taking into account contributions from topography and a surface approximation for gravity. These displacement and stress Green's functions can be used for both forward and inverse modelling of distributed shear, where the calculated strain-rates can be converted to effective viscosities. Here, we draw insight from the postseismic geodetic observations following the 2015 Mw 7.8 Gorkha earthquake. We forward model afterslip using rate and state friction on the megathrust geometry with the two ramp-décollement system presented by Hubbard et al. (2016) and viscoelastic relaxation using recent experimentally derived flow laws with transient rheology and the thermal structure from Cattin et al. (2001). Multivariate posterior probability density functions for model parameters are generated by incorporating the forward model evaluation and comparison to geodetic observations into a Gaussian copula framework. In particular, we find that no afterslip occurred on the up-dip portion of the fault beneath Kathmandu. A combination of viscoelastic relaxation and down-dip afterslip is required to fit the data, taking into account the bi-directional coupling

  2. Steam generation by combustion of processed waste fats

    SciT

    Pudel, F.; Lengenfeld, P.

    1993-12-31

    The use of specially processed waste fats as a fuel oil substitute offers, at attractive costs, an environmentally friendly alternative to conventional disposal like refuse incineration or deposition. For that purpose the processed fat is mixed with EL fuel oil and burned in a standard steam generation plant equipped with special accessories. The measured emission values of the combustion processes are very low.

  3. Earthquake Emergency Education in Dushanbe, Tajikistan

    ERIC Educational Resources Information Center

    Mohadjer, Solmaz; Bendick, Rebecca; Halvorson, Sarah J.; Saydullaev, Umed; Hojiboev, Orifjon; Stickler, Christine; Adam, Zachary R.

    2010-01-01

    We developed a middle school earthquake science and hazards curriculum to promote earthquake awareness to students in the Central Asian country of Tajikistan. These materials include pre- and post-assessment activities, six science activities describing physical processes related to earthquakes, five activities on earthquake hazards and mitigation…

  4. Shallow observatory installations unravel earthquake processes in the Nankai accretionary complex (IODP Expedition 365)

    NASA Astrophysics Data System (ADS)

    Kopf, A.; Saffer, D. M.; Toczko, S.

    2016-12-01

    NanTroSEIZE is a multi-expedition IODP project to investigate fault mechanics and seismogenesis along the Nankai Trough subduction zone through direct sampling, in situ measurements, and long-term monitoring. The recent Expedition 365 had three primary objectives at a major splay thrust fault (termed the "megasplay") in the forearc: (1) retrieval of a temporary observatory (termed a GeniusPlug) that has been monitoring temperature and pore pressure within the fault zone at 400 meters below seafloor since 2010; (2) deployment of a complex long-term borehole monitoring system (LTBMS) across the same fault; and (3) coring of key sections of the hanging wall, deformation zone and footwall of the shallow megasplay. Expedition 365 achieved its primary monitoring objectives, including recovery of the GeniusPlug with a >5-year record of pressure and temperature conditions, geochemical samples, and its in situ microbial colonization experiment; and installation of the LTBMS. The pressure records from the GeniusPlug include high-quality records of formation and seafloor responses to multiple fault slip events, including the 2011 M9 Tohoku and the 1 April Mie-ken Nanto-oki M6 earthquakes. The geochemical sampling coils yielded in situ pore fluids from the fault zone, and microbes were successfully cultivated from the colonization unit. The LTBMS incorporates multi-level pore pressure sensing, a volumetric strainmeter, tiltmeter, geophone, broadband seismometer, accelerometer, and thermistor string. This multi-level hole completion was subsequently connected to the DONET seafloor cabled network for tsunami early warning and earthquake monitoring. Coring the shallow megasplay site in the Nankai forearc recovered ca. 100 m of material across the fault zone, which contained indurated silty clay with occasional ash layers and sedimentary breccias in the hangingwall and siltstones in the footwall of the megasplay. The mudstones show different degrees of deformation spanning from

  5. The neural component-process architecture of endogenously generated emotion

    PubMed Central

    Kanske, Philipp; Singer, Tania

    2017-01-01

Abstract Despite the ubiquity of endogenous emotions and their role in both resilience and pathology, the processes supporting their generation are largely unknown. We propose a neural component process model of endogenous generation of emotion (EGE) and test it in two functional magnetic resonance imaging (fMRI) experiments (N = 32/293) where participants generated and regulated positive and negative emotions based on internal representations, using self-chosen generation methods. EGE activated nodes of salience (SN), default mode (DMN) and frontoparietal control (FPCN) networks. Component processes implemented by these networks were established by investigating their functional associations, activation dynamics and integration. SN activation correlated with subjective affect, with midbrain nodes exclusively distinguishing between positive and negative affect intensity, showing dynamics consistent with the generation of core affect. Dorsomedial DMN, together with ventral anterior insula, formed a pathway supporting multiple generation methods, with activation dynamics suggesting it is involved in the generation of elaborated experiential representations. SN and DMN both coupled to left frontal FPCN, which in turn was associated with both subjective affect and representation formation, consistent with FPCN supporting the executive coordination of the generation process. These results provide a foundation for research into endogenous emotion in normal, pathological and optimal function. PMID:27522089

  6. The exposure of Sydney (Australia) to earthquake-generated tsunamis, storms and sea level rise: a probabilistic multi-hazard approach

    PubMed Central

    Dall'Osso, F.; Dominey-Howes, D.; Moore, C.; Summerhayes, S.; Withycombe, G.

    2014-01-01

    Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment. Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney. PMID:25492514

  7. The exposure of Sydney (Australia) to earthquake-generated tsunamis, storms and sea level rise: a probabilistic multi-hazard approach.

    PubMed

    Dall'Osso, F; Dominey-Howes, D; Moore, C; Summerhayes, S; Withycombe, G

    2014-12-10

    Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment. Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney.
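The annual probabilities quoted in this PTHA (1:100, 1:1,000, 1:10,000) can be translated into the chance of at least one exceedance over a planning horizon. The sketch below assumes a stationary, independent-per-year event model, which is our own simplifying assumption, not a formula stated in the abstract.

```python
# Hedged sketch: convert an annual exceedance probability into the probability
# of at least one exceedance over `years`, assuming independent years.
def exceedance_probability(annual_rate: float, years: int) -> float:
    """P(at least one exceedance in `years`) under a stationary model."""
    return 1.0 - (1.0 - annual_rate) ** years

# A "1:100" tsunami has roughly a 39.5% chance of being exceeded at least once
# in 50 years, while a "1:10,000" event stays below 1% over the same horizon.
p100 = exceedance_probability(1 / 100, 50)
p10k = exceedance_probability(1 / 10_000, 50)
```

This is why rare scenarios such as the 1:10,000 event matter for hazard mapping even though they are unlikely within any single planning period.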

  8. Processing and analysis techniques involving in-vessel material generation

    DOEpatents

    Schabron, John F [Laramie, WY; Rovani, Jr., Joseph F.

    2011-01-25

In at least one embodiment, the inventive technology relates to in-vessel generation of a material from a solution of interest as part of a processing and/or analysis operation. Preferred embodiments of the in-vessel material generation (e.g., in-vessel solid material generation) include precipitation; in certain embodiments, analysis and/or processing of the solution of interest may include dissolution of the material, perhaps as part of a successive dissolution protocol using solvents of increasing ability to dissolve. Applications include, but are by no means limited to, estimation of a coking onset and solution (e.g., oil) fractionating.

  9. Processing and analysis techniques involving in-vessel material generation

    DOEpatents

    Schabron, John F [Laramie, WY; Rovani, Jr., Joseph F.

    2012-09-25

In at least one embodiment, the inventive technology relates to in-vessel generation of a material from a solution of interest as part of a processing and/or analysis operation. Preferred embodiments of the in-vessel material generation (e.g., in-vessel solid material generation) include precipitation; in certain embodiments, analysis and/or processing of the solution of interest may include dissolution of the material, perhaps as part of a successive dissolution protocol using solvents of increasing ability to dissolve. Applications include, but are by no means limited to, estimation of a coking onset and solution (e.g., oil) fractionating.

  10. What is the fault that has generated the earthquake on 8 September 1905 in Calabria, Italy? Source models compared by tsunami data

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano; Loreto, Maria Filomena; Facchin, Lorenzo

    2014-05-01

The earthquake that hit Calabria, southern Italy, on 8 September 1905 was the second-largest Italian earthquake by magnitude in the last century. It destroyed many villages along the coast of the Gulf of Sant'Eufemia, caused more than 500 fatalities and also generated a tsunami with non-destructive effects. Historical reports tell us that the tsunami caused major damage in the villages of Briatico, Bivona, Pizzo and Vibo Marina, located in the southern part of the Sant'Eufemia gulf, and minor damage in Tropea and in Scalea, the latter a village located about 100 km from the epicenter. Other reports include accounts of fishermen at sea during the tsunami. Further, the tsunami is visible on tide gauge records in Messina, Sicily, in Naples and in Civitavecchia, a harbour located to the north of Rome (Platania, 1907). In spite of the attention devoted by researchers to this case, the genetic structure of the earthquake, as for other tsunamigenic Italian earthquakes, has still not been identified, and the debate remains open. In this context, tsunami simulations can provide contributions useful for finding the source model most consistent with observational data. This approach was already followed by Piatanesi and Tinti (2002), who carried out numerical simulations of tsunamis from a number of local sources. In the last decade, studies on this seismogenic area intensified, resulting in new estimates of the 1905 earthquake magnitude (7.1 according to the CPTI11 catalogue) and in the suggestion of new source models. Using an improved tsunami simulation model and more accurate bathymetry data, this work tests the source models investigated by Piatanesi and Tinti (2002) and, in addition, the new fault models proposed by Cucci and Tertulliani (2010) and by Loreto et al. (2013). The tsunami simulations are computed by means of the code UBO-TSUFD, which solves the Navier-Stokes equations in the shallow-water approximation with the finite

  11. Current challenges for pre-earthquake electromagnetic emissions: shedding light from micro-scale plastic flow, granular packings, phase transitions and self-affinity notion of fracture process

    NASA Astrophysics Data System (ADS)

    Eftaxias, K.; Potirakis, S. M.

    2013-10-01

Are there credible electromagnetic (EM) potential earthquake (EQ) precursors? This is a question debated in the scientific community, and there may be legitimate reasons for the critical views. The negative view concerning the existence of EM potential precursors is reinforced by features that accompany their observation and are considered paradoxical, namely, these signals: (i) are not observed at the time of EQ occurrence and during the aftershock period, (ii) are not accompanied by large precursory strain changes, (iii) are not accompanied by simultaneous geodetic or seismological precursors and (iv) their traceability is considered problematic. In this work, the detected candidate EM potential precursors are studied through a shift in thinking towards basic science findings on granular packings, micron-scale plastic flow, interface depinning, fracture size effects, concepts drawn from phase transitions, the self-affine notion of fracture and the faulting process, universal features of fracture surfaces, recent high-quality laboratory studies, theoretical models and numerical simulations. We try to contribute to the establishment of strict criteria for the definition of an emergent EM anomaly as possibly EQ-related, and to the explanation of potential precursory EM features which have been considered paradoxes. A three-stage model for EQ generation by means of pre-EQ fracture-induced EM emissions is proposed. The claim that the observed EM potential precursors may permit a real-time and step-by-step monitoring of the EQ generation is tested.

  12. Complex rupture process of the Mw 7.8, 2016, Kaikoura earthquake, New Zealand, and its aftershock sequence

    NASA Astrophysics Data System (ADS)

    Cesca, S.; Zhang, Y.; Mouslopoulou, V.; Wang, R.; Saul, J.; Savage, M.; Heimann, S.; Kufner, S.-K.; Oncken, O.; Dahm, T.

    2017-11-01

The M7.8 Kaikoura Earthquake that struck the northeastern South Island, New Zealand, on November 14, 2016 (local time), is one of the largest earthquakes ever instrumentally recorded in New Zealand. It occurred at the southern termination of the Hikurangi subduction margin, where the subducting Pacific Plate transitions into the dextral Alpine transform fault. The earthquake produced significant distributed uplift along the north-eastern part of the South Island, reaching a peak amplitude of ∼8 m, which was accompanied by large (≥10 m) horizontal coseismic displacements at the ground surface along discrete active faults. The seismic waveform expression of the main shock indicates a complex rupture process. Early automated centroid moment tensor solutions indicated a strong non-double-couple term, which supports a complex rupture involving multiple faults. The hypocentral distribution of aftershocks, which appears diffuse over a broad region, clusters spatially along lineaments with different orientations. A key question of global interest is to shed light on the mechanism by which such a complex rupture occurred, and whether the underlying plate interface was involved in the rupture. The consequences of such distributed, shallow faulting for seismic hazard are important to assess. We perform a broad seismological analysis, combining regional and teleseismic seismograms, GPS and InSAR, to determine the rupture process of the main shock and moment tensors of 118 aftershocks down to Mw 4.2. The joint interpretation of the main rupture and aftershock sequence allows reconstruction of the geometry, and suggests sequential activation and slip distribution on at least three major active fault domains. We find that the rupture nucleated as a weak strike-slip event along the Humps Fault, which progressively propagated northward onto a shallow reverse fault, where most of the seismic moment was released, before it triggered slip on a second set of strike
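The "strong non-double-couple term" mentioned above is commonly quantified from the eigenvalues of the deviatoric moment tensor: with eigenvalues m1 ≥ m2 ≥ m3, the parameter ε = −m2 / max(|m1|, |m3|) is 0 for a pure double couple and ±0.5 for a pure CLVD. The sketch below uses hypothetical eigenvalues, not the Kaikoura solution itself.

```python
# Hedged sketch: the standard epsilon measure of departure from a pure
# double-couple source, computed from deviatoric moment-tensor eigenvalues.
def clvd_ratio(deviatoric_eigenvalues):
    """epsilon = -m2 / max(|m1|, |m3|); 0 = pure DC, +/-0.5 = pure CLVD."""
    m3, m2, m1 = sorted(deviatoric_eigenvalues)
    return -m2 / max(abs(m1), abs(m3))

eps_dc = clvd_ratio([1.0, 0.0, -1.0])     # pure double couple -> 0.0
eps_clvd = clvd_ratio([2.0, -1.0, -1.0])  # pure CLVD -> 0.5
```

A large |ε| in an automated solution is exactly the kind of signal that, as in this study, hints at a multi-fault rupture rather than slip on a single planar fault.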

  13. Effects of Process and Outcome Accountability on Idea Generation.

    PubMed

    Häusser, Jan Alexander; Frisch, Johanna Ute; Wanzel, Stella; Schulz-Hardt, Stefan

    2017-07-01

    Previous research on the effects of outcome and process accountability on decision making has neglected the preceding phase of idea generation. We conducted a 2 (outcome accountability: yes vs. no) × 2 (process accountability: yes vs. no) experiment (N = 147) to test the effects of accountability on quantity and quality of generated ideas in a product design task. Furthermore, we examined potential negative side effects of accountability (i.e., stress and lengthened decision making). We found that (a) outcome accountability had a negative effect on quantity of ideas and (b) process accountability extended the idea generation process. Furthermore, any type of accountability (c) had a negative effect on uniqueness of ideas, (d) did not affect the quality of the idea that was selected, and (e) increased stress. Moreover, the negative effect of accountability on uniqueness of ideas was mediated by stress.

  14. Earthquake chemical precursors in groundwater: a review

    NASA Astrophysics Data System (ADS)

    Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.

    2018-03-01

We review changes in groundwater chemistry as precursory signs for earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved, however, by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra. It addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and on electrochemical processes at the rock-water interface.

  15. Category Cued Recall Evokes a Generate-Recognize Retrieval Process

    PubMed Central

    Hunt, R. Reed; Smith, Rebekah E.; Toth, Jeffrey P.

    2015-01-01

    The experiments reported here were designed to replicate and extend McCabe, Roediger, and Karpicke’s (2011) finding that retrieval in category cued recall involves both controlled and automatic processes. The extension entailed identifying whether distinctive encoding affected one or both of these two processes. The first experiment successfully replicated McCabe et al., but the second, which added a critical baseline condition, produced data inconsistent with a two independent process model of recall. The third experiment provided evidence that retrieval in category cued recall reflects a generate-recognize strategy, with the effect of distinctive processing being localized to recognition. Overall, the data suggest that category cued recall evokes a generate-recognize retrieval strategy and that the sub-processes underlying this strategy can be dissociated as a function of distinctive versus relational encoding processes. PMID:26280355

  16. Source rupture process of the 2016 Kaikoura, New Zealand earthquake estimated from the kinematic waveform inversion of strong-motion data

    NASA Astrophysics Data System (ADS)

    Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo

    2018-03-01

On 13 November 2016, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damage and had a great impact on local nature and society. Based on the tectonic environment and mapped active faults, field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated among historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiates in the epicentral area near the Humps fault, and then propagates northeastward along several faults, until the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike-slip and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slips of upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.

  17. The ferrosilicon process for the generation of hydrogen

    NASA Technical Reports Server (NTRS)

    Weaver, E R; Berry, W M; Bohnson, V L; Gordon, B D

    1920-01-01

Report describes the generation of hydrogen by the reaction between ferrosilicon, sodium hydroxide, and water. This method, known as the ferrosilicon method, is especially adapted for use in the military field because of the relatively small size and low cost of the generator required to produce hydrogen at a rapid rate, the small operating force required, and the fact that no power is used except the small amount required to operate the stirring and pumping machinery. These advantages make it possible to quickly generate sufficient hydrogen to fill a balloon with a generator which can be transported on a motor truck. This report gives a summary of the details of the ferrosilicon process and a critical examination of the means which are necessary in order to make the process successful.
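The hydrogen yield of the process can be estimated from stoichiometry. Assuming the simplified net reaction Si + 2 NaOH + H2O → Na2SiO3 + 2 H2 (a common textbook idealization of the silicon reaction; the report itself concerns practical generator operation, not this formula), each mole of silicon liberates two moles of hydrogen:

```python
# Back-of-envelope hydrogen yield per kilogram of silicon reacted, under the
# idealized stoichiometry Si + 2 NaOH + H2O -> Na2SiO3 + 2 H2.
MOLAR_MASS_SI = 28.09   # g/mol
MOLAR_VOLUME = 22.414   # L/mol of ideal gas at 0 degC, 1 atm

def hydrogen_litres_per_kg_silicon() -> float:
    moles_si = 1000.0 / MOLAR_MASS_SI   # moles of Si in 1 kg
    return 2.0 * moles_si * MOLAR_VOLUME

# Roughly 1.6 cubic metres of hydrogen per kilogram of silicon reacted,
# which illustrates why a truck-mounted generator could fill a balloon quickly.
volume_l = hydrogen_litres_per_kg_silicon()
```

Real ferrosilicon is an alloy, so actual yields depend on its silicon fraction and reaction completeness.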

  18. Category cued recall evokes a generate-recognize retrieval process.

    PubMed

    Hunt, R Reed; Smith, Rebekah E; Toth, Jeffrey P

    2016-03-01

The experiments reported here were designed to replicate and extend McCabe, Roediger, and Karpicke's (2011) finding that retrieval in category cued recall involves both controlled and automatic processes. The extension entailed identifying whether distinctive encoding affected 1 or both of these 2 processes. The first experiment successfully replicated McCabe et al., but the second, which added a critical baseline condition, produced data inconsistent with a 2 independent process model of recall. The third experiment provided evidence that retrieval in category cued recall reflects a generate-recognize strategy, with the effect of distinctive processing being localized to recognition. Overall, the data suggest that category cued recall evokes a generate-recognize retrieval strategy and that the subprocesses underlying this strategy can be dissociated as a function of distinctive versus relational encoding processes. (c) 2016 APA, all rights reserved.

  19. Rupture process of the 2016 Mw 7.8 Ecuador earthquake from joint inversion of InSAR data and teleseismic P waveforms

    NASA Astrophysics Data System (ADS)

    Yi, Lei; Xu, Caijun; Wen, Yangmao; Zhang, Xu; Jiang, Guoyan

    2018-01-01

The 2016 Ecuador earthquake ruptured the Ecuador-Colombia subduction interface, where several historic megathrust earthquakes had occurred. In order to determine a detailed rupture model, Interferometric Synthetic Aperture Radar (InSAR) images and teleseismic data sets were objectively weighted by using a modified Akaike's Bayesian Information Criterion (ABIC) method to jointly invert for the rupture process of the earthquake. In modeling the rupture process, a constrained waveform length method, unlike the traditional subjectively selected waveform length method, was used, since the lengths of inverted waveforms were strictly constrained by the rupture velocity and rise time (the slip duration time). The optimal rupture velocity and rise time of the earthquake were estimated by grid search to be 2.0 km/s and 20 s, respectively. The inverted model shows that the event is dominated by thrust movement and the released moment is 5.75 × 1020 Nm (Mw 7.77). The slip distribution extends southward along the Ecuador coastline in an elongated stripe at depths between 10 and 25 km. The slip model is composed of two asperities with slip exceeding 4 m. The source time function lasts approximately 80 s and is separated into two segments corresponding to the two asperities. The small slip in the updip section of the fault plane resulted in small tsunami waves, consistent with observations near the coast. We suggest that the rupture zone of the 2016 earthquake likely does not overlap with that of the 1942 earthquake.
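The grid search described above scans candidate rupture velocities and rise times, runs the waveform inversion for each pair, and keeps the pair with minimum misfit. The sketch below mocks the inversion with a toy misfit function; `misfit_for` is a placeholder, not the paper's actual kinematic inversion.

```python
# Illustrative grid search over (rupture velocity, rise time). The real
# objective would be a full waveform inversion misfit; here a toy surrogate
# whose minimum sits at the paper's preferred values (2.0 km/s, 20 s).
import itertools

def grid_search(misfit_for, velocities, rise_times):
    """Return the (velocity, rise_time) pair minimizing the misfit."""
    return min(itertools.product(velocities, rise_times),
               key=lambda vr: misfit_for(*vr))

toy_misfit = lambda v, t: (v - 2.0) ** 2 + ((t - 20.0) / 10.0) ** 2
best = grid_search(toy_misfit, [1.5, 2.0, 2.5, 3.0], [10, 15, 20, 25])
# best == (2.0, 20)
```

Because each grid point requires a full inversion, the grid is kept coarse in practice; the constrained-waveform-length trick above is what makes the per-point cost well defined.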

  20. Volcanotectonic earthquakes induced by propagating dikes

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2016-04-01

Volcanotectonic earthquakes are of high frequency and mostly generated by slip on faults. During chamber expansion/contraction, earthquakes are distributed in the chamber roof. Following magma-chamber rupture and dike injection, however, earthquakes tend to concentrate around the dike and follow its propagation path, resulting in an earthquake swarm characterised by a number of earthquakes of similar magnitudes. I distinguish between two basic processes by which propagating dikes induce earthquakes. One is due to stress concentration in the process zone at the tip of the dike; the other relates to stresses induced in the walls and surrounding rocks on either side of the dike. As to the first process, some earthquakes generated at the dike tip are related to pure extension fracturing as the tip advances and the dike path forms. Formation of pure extension fractures normally induces non-double-couple earthquakes. There is also shear fracturing in the process zone, however, particularly normal faulting, which produces double-couple earthquakes. The second process relates primarily to slip on existing fractures in the host rock induced by the driving pressure of the propagating dike. Such pressures easily reach 5-20 MPa and induce compressive and shear stresses in the adjacent host rock, which already contains numerous fractures (mainly joints) of different attitudes. In piles of lava flows or sedimentary beds the original joints are primarily vertical and horizontal. Similarly, the contacts between the layers/beds are originally horizontal. As the layers/beds become buried, the joints and contacts become gradually tilted, so that the joints and contacts become oblique to the horizontal compressive stress induced by the driving pressure of the (vertical) dike. Also, most of the hexagonal (or pentagonal) columnar joints in the lava flows are, from the beginning, oblique to an intrusive sheet of any attitude.
Consequently, the joints and contacts function as potential shear

  1. Provision of Earthquake Early Warning to the General Public and Necessary Preparatory Process in Japan

    NASA Astrophysics Data System (ADS)

    Tsukada, S.; Kamigaichi, O.; Saito, M.; Takeda, K.; Shimoyama, T.; Nakamura, K.; Kiyomoto, M.; Watanabe, Y.

    2007-12-01

Earthquake early warning (EEW) by JMA is intended to enable advance countermeasures against strong-motion disasters by providing the expected seismic intensity and arrival time of the strong motion, as well as estimated hypocenter parameters, before the S-wave arrival. However, because of the very short time available, it is essential to publicize the principle and technical limits of EEW, and the proper actions to take when it is seen or heard, so that EEW can be used effectively without causing unnecessary confusion. Accordingly, JMA decided to provide EEW in two steps. Namely, from August 2006 JMA started to provide EEW to a limited number of users who understand the technical limits of EEW and can utilize it effectively, such as for automatic control. At that time EEW was not yet well known to the general public, so JMA started providing it to the general public in October 2007, after publicizing the principle and the proper actions to be taken. EEWs are issued several times for one earthquake, with accuracy improving as more data become available over time, while securing the promptness of the first issuance. Computers connected on-line can utilize such multiply issued information for automatic control. But when the warnings are transmitted to the general public, it is impossible to respond properly to every update, and impossible to transmit them all by text and voice. JMA therefore designed the issuance criterion and contents of the public EEW to meet the following conditions. 1) It should be issued with the best timing, to avoid false alarms, to secure promptness as much as possible, and to keep revised issuances as few as possible. 2) It should be issued only when strong motion is really expected, and it should be made clear where safety actions should be taken. As a result: Issuance criterion: when a maximum seismic intensity of 5 lower (JMA scale) or over is expected using seismic records from more than one station. EEW contents: Origin time
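The public-issuance criterion above reduces to a simple gate: warn only when the expected maximum intensity reaches "5 lower" on the JMA scale and more than one station contributed. The sketch below is our own illustration; the function name and the numeric encoding of "5 lower" as 5.0 are assumptions, not JMA's implementation.

```python
# Minimal sketch of the public EEW issuance gate described in the abstract.
# Encoding "5 lower" as 5.0 on a numeric JMA intensity axis is an assumption.
JMA_5_LOWER = 5.0

def should_issue_public_eew(expected_max_intensity: float,
                            contributing_stations: int) -> bool:
    """Issue only for intensity >= "5 lower" estimated from >1 station."""
    return (expected_max_intensity >= JMA_5_LOWER
            and contributing_stations > 1)

# A single-station detection never triggers a public warning, however strong,
# which suppresses false alarms from local noise or instrument glitches.
```

The multi-station requirement is the false-alarm guard; the intensity threshold is what limits public warnings to genuinely damaging shaking.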

  2. Imbricated slip rate processes during slow slip transients imaged by low-frequency earthquakes

    NASA Astrophysics Data System (ADS)

    Lengliné, O.; Frank, W.; Marsan, D.; Ampuero, J. P.

    2017-12-01

    Low Frequency Earthquakes (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanism and location consistent with shear failure on the plate interface argue for a model where LFEs are discrete dynamic ruptures in an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically-undetected small ones. However, inferring slow slip rate from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip rates from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer term episodic slip transients present in these datasets show a slip rate decay with time after the passage of the SSE front possibly as t-1/4. They are composed of multiple short term transients with steeper slip rate decay as t-α with α between 1.4 and 2. We also find that the maximum slip rate of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.

  3. Imbricated slip rate processes during slow slip transients imaged by low-frequency earthquakes

    NASA Astrophysics Data System (ADS)

    Lengliné, O.; Frank, W. B.; Marsan, D.; Ampuero, J.-P.

    2017-10-01

Low Frequency Earthquakes (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanism and location consistent with shear failure on the plate interface argue for a model where LFEs are discrete dynamic ruptures in an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically-undetected small ones. However, inferring slow slip rate from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip rates from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer term episodic slip transients present in these datasets show a slip rate decay with time after the passage of the SSE front possibly as t-1/4. They are composed of multiple short term transients with steeper slip rate decay as t-α with α between 1.4 and 2. We also find that the maximum slip rate of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.
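The decay exponent α in the slip-rate ∝ t^−α relation can be estimated from event rates by a least-squares line fit in log-log space, since log r = log C − α log t. The sketch below uses synthetic data; the paper's actual treatment of LFE temporal clustering is considerably more involved.

```python
# Hedged sketch: recover the power-law exponent alpha from rate-vs-time data
# via an ordinary least-squares fit in log-log coordinates.
import math

def fit_power_law_exponent(times, rates):
    """Fit rates = C * t**-alpha; return alpha from the log-log slope."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

times = [1, 2, 4, 8, 16]
rates = [100.0 * t ** -1.5 for t in times]  # synthetic data with alpha = 1.5
alpha = fit_power_law_exponent(times, rates)
# alpha ~= 1.5, inside the 1.4-2 range reported for short-term transients
```

With clustered event catalogs, naive fits like this are biased, which is precisely why the study needed dedicated statistical tools.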

  4. Influence of winding construction on starter-generator thermal processes

    NASA Astrophysics Data System (ADS)

    Grachev, P. Yu; Bazarov, A. A.; Tabachinskiy, A. S.

    2018-01-01

Dynamic processes in starter-generators feature high winding overcurrent. This can lead to insulation overheating and faulty operation modes. For hybrid and electric vehicles, a new high-efficiency construction of induction machine windings is proposed. Stator thermal processes need to be considered in the most difficult operation modes. The article describes the construction features of the new compact stator windings and electromagnetic and thermal models of the processes in the stator windings, and explains the influence of the innovative construction on thermal processes. The models are based on the finite element method.

  5. Combined effects of tectonic and landslide-generated Tsunami Runup at Seward, Alaska during the Mw 9.2 1964 earthquake

    Suleimani, E.; Nicolsky, D.J.; Haeussler, Peter J.; Hansen, R.

    2011-01-01

We apply a recently developed and validated numerical model of tsunami propagation and runup to study the inundation of Resurrection Bay and the town of Seward by the 1964 Alaska tsunami. Seward was hit by both tectonic and landslide-generated tsunami waves during the Mw 9.2 1964 megathrust earthquake. The earthquake triggered a series of submarine mass failures around the fjord, which resulted in part of the coastline sliding into the water, along with the loss of the port facilities. These submarine mass failures generated local waves in the bay within 5 min of the beginning of strong ground motion. Recent studies estimate the total volume of underwater slide material that moved in Resurrection Bay to be about 211 million m3 (Haeussler et al. in Submarine mass movements and their consequences, pp 269-278, 2007). The first tectonic tsunami wave arrived in Resurrection Bay about 30 min after the main shock and was about the same height as the local landslide-generated waves. Our previous numerical study, which focused only on the local landslide-generated waves in Resurrection Bay, demonstrated that they were produced by a number of different slope failures, and estimated the relative contributions of different submarine slide complexes to tsunami amplitudes (Suleimani et al. in Pure Appl Geophys 166:131-152, 2009). This work extends the previous study by calculating tsunami inundation in Resurrection Bay caused by the combined impact of landslide-generated waves and the tectonic tsunami, and comparing the composite inundation area with observations. To simulate landslide tsunami runup in Seward, we use a viscous slide model of Jiang and LeBlond (J Phys Oceanogr 24(3):559-572, 1994) coupled with nonlinear shallow water equations. The input data set includes a high resolution multibeam bathymetry and LIDAR topography grid of Resurrection Bay, and an initial thickness of slide material based on pre- and post-earthquake bathymetry difference maps. For

  6. Automated event generation for loop-induced processes

    DOE PAGES

    Hirschi, Valentin; Mattelaer, Olivier

    2015-10-22

    We present the first fully automated implementation of cross-section computation and event generation for loop-induced processes. This work is integrated in the MadGraph5_aMC@NLO framework. We describe the optimisations implemented at the level of the matrix element evaluation, phase space integration and event generation, allowing for the simulation of large-multiplicity loop-induced processes. Along with some selected differential observables, we illustrate our results with a table showing inclusive cross-sections for all loop-induced hadronic scattering processes with up to three final states in the SM as well as for some relevant 2 → 4 processes. Furthermore, many of these are computed here for the first time.

  7. Tsunami generation and associated waves in the water column and seabed due to an asymmetric earthquake motion within an anisotropic substratum

    NASA Astrophysics Data System (ADS)

    Bagheri, Amirhossein; Greenhalgh, Stewart; Khojasteh, Ali; Rahimian, Mohammad; Attarnejad, Reza

    2016-10-01

    In this paper, closed-form integral expressions are derived to describe how surface gravity waves (tsunamis) are generated when general asymmetric ground displacement (due to earthquake rupturing), involving both horizontal and vertical components of motion, occurs at arbitrary depth within the interior of an anisotropic subsea solid beneath the ocean. In addition, we compute the resultant hydrodynamic pressure within the seawater and the elastic wavefield within the seabed at any position. The method of potential functions and an integral transform approach, accompanied by a special contour integration scheme, are adopted to handle the equations of motion and produce the numerical results. The formulation accounts for any number of possible acoustic-gravity modes and is valid for both shallow and deep water situations as well as for any focal depth of the earthquake source. Phase and group velocity dispersion curves are developed for surface gravity (tsunami mode), acoustic-gravity, Rayleigh, and Scholte waves. Several asymptotic cases which arise from the general analysis are discussed and compared to existing solutions. The role of effective parameters such as hypocenter location and frequency of excitation is examined and illustrated through several figures which show the propagation pattern in the vertical and horizontal directions. Attention is directed to the unexpected contribution from the horizontal ground motion. The results have important application in several fields such as tsunami hazard prediction, marine seismology, and offshore and coastal engineering. In a companion paper, we examine the effect of ocean stratification on the appearance and character of internal and surface gravity waves.

  8. Solution-Processed Carbon Nanotube True Random Number Generator.

    PubMed

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
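
    The bit-generation strategy described above — digitizing thermal noise and post-processing it into an unbiased stream — can be illustrated in software. The following is a toy Python model of the idea only; the simulated SRAM cell, the mismatch value, and the von Neumann debiasing step are our illustrative assumptions, not details from the paper:

    ```python
    import random

    random.seed(0)  # reproducible demo

    def noisy_sram_bit(mismatch=0.3, noise_sigma=1.0):
        """Toy model of one SRAM power-up: the cell resolves to 1 only when
        thermal noise exceeds the cell's fixed transistor-mismatch offset."""
        return 1 if random.gauss(0.0, noise_sigma) > mismatch else 0

    def von_neumann_debias(bits):
        """Classic post-processing: map pairs 01 -> 0, 10 -> 1, drop 00/11,
        turning a biased but independent stream into an unbiased one."""
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    raw = [noisy_sram_bit() for _ in range(20000)]   # biased raw stream
    bits = von_neumann_debias(raw)
    print(f"raw bias: {sum(raw)/len(raw):.3f}, debiased: {sum(bits)/len(bits):.3f}")
    ```

    The mismatch offset biases the raw stream toward 0, mimicking a cell whose transistors are not perfectly matched; the pairwise debiasing step discards correlated pairs and restores a roughly 50/50 output.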

  9. Cytotoxicity of Doxycycline Effluent Generated by the Fenton Process

    PubMed Central

    Borghi, Alexandre Augusto; Stephano, Marco Antônio; Monteiro de Souza, Paula; Alves Palma, Mauri Sérgio

    2014-01-01

    This study aims at determining the Minimum Inhibitory Concentration against Escherichia coli ATCC 25922, and the cytotoxicity to L929 cells (ATCC CCL-1), of the waste generated by doxycycline degradation via the Fenton process. The process has shown promise for this treatment, mainly because the waste showed no relevant inhibitory effect on the test organism and no cytotoxicity to L929 cells, demonstrating that the antibiotic properties were inactivated. PMID:25379532

  10. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, with the major goal of transforming the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, which is an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time streaming of data, and then the software performs the phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated, using a probabilistic, non-linear exploration algorithm. Then, the software is able to automatically provide three different magnitude estimates. 
First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude
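
    The front of the processing chain described above — automatic P-wave detection on streaming data — can be illustrated with a classic short-term-average/long-term-average (STA/LTA) energy-ratio trigger. This is our assumption, since the abstract does not name the detector; the Python sketch below picks the onset on a synthetic trace:

    ```python
    import numpy as np

    def sta_lta_pick(trace, dt, sta_win=0.5, lta_win=10.0, threshold=4.0):
        """Return the index of the first sample where the ratio of short-term
        to long-term average signal energy exceeds the trigger threshold."""
        n_sta, n_lta = int(sta_win / dt), int(lta_win / dt)
        csum = np.cumsum(trace ** 2)
        for i in range(n_lta, len(trace) - n_sta):
            sta = (csum[i + n_sta] - csum[i]) / n_sta
            lta = (csum[i] - csum[i - n_lta]) / n_lta
            if lta > 0 and sta / lta > threshold:
                return i
        return None

    # Synthetic trace: 20 s of background noise, then a much stronger "P arrival"
    rng = np.random.default_rng(0)
    dt = 0.01
    trace = rng.normal(0.0, 1.0, 4000)
    trace[2000:] += rng.normal(0.0, 8.0, 2000)

    pick = sta_lta_pick(trace, dt)
    print(f"picked onset at t = {pick * dt:.2f} s")  # near the true 20 s onset
    ```

    Production pickers (and presumably the software described) add band-pass filtering, coincidence logic across stations, and pick-quality weighting, but the energy-ratio idea is the common core.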

  11. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California, earthquake to distinguish between two end-member hypotheses: an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements of both end-member models and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California, earthquake to determine whether theoretical models of static Coulomb stress changes correctly predict the fault-plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted, because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of mainshock slip distribution, background stress field, and aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose that a triggering threshold failure stress change exists, above which the earthquake nucleation process begins, although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. 
Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

  12. Frictional Heat Generation and Slip Duration Estimated From Micro-fault in an Exhumed Accretionary Complex and Their Relations to the Scaling Law for Slow Earthquakes

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Morita, K.; Okubo, M.; Hamada, Y.; Lin, W.; Hirose, T.; Kitamura, M.

    2015-12-01

    Fault motion has been estimated from the diffusion pattern of frictional heating recorded in the geology (e.g., Fulton et al., 2012). The same record of a deeper subduction plate interface can be observed in micro-faults within an exhumed accretionary complex. In this study, we focused on a micro-fault within the Cretaceous Shimanto Belt, SW Japan, to estimate fault motion from the frictional heating diffusion pattern. A carbonaceous material concentrated layer (CMCL) with ~2 m of thickness is observed in the study area. Some micro-faults cut the CMCL. The thickness of a fault is about 3.7 mm. Injection veins and dilatant fractures were observed in thin sections, suggesting that high fluid pressure existed. Samples 10 cm long were collected to measure the distribution of vitrinite reflectance (Ro) as a function of distance from the center of the micro-fault. Ro of the host rock was ~1.0%. A diffusion pattern was detected as a decrease in Ro from ~1.2% to ~1.1%. The characteristic diffusion distance is ~4-~9 cm. We conducted a grid search to find the optimal frictional heat generation per unit area (Q, the product of friction coefficient, normal stress and slip velocity) and slip duration (t) to fit the diffusion pattern. Thermal diffusivity (0.98×10⁻⁸ m²/s) and thermal conductivity (2.0 W/mK) were measured. As a result, 2000-2500 J/m² of Q and 63000-126000 s of t were estimated. Moment magnitudes (M0) of slow earthquakes (slow EQs) follow a scaling law with slip duration, and its dimension is different from that for normal earthquakes (normal EQs) (Ide et al., 2007). The slip duration estimated in this study (~10⁴-~10⁵ s) is consistent with an M0 of 4-5, and never fits the scaling law for normal EQs. Heat generation can be inverted from an M0 of 4-5, corresponding to ~10⁸-~10¹¹ J, which is consistent with a rupture area of 10⁵-10⁸ m² in this study. The comparisons in heat generation and slip duration between geological measurements and geophysical remote observations give us the estimation of rupture area, M0, and
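
    The grid search described above can be sketched numerically. The Python fragment below is a minimal illustration, not the authors' code: it treats Q as a constant heat flux on the fault plane (a simplifying assumption on our part; the paper reports Q in J/m²), generates a synthetic temperature-anomaly profile with an explicit 1-D conduction scheme using the measured thermal properties, and recovers the (Q, t) pair by brute-force misfit minimization:

    ```python
    import numpy as np

    kappa = 0.98e-8          # thermal diffusivity, m^2/s (value measured in the study)
    k_cond = 2.0             # thermal conductivity, W/(m K) (measured)
    rho_c = k_cond / kappa   # volumetric heat capacity, J/(m^3 K)

    def peak_profile(q, duration, x, dt=500.0):
        """Explicit 1-D conduction: a fault plane at x = 0 heats one side of the
        wall rock at constant flux q (W/m^2) for `duration` seconds; returns the
        temperature anomaly at positions x when slip stops."""
        dx = x[1] - x[0]
        r = kappa * dt / dx**2               # must be < 0.5 for stability
        T = np.zeros_like(x)
        for _ in range(int(duration / dt)):
            T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
            T[0] += 2 * r * (T[1] - T[0]) + q * dt / (rho_c * dx)  # flux boundary
        return T

    x = np.arange(0.0, 0.15, 0.005)                     # 0-15 cm from the fault
    T_obs = peak_profile(q=0.02, duration=1.0e5, x=x)   # synthetic "observation"

    # Brute-force grid search for the (q, t) pair minimizing the L2 misfit,
    # mirroring the study's fit of the Ro diffusion pattern
    best = min(((q, t) for q in np.linspace(0.005, 0.05, 10)
                       for t in np.logspace(4, 6, 13)),
               key=lambda p: np.sum((peak_profile(p[0], p[1], x) - T_obs) ** 2))
    print(best)
    ```

    With these (invented) inputs the search recovers the flux-duration pair used to build the synthetic profile; note that q·t ≈ 2000 J/m², the same order as the Q values reported in the abstract.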

  13. Generating and Using Examples in the Proving Process

    ERIC Educational Resources Information Center

    Sandefur, J.; Mason, J.; Stylianides, G. J.; Watson, A.

    2013-01-01

    We report on our analysis of data from a dataset of 26 videotapes of university students working in groups of 2 and 3 on different proving problems. Our aim is to understand the role of example generation in the proving process, focusing on deliberate changes in representation and symbol manipulation. We suggest and illustrate four aspects of…

  14. AUTOMATED LITERATURE PROCESSING HANDLING AND ANALYSIS SYSTEM--FIRST GENERATION.

    ERIC Educational Resources Information Center

    Redstone Scientific Information Center, Redstone Arsenal, AL.

    THE REPORT PRESENTS A SUMMARY OF THE DEVELOPMENT AND THE CHARACTERISTICS OF THE FIRST GENERATION OF THE AUTOMATED LITERATURE PROCESSING, HANDLING AND ANALYSIS (ALPHA-1) SYSTEM. DESCRIPTIONS OF THE COMPUTER TECHNOLOGY OF ALPHA-1 AND THE USE OF THIS AUTOMATED LIBRARY TECHNIQUE ARE PRESENTED. EACH OF THE SUBSYSTEMS AND MODULES NOW IN OPERATION ARE…

  15. Study Of The Rupture Process Of The 2015 Mw7.8 Izu-Bonin Earthquake And Its Implication To Deep-Focus Earthquake Genesis.

    NASA Astrophysics Data System (ADS)

    Jian, P. R.; Hung, S. H.; Meng, L.

    2015-12-01

    On May 30, 2015, a major Mw 7.8 deep earthquake occurred at the base of the mantle transition zone (MTZ), approximately 680 km deep within the Pacific Plate, which subducts westward under the Philippine Sea Plate along the Izu-Bonin trench. A global P-wave tomographic image indicates that a tabular high-velocity structure, delineated by velocities ~1% faster than the ambient mantle, plunges nearly vertically to a depth of at most 600 km and afterward flattens and stagnates within the MTZ. Almost all the deep earthquakes in this region are clustered inside this fast anomaly, corresponding to the cold core of the subducting slab. Those occurring at depths between 400~500 km, close to the hinge of the bending slab, show down-dip compressional focal mechanisms and reflect episodic release of compressive strain accumulated in the slab. The 2015 deep event, however, separated from the others, occurred uniquely near the base of the lithosphere with a down-dip extension mechanism, consistent with the notion that the outer portion of the folded slab experiences extensional bending stress. Here we perform 3D MUSIC back-projection (BP) rupture imaging for this isolated deep event using P and pP waveforms individually from the European, North American and Australian array data. By integrating P- and pP-BP images at frequencies of 0.1-1 Hz obtained from the three array observations with different azimuths, we first ascertain that the most plausible fault plane is the SW-dipping subhorizontal one. Then, by back-projecting higher-frequency waveforms at 1-1.5 Hz onto the obtained fault plane, we find the rupture initially propagates slowly along the strike (SW direction), and makes a turn to the NNW direction at ~12 s after the onset of rupture. The MUSIC pseudospectrum over the total 20 s rupture duration reveals that most seismic energy radiation takes place in the initial 8 s of the first rupture along the strike, a 10-15 km long region, while the along-updip second rupture lasting for 6-10 s has a rupture

  16. Source Rupture Process of the 2016 Kumamoto Prefecture, Japan, Earthquake Derived from Near-Source Strong-Motion Records

    NASA Astrophysics Data System (ADS)

    Zheng, A.; Zhang, W.

    2016-12-01

    On 15 April 2016, a great earthquake with magnitude Mw 7.1 occurred in Kumamoto prefecture, Japan. The focal mechanism solution released by F-net located the hypocenter at 130.7630°E, 32.7545°N, at a depth of 12.45 km, and the strike, dip, and rake angle of the fault were N226°E, 84° and -142°, respectively. The epicenter distribution and focal mechanisms of aftershocks implied that the mechanism of the mainshock might have changed during the source rupture process, thus a single focal mechanism was not enough to explain the observed data adequately. In this study, based on the inversion result of GNSS and InSAR surface deformation, with active structures for reference, we construct a finite fault model with focal mechanism changes, and derive the source rupture process by a multi-time-window linear waveform inversion method using the strong-motion data (0.05-1.0 Hz) obtained by K-NET and KiK-net of Japan. Our result shows that the Kumamoto earthquake was a right-lateral strike-slip rupture event along the Futagawa-Hinagu fault zone, and the seismogenic fault is divided into a northern segment and a southern one. The strike and dip of the northern segment are N235°E and 60°, respectively; for the southern one, they are N205°E and 72°. The depth range of the fault model is consistent with the depth distribution of aftershocks, and the slip on the fault plane mainly concentrates on the northern segment, where the maximum slip is about 7.9 m. The rupture process of the whole fault continues for approximately 18 s, and the total seismic moment released is 5.47×10¹⁹ N·m (Mw 7.1). In addition, the essential features of the distribution of PGV and PGA synthesized from the inversion result are similar to those of the observed PGA and seismic intensity.
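
    At its core, a multi-time-window linear waveform inversion solves an overdetermined linear system G·m = d, where the columns of G hold subfault Green's functions and m holds the slips. This hedged Python sketch (synthetic Green's functions and slip values of our own invention, not the study's data) shows that core step with ordinary least squares; real inversions add smoothing and non-negativity constraints:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_obs, n_sub = 40, 6                      # observation samples, subfaults
    G = rng.normal(size=(n_obs, n_sub))       # synthetic Green's functions
    slip_true = np.array([0.0, 1.5, 7.9, 3.0, 0.5, 0.0])  # invented slips (m)
    d = G @ slip_true + rng.normal(0.0, 0.01, n_obs)      # "observed" records

    # Core of a linear waveform inversion: least-squares solve of G m = d
    slip_est, *_ = np.linalg.lstsq(G, d, rcond=None)
    print(np.round(slip_est, 2))
    ```

    With low noise and a well-conditioned G, the recovered slips closely match the inputs; in practice the system is regularized because neighboring subfaults and time windows produce nearly collinear columns.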

  17. Use of microearthquakes in the study of the mechanics of earthquake generation along the San Andreas fault in central California

    Eaton, J.P.; Lee, W.H.K.; Pakiser, L.C.

    1970-01-01

    A small, dense network of independently recording portable seismograph stations was used to delineate the slip surface associated with the 1966 Parkfield-Cholame earthquake by precise three-dimensional mapping of the hypocenters of its aftershocks. The aftershocks were concentrated in a very narrow vertical zone beneath or immediately adjacent to the zone of surface fracturing that accompanied the main shock. Focal depths ranged from less than 1 km to a maximum of 15 km. The same type of portable network was used to study microearthquakes associated with an actively creeping section of the San Andreas fault south of Hollister during the summer of 1967. Microearthquake activity during the 6-week operation of this network was dominated by aftershocks of a magnitude-4 earthquake that occurred within the network near Bear Valley on July 23. Most of the aftershocks were concentrated in an equidimensional region about 2.5 km across that contained the hypocenter of the main shock. The zone of concentrated aftershocks was centered near the middle of the rift zone at a depth of about 3.5 km. Hypocenters of other aftershocks outlined a 25 km long zone of activity beneath the actively creeping strand of the fault, extending from the surface to a depth of about 13 km. A continuing study of microearthquakes along the San Andreas, Hayward, and Calaveras faults between Hollister and San Francisco has been under way for about 2 years. The permanent telemetered network constructed for this purpose has grown from about 30 stations in early 1968 to about 45 stations in late 1969. Microearthquakes between Hollister and San Francisco are heavily concentrated in narrow, nearly vertical zones along sections of the Sargent, San Andreas, and Calaveras faults. Focal depths range from less than 1 km to about 14 km. © 1970.

  18. Mines and mineral processing facilities in the vicinity of the March 11, 2011, earthquake in northern Honshu, Japan

    Menzie, W. David; Baker, Michael S.; Bleiwas, Donald I.; Kuo, Chin

    2011-01-01

    U.S. Geological Survey data indicate that the area affected by the March 11, 2011, magnitude 9.0 earthquake and associated tsunami is home to nine cement plants, eight iodine plants, four iron and steel plants, four limestone mines, three copper refineries, two gold refineries, two lead refineries, two zinc refineries, one titanium dioxide plant, and one titanium sponge processing facility. These facilities have the capacity to produce the following percentages of the world's nonfuel mineral production: 25 percent of iodine, 10 percent of titanium sponge (metal), 3 percent of refined zinc, 2.5 percent of refined copper, and 1.4 percent of steel. In addition, the nine cement plants contribute about one-third of Japan's annual cement production. The iodine is a byproduct of natural gas production at the Minami Kanto gas field, east of Tokyo in Chiba Prefecture. Japan is the world's second-leading producer (after Chile) of iodine, which is processed in seven nearby facilities.

  19. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  20. Declarative Business Process Modelling and the Generation of ERP Systems

    NASA Astrophysics Data System (ADS)

    Schultz-Møller, Nicholas Poul; Hølmer, Christian; Hansen, Michael R.

    We present an approach to the construction of Enterprise Resource Planning (ERP) systems, which is based on the Resources, Events and Agents (REA) ontology. This framework deals with processes involving exchange and flow of resources in a declarative, graphically based manner, describing what the major entities are rather than how they engage in computations. We show how to develop a domain-specific language on the basis of REA, and a tool which can automatically generate running web applications. A main contribution is a proof-of-concept showing that business-domain experts can generate their own applications without worrying about implementation details.

  1. A viscous flow analysis for the tip vortex generation process

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.; Briley, W. R.

    1979-01-01

    A three dimensional, forward-marching, viscous flow analysis is applied to the tip vortex generation problem. The equations include a streamwise momentum equation, a streamwise vorticity equation, a continuity equation, and a secondary flow stream function equation. The numerical method used combines a consistently split linearized scheme for parabolic equations with a scalar iterative ADI scheme for elliptic equations. The analysis is used to identify the source of the tip vortex generation process, as well as to obtain detailed flow results for a rectangular planform wing immersed in a high Reynolds number free stream at 6 degree incidence.

  2. Pre-earthquake Magnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J. A.; Freund, F. T.

    2015-12-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically, and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  3. Analog earthquakes

    SciT

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  4. The Importance of Long Wavelength Processes in Generating Landscapes

    NASA Astrophysics Data System (ADS)

    Roberts, Gareth G.; White, Nicky

    2017-04-01

    The processes responsible for generating landscapes observed on Earth and elsewhere are poorly understood. For example, the relative importance of long (>10 km) and short wavelength erosional processes in determining the evolution of topography is debated. Much work has focused on developing an observational and theoretical framework for the evolution of longitudinal river profiles (i.e. elevation as a function of streamwise distance), which probably sets the pace of erosion in low-mid latitude continents. A large number of geomorphic studies emphasize the importance of short wavelength processes in sculpting topography (e.g. waterfall migration, interaction of biota and the solid Earth, hill slope evolution). However, it is not clear if these processes scale to generate topography observed at longer (>10 km) wavelengths. At wavelengths of tens to thousands of kilometers, topography is generated by modification of the lithosphere (e.g. shortening, extension, flexure) and by sub-plate processes (e.g. dynamic support). Inversion of drainage patterns suggests that uplift rate histories can be reliably recovered at these long wavelengths using simple erosional models (e.g. stream power). Calculated uplift and erosion rate histories are insensitive to short wavelength (<10 km) or rapid (<100 ka) environmental changes (e.g. biota, precipitation, lithology). One way to examine the relative importance of short and long wavelength processes in generating topography is to transform river profiles into distance-frequency space. We calculate the wavelet power spectrum of a suite of river profiles and examine their spectral content. Big rivers in North America (e.g. Colorado, Rio Grande) and Africa (e.g. Niger, Orange) have a red noise spectrum (i.e. power inversely proportional to wavenumber-squared) at wavelengths > 100 km. More than 90% of river profile elevations in our inventory are determined at these wavelengths. 
At shorter wavelengths spectra more closely resemble pink noise
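
    The red-noise behavior described above — spectral power falling off as the inverse square of wavenumber — is easy to reproduce on a synthetic profile. The following Python check is our own illustration, substituting a plain periodogram for the authors' wavelet analysis; a random walk is the canonical red-noise series:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 4096
    profile = np.cumsum(rng.normal(size=n))   # random walk: a red-noise "profile"

    # Periodogram: power at each wavenumber
    spec = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
    k = np.fft.rfftfreq(n)

    # Fit a power law P ~ k^slope over the low-wavenumber band
    mask = (k > 0) & (k < 0.1)
    slope, _ = np.polyfit(np.log(k[mask]), np.log(spec[mask]), 1)
    print(f"spectral slope ≈ {slope:.2f}")    # red noise: close to -2
    ```

    A slope near -2 in log-log space is the signature the abstract describes; a profile dominated by short-wavelength roughness would instead show a flatter (whiter or pinker) spectrum.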

  5. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators are aimed at generating synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clusters, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
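
    A minimal simulator of the kind described — steady tectonic loading, a failure threshold per fault, and Coulomb-style stress transfer between faults — can be written in a few lines. This Python toy is our own sketch with invented parameters, not the authors' model; it produces a synthetic catalog in which stress transfer can trigger cascades of events:

    ```python
    import random

    random.seed(1)
    N_FAULTS = 10
    LOAD_RATE = 1.0        # tectonic loading per year (arbitrary stress units)
    STRENGTH = 100.0       # failure threshold
    DROP = 80.0            # coseismic stress drop on the failing fault
    TRANSFER = 0.1         # fraction of the drop redistributed to other faults

    stress = [random.uniform(0, STRENGTH) for _ in range(N_FAULTS)]
    catalog = []           # (time, fault) pairs

    t = 0.0
    while t < 2000.0:
        t += 1.0
        stress = [s + LOAD_RATE for s in stress]   # slow tectonic loading
        # Cascade: each failure raises stress on the others and may trigger them
        failed = True
        while failed:
            failed = False
            for i, s in enumerate(stress):
                if s >= STRENGTH:
                    stress[i] -= DROP
                    for j in range(N_FAULTS):
                        if j != i:
                            stress[j] += TRANSFER * DROP / (N_FAULTS - 1)
                    catalog.append((t, i))
                    failed = True

    print(len(catalog), "events in 2000 yr")
    ```

    Because each event removes more stress than it redistributes, the cascades terminate, and the long-run event rate is set by the loading rate; raising TRANSFER toward 1 pushes the system toward larger triggered clusters, the kind of clustering behavior the minimal-model study set out to examine.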

  6. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    Moran, S.C.

    2003-01-01

    The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, the vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for fluids and/or volatiles migrating from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.

  7. Evaluating the Process of Generating a Clinical Trial Protocol

    PubMed Central

    Franciosi, Lui G.; Butterfield, Noam N.; MacLeod, Bernard A.

    2002-01-01

    The research protocol is the principal document in the conduct of a clinical trial. Its generation requires knowledge about the research problem, the potential experimental confounders, and the relevant Good Clinical Practices for conducting the trial. However, such information is not always available to authors during the writing process. A checklist of over 80 items has been developed to better understand the considerations made by authors in generating a protocol. It is based on the most cited requirements for designing and implementing the randomised controlled trial. Items are categorised according to the trial's research question, experimental design, statistics, ethics, and standard operating procedures. This quality assessment tool evaluates the extent that a generated protocol deviates from the best-planned clinical trial.

  8. Generation of low-temperature air plasma for food processing

    NASA Astrophysics Data System (ADS)

    Stepanova, Olga; Demidova, Maria; Astafiev, Alexander; Pinchuk, Mikhail; Balkir, Pinar; Turantas, Fulya

    2015-11-01

    The project is aimed at developing the physical and technical foundations of generating plasma with low gas temperature at atmospheric pressure for food industry needs. Plasma is known to have an antimicrobial effect on numerous types of microorganisms, including those that cause food spoilage. In this work an original experimental setup has been developed for the treatment of different foods. It is based on initiating a corona or dielectric-barrier discharge in a chamber filled with ambient air combined with a certain helium admixture. The experimental setup provides various conditions of discharge generation (including discharge gap geometry, supply voltage, gas flow velocity, helium admixture content in air and working pressure) and allows for the measurement of the electrical discharge parameters. Some recommendations on choosing optimal discharge generation conditions for experiments on plasma food processing are developed.

  9. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process in which stresses are built up by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process from a 1 min to a million-year time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time step algorithm, recreates the deformation process as observed naturally over single and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred kilometres of mantle landward of the trench, and not by afterslip localized at the fault as is currently believed. Our model replicates the centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, such as the Great Chile 1960 Earthquake.
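The rate-and-state friction invoked in this abstract is commonly written with the Dieterich "aging" state-evolution law. A minimal sketch with illustrative parameter values (not those of the cited model):

```python
import math

def rsf_mu(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=1e-4):
    """Rate-and-state friction coefficient (Dieterich form):
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc). Parameter values are illustrative."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_dot(v, theta, dc=1e-4):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/dc."""
    return 1.0 - v * theta / dc

# At steady state theta = dc/v, so mu_ss = mu0 + (a - b)*ln(v/v0);
# with a < b (velocity weakening) friction drops as slip speeds up,
# which is what permits spontaneous stick-slip events in such models.
v = 1e-5
mu_ss = rsf_mu(v, theta=1e-4 / v)  # below mu0 = 0.6 because a - b < 0
```

With a > b the fault would be velocity strengthening and slide stably instead of generating earthquake sequences.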

  10. Earthquake processes in the Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, region 1954-1959

    NASA Astrophysics Data System (ADS)

    Doser, Diane I.

    1986-11-01

    The 1954 Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, sequence produced the most extensive pattern of surface faults in the intermountain region in historic time. Five earthquakes of M>6.0 occurred during the first 6 months of the sequence, including the December 16, 1954, Fairview Peak (M = 7.1) and Dixie Valley (M = 6.8) earthquakes. Three 5.5≤M≤6.5 earthquakes occurred in the region in 1959, but none exhibited surface faulting. The results of the modeling suggest that the M>6.5 earthquakes of this sequence are complex events best fit by multiple source-time functions. Although the observed surface displacements for the July and August 1954 events showed only dip-slip motion, the fault plane solutions and waveform modeling suggest the earthquakes had significant components of right-lateral strike-slip motion (rakes of -135° to -145°). All of the earthquakes occurred along high-angle faults with dips of 40° to 70°. Seismic moments for individual subevents of the sequence range from 8.0 × 10^17 to 2.5 × 10^19 N m. Stress drops for the subevents, including the Fairview Peak subevents, were between 0.7 and 6.0 MPa.
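The subevent moments quoted above can be converted to moment magnitudes with the standard relation Mw = (2/3)(log10 M0 − 9.1), for M0 in N m; a small sketch:

```python
import math

def moment_magnitude(m0):
    """Moment magnitude from seismic moment m0 in N*m: Mw = (2/3)*(log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Subevent moments quoted in the abstract, 8.0e17 to 2.5e19 N*m:
print(round(moment_magnitude(8.0e17), 1))  # about Mw 5.9
print(round(moment_magnitude(2.5e19), 1))  # about Mw 6.9
```

The resulting Mw 5.9 to 6.9 subevent range is consistent with the M 6-7 class events described in the abstract.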

  11. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  12. Rupture process of 2016, 25 January earthquake, Alboran Sea (South Spain, Mw= 6.4) and aftershocks series

    NASA Astrophysics Data System (ADS)

    Buforn, E.; Pro, C.; del Fresno, C.; Cantavella, J.; Sanz de Galdeano, C.; Udias, A.

    2016-12-01

    We have studied the rupture process of the 25 January 2016 earthquake (Mw = 6.4) that occurred in the Alboran Sea, South Spain. The main shock, a foreshock and the largest aftershocks (Mw = 4.5) have been relocated using the NonLinLoc algorithm. The results show a NE-SW distribution of foci at shallow depth (less than 15 km). For the main shock, the focal mechanism has been obtained from slip inversion over the rupture plane using teleseismic data, corresponding to left-lateral strike-slip motion. The rupture starts at 7 km depth and propagates upward with a complex source time function. To obtain a more detailed source time function and to validate the results from teleseismic data, we have used the Empirical Green's Function (EGF) method at regional distances. Finally, the results of the directivity effect from teleseismic Rayleigh waves and the EGF method are consistent with rupture propagation to the NE. These results are interpreted in terms of the main geological features of the region.

  13. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Grenander, Sven; Evensen, Ken

    2011-01-01

    • In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
    • The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
    • Applicable human reliability metrics for performing these atomic-level tasks are available.
    • The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
    • The PRA models are executed using data from human reliability data banks.
    • The Periodic Table is related to the PRA models via Fault Links.

  14. Injection-induced earthquakes

    Ellsworth, William L.

    2013-01-01

    Earthquakes in unusual locations have become an important topic of discussion in both North America and Europe, owing to the concern that industrial activity could cause damaging earthquakes. It has long been understood that earthquakes can be induced by impoundment of reservoirs, surface and underground mining, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground formations. Injection-induced earthquakes have, in particular, become a focus of discussion as the application of hydraulic fracturing to tight shale formations is enabling the production of oil and gas from previously unproductive formations. Earthquakes can be induced as part of the process to stimulate the production from tight shale formations, or by disposal of wastewater associated with stimulation and production. Here, I review recent seismic activity that may be associated with industrial activity, with a focus on the disposal of wastewater by injection in deep wells; assess the scientific understanding of induced earthquakes; and discuss the key scientific challenges to be met for assessing this hazard.

  15. P-wave fault-plane solutions and the generation of surface waves by earthquakes in the western United States

    NASA Astrophysics Data System (ADS)

    Patton, Howard J.

    1985-08-01

    Surface waves recorded at regional distances are used to study the source mechanisms of seven earthquakes in the western United States with magnitudes between 4.3 and 5.5. The source mechanisms of events in or on the margins of the Basin and Range show T-axes with an azimuth of N85°W ± 16° and a plunge of 12° ± 16°. Of the seven events, four have P-wave solutions that are inconsistent with the surface-wave observations. Azimuths of the T-axis obtained from the surface-wave mechanisms and from the P-wave solutions differ by up to 45°. These events have dip-slip or oblique-slip mechanisms, and the source depths for three of the events are 5 km or less. Their source mechanisms and small magnitudes make identification of the P-wave first motion difficult, owing to the poor signal-to-noise ratio of the initial P wave and the close arrival of pP or sP phases with significant amplitude. We suggest that misidentification of the P-wave first motion and distortion of the body-wave ray paths by non-planar structure were sources of error in determining the nodal planes for these events.

  16. Nonlinear processes generated by supercritical tidal flow in shallow straits

    NASA Astrophysics Data System (ADS)

    Bordois, Lucie; Auclair, Francis; Paci, Alexandre; Dossmann, Yvan; Nguyen, Cyril

    2017-06-01

    Numerical experiments have been carried out using a nonhydrostatic and non-Boussinesq regional oceanic circulation model to investigate the nonlinear processes generated by supercritical tidal flow in shallow straits. Our approach relies on idealized direct numerical simulations inspired by oceanic observations. By analyzing a large set of simulations, a regime diagram is proposed for the nonlinear processes generated in the lee of these straits. The results show that the topography shape of the strait plays a crucial role in the formation of internal solitary waves (ISWs) and in the occurrence of local breaking events. Both of these nonlinear processes are important turbulence producing phenomena. The topographic control, observed in mode 1 ISW formation in previous studies [Y. Dossmann, F. Auclair, and A. Paci, "Topographically induced internal solitary waves in a pycnocline: Primary generation and topographic control," Phys. Fluids 25, 066601 (2013) and Y. Dossmann et al., "Topographically induced internal solitary waves in a pycnocline: Ultrasonic probes and stereo-correlation measurements," Phys. Fluids 26, 056601 (2014)], is clearly reproducible for mode-2 ISW above shallow straits. Strong plunging breaking events are observed above "narrow" straits (straits with a width less than mode 1 wavelength) when the fluid velocity exceeds the local mode 1 wave speed. These results are a step towards future works on vertical mixing quantification and localization around complex strait areas.
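The supercriticality criterion described above (fluid velocity exceeding the local mode-1 wave speed) can be checked with the standard two-layer long-wave approximation. A sketch with illustrative stratification values, not those of the cited simulations:

```python
import math

def mode1_speed(h1, h2, rho1, rho2, g=9.81):
    """Linear long-wave speed (m/s) of the first internal mode in a two-layer
    fluid: c1 = sqrt(g' * h1*h2/(h1+h2)), reduced gravity g' = g*(rho2-rho1)/rho2."""
    g_prime = g * (rho2 - rho1) / rho2
    return math.sqrt(g_prime * h1 * h2 / (h1 + h2))

def froude(u, c1):
    """Flow is supercritical (lee-wave/breaking regime) when Fr = u/c1 > 1."""
    return u / c1

# Illustrative strait: 50 m upper layer over 450 m, 2 kg/m^3 density step.
c1 = mode1_speed(50.0, 450.0, 1025.0, 1027.0)  # roughly 0.9 m/s
supercritical = froude(1.2, c1) > 1.0          # a 1.2 m/s tidal flow exceeds c1
```

In the continuously stratified simulations of the abstract the relevant speed is the local mode-1 eigenvalue rather than this two-layer estimate, but the Fr > 1 criterion is the same.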

  17. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were:
    * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that a potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist.
    * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults.
    * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake.
    * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it.
    * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  18. Rupture processes of the 2012 September 5 Mw 7.6 Nicoya, Costa Rica earthquake constrained by improved geodetic and seismological observations

    NASA Astrophysics Data System (ADS)

    Liu, Chengli; Zheng, Yong; Xiong, Xiong; Wang, Rongjiang; López, Allan; Li, Jun

    2015-10-01

    On 2012 September 5, the anticipated interplate thrust earthquake ruptured beneath the Nicoya peninsula in northwestern Costa Rica, close to the Middle America trench, with a magnitude Mw 7.6. Extensive co-seismic observations were provided by dense near-field strong ground motion and Global Positioning System (GPS) networks and by teleseismic recordings from global seismic networks. The wealth of data sets available for the 2012 Mw 7.6 Nicoya earthquake provides a unique opportunity to investigate the details of its rupture process. By implementing a non-linear joint inversion of high-rate GPS waveforms, additional static GPS offsets, strong-motion data and teleseismic body waveforms, we obtained a robust and accurate rupture model of the 2012 Mw 7.6 Nicoya earthquake. The earthquake is dominantly pure thrust with a maximum slip of 3.5 m, and the main large-slip patch is located below the hypocentre, spanning ˜50 km along dip and ˜110 km along strike. The static stress drop is about 3.4 MPa. The total seismic moment of our preferred model is 3.46 × 10^20 N m, which gives Mw = 7.6. Due to the fast rupture velocity, most of the seismic moment was released within 70 s. The largest slip patch directly overlaps the interseismic locked region identified by geodetic observations and extends downdip to the intersection with the upper-plate Moho. We also find a complementary pattern between the distribution of aftershocks and the co-seismic rupture; most aftershocks are located in the crust of the upper plate and are possibly induced by the stress change caused by the large slip patch.
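The quoted static stress drop can be roughly cross-checked from the moment and slip-patch dimensions with the circular-crack relation; an order-of-magnitude sketch (the authors' own estimate differs because the rupture is neither circular nor uniform):

```python
def stress_drop_circular(m0, area):
    """Order-of-magnitude static stress drop (Pa) from seismic moment m0 (N*m)
    and rupture area (m^2), via the circular-crack (Eshelby) relation
    d_sigma = (7/16)*M0/R**3 with A = pi*R**2, i.e. about 2.44*M0/A**1.5."""
    return 2.44 * m0 / area ** 1.5

# Values quoted in the abstract: M0 = 3.46e20 N*m, slip patch ~50 km x ~110 km.
dsigma_mpa = stress_drop_circular(3.46e20, 50e3 * 110e3) / 1e6
# Roughly 2 MPa, the same order as the ~3.4 MPa reported from the inversion.
```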

  19. Asia-Pacific Region Global Earthquake and Volcanic Eruption Risk Management (G-EVER) project and a next-generation real-time volcano hazard assessment system

    NASA Astrophysics Data System (ADS)

    Takarada, S.

    2012-12-01

    The first Workshop of the Asia-Pacific Region Global Earthquake and Volcanic Eruption Risk Management (G-EVER1) project was held in Tsukuba, Ibaraki Prefecture, Japan from February 23 to 24, 2012. The workshop focused on the formulation of strategies to reduce the risks of disasters worldwide caused by the occurrence of earthquakes, tsunamis, and volcanic eruptions. More than 150 participants attended the workshop. During the workshop, the G-EVER1 accord was approved by the participants. The Accord consists of 10 recommendations, such as enhancing collaboration, sharing resources, and making information about the risks of earthquakes and volcanic eruptions freely available and understandable. The G-EVER Hub website (http://g-ever.org) was established to promote the exchange of information and knowledge among the Asia-Pacific countries. Several G-EVER Working Groups and Task Forces were proposed. One of the working groups was tasked with developing the next-generation real-time volcano hazard assessment system. The next-generation volcano hazard assessment system is useful for volcanic eruption prediction, risk assessment, and evacuation at various eruption stages. The assessment system is planned to be developed based on volcanic eruption scenario datasets, a volcanic eruption database, and numerical simulations. Defining volcanic eruption scenarios based on precursor phenomena leading up to major eruptions of active volcanoes is quite important for the future prediction of volcanic eruptions. Compiling volcanic eruption scenarios after a major eruption is also important. A high-quality volcanic eruption database, which contains compilations of eruption dates, volumes, and styles, is important for the next-generation volcano hazard assessment system. The volcanic eruption database is developed based on past eruption results, which only represent a subset of possible future scenarios. 
Hence, different distributions from the previous deposits are mainly observed due to the differences in

  20. Rupture process and strong ground motions of the 2007 Niigataken Chuetsu-Oki earthquake -Directivity pulses striking the Kashiwazaki-Kariwa Nuclear Power Plant-

    NASA Astrophysics Data System (ADS)

    Irikura, K.; Kagawa, T.; Miyakoshi, K.; Kurahashi, S.

    2007-12-01

    The Niigataken Chuetsu-Oki earthquake occurred on July 16, 2007, offshore northwest of Kashiwazaki in Niigata Prefecture, Japan, causing severe damage: ten people dead, about 1300 injured, about 1000 collapsed houses and major lifelines suspended. In particular, strong ground motions from the earthquake struck the Kashiwazaki-Kariwa nuclear power plant (hereafter KKNPP), triggering a fire at an electric transformer and other problems such as leakage of water containing radioactive materials into the air and the sea, although the radioactivity levels of the releases were as low as the radiation that ordinary citizens would receive from the natural environment in a year. The source mechanism of this earthquake is a reverse fault, but whether the fault plane has a NE-SW strike and NW dip or a SW-NE strike and SE dip is still controversial on the basis of the aftershock distribution and geological surveys near the source. Rupture processes inverted using GPS and SAR data, tsunami data and teleseismic data have so far not succeeded in determining which fault plane moved. Strong ground motions were recorded at about 390 stations of the K-NET of NIED, including stations very close to the source area. The KKNPP is probably among the buildings and facilities closest to the source area; it has its own strong-motion network of 22 three-component accelerographs located at the ground surface, underground, and in the buildings and basements of reactors. The PGA attenuation-distance relationships, computed with the fault plane estimated from the GPS data, generally follow the empirical relations for Japan, for example Fukushima and Tanaka (1990) and Si and Midorikawa (1999), whichever fault plane, SE-dipping or NW-dipping, is assumed. However, the strong ground motions at the KKNPP site had accelerations and velocities much larger than those expected from the empirical relations. The surface motions there had PGA of more than 1200 gals, and even underground

  1. Tsunami hazard assessments with consideration of uncertain earthquake characteristics

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, an uncertainty propagation method must be adopted to determine tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology, which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, it generates consistent earthquake slip samples by means of a Karhunen-Loeve (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by Le Veque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: tsunamis generated at the site of the 2014 Chilean earthquake. We generate earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations. 
We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for
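The sampling scheme this abstract describes (a truncated K-L expansion followed by a translation process to a non-Gaussian marginal) can be sketched in miniature. The analytic Brownian-motion modes and lognormal target used here are illustrative stand-ins for the study's actual slip covariance and marginals:

```python
import math
import random

random.seed(0)

def kl_sample(n_points=50, n_modes=10):
    """Sample a zero-mean Gaussian field on [0, 1] from a truncated Karhunen-
    Loeve expansion. For a self-contained example we use the analytic K-L modes
    of Brownian motion: phi_k(x) = sqrt(2)*sin((k+1/2)*pi*x),
    lam_k = ((k+1/2)*pi)**-2."""
    z = [random.gauss(0.0, 1.0) for _ in range(n_modes)]
    xs = [i / (n_points - 1) for i in range(n_points)]
    return [sum(z[k] * math.sqrt(2.0) * math.sin((k + 0.5) * math.pi * x)
                / ((k + 0.5) * math.pi) for k in range(n_modes))
            for x in xs]

def translate_lognormal(field, scale=5.0):
    """Translation process: push the Gaussian field through exp() to obtain a
    positive slip field with a lognormal marginal (scale in metres, illustrative)."""
    return [scale * math.exp(f) for f in field]

slip = translate_lognormal(kl_sample())  # 50 positive slip values along strike
```

Each draw of the standard-normal coefficients z yields a new, spatially correlated and everywhere-positive slip sample, which is the property the abstract requires of its earthquake realizations.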

  2. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.
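The NESM framework reviewed here typically replaces the ordinary exponential with the Tsallis q-exponential, which for q > 1 has the power-law tail observed in earthquake size distributions; a minimal sketch:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q)*x]**(1/(1-q)); recovers the
    ordinary exponential as q -> 1 and has a power-law tail for q > 1
    (with the usual cutoff at zero when the bracket goes negative)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# For q = 1.5 a survival function P(>E) = exp_q(-E/E0) decays as E**(-1/(q-1)),
# i.e. a power law of exponent 2, rather than exponentially:
tail = q_exp(-100.0, 1.5)  # equals (1 + 50)**-2
```

The q value fitted to a catalogue then plays the role of an effective measure of long-range correlation in the fault system.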

  3. Generalized statistical mechanics approaches to earthquakes and tectonics

    PubMed Central

    Papadakis, Giorgos; Michas, Georgios

    2016-01-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548

  4. Dynamic modeling of stress evolution and crustal deformation associated with the seismogenic process of the 2008 Mw7.9 Wenchuan, China earthquake

    NASA Astrophysics Data System (ADS)

    Tao, W.; Wan, Y.; Wang, K.; Zeng, Y.; Shen, Z.

    2009-12-01

    We model stress evolution and crustal deformation associated with the seismogenic process of the 2008 Mw7.9 Wenchuan, China earthquake. This earthquake ruptured a section of the Longmen Shan fault, a listric fault separating the eastern Tibetan plateau on the northwest from the Sichuan basin on the southeast, with a predominantly thrust component on the southwest section of the fault. Different driving mechanisms have been proposed for the fault system: either channel flow in the lower crust, or lateral push from the eastern Tibetan plateau on the entire crust. A 2-D finite element model is devised to simulate the tectonic process and test the validity of these models. A layered viscoelastic medium is prescribed, constrained by seismological and other geophysical results and characterized by a weak lower crust in the western Tibetan plateau and a strong lower crust in the Sichuan basin. The interseismic, coseismic, and postseismic deformation processes are modeled under constraints of GPS-observed deformation fields during these time periods. Our preliminary result shows that elastic strain energy accumulates mainly around the lower part of the locked section of the seismogenic fault during the interseismic period, implying a larger stress drop at the lower part than at the upper part of the locked section, assuming a total release of the accumulated elastic stress during an earthquake. The coseismic stress change is largest in the near field in the hanging wall, offering an explanation for the extensive aftershock activity that occurred in the region after the Wenchuan mainshock. A more complete picture of stress evolution and interaction between the upper and lower crust during an earthquake cycle will be presented at the meeting.

  5. Thermal Radiation Anomalies Associated with Major Earthquakes

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Pulinets, Sergey; Kafatos, Menas C.; Taylor, Patrick

    2017-01-01

    Recent developments of remote sensing methods for Earth satellite data analysis contribute to our understanding of earthquake-related thermal anomalies. It was realized that the thermal heat fluxes over areas of earthquake preparation are a result of air ionization by radon (and other gases) and the consequent water vapor condensation on newly formed ions. Latent heat (LH) is released as a result of this process and leads to the formation of local thermal radiation anomalies (TRA), known as OLR (outgoing longwave radiation) anomalies (Ouzounov et al., 2007). We compare the LH energy, obtained by integrating the surface latent heat flux (SLHF) over area and time, with the released energies associated with these events. Extended studies of the TRA using data from the most recent major earthquakes allowed us to establish their main morphological features. It was also established that the TRA are part of a more complex chain of short-term pre-earthquake phenomena, which is explained within the framework of lithosphere-atmosphere coupling processes.

  6. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models requires new scientific knowledge concerning new digital technologies. These elements are helpful for storing a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models with Non-Uniform Rational Basis Splines (NURBS) at multiple levels of detail (Mixed and ReverseLoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  7. Characterisation of aerosol combustible mixtures generated using condensation process

    NASA Astrophysics Data System (ADS)

    Saat, Aminuddin; Dutta, Nilabza; Wahid, Mazlan A.

    2012-06-01

    An accidental release of a flammable liquid might form an aerosol (droplet and vapour mixture). This can occur through high-pressure sprays, pressurised liquid leaks, and condensation when hot vapour is rapidly cooled. Such phenomena require a fundamental investigation of mixture characterisation prior to any subsequent process such as evaporation and combustion. This paper describes a characterisation study of droplet and vapour mixtures generated in a fan-stirred vessel using a condensation technique. Aerosols of iso-octane mixtures were generated by expansion from an initially premixed gaseous fuel-air mixture. The distribution of droplets within the mixture was characterised using laser diagnostics. Nearly monosized droplet clouds were generated, and the droplet diameter was defined as a function of expansion time. The effects of changes in pressure, temperature, fuel-air fraction and expansion ratio on droplet diameter were evaluated. It is shown that aerosol generation by expansion was influenced by the initial pressure and temperature, equivalence ratio and expansion rate. All these parameters affected the onset of condensation, which in turn affected the variation in droplet diameter.

  8. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical anomalies not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power-law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power-law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require
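
    The power-law size distribution mentioned above is the Gutenberg-Richter relation. As a concrete illustration (not part of the abstract), the b-value of a catalog can be estimated with Aki's maximum-likelihood formula; a minimal Python sketch with a synthetic catalog:

```python
import numpy as np

def aki_b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness magnitude m_min."""
    m = np.asarray(magnitudes)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic catalog drawn from an exponential distribution with b = 1.0
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
print(round(aki_b_value(mags, 2.0), 1))  # close to 1.0
```

    In practice the estimate is only meaningful above the catalog's completeness magnitude, which is why the function truncates at m_min.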

  9. Make an Earthquake: Ground Shaking!

    ERIC Educational Resources Information Center

    Savasci, Funda

    2011-01-01

    The main purposes of this activity are to help students explore possible factors affecting the extent of the damage of earthquakes and learn the ways to reduce earthquake damages. In these inquiry-based activities, students have opportunities to develop science process skills and to build an understanding of the relationship among science,…

  10. Earthquake Facts

    MedlinePlus

    ... recordings of large earthquakes, scientists built large spring-pendulum seismometers in an attempt to record the long- ... are moving away from one another. The first “pendulum seismoscope” to measure the shaking of the ground ...

  11. Catalog of earthquakes along the San Andreas fault system in Central California: January-March, 1972

    Wesson, R.L.; Bennett, R.E.; Meagher, K.L.

    1973-01-01

    Numerous small earthquakes occur each day in the Coast Ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period January - March, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972b, c, d). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1,718 earthquakes in Central California. Of particular interest is a sequence of earthquakes in the Bear Valley area which contained single shocks with local magnitudes of 5.0 and 4.6. Earthquakes from this sequence make up roughly 66% of the total and are currently the subject of an interpretative study. Arrival times at 118 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 94 are telemetered stations operated by NCER. Readings from the remaining 24 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB stations from more distant events.
The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the

  12. Catalog of earthquakes along the San Andreas fault system in Central California, April-June 1972

    Wesson, R.L.; Bennett, R.E.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the Coast Ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period April - June, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972b, c, d). A catalog for the first quarter of 1972 has been prepared by Wesson and others (1972). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 910 earthquakes in Central California. A substantial portion of the earthquakes reported in this catalog represents a continuation of the sequence of earthquakes in the Bear Valley area which began in February, 1972 (Wesson and others, 1972). Arrival times at 126 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 101 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB stations from more distant events.
The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement

  13. Earthquake watch

    Hill, M.

    1976-01-01

    When the time comes that earthquakes can be predicted accurately, what shall we do with the knowledge? This was the theme of a November 1975 conference on earthquake warning and response held in San Francisco, called by Assistant Secretary of the Interior Jack W. Carlson. Invited were officials of State and local governments from Alaska, California, Hawaii, Idaho, Montana, Nevada, Utah, Washington, and Wyoming, and representatives of the news media.

  14. An automatic modular procedure to generate high-resolution earthquake catalogues: application to the Alto Tiberina Near Fault Observatory (TABOO), Italy.

    NASA Astrophysics Data System (ADS)

    Di Stefano, R.; Chiaraluce, L.; Valoroso, L.; Waldhauser, F.; Latorre, D.; Piccinini, D.; Tinti, E.

    2014-12-01

    The Alto Tiberina Near Fault Observatory (TABOO) in the upper Tiber Valley (northern Apennines) is an INGV research infrastructure devoted to the study of the preparatory processes and deformation characteristics of the Alto Tiberina Fault (ATF), a 60 km long, low-angle normal fault active since the Quaternary. The TABOO seismic network, covering an area of 120 × 120 km, consists of 60 permanent surface and 250 m deep borehole stations equipped with three-component velocimeters (0.5 s to 120 s) and strong-motion sensors. Continuous seismic recordings are transmitted in real time to the INGV, where we set up an automatic procedure that produces high-resolution earthquake catalogues (locations, magnitudes, first-motion polarities) in near-real time. A sensitive event-detection engine running on the continuous data stream is followed by advanced phase identification, arrival-time picking, and quality-assessment algorithms (MPX). Pick weights are determined from a statistical analysis of a set of predictors designed to correctly apply an a priori chosen weighting scheme. The MPX results are used to routinely update earthquake catalogues based on a variety of (1D and 3D) velocity models and location techniques. We are also applying the DD-RT procedure, which uses cross-correlation and double-difference methods in real time to relocate events with high precision relative to a high-resolution background catalog. P- and S-onset and location information are used to automatically compute focal mechanisms and VP/VS variations in space and time, and to periodically update 3D VP and VP/VS tomographic models. We present results from four years of operation, during which this monitoring system analyzed over 1.2 million detections and recovered ~60,000 earthquakes at a detection threshold of ML 0.5.
The high-resolution information is being used to study changes in seismicity patterns and fault and rock properties along the ATF in space and time, and to elaborate ground shaking scenarios adopting
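
    The abstract does not detail the detection engine; a common choice for triggering on continuous streams is a short-term-average/long-term-average (STA/LTA) ratio. A minimal sketch (window lengths, threshold, and the synthetic "event" are illustrative assumptions, not TABOO's actual settings):

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """STA/LTA ratio of the squared signal, computed with running means.
    A trigger is declared where the ratio exceeds a chosen threshold."""
    energy = trace ** 2
    csum = np.cumsum(np.insert(energy, 0, 0.0))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len
    # Align both windows so they end at the same sample
    n = min(len(sta), len(lta))
    return sta[len(sta) - n:] / np.maximum(lta[len(lta) - n:], 1e-12)

rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 2000)
trace[1200:1260] += rng.normal(0, 8, 60)   # impulsive synthetic "event"
ratio = sta_lta(trace, sta_len=20, lta_len=400)
print(ratio.max() > 5)  # True: the event stands out against background
```

    Production systems (e.g. recursive STA/LTA variants) differ in detail, but the principle of comparing short- and long-window signal energy is the same.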

  15. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequence (ETAS) and the Short-Term Earthquake Probabilities (STEP) models. Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
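
    The ETAS model referenced above expresses the conditional earthquake rate as a background rate plus aftershock contributions from every prior event (Omori-Utsu decay in time, exponential productivity in magnitude). A minimal evaluation of that rate (all parameter values are illustrative, not those of the OEF system):

```python
import numpy as np

def etas_rate(t, past_times, past_mags,
              mu=0.1, K=0.05, alpha=1.5, c=0.01, p=1.1, m0=3.0):
    """lambda(t) = mu + sum_i K * exp(alpha*(M_i - m0)) / (t - t_i + c)**p
    over events with t_i < t (Omori-Utsu kernel; parameters illustrative)."""
    past_times = np.asarray(past_times, dtype=float)
    past_mags = np.asarray(past_mags, dtype=float)
    mask = past_times < t
    dt = t - past_times[mask]
    return mu + np.sum(K * np.exp(alpha * (past_mags[mask] - m0)) / (dt + c) ** p)

# Rate just after an M5 event is far above background, then decays:
r_soon = etas_rate(0.01, [0.0], [5.0])
r_late = etas_rate(10.0, [0.0], [5.0])
print(r_soon > r_late > 0.1)  # True
```

    Operational systems fit (mu, K, alpha, c, p) to regional catalogs and convert the rate into probabilities over the forecast window.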

  16. Rupture Process During the Mw 8.1 2017 Chiapas Mexico Earthquake: Shallow Intraplate Normal Faulting by Slab Bending

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Yagi, Y.

    2017-12-01

    A seismic source model for the Mw 8.1 2017 Chiapas, Mexico, earthquake was constructed by kinematic waveform inversion using globally observed teleseismic waveforms, suggesting that the earthquake was a normal-faulting event on a steeply dipping plane, with the major slip concentrated around a relatively shallow depth of 28 km. The modeled rupture evolution showed unilateral, downdip propagation northwestward from the hypocenter, and the downdip width of the main rupture was restricted to less than 30 km below the slab interface, suggesting that the downdip extensional stresses due to the slab bending were the primary cause of the earthquake. The rupture front abruptly decelerated at the northwestern end of the main rupture where it intersected the subducting Tehuantepec Fracture Zone, suggesting that the fracture zone may have inhibited further rupture propagation.

  17. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) we use only earthquake catalog data, which generally have known errors and characteristics; and 3) we use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. As an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is why these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk.
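
    The counting idea described above can be illustrated in a few lines: the score is the fraction of historical large-earthquake cycles in the surrounding region that contained fewer small events than have occurred locally since the last large event. All numbers below are hypothetical; this is a toy sketch, not the authors' implementation:

```python
import numpy as np

def nowcast_score(n_small_since_last_large, interevent_small_counts):
    """Earthquake potential score: the fraction of historical cycles (counts
    of small quakes between successive large quakes in the big region) that
    are <= the small count accumulated locally since the last large quake.
    A score near 1 suggests the small region is late in its cycle."""
    counts = np.asarray(interevent_small_counts)
    return float(np.mean(counts <= n_small_since_last_large))

# Hypothetical: 120 small quakes locally since the last M>=6 event, compared
# with small-quake counts between successive M>=6 events region-wide.
history = [40, 75, 90, 110, 150, 200, 260, 310]
print(nowcast_score(120, history))  # 0.5
```

    Because the score is a rank statistic, it needs no model parameters, which is the first advantage the abstract lists.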

  18. Rupture process of the September 12, 2007 Southern Sumatra earthquake from tsunami waveform inversion

    NASA Astrophysics Data System (ADS)

    Lorito, S.; Romano, F.; Piatanesi, A.

    2007-12-01

    The aim of this work is to infer the slip distribution and mean rupture velocity along the rupture zone of the 12 September 2007 Southern Sumatra, Indonesia, earthquake from the available tide-gauge records of the tsunami. We select waveforms from 12 stations distributed along the west coast of Sumatra and across the Indian Ocean (11 GLOSS stations and 1 DART buoy). We assume the fault plane and the slip direction to be consistent with both the geometry of the subducting plate and the early focal mechanism solutions. We then subdivide the fault plane into several subfaults (both along strike and down dip) and compute the corresponding Green's functions by numerical solution of the shallow water equations through a finite difference method. The slip distribution and rupture velocity are determined simultaneously by means of a simulated annealing technique. We compare the recorded and synthetic waveforms in the time domain, using a cost function that is a trade-off between the L1 and L2 norms. Preliminary synthetic checkerboard tests, using the station coverage and the sampling interval of the available data, indicate that the main features of the rupture process may be robustly inverted.
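
    The inversion strategy described, simulated annealing over slip values with an L1/L2 trade-off cost, can be sketched on a toy linear forward model. The alpha weighting, problem sizes, and cooling schedule below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(obs, syn, alpha=0.5):
    """Cost: trade-off between the L1 and L2 norms of the time-domain
    residual (the alpha weighting is an illustrative assumption)."""
    r = obs - syn
    return alpha * np.abs(r).sum() + (1.0 - alpha) * np.sqrt((r ** 2).sum())

# Toy forward model: the waveform is a linear combination of subfault
# Green's functions (columns of G, here random) weighted by slip.
n_t, n_sub = 200, 4
G = rng.normal(size=(n_t, n_sub))
true_slip = np.array([2.0, 0.5, 1.5, 0.0])
obs = G @ true_slip

# Simulated annealing over slip values
slip = np.ones(n_sub)
cost = misfit(obs, G @ slip)
T = 1.0
for _ in range(20000):
    cand = slip + rng.normal(scale=0.1, size=n_sub)
    c = misfit(obs, G @ cand)
    if c < cost or rng.random() < np.exp((cost - c) / T):
        slip, cost = cand, c
    T *= 0.9995  # geometric cooling schedule

print(np.round(slip, 1))
```

    The recovered slip approaches the true values; in the real problem each Green's function comes from a shallow-water tsunami simulation rather than random numbers, and rupture velocity is a search parameter as well.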

  19. Source Process of the 2007 Niigata-ken Chuetsu-oki Earthquake Derived from Near-fault Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Sekiguchi, H.; Morikawa, N.; Ozawa, T.; Kunugi, T.; Shirasaka, M.

    2007-12-01

    The 2007 Niigata-ken Chuetsu-oki earthquake occurred on July 16th, 2007, at 10:13 JST. We performed a multi-time-window linear waveform inversion analysis (Hartzell and Heaton, 1983) to estimate the rupture process from near-fault strong motion data of 14 stations from K-NET, KiK-net, F-net, JMA, and Niigata prefecture. The fault plane for the mainshock has not yet been clearly determined from the aftershock distribution, so we performed two waveform inversions: one for a northwest-dipping fault (Model A) and one for a southeast-dipping fault (Model B). Their strike, dip, and rake were set to those of the moment tensor solutions by F-net. A fault plane model of 30 km length by 24 km width was set to cover the aftershock distribution within 24 hours after the mainshock. Theoretical Green's functions were calculated by the discrete wavenumber method (Bouchon, 1981) and the R/T matrix method (Kennett, 1983) with a different stratified medium for each station, based on the velocity structure including information from the reflection survey and borehole logging data. Convolution of a moving dislocation was introduced to represent the rupture propagation within each subfault (Sekiguchi et al., 2002). The observed acceleration records were integrated into velocity, except for the F-net velocity data, and bandpass filtered between 0.1 and 1.0 Hz. We solved the least-squares equations to obtain the slip amount of each time window on each subfault, minimizing the squared residual of the waveform fit between observed and synthetic waveforms. Both models yield moment magnitudes of 6.7. For Model A, we obtained large slip in the deeper part of the fault southwest of the rupture starting point, which is close to Kashiwazaki City. The second or third velocity pulses of the observed velocity waveforms appear to be composed of slip from this asperity. For Model B, we obtained large slip in the shallower part of the fault southwest of the rupture starting point, which is also close to Kashiwazaki City.
In both models, we found
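
    The multi-time-window formulation reduces to an overdetermined linear system d = Gm, where d stacks the observed waveform samples, each column of G is the Green's-function response of one subfault/time-window, and m holds the slip amounts. A toy sketch with a random synthetic G (sizes and values hypothetical):

```python
import numpy as np

# d = G m: observed waveform samples d, Green's-function matrix G (each
# column the response of one subfault/time-window), slip amounts m.
rng = np.random.default_rng(3)
n_data, n_windows = 300, 6
G = rng.normal(size=(n_data, n_windows))
m_true = np.array([0.0, 1.2, 0.8, 0.3, 0.0, 0.1])
d = G @ m_true + rng.normal(scale=0.01, size=n_data)  # small noise

# Plain least squares shown for brevity; real inversions typically add
# smoothing constraints and a non-negativity condition on slip.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 1))
```

    With well-separated columns and low noise the slip amounts are recovered almost exactly; in practice regularization controls the trade-off between fit and slip-model roughness.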

  20. Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.

    PubMed

    Böttcher, Björn

    2010-12-03

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state-space-dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous-time Markov processes in applications. In recent years Lévy processes, of which Brownian motion is a special case, have also become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes, and in particular Brownian motion, as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes.
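
    The idea of state-space-dependent mixing can be caricatured with a simple Euler scheme in which each increment is drawn from a Lévy process (here Gaussian, i.e. Brownian-type) whose volatility depends on the current state. This is an illustrative sketch of spatial inhomogeneity, not the paper's construction or simulation technique:

```python
import numpy as np

def simulate_path(x0, n_steps, dt, sigma, rng):
    """Euler scheme: each increment is Gaussian with volatility sigma(x)
    evaluated at the current state, i.e. a state-dependent mixture of
    Brownian-type increments."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + sigma(x[i]) * np.sqrt(dt) * rng.normal()
    return x

rng = np.random.default_rng(4)
# Volatility grows away from the origin: a spatially inhomogeneous process
path = simulate_path(0.0, 1000, dt=0.01,
                     sigma=lambda x: 0.5 + 0.5 * abs(x), rng=rng)
print(len(path))  # 1001
```

    A constant sigma recovers ordinary Brownian motion; replacing the Gaussian draw with a heavy-tailed Lévy increment gives jump behavior.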

  1. Feller Processes: The Next Generation in Modeling. Brownian Motion, Lévy Processes and Beyond

    PubMed Central

    Böttcher, Björn

    2010-01-01

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state-space-dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous-time Markov processes in applications. In recent years Lévy processes, of which Brownian motion is a special case, have also become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes, and in particular Brownian motion, as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes. PMID:21151931

  2. Fully solution-processed, transparent organic power-generating polarizer

    NASA Astrophysics Data System (ADS)

    Chou, Wei-Yu; Hsu, Fang-Chi; Chen, Yang-Fang

    2017-03-01

    We fabricate a transparent organic power-generating polarizer by an all-solution process. Based on conventional indium-tin-oxide-coated glass as the bottom cathode, the subsequent layers are prepared by a combination of solution processing methods. A sprayed silver nanowire film serves as the top anode and can transmit greater than 80% of visible light with a sheet resistance of 16 Ω/□. By adopting a quasi-bilayer structure for the photoactive layer, composed of rubbed polymer donors that produce anisotropic optical properties underneath fullerene acceptors, the finished device demonstrates a power conversion efficiency of 1.36% with unpolarized light, a dichroic ratio of 3.2, and a high short-circuit current ratio of 2.6 with polarized light. Our proposed fabrication procedures take into account not only cost-effective production but also device flexibility, for application in flexible, scalable circuits to advance the development of future technology.

  3. Testing earthquake source inversion methodologies

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  4. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes. Science Fair Projects A GPS instrument measures slow movements of the ground. Become an Earthquake Scientist Cool Earthquake Facts Today in Earthquake History A scientist stands in ...

  5. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

    This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake, in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake; and (2) the apparent rupture velocity decreases on this segment.

  6. Mathematics for generative processes: Living and non-living systems

    NASA Astrophysics Data System (ADS)

    Giannantoni, Corrado

    2006-05-01

    The traditional Differential Calculus often shows its limits when describing living systems. These in fact present such a richness of characteristics that is, in the majority of cases, much wider than the description capabilities of the usual differential equations. Such an aspect became particularly evident during the research (completed in 2001) for an appropriate formulation of Odum's Maximum Em-Power Principle (proposed by the Author as a possible Fourth Thermodynamic Principle). In fact, in such a context, the particular non-conservative Algebra adopted to account for both the Quality and quantity of generative processes suggested we introduce a faithfully corresponding concept of "derivative" (of both integer and fractional order) to describe dynamic conditions however variable. The new concept not only succeeded in pointing out the corresponding differential bases of all the rules of Emergy Algebra, but also represented the preferential guide for recognizing the most profound physical nature of the basic processes which most characterize self-organizing Systems (co-production, co-injection, inter-action, feed-back, splits, etc.). From a mathematical point of view, the most important novelties introduced by such a new approach are: (i) the derivative of any integer or fractional order can be obtained independently from the evaluation of its lower-order derivatives; (ii) the exponential function plays an extremely important hinge role, much more marked than in the case of traditional differential equations; (iii) wide classes of differential equations, traditionally considered as being non-linear, become "intrinsically" linear when reconsidered in terms of "incipient" derivatives; (iv) their corresponding explicit solutions can be given in terms of new classes of functions (such as "binary" and "duet" functions); (v) every solution shows a sort of "persistence of form" when representing the product generated with respect to the agents of the generating process.

  7. Rupture process of the 2010 Mw 7.8 Mentawai tsunami earthquake from joint inversion of near-field hr-GPS and teleseismic body wave recordings constrained by tsunami observations

    NASA Astrophysics Data System (ADS)

    Yue, Han; Lay, Thorne; Rivera, Luis; Bai, Yefei; Yamazaki, Yoshiki; Cheung, Kwok Fai; Hill, Emma M.; Sieh, Kerry; Kongko, Widjo; Muhari, Abdul

    2014-07-01

    The 25 October 2010 Mentawai tsunami earthquake (Mw 7.8) ruptured the shallow portion of the Sunda megathrust seaward of the Mentawai Islands, offshore of Sumatra, Indonesia, generating a strong tsunami that took 509 lives. The rupture zone was updip of those of the 12 September 2007 Mw 8.5 and 7.9 underthrusting earthquakes. High-rate (1 s sampling) GPS instruments of the Sumatra GPS Array network deployed on the Mentawai Islands and Sumatra mainland recorded time-varying and static ground displacements at epicentral distances from 49 to 322 km. Azimuthally distributed tsunami recordings from two deepwater sensors and two tide gauges that have local high-resolution bathymetric information provide additional constraints on the source process. Finite-fault rupture models, obtained by joint inversion of the high-rate (hr)-GPS time series and numerous teleseismic broadband P and S wave seismograms together with iterative forward modeling of the tsunami recordings, indicate rupture propagation ~50 km up dip and ~100 km northwest along strike from the hypocenter, with a rupture velocity of ~1.8 km/s. Subregions with large slip extend from 7 to 10 km depth ~80 km northwest from the hypocenter with a maximum slip of 8 m and from ~5 km depth to beneath thin horizontal sedimentary layers beyond the prism deformation front for ~100 km along strike, with a localized region having >15 m of slip. The seismic moment is 7.2 × 1020 N m. The rupture model indicates that local heterogeneities in the shallow megathrust can accumulate strain that allows some regions near the toe of accretionary prisms to fail in tsunami earthquakes.
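
    As a consistency check (not part of the abstract), the quoted seismic moment matches the stated Mw 7.8 via the standard Hanks-Kanamori moment-magnitude relation:

```python
import math

# Hanks & Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), M0 in N m
M0 = 7.2e20          # seismic moment from the abstract, N m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(round(Mw, 1))  # 7.8
```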

  8. Sedimentary Signatures of Submarine Earthquakes: Deciphering the Extent of Sediment Remobilization from the 2011 Tohoku Earthquake and Tsunami and 2010 Haiti Earthquake

    NASA Astrophysics Data System (ADS)

    McHugh, C. M.; Seeber, L.; Moernaut, J.; Strasser, M.; Kanamatsu, T.; Ikehara, K.; Bopp, R.; Mustaque, S.; Usami, K.; Schwestermann, T.; Kioka, A.; Moore, L. M.

    2017-12-01

    The 2004 Sumatra-Andaman Mw 9.3 and the 2011 Tohoku (Japan) Mw 9.0 earthquakes and tsunamis were huge geological events with major societal consequences. Both were along subduction boundaries and ruptured portions of these boundaries that had been deemed incapable of such events. Submarine strike-slip earthquakes, such as the 2010 Mw 7.0 event in Haiti, are smaller but may be closer to population centers and can be similarly catastrophic. Both classes of earthquakes remobilize sediment and leave distinct signatures in the geologic record through a wide range of processes that depend on both the environment and earthquake characteristics. Understanding them has the potential of greatly expanding the record of past earthquakes, which is critical for geohazard analysis. Recent events offer precious ground truth about the earthquakes, and short-lived radioisotopes offer invaluable tools to identify the sediments they remobilized. For the 2011 Mw 9.0 Japan earthquake, they document the spatial extent of remobilized sediment from water depths of 626 m on the forearc slope to trench depths of 8000 m. Subbottom profiles, multibeam bathymetry and 40 piston cores collected by the R/V Natsushima and R/V Sonne expeditions to the Japan Trench document multiple turbidites and high-density flows. Core tops enriched in excess 210Pb, 137Cs and 134Cs reveal sediment deposited by the 2011 Tohoku earthquake and tsunami. The thickest deposits (2 m) were documented on a mid-slope terrace and in the trench (4000-8000 m). Sediment was deposited on some terraces (600-3000 m), but shed from the steep forearc slope (3000-4000 m). The 2010 Haiti mainshock ruptured along the southern flank of Canal du Sud and triggered multiple nearshore sediment failures, generated turbidity currents and stirred fine sediment into suspension throughout this basin. A tsunami was modeled to stem from both the sediment failures and tectonics.
Remobilized sediment was tracked with short-lived radioisotopes from the nearshore, slope, in fault basins including the

  9. Sensing the earthquake

    NASA Astrophysics Data System (ADS)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model with the aim of converting earthquake sound into a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon through the stimulation of hearing, sight and the perception of movement (the neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  10. HELAC-PHEGAS: A generator for all parton level processes

    NASA Astrophysics Data System (ADS)

    Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata

    2009-10-01

    The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using color connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model in hadron and lepton colliders.
    New version program summary
    Program title: HELAC-PHEGAS
    Catalogue identifier: ADMS_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 35 986
    No. of bytes in distributed program, including test data, etc.: 380 214
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: All
    Operating system: Linux
    Classification: 11.1, 11.2
    External routines: Optionally Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/)
    Catalogue identifier of previous version: ADMS_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306
    Does the new version supersede the previous version?: Yes, partly
    Nature of problem: One of the most striking features of final states in current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!.
Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed recently in

  11. The New Zealand Earthquakes and the Role of Schools in Engaging Children in Emotional Processing of Disaster Experiences

    ERIC Educational Resources Information Center

    Mutch, Carol; Gawith, Elizabeth

    2014-01-01

    The earthquakes that rocked the city of Christchurch and surrounding districts in Canterbury, New Zealand, were to take their toll on families, schools and communities. The places that had once represented safety and security for most children were literally and figuratively turned upside down. Rather than reinforce the trauma and continue to…

  12. Fault rupture process and strong ground motion simulation of the 2014/04/01 Northern Chile (Pisagua) earthquake (Mw8.2)

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Suzuki, W.; Aoi, S.

    2014-12-01

    A megathrust earthquake (Mw 8.2) occurred in Northern Chile on April 1, 2014, 23:46 (UTC), in a region that had not experienced a major earthquake since the great 1877 (~M8.6) event. This area had already been identified as a mature seismic gap with strong interseismic coupling inferred from geodetic measurements (Chlieh et al., JGR, 2011; Metois et al., GJI, 2013). We used 48 components of strong motion records belonging to the IPOC network in Northern Chile to investigate the source process of the M8.2 Pisagua earthquake. Acceleration waveforms were integrated to obtain velocities and filtered between 0.02 and 0.125 Hz. We assumed a single fault plane segment with an area of 180 km by 135 km, a strike of 357° and a dip of 18° (GCMT). We set the starting point of rupture at the USGS hypocenter (19.610S, 70.769W, depth 25 km) and employed a multi-time-window linear waveform inversion method (Hartzell and Heaton, BSSA, 1983) to derive the rupture process of the Pisagua earthquake. Our results show a slip model characterized by one large slip area (asperity) located 50 km south of the epicenter, a peak slip of 10 m and a total seismic moment of 2.36 x 10^21 Nm (Mw 8.2). The fault rupture propagated slowly to the south ahead of the main asperity for the initial 25 seconds, then broke the asperity, producing a stage of strong acceleration. The rupture velocity over the fault plane averaged 2.9 km/s. Our calculations show an average stress drop of 4.5 MPa for the entire fault rupture area and 12 MPa for the asperity area. We simulated the near-source strong ground motion records in a broad frequency band (0.1-20 Hz) to investigate a possible multi-frequency fault rupture process like the one observed in recent megathrust earthquakes such as the 2011 Tohoku-oki (M9.0). Acknowledgments: Strong motion data were kindly provided by the University of Chile as well as the IPOC (Integrated Plate boundary Observatory Chile).
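
As a sanity check on the figures quoted above, the moment magnitude follows from the reported seismic moment via the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m. This is the generic relation, not code from the study:

```python
import math

def moment_magnitude(m0_nm):
    # Standard moment magnitude relation (Hanks & Kanamori),
    # with the seismic moment m0_nm given in N*m.
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Reported total seismic moment of the Pisagua earthquake: 2.36e21 N*m
print(round(moment_magnitude(2.36e21), 1))  # -> 8.2
```

The reported moment of 2.36 x 10^21 Nm indeed corresponds to Mw 8.2, consistent with the abstract.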

  13. New generation of content addressable memories for associative processing

    NASA Astrophysics Data System (ADS)

    Lewis, H. G., Jr.; Giambalov, Paul

    2000-05-01

    Content addressable memories (CAMs) store both key and association data. A key is presented to the CAM when it is searched, and all of the addresses are scanned in parallel to find the address referenced by the key. When a match occurs, the corresponding association is returned. With the explosion of telecommunications packet-switching protocols, large database servers, routers and search engines, a new generation of dense sub-micron high-throughput CAMs has been developed. The introduction of this paper presents a brief history of and tutorial on CAMs, their many uses and advantages, and describes the architecture and functionality of several of MUSIC Semiconductors' CAM devices. In subsequent sections of the paper we address using associative processing to accommodate the continued increase in sensor resolution, number of spectral bands, required coverage, the desire to implement real-time target cueing, and the data flow and image processing required for optimum performance of reconnaissance and surveillance Unmanned Aerial Vehicles (UAVs). To be competitive, the system designer must provide the most computational power per watt, per dollar, per cubic inch, within the boundaries of cost-effective UAV environmental control systems. To address these problems we demonstrate leveraging DARPA- and DoD-funded Commercial Off-the-Shelf technology to integrate CAM-based associative processing into a real-time heterogeneous multiprocessing system for UAVs and other platforms with limited weight, volume and power budgets.

  14. NG6: Integrated next generation sequencing storage and processing environment.

    PubMed

    Mariette, Jérôme; Escudié, Frédéric; Allias, Nicolas; Salin, Gérald; Noirot, Céline; Thomas, Sylvain; Klopp, Christophe

    2012-09-09

    Next generation sequencing platforms are now well established in sequencing centres and some laboratories. Upcoming smaller scale machines such as the 454 junior from Roche or the MiSeq from Illumina will increase the number of laboratories hosting a sequencer. In such a context, it is important to provide these teams with an easily manageable environment to store and process the produced reads. We describe a user-friendly information system able to manage large sets of sequencing data. It includes, on one hand, a workflow environment already containing pipelines adapted to different input formats (sff, fasta, fastq and qseq), different sequencers (Roche 454, Illumina HiSeq) and various analyses (quality control, assembly, alignment, diversity studies,…) and, on the other hand, a secured web site giving access to the results. The connected user can download raw and processed data and browse through the analysis result statistics. The provided workflows can easily be modified or extended, and new ones can be added. Ergatis is used as the workflow building, running and monitoring system. The analyses can be run locally or in a cluster environment using Sun Grid Engine. NG6 is a complete information system designed to meet the needs of a sequencing platform. It provides a user-friendly interface to process, store and download high-throughput sequencing data.

  15. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are needed, in particular, to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of the parameters of both the Poisson and NBD distributions on the catalogue magnitude threshold and on the temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and kurtosis increase for smaller magnitude thresholds and increase even more sharply for small temporal subdivisions of the catalogues.
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
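
The overdispersion point can be illustrated with a small simulation (a generic sketch, not the authors' code): a negative-binomial count, generated here as a gamma-Poisson mixture, has a variance-to-mean ratio well above 1, while a Poisson count stays near 1. The mean and clustering parameter tau are illustrative values:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm; adequate for the modest rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def nbd(mean, tau, rng):
    # Negative binomial as a gamma-Poisson mixture: the Poisson rate is
    # itself Gamma-distributed with shape tau and mean `mean`.
    return poisson(rng.gammavariate(tau, mean / tau), rng)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(42)
n, mean, tau = 20000, 10.0, 2.0
pois_counts = [poisson(mean, rng) for _ in range(n)]
nbd_counts = [nbd(mean, tau, rng) for _ in range(n)]

# Poisson: variance-to-mean ratio ~1.  NBD: ratio ~1 + mean/tau = 6 here,
# i.e. overdispersed, as observed for earthquake counts.
print(var(pois_counts) / (sum(pois_counts) / n))
print(var(nbd_counts) / (sum(nbd_counts) / n))
```

The second parameter tau controls the clustering: as tau grows, the NBD collapses back to the Poisson distribution.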

  16. Plasma generating apparatus for large area plasma processing

    DOEpatents

    Tsai, C.C.; Gorbatkin, S.M.; Berry, L.A.

    1991-07-16

    A plasma generating apparatus for plasma processing applications is based on a permanent magnet line-cusp plasma confinement chamber coupled to a compact single-coil microwave waveguide launcher. The device creates an electron cyclotron resonance (ECR) plasma in the launcher and a second ECR plasma is created in the line cusps due to a 0.0875 tesla magnetic field in that region. Additional special magnetic field configuring reduces the magnetic field at the substrate to below 0.001 tesla. The resulting plasma source is capable of producing large-area (20-cm diam), highly uniform (±5%) ion beams with current densities above 5 mA/cm^2. The source has been used to etch photoresist on 5-inch diam silicon wafers with good uniformity. 3 figures.

  17. Plasma generating apparatus for large area plasma processing

    DOEpatents

    Tsai, Chin-Chi; Gorbatkin, Steven M.; Berry, Lee A.

    1991-01-01

    A plasma generating apparatus for plasma processing applications is based on a permanent magnet line-cusp plasma confinement chamber coupled to a compact single-coil microwave waveguide launcher. The device creates an electron cyclotron resonance (ECR) plasma in the launcher and a second ECR plasma is created in the line cusps due to a 0.0875 tesla magnetic field in that region. Additional special magnetic field configuring reduces the magnetic field at the substrate to below 0.001 tesla. The resulting plasma source is capable of producing large-area (20-cm diam), highly uniform (±5%) ion beams with current densities above 5 mA/cm^2. The source has been used to etch photoresist on 5-inch diam silicon wafers with good uniformity.

  18. Complex earthquake rupture and local tsunamis

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a
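
A stochastic self-affine slip distribution of the kind described can be sketched by spectral synthesis: random phases with spectral amplitudes decaying as a power of wavenumber (k^-2 here, consistent with the stated high-frequency falloff). This is a 1-D illustrative toy, not the paper's source model, and all parameter values are assumptions:

```python
import math
import random

def self_affine_slip(n=256, mean_slip=5.0, decay=2.0, seed=1):
    # Spectral synthesis: random-phase cosines whose amplitudes fall off
    # as k**-decay, giving a self-affine (rough) slip profile.
    rng = random.Random(seed)
    slip = [0.0] * n
    for k in range(1, n // 2):
        amp = k ** (-decay)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for i in range(n):
            slip[i] += amp * math.cos(2.0 * math.pi * k * i / n + phase)
    # Shift to non-negative slip and rescale to the target mean
    # (tapering toward the fault edges is omitted in this toy).
    lo = min(slip)
    slip = [s - lo for s in slip]
    scale = mean_slip / (sum(slip) / n)
    return [s * scale for s in slip]

slip = self_affine_slip()
print(round(sum(slip) / len(slip), 6))  # -> 5.0
```

Changing only the seed yields a different slip pattern with the same mean slip, mirroring the study's N = 100 realizations of identical seismic moment.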

  19. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

    It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience that may not be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using static "snapshot" images to represent earthquake data is now becoming obsolete, and the favored medium for explaining complex wave propagation inside the solid earth and interactions among earthquakes is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sounds, the latter known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.).
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not
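
The audification step (replaying a seismogram fast enough that its frequency content becomes audible) can be sketched with Python's standard-library `wave` module. The sampling rate, speed-up factor and synthetic input below are illustrative assumptions; the abstract's own toolchain is MATLAB plus QuickTime Pro:

```python
import math
import struct
import wave

def audify(samples, sr_in=100.0, speedup=400, path="quake.wav"):
    # Audification: write the seismogram as 16-bit mono PCM, but with the
    # sampling rate multiplied by `speedup`, so 0.01-10 Hz ground motion
    # is shifted into the audible band.
    peak = max(abs(s) for s in samples) or 1.0
    pcm = b"".join(struct.pack("<h", int(32767 * s / peak)) for s in samples)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(int(sr_in * speedup))  # 100 Hz data -> 40 kHz audio
        w.writeframes(pcm)

# Synthetic stand-in for a seismogram: a decaying 1 Hz wavelet
# sampled at 100 Hz for 60 s.
sig = [math.exp(-i / 2000.0) * math.sin(2.0 * math.pi * i / 100.0)
       for i in range(6000)]
audify(sig)
```

With a speed-up of 400, the 60 s record plays back in 0.15 s; smaller speed-ups trade pitch for duration.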

  20. Earthquakes in Hawai‘i—an underappreciated but serious hazard

    Okubo, Paul G.; Nakata, Jennifer S.

    2011-01-01

    The State of Hawaii has a history of damaging earthquakes. Earthquakes in the State are primarily the result of active volcanism and related geologic processes. It is not a question of "if" a devastating quake will strike Hawai‘i but rather "when." Tsunamis generated by both distant and local quakes are also an associated threat and have caused many deaths in the State. The U.S. Geological Survey (USGS) and its cooperators monitor seismic activity in the State and are providing crucial information needed to help better prepare emergency managers and residents of Hawai‘i for the quakes that are certain to strike in the future.

  1. Teleseismic and regional data analysis for estimating depth, mechanism and rupture process of the 3 April 2017 MW 6.5 Botswana earthquake and its aftershock (5 April 2017, MW 4.5)

    NASA Astrophysics Data System (ADS)

    Letort, J.; Guilhem Trilla, A.; Ford, S. R.; Sèbe, O.; Causse, M.; Cotton, F.; Campillo, M.; Letort, G.

    2017-12-01

    We constrain the source, depth and rupture process of the Botswana earthquake of April 3, 2017, as well as its largest aftershock (April 5, 2017, Mw 4.5). This earthquake is the largest recorded event (Mw 6.5) in the East African rift system since 1970, making it an important case study for better understanding source processes in stable continental regions. For the two events, an automatic cepstrum analysis (Letort et al., 2015) is first applied to 215 and 219 teleseismic records, respectively, in order to detect depth phase arrivals (pP, sP) in the P-coda. Coherent detections of depth phases at different azimuths allow us to estimate the hypocentral depths at 28 and 23 km, respectively, suggesting that the events are located in the lower crust. The same cepstrum analysis is conducted on five other earthquakes with mb > 4 in this area (from 2002 to 2017) and confirms a deep crustal seismicity cluster (around 20-30 km). The source mechanisms are then characterized using a joint inversion method that fits both regional long-period surface waves and teleseismic high-frequency body waves. Combining regional and teleseismic data (as well as systematic comparisons between theoretical and observed regional surface-wave dispersion curves prior to the inversion) allows us to decrease epistemic uncertainties due to the lack of regional data and poor knowledge of the local velocity structure. Both focal mechanisms are constrained as northwest-trending normal faulting, and the hypocentral depths are confirmed at 28 and 24 km. Finally, to study the mainshock rupture process, we apply, in an original way, a kymograph analysis method (an image processing method commonly used in the field of cell biology to identify the motion of molecular motors, e.g. Mangeol et al., 2016).
Here, the kymograph allows us to better identify high-frequency teleseismic P-arrivals inside the P-coda by tracking both reflected depth phase and direct P-wave arrivals radiated from secondary sources during the
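
The depth-phase timing that the cepstral analysis exploits can be illustrated with the first-order relation t_pP - t_P ≈ 2 h cos(i) / vp, where h is the focal depth, vp the near-source P velocity and i the incidence angle. The velocity and angle below are assumed illustrative values, not parameters from the study:

```python
import math

def depth_from_pP_delay(delay_s, vp_kms=6.5, incidence_deg=20.0):
    # First-order depth from the pP-P differential time: the surface-
    # reflected pP travels an extra ~2*h*cos(i)/vp relative to direct P.
    # vp and incidence angle are illustrative assumptions.
    return delay_s * vp_kms / (2.0 * math.cos(math.radians(incidence_deg)))

print(round(depth_from_pP_delay(8.1), 1))  # -> 28.0 (km, for an 8.1 s delay)
```

With these assumed values, an 8 s pP-P delay maps to a lower-crustal depth near 28 km, of the order reported in the abstract.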

  2. A note on evaluating VAN earthquake predictions

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-Akis; Melis, Nicos S.

    The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized, strict statistical laws and ignore the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method would still appear to be a matter of a "chancy" association between precursors and earthquakes if we apply the procedure proposed by Mulargia and Gasperini [1992] to evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, and magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

  3. Deep focus earthquakes in the laboratory

    NASA Astrophysics Data System (ADS)

    Schubnel, Alexandre; Brunet, Fabrice; Hilairet, Nadège; Gasc, Julien; Wang, Yanbin; Green, Harry W., II

    2014-05-01

    While the existence of deep earthquakes has been known since the 1920s, the essential mechanical process responsible for them is still poorly understood and remains one of the outstanding unsolved problems of geophysics and rock mechanics. Indeed, deep-focus earthquakes occur in an environment fundamentally different from that of shallow (<100 km) earthquakes. As pressure and temperature increase with depth, intra-crystalline plasticity starts to dominate the deformation regime, so that rocks yield by plastic flow rather than by brittle fracturing. Olivine phase transitions have provided an attractive alternative mechanism for deep-focus earthquakes. Indeed, the Earth's mantle transition zone (410-700 km) is the locus of the two successive polymorphic transitions of olivine. Such a scenario, however, runs into the conceptual barrier of initiating failure in a pressure (P) and temperature (T) regime where deviatoric stress relaxation is expected to be achieved through plastic flow. Here, we performed laboratory deformation experiments on germanium olivine (Mg2GeO4) under differential stress at high pressure (P = 2-5 GPa) and within a narrow temperature range (T = 1000-1250 K). We find that fractures nucleate at the onset of the olivine-to-spinel transition. These fractures propagate dynamically (i.e. at a non-negligible fraction of the shear wave velocity), so that intense acoustic emissions are generated. Similar to deep-focus earthquakes, these acoustic emissions arise from pure shear sources and obey the Gutenberg-Richter law without following Omori's law. Microstructural observations indicate that dynamic weakening likely involves superplasticity of the nanocrystalline spinel reaction product at seismic strain rates. Although in our experiments the absolute stress value remains high compared to the stresses expected within the cold core of subducted slabs, the observed stress drops are broadly consistent with those calculated for deep earthquakes.
Constant differential
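
The Gutenberg-Richter behaviour reported for the acoustic emissions is usually quantified by the b-value; a standard estimator is Aki's (1965) maximum-likelihood formula b = log10(e) / (mean(M) - Mc). The synthetic catalogue below is an illustration, not the experimental data:

```python
import math
import random

def aki_b_value(mags, m_min):
    # Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    # b-value, log10 N(>=M) = a - b*M, from events with M >= m_min.
    m = [x for x in mags if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)

# Synthetic catalogue drawn from a GR law with b = 1:
# M - m_min is exponential, so M = m_min - log10(U)/b for U ~ Uniform(0,1).
rng = random.Random(0)
catalogue = [0.0 - math.log10(rng.random()) / 1.0 for _ in range(50000)]
print(aki_b_value(catalogue, 0.0))  # close to 1.0
```

In practice m_min must be set at the completeness magnitude of the catalogue, or the estimate is biased.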

  4. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  5. Understanding Earthquakes

    ERIC Educational Resources Information Center

    Davis, Amanda; Gray, Ron

    2018-01-01

    December 26, 2004 was one of the deadliest days in modern history, when a 9.3 magnitude earthquake--the third largest ever recorded--struck off the coast of Sumatra in Indonesia (National Centers for Environmental Information 2014). The massive quake lasted at least 10 minutes and devastated the Indian Ocean. The quake displaced an estimated…

  6. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time scale, including a narrow subduction channel with low static friction, for two settings: one similar to Southern Chile in the region of the great Chile earthquake of 1960 and one similar to Japan in the region of the Tohoku earthquake of 2011. We next introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models then generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformations of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the integration step from 40 s during the earthquake to minutes-to-5 years during the postseismic and interseismic processes. We show that for the case of the Chile earthquake, visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations over the day-to-4-year time range. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can best be distinguished.

  7. Crustal velocity structure and earthquake processes of Garhwal-Kumaun Himalaya: Constraints from regional waveform inversion and array beam modeling

    NASA Astrophysics Data System (ADS)

    Negi, Sanjay S.; Paul, Ajay; Cesca, Simone; Kamal; Kriegerowski, Marius; Mahesh, P.; Gupta, Sandeep

    2017-08-01

    In order to understand present-day earthquake kinematics at the Indian plate boundary, we analyse seismic broadband data recorded between 2007 and 2015 by the regional network in the Garhwal-Kumaun region, northwest Himalaya. We first estimate a local 1-D velocity model for the computation of reliable Green's functions, based on 2837 P-wave and 2680 S-wave arrivals from 251 well-located earthquakes. The resulting 1-D crustal structure yields a 4-layer velocity model down to a depth of 20 km. A fifth homogeneous layer extends down to 46 km, constraining the Moho using the travel-time versus distance curve method. We then employ a multistep moment tensor (MT) inversion algorithm to infer the seismic moment tensors of 11 moderate earthquakes with Mw in the range 4.0-5.0. The method provides fast MT inversion for future monitoring of local seismicity, since the Green's functions database has been prepared. To further support the moment tensor solutions, we additionally model P-phase beams at seismic arrays at teleseismic distances. The MT inversion reveals dominant thrust fault kinematics persisting along the Himalayan belt. Shallow low- and high-angle thrust faulting is the dominant mechanism in the Garhwal-Kumaun Himalaya. The centroid depths for these moderate earthquakes are shallow, between 1 and 12 km. The beam modeling results confirm hypocentral depth estimates between 1 and 7 km. The updated seismicity, constrained source mechanisms and depth results indicate a typical setting of duplexes above the mid-crustal ramp, where slip is confirmed along out-of-sequence thrusting. The involvement of the Tons thrust sheet in out-of-sequence thrusting indicates the Tons thrust to be the principal active thrust at shallow depth in the Himalayan region. Our results thus support the critical taper wedge theory, whereby we interpret the microseismicity cluster as the result of intense activity within the Lesser Himalayan Duplex (LHD) system.

  8. Radon observations as an integrated part of the multi parameter approach to study pre-earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Lee, Lou; Giuliani, Guachino; Fu, Ching-Chou; Liu, Tiger; Hattori, Katsumi

    2017-04-01

    This work is part of an international project to study the complex chain of Lithosphere-Atmosphere-Ionosphere (LAI) interactions in the presence of atmospheric ionization driven by radon and other gases, and is supported by the International Space Science Institute (ISSI) in Bern and Beijing. We present experimental measurements and theoretical estimates showing that radon anomalies recorded before large earthquakes are correlated with the release of heat flux into the atmosphere during ionization of the atmospheric boundary layer. The recorded anomalous heat (observed by remote-sensing infrared radiometers installed on satellites) is followed by ionospheric anomalies (observed by GPS/TEC, ionosondes or satellite instruments). As ground truth we use radon measurements installed and coordinated in four different seismically active regions: California, Taiwan, Italy and Japan. Radon measurements are performed indirectly by means of gamma-ray spectrometry of its radioactive progenies 214Pb and 214Bi (emitting at 351 keV and 609 keV, respectively) and also by alpha detectors. We present data on five physical parameters (radon, seismicity, temperature of the atmospheric boundary layer, outgoing earth infrared radiation and GPS/TEC) and their temporal and spatial variations several days before the onset of the following recent earthquakes: (1) the 2016 M6.6 in California; (2) the 2016 Amatrice-Norcia (Central Italy) events; (3) the M6.4 of Feb 06, 2016 in Taiwan; and (4) the M7.0 of Nov 21, 2016 in Japan. Our preliminary results from the simultaneous analysis of radon and space measurements in California, Italy, Taiwan and Japan suggest that the pre-earthquake phase follows a general temporal-spatial evolution pattern in which radon plays a critical role in understanding the LAI coupling. This pattern can be revealed only with multi-instrument observations and has been seen in other large earthquakes worldwide.

  9. An integrated MEMS infrastructure for fuel processing: hydrogen generation and separation for portable power generation

    NASA Astrophysics Data System (ADS)

    Varady, M. J.; McLeod, L.; Meacham, J. M.; Degertekin, F. L.; Fedorov, A. G.

    2007-09-01

    Portable fuel cells are an enabling technology for high efficiency and ultra-high density distributed power generation, which is essential for many terrestrial and aerospace applications. A key element of fuel cell power sources is the fuel processor, which should have the capability to efficiently reform liquid fuels and produce high purity hydrogen that is consumed by the fuel cells. To this end, we are reporting on the development of two novel MEMS hydrogen generators with improved functionality achieved through an innovative process organization and system integration approach that exploits the advantages of transport and catalysis on the micro/nano scale. One fuel processor design utilizes transient, reverse-flow operation of an autothermal MEMS microreactor with an intimately integrated, micromachined ultrasonic fuel atomizer and a Pd/Ag membrane for in situ hydrogen separation from the product stream. The other design features a simpler, more compact planar structure with the atomized fuel ejected directly onto the catalyst layer, which is coupled to an integrated hydrogen selective membrane.

  10. Subliminal trauma reminders impact neural processing of cognitive control in adults with developmental earthquake trauma: a preliminary report.

    PubMed

    Du, Xue; Li, Yu; Ran, Qian; Kim, Pilyoung; Ganzel, Barbara L; Liang, GuangSheng; Hao, Lei; Zhang, Qinglin; Meng, Huaqing; Qiu, Jiang

    2016-03-01

    Little is known about the effects of developmental trauma on the neural basis of cognitive control among adults who do not have posttraumatic stress disorder. To examine this question, we used functional magnetic resonance imaging to compare the effect of subliminal priming with earthquake-related images on attentional control during a Stroop task in survivors of the 2008 Wenchuan earthquake in China (survivor group, survivors were adolescents at the time of the earthquake) and in matched controls (control group). We found that the survivor group showed greater activation in the left ventral anterior cingulate cortex (vACC) and the bilateral parahippocampal gyrus during the congruent versus incongruent condition, as compared to the control group. Depressive symptoms were positively correlated with left vACC activation during the congruent condition. Moreover, psychophysiological interaction results showed that the survivor group had stronger functional connectivity between the left parahippocampal gyrus and the left vACC than the control group under the congruent-incongruent condition. These results suggested that trauma-related information was linked to abnormal activity in brain networks associated with cognitive control (e.g., vACC-parahippocampal gyrus). This may be a potential biomarker for depression following developmental trauma, and it may also provide a mechanism linking trauma reminders with depression.

  11. Structural control on the Tohoku earthquake rupture process investigated by 3D FEM, tsunami and geodetic data

    PubMed Central

    Romano, F.; Trasatti, E.; Lorito, S.; Piromallo, C.; Piatanesi, A.; Ito, Y.; Zhao, D.; Hirata, K.; Lanucara, P.; Cocco, M.

    2014-01-01

    The 2011 Tohoku earthquake (Mw = 9.1) highlighted previously unobserved features for megathrust events, such as the large slip in a relatively limited area and the shallow rupture propagation. We use a Finite Element Model (FEM), taking into account the 3D geometrical and structural complexities up to the trench zone, and perform a joint inversion of tsunami and geodetic data to retrieve the earthquake slip distribution. We obtain a close spatial correlation between the main deep slip patch and the local seismic velocity anomalies, and large shallow slip also extending to the north, consistent with seismically observed low-frequency radiation. These observations suggest that friction controlled the rupture, initially confining the deeper rupture and then driving its propagation up to the trench, where it spread laterally. These findings are relevant to earthquake and tsunami hazard assessment because they may help to detect regions likely prone to rupture along the megathrust, and to constrain the probability of high slip near the trench. Our estimated slip of ~40 m around the JFAST (Japan Trench Fast Drilling Project) drilling zone helps constrain the dynamic shear stress and friction coefficient of the fault obtained from temperature measurements to ~0.68 MPa and ~0.10, respectively. PMID:25005351
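
    The joint-inversion step mentioned above can be sketched, under strong simplifying assumptions, as a Tikhonov-regularized least-squares problem. The Green's functions, fault discretization, noise level and damping weight below are synthetic stand-ins, not the authors' actual FEM setup:

    ```python
    import numpy as np

    # Toy slip inversion: recover fault slip s from synthetic observations
    # d = G s + noise, with second-difference smoothing regularization.
    rng = np.random.default_rng(1)
    n_patches, n_obs = 20, 60
    G = rng.random((n_obs, n_patches))                                # stand-in Green's functions
    s_true = np.exp(-0.5 * ((np.arange(n_patches) - 12) / 3.0) ** 2)  # smooth slip patch
    d = G @ s_true + 0.01 * rng.standard_normal(n_obs)

    L = np.diff(np.eye(n_patches), n=2, axis=0)   # smoothing operator
    lam = 0.1                                     # damping weight (assumed)

    # Solve min ||G s - d||^2 + lam^2 ||L s||^2 as an augmented least-squares system
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    s_est, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(np.max(np.abs(s_est - s_true)))   # small misfit for this well-posed toy problem
    ```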

  12. Structural control on the Tohoku earthquake rupture process investigated by 3D FEM, tsunami and geodetic data.

    PubMed

    Romano, F; Trasatti, E; Lorito, S; Piromallo, C; Piatanesi, A; Ito, Y; Zhao, D; Hirata, K; Lanucara, P; Cocco, M

    2014-07-09

    The 2011 Tohoku earthquake (Mw = 9.1) highlighted previously unobserved features for megathrust events, such as the large slip in a relatively limited area and the shallow rupture propagation. We use a Finite Element Model (FEM), taking into account the 3D geometrical and structural complexities up to the trench zone, and perform a joint inversion of tsunami and geodetic data to retrieve the earthquake slip distribution. We obtain a close spatial correlation between the main deep slip patch and the local seismic velocity anomalies, and large shallow slip also extending to the north, consistent with seismically observed low-frequency radiation. These observations suggest that friction controlled the rupture, initially confining the deeper rupture and then driving its propagation up to the trench, where it spread laterally. These findings are relevant to earthquake and tsunami hazard assessment because they may help to detect regions likely prone to rupture along the megathrust, and to constrain the probability of high slip near the trench. Our estimated slip of ~40 m around the JFAST (Japan Trench Fast Drilling Project) drilling zone helps constrain the dynamic shear stress and friction coefficient of the fault obtained from temperature measurements to ~0.68 MPa and ~0.10, respectively.

  13. Source Process of the Mw 5.0 Au Sable Forks, New York, Earthquake Sequence from Local Aftershock Monitoring Network Data

    NASA Astrophysics Data System (ADS)

    Kim, W.; Seeber, L.; Armbruster, J. G.

    2002-12-01

    On April 20, 2002, a Mw 5 earthquake occurred near the town of Au Sable Forks, northeastern Adirondacks, New York. The quake caused moderate damage (MMI VII) around the epicentral area and was well recorded by over 50 broadband stations at distances of 70 to 2000 km in eastern North America. Regional broadband waveform data are used to determine the source mechanism and focal depth using a moment tensor inversion technique. The source mechanism indicates predominantly thrust faulting along a 45°-dipping fault plane striking due south. The mainshock was followed by at least three strong aftershocks with local magnitude (ML) greater than 3, and about 70 aftershocks were detected and located in the first three months by a 12-station portable seismographic network. The aftershock distribution clearly delineates the mainshock rupture on the westerly dipping fault plane at a depth of 11 to 12 km. Preliminary analysis of the aftershock waveform data indicates that the orientation of the P-axis rotated 90° from that of the mainshock, suggesting a complex source process for the earthquake sequence. We achieved an important milestone in monitoring earthquakes and evaluating their hazards through rapid cross-border (Canada-US) and cross-regional (Central US-Northeastern US) collaborative efforts: staff at Instrument Software Technology, Inc. near the epicentral area joined Lamont-Doherty staff and deployed the first portable station in the epicentral area; CERI dispatched two of their technical staff to the epicentral area with four accelerometers and a broadband seismograph; the IRIS/PASSCAL facility shipped three digital seismographs and ancillary equipment within one day of the request; and the POLARIS Consortium, Canada, sent a field crew of three with a near real-time, satellite-telemetry-based earthquake monitoring system. The Polaris station, KSVO, powered by a solar panel and batteries, was already transmitting data to the central hub in London, Ontario, Canada within

  14. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 deg. This type of hologram requires high pixel resolution, so a Computer-Generated Cylindrical Hologram (CGCH) involves a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution; therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU). It took 4,406 hours to calculate this high-resolution CGCH on a Xeon 3.4 GHz CPU. Since a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor. In addition, a GPU gives maximum performance on 2-dimensional and streaming data. Recently, GPUs can also be utilized for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language, and the subsequent GeForce 8 series supports CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is announced as 500 GFLOPS. From the experimental results, we achieved a calculation 47 times faster than our previous CPU-based work; therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
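
    A quick back-of-envelope check of the timings reported in this abstract (the numbers are taken directly from it):

    ```python
    # Reported CPU time and GPU speedup from the abstract:
    cpu_hours = 4406        # Xeon 3.4 GHz calculation time for the 912,000 x 108,000 CGCH
    speedup = 47            # reported GPU speedup factor
    gpu_hours = cpu_hours / speedup
    print(round(gpu_hours))  # ~94, consistent with the reported ~95 hours of GPU time
    ```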

  15. Rapid Process to Generate Beam Envelopes for Optical System Analysis

    NASA Technical Reports Server (NTRS)

    Howard, Joseph; Seals, Lenward

    2012-01-01

    The task of evaluating obstructions in the optical throughput of an optical system requires the use of two disciplines, and hence, two models: optical models for the details of optical propagation, and mechanical models for determining the actual structure that exists in the optical system. Previous analysis methods for creating beam envelopes (or cones of light) for use in this obstruction analysis were found to be cumbersome to calculate and take significant time and resources to complete. A new process was developed that takes less time to complete beam envelope analysis, is more accurate and less dependent upon manual node tracking to create the beam envelopes, and eases the burden on the mechanical CAD (computer-aided design) designers to form the beam solids. This algorithm allows rapid generation of beam envelopes for optical system obstruction analysis. Ray trace information is taken from optical design software and used to generate CAD objects that represent the boundary of the beam envelopes for detailed analysis in mechanical CAD software. Matlab is used to call ray trace data from the optical model for all fields and entrance pupil points of interest. These are chosen to be the edge of each space, so that these rays produce the bounding volume for the beam. The x and y global coordinate data are collected on the surface planes of interest, typically images of the field and entrance pupil internal to the optical system. These x and y coordinate data are then evaluated using a convex hull algorithm, which removes any interior points that are unnecessary for producing the bounding volume of interest. At this point, tolerances can be applied to expand the size of either the field or aperture, depending on the allocations. Once this minimum set of coordinates on the pupil and field is obtained, a new set of rays is generated between the field plane and aperture plane (or vice-versa). 
These rays are then evaluated at planes between the aperture and field, at a
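
    The convex-hull step described in this record can be illustrated with a minimal sketch (Andrew's monotone chain algorithm; the record's actual tool uses Matlab, and the point set here is hypothetical ray-intersection data):

    ```python
    # Minimal convex-hull sketch: interior points are dropped, leaving only the
    # boundary vertices that define the bounding volume cross-section.
    def convex_hull(points):
        """Return hull vertices in counter-clockwise order (Andrew's monotone chain)."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    # A square of "ray" points with one interior point: the interior point is removed.
    rays_xy = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
    hull = convex_hull(rays_xy)
    print(hull)   # the four corner points only
    ```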

  16. The physics of an earthquake

    NASA Astrophysics Data System (ADS)

    McCloskey, John

    2008-03-01

    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.

  17. Triangle Geometry Processing for Surface Modeling and Cartesian Grid Generation

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J. (Inventor); Melton, John E. (Inventor); Berger, Marsha J. (Inventor)

    2002-01-01

    Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.

  18. Triangle geometry processing for surface modeling and cartesian grid generation

    DOEpatents

    Aftosmis, Michael J [San Mateo, CA; Melton, John E [Hollister, CA; Berger, Marsha J [New York, NY

    2002-09-03

    Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.

  19. How citizen seismology is transforming rapid public earthquake information: the example of LastQuake smartphone application and Twitter QuakeBot

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Etivant, C.; Roussel, F.; Mazet-Roux, G.; Steed, R.

    2014-12-01

    Smartphone applications have swiftly become one of the most popular tools for rapid reception of earthquake information by the public. Wherever they are located, users can be automatically informed when an earthquake has struck simply by setting a magnitude threshold and an area of interest; no need to browse the internet, the information reaches them automatically and instantaneously. One question remains: are the provided earthquake notifications always relevant for the public? A while after damaging earthquakes, many eyewitnesses uninstall the application they installed just after the mainshock. Why? Because either the magnitude threshold is set too high and many felt earthquakes are missed, or it is set too low and the majority of the notifications relate to unfelt earthquakes, only increasing anxiety among the population with each new update. Felt and damaging earthquakes are the ones of societal importance, even when of small magnitude. The LastQuake app and Twitter feed (QuakeBot) focus on these earthquakes that matter to the public by collating different information threads covering tsunamigenic, damaging and felt earthquakes. Non-seismic detections and macroseismic questionnaires collected online are combined to identify felt earthquakes regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the USGS, where the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the internet to investigate the cause of the shaking that they have just felt. We will present the identification process for felt earthquakes, the smartphone application and the 27 automatically generated tweets, and how, by providing better public services, we collect more data from citizens.
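
    The flashsourcing idea, detecting a felt earthquake from a sudden surge in website traffic, can be sketched as a simple baseline-ratio detector. This is a hedged illustration only: the window length, threshold factor and traffic numbers are invented, and the EMSC's actual implementation is certainly more sophisticated:

    ```python
    from collections import deque

    # Flag a minute as a candidate "felt event" when its hit count exceeds a
    # multiple of the trailing baseline average. All parameters are assumptions.
    def detect_surge(hits_per_minute, baseline_window=30, factor=5.0):
        """Return indices of minutes whose hit count exceeds factor x baseline."""
        baseline = deque(maxlen=baseline_window)
        surges = []
        for i, hits in enumerate(hits_per_minute):
            avg = sum(baseline) / len(baseline) if baseline else None
            if avg is not None and avg > 0 and hits > factor * avg:
                surges.append(i)
            baseline.append(hits)   # surge minutes also enter the baseline
        return surges

    traffic = [10, 12, 9, 11, 10, 95, 80, 14, 10]   # sudden surge starting at minute 5
    print(detect_surge(traffic))
    ```

    Note that because surge minutes are folded into the baseline, only the onset minute is flagged here; a production detector would likely exclude flagged minutes from the baseline.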

  20. Laboratory investigations of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Xia, Kaiwen

    In this thesis, earthquake dynamics are investigated through controlled laboratory experiments designed to mimic natural earthquake scenarios. The earthquake dynamic rupturing process itself is a complicated phenomenon, involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, polycarbonate) held together by uniaxial compression. As a unique element of the experimental design, a controlled exploding wire technique provides the triggering mechanism of laboratory earthquakes. Three important components of real earthquakes (i.e., pre-existing fault, tectonic loading, and triggering mechanism) correspond to and are simulated by frictional contact, uniaxial compression, and the exploding wire technique. Dynamic rupturing processes are visualized using the photoelastic method and are recorded via a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity compared to other existing experimental methods. Using this experimental approach, we have investigated several problems: dynamics of earthquake faulting occurring along homogeneous faults separating identical materials, earthquake faulting along inhomogeneous faults separating materials with different wave speeds, and earthquake faulting along faults with a finite low wave speed fault core. We have observed supershear ruptures, subRayleigh to supershear rupture transition, crack-like to pulse-like rupture transition, self-healing (Heaton) pulse, and rupture directionality.

  1. Plasma generation and processing of interstellar carbonaceous dust analogs

    NASA Astrophysics Data System (ADS)

    Peláez, R. J.; Maté, B.; Tanarro, I.; Molpeceres, G.; Jiménez-Redondo, M.; Timón, V.; Escribano, R.; Herrero, V. J.

    2018-03-01

    Interstellar (IS) dust analogs, based on amorphous hydrogenated carbon (a-C:H) were generated by plasma deposition in radio frequency discharges of CH4 + He mixtures. The a-C:H samples were characterized by means of secondary electron microscopy, infrared (IR) spectroscopy and UV-visible reflectivity. DFT calculations of structure and IR spectra were also carried out. From the experimental data, atomic compositions were estimated. Both IR and reflectivity measurements led to similar high proportions (≈50%) of H atoms, but there was a significant discrepancy in the sp2/sp3 hybridization ratios of C atoms (sp2/sp3 = 1.5 from IR and 0.25 from reflectivity). Energetic processing of the samples with 5 keV electrons led to a decay of IR aliphatic bands and to a growth of aromatic bands, which is consistent with a dehydrogenation and graphitization of the samples. The decay of the CH aliphatic stretching band at 3.4 μm upon electron irradiation is relatively slow. Estimates based on the absorbed energy and on models of cosmic ray (CR) flux indicate that CR bombardment is not enough to justify the observed disappearance of this band in dense IS clouds.

  2. Dystrophic Cardiomyopathy: Complex Pathobiological Processes to Generate Clinical Phenotype

    PubMed Central

    Tsuda, Takeshi; Fitzgerald, Kristi K.

    2017-01-01

    Duchenne muscular dystrophy (DMD), Becker muscular dystrophy (BMD), and X-linked dilated cardiomyopathy (XL-DCM) constitute a unique clinical entity, the dystrophinopathies, which are due to variable mutations in the dystrophin gene. Dilated cardiomyopathy (DCM) is a common complication of dystrophinopathies, but the onset, progression, and severity of heart disease differ among these subgroups. Extensive molecular genetic studies have been conducted to assess genotype-phenotype correlation in DMD, BMD, and XL-DCM to understand the underlying mechanisms of these diseases, but the results are not always conclusive, suggesting the involvement of complex, multi-layered pathological processes that generate the final clinical phenotype. Dystrophin protein is part of the dystrophin-glycoprotein complex (DGC) that is localized in skeletal muscles, myocardium, smooth muscles, and neuronal tissues. The diversity of cardiac phenotype in dystrophinopathies suggests multiple layers of pathogenetic mechanisms in forming dystrophic cardiomyopathy. In this review article, we review the complex molecular interactions involved in the pathogenesis of dystrophic cardiomyopathy, including primary gene mutations and loss of structural integrity, secondary cellular responses, and certain epigenetic and other factors that modulate gene expression. Involvement of epigenetic gene regulation appears to lead to specific cardiac phenotypes in dystrophic hearts. PMID:29367543

  3. Numerical experiment on tsunami deposit distribution process by using tsunami sediment transport model in historical tsunami event of megathrust Nankai trough earthquake

    NASA Astrophysics Data System (ADS)

    Imai, K.; Sugawara, D.; Takahashi, T.

    2017-12-01

    A large flow caused by a tsunami transports sediments from the beach and forms tsunami deposits on land and in coastal lakes. Tsunami deposits are found in an especially undisturbed state in coastal lakes. Okamura & Matsuoka (2012) found tsunami deposits in a field survey of coastal lakes facing the Nankai Trough, and identified deposits from the past eight Nankai Trough megathrust earthquakes. The environment in coastal lakes is stably calm and suitable for tsunami deposit preservation compared to other topographical settings such as plains. Therefore, the recurrence interval of megathrust earthquakes and tsunamis can potentially be discussed with high resolution. In addition, it has been pointed out that small events that cannot be detected in plains could be separated finely (Sawai, 2012). Various aspects of past tsunamis are expected to be elucidated by considering the topographical conditions of coastal lakes and using the relationship between the erosion-and-sedimentation process of the lake bottom and the external forcing of the tsunami. In this research, a numerical examination based on a tsunami sediment transport model (Takahashi et al., 1999) was carried out at the Ryujin-ike pond site in Ohita, Japan, where tsunami deposits have been identified, and deposit migration analysis was conducted for the tsunami deposit distribution process of historical Nankai Trough earthquakes. Furthermore, tsunami source conditions can possibly be investigated through comparison of the observed data with computed tsunami deposit distributions. It is difficult to clarify details of a tsunami source from indistinct paleogeographical information; however, this result shows that combining tsunami deposit distributions in lakes with computational data can serve as a constraint on the tsunami source scale.

  4. Multiple runoff processes and multiple thresholds control agricultural runoff generation

    NASA Astrophysics Data System (ADS)

    Saffarpour, Shabnam; Western, Andrew W.; Adams, Russell; McDonnell, Jeffrey J.

    2016-11-01

    Thresholds and hydrologic connectivity associated with runoff processes are a critical concept for understanding catchment hydrologic response at the event timescale. To date, most attention has focused on single runoff response types, and the role of multiple thresholds and flow path connectivities has not been made explicit. Here we first summarise existing knowledge on the interplay between thresholds, connectivity and runoff processes at the hillslope-small catchment scale into a single figure and use it in examining how runoff response and the catchment threshold response to rainfall affect a suite of runoff generation mechanisms in a small agricultural catchment. A 1.37 ha catchment in the Lang Lang River catchment, Victoria, Australia, was instrumented and hourly data of rainfall, runoff, shallow groundwater level and isotope water samples were collected. The rainfall, runoff and antecedent soil moisture data together with water levels at several shallow piezometers are used to identify runoff processes in the study site. We use isotope and major ion results to further support the findings of the hydrometric data. We analyse 60 rainfall events that produced 38 runoff events over two runoff seasons. Our results show that the catchment hydrologic response was typically controlled by the Antecedent Soil Moisture Index and rainfall characteristics. There was a strong seasonal effect in the antecedent moisture conditions that led to marked seasonal-scale changes in runoff response. Analysis of shallow well data revealed that streamflows early in the runoff season were dominated primarily by saturation excess overland flow from the riparian area. As the runoff season progressed, the catchment soil water storage increased and the hillslopes connected to the riparian area. The hillslopes transferred a significant amount of water to the riparian zone during and following events. Then, during a particularly wet period, this connectivity to the riparian zone, and
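
    The threshold behaviour central to this record, runoff generated only once antecedent moisture plus event rainfall exceeds a catchment storage threshold, can be sketched with a toy model. All numbers (threshold, runoff coefficient) are hypothetical and not fitted to the Lang Lang catchment:

    ```python
    # Illustrative threshold runoff model: zero runoff below a storage threshold,
    # a linear fraction of the excess above it. Parameters are assumptions.
    def runoff_depth(rainfall_mm, asm_index_mm, storage_threshold_mm=60.0, runoff_coeff=0.4):
        """Event runoff depth (mm) as a function of rainfall and antecedent soil moisture."""
        excess = asm_index_mm + rainfall_mm - storage_threshold_mm
        return runoff_coeff * excess if excess > 0 else 0.0

    dry = runoff_depth(rainfall_mm=25, asm_index_mm=20)   # 45 mm total: below threshold
    wet = runoff_depth(rainfall_mm=25, asm_index_mm=50)   # 75 mm total: above threshold
    print(dry, wet)
    ```

    The same rainfall event thus produces no runoff under dry antecedent conditions but substantial runoff under wet ones, mirroring the seasonal switch in response described above.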

  5. Detection of dominant runoff generation processes in flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro; Gioia, Andrea; Manfreda, Salvatore

    2010-05-01

    The investigation of hydrologic similarity represents one of the most exciting challenges faced by hydrologists in recent years, in order to reduce uncertainty in flood prediction in ungauged basins (e.g., IAHS Decade on Predictions in Ungauged Basins (PUB) - Sivapalan et al., 2003). In perspective, the identification of dominant runoff generation mechanisms may provide a strategy for catchment classification and the identification of hydrologically homogeneous regions. In this context, we exploited the framework of theoretically derived flood probability distributions in order to interpret the physical behavior of real basins. Recent developments on theoretically derived distributions have highlighted that in a given basin different runoff processes may coexist and modify or affect the shape of flood distributions. The identification of dominant runoff generation mechanisms represents a key signature of flood distributions, providing insight into hydrologic similarity. Iacobellis and Fiorentino (2000) introduced a novel distribution of flood peak annual maxima, the "IF" distribution, which exploited the variable source area concept coupled with a runoff threshold having scaling properties. More recently, Gioia et al. (2008) introduced the Two Component-IF (TCIF) distribution, generalizing the IF distribution based on two different threshold mechanisms, associated respectively with ordinary and extraordinary events. Indeed, ordinary floods are mostly due to rainfall events exceeding a threshold infiltration rate in a small source area, while the so-called outlier events, often responsible for the high skewness of flood distributions, are triggered by severe rainfalls exceeding a threshold storage in a large portion of the basin. Within this scheme, we focused on the application of both models (IF and TCIF) over a considerable number of catchments belonging to different regions of Southern Italy. 
In particular, we stressed, as a case of strong general interest in

  6. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, Lian; Li, Haibing; Brodsky, Emily

    2013-04-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading; by measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10-7 strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response give an average storage coefficient S of 2×10-4 and an average transmissivity T of 6×10-7 m2/s. The hydraulic diffusivity D = T/S is then 3×10-3 m2/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, it means that hydrological processes should have an effect on the earthquake rupture process. This measurement is made through continuous monitoring, so we can track the evolution of hydraulic properties
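
    A quick check of the diffusivity relation used in this record, D = T/S, with the values quoted in the abstract:

    ```python
    # Hydraulic diffusivity from the quoted transmissivity and storage coefficient:
    T = 6e-7   # transmissivity, m^2/s
    S = 2e-4   # storage coefficient, dimensionless
    D = T / S  # hydraulic diffusivity, m^2/s
    print(D)   # 3e-3 m^2/s, matching the reported value
    ```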

  7. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, L.; Li, H.; Brodsky, E. E.; Wang, H.; Pei, J.

    2012-12-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading; by measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10-7 strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response give an average storage coefficient S of 2×10-4 and an average transmissivity T of 6×10-7 m2/s. The hydraulic diffusivity D = T/S is then 3×10-3 m2/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, it means that hydrological processes should have an effect on the earthquake rupture process. This measurement is made through continuous monitoring, so we can track the evolution of hydraulic properties

  8. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation, and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity, and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements or changes to the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The metadata of these records is also stored in the GRID federated database. This metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding
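    Each GRID job described above amounts to integrating a structural ODE against one accelerogram and extracting the maximum displacement. A minimal Python sketch of that step for a single-degree-of-freedom oscillator (the original encapsulates this in Java classes; the oscillator parameters and the synthetic record here are illustrative, not from the paper):

```python
import numpy as np

def sdof_response(accel, dt, freq_hz=2.0, damping=0.05):
    """Relative displacement of a single-degree-of-freedom oscillator under a
    ground-acceleration record, via semi-implicit Euler integration of
    u'' + 2*zeta*wn*u' + wn^2*u = -a_g(t)."""
    wn = 2.0 * np.pi * freq_hz          # natural circular frequency (rad/s)
    u = np.zeros(len(accel))            # relative displacement
    v = 0.0                             # relative velocity
    for i in range(len(accel) - 1):
        a = -accel[i] - 2.0 * damping * wn * v - wn**2 * u[i]
        v += a * dt
        u[i + 1] = u[i] + v * dt
    return u

# Synthetic "accelerogram": a decaying 1.5 Hz sine burst
dt = 0.01
t = np.arange(0.0, 5.0, dt)
ag = np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-t)
u = sdof_response(ag, dt)
peak = np.abs(u).max()   # maximum structure displacement, as in the post-processing step
```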

  9. Fractal dynamics of earthquakes

    SciT

    Bak, P.; Chen, K.

    1995-05-01

    Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D ≈ 1-1.3. The number of aftershocks decays as a function of time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a chicken-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power law, the fractal distribution. The authors present results of a simple stick-slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that empirical power laws for earthquakes indicate that the Earth's crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however, in analogy with equilibrium phenomena, the authors do not expect criticality to depend on details of the model (universality).
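    The Gutenberg-Richter power law mentioned above is routinely summarized by its b-value. A minimal sketch of the standard maximum-likelihood estimate (Aki's formula; the synthetic catalog and completeness magnitude are illustrative):

```python
import numpy as np

def b_value(magnitudes, m_min):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965):
    b = log10(e) / (mean(M) - Mc), for magnitudes M >= completeness Mc."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic catalog drawn from a Gutenberg-Richter law with true b = 1:
# magnitudes above Mc are exponential with scale log10(e)/b
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=50_000)
print(round(b_value(mags, 2.0), 2))   # b-value estimate, close to 1.0
```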

  10. Road Damage Following Earthquake

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Ground shaking triggered liquefaction in a subsurface layer of water-saturated sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt, which moved from right to left towards the Pajaro River. This mode of ground failure, termed lateral spreading, is a principal cause of liquefaction-related earthquake damage caused by the Oct. 17, 1989, Loma Prieta earthquake. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: S.D. Ellen, U.S. Geological Survey

  11. Tectonic control on sediment accretion and subduction off south central Chile: Implications for coseismic rupture processes of the 1960 and 2010 megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Contreras-Reyes, Eduardo; Flueh, Ernst R.; Grevemeyer, Ingo

    2010-12-01

    Based on a compilation of published and new seismic refraction and multichannel seismic reflection data along the south central Chile margin (33°-46°S), we study the processes of sediment accretion and subduction and their implications for megathrust seismicity. In terms of frontal accretionary prism (FAP) size, the marine south central Chile fore arc can be divided into two main segments: (1) the Maule segment (south of the Juan Fernández Ridge and north of the Mocha block), characterized by a relatively large FAP (20-40 km wide), and (2) the Chiloé segment (south of the Mocha block and north of the Nazca-Antarctic-South America plate junction), characterized by a small FAP (≤10 km wide). In addition, the Maule and Chiloé segments correlate with a thin (<1 km thick) and thick (~1.5 km thick) subduction channel, respectively. The Mocha block lies between ~37.5° and 40°S and is bounded by the Chile trench and the Mocha and Valdivia fracture zones. This region separates young (0-25 Ma) oceanic lithosphere in the south from old (30-35 Ma) oceanic lithosphere in the north, and it represents a fundamental tectonic boundary separating two different styles of sediment accretion and subduction. A process responsible for this segmentation could be related to differences in initial angles of subduction, which in turn depend on the amplitude of the down-deflected oceanic lithosphere under trench sediment loading. On the other hand, the small FAP along the Chiloé segment is coincident with the rupture area of the trans-Pacific tsunamigenic 1960 earthquake (Mw = 9.5), while the relatively large FAP along the Maule segment is coincident with the rupture area of the 2010 earthquake (Mw = 8.8). Differences in earthquake and tsunami magnitudes between these events can be explained in terms of the FAP size along the Chiloé and Maule segments, which controls the location of the updip limit of the seismogenic zone. The rupture area of the 1960 event also correlates with a

  12. Source rupture process of the 12 January 2010 Port-au-Prince (Haiti, Mw7.0) earthquake

    NASA Astrophysics Data System (ADS)

    Borges, José; Caldeira, Bento; Bezzeghoud, Mourad; Santos, Rúben

    2010-05-01

    The Haiti earthquake occurred on Tuesday, January 12, 2010, at 21:53:10 UTC. Its epicenter was at 18.46 degrees North, 72.53 degrees West, about 25 km WSW of Haiti's capital, Port-au-Prince. The earthquake was relatively shallow (H = 13 km, U.S. Geological Survey) and thus had greater intensity and destructiveness. The earthquake occurred along the tectonic boundary between the Caribbean and North America plates. This plate boundary is dominated by left-lateral strike-slip motion and compression, with the Caribbean plate moving eastward at 2 cm/year with respect to the North America plate. The moment magnitude was measured to be 7.0 (U.S. Geological Survey) and 7.1 (Harvard Centroid Moment Tensor, CMT). More than 10 aftershocks ranging from 5.0 to 5.9 in magnitude (none of magnitude larger than 6.0) struck the area in the hours following the main shock. Most of these aftershocks occurred to the west of the mainshock in the Mirogoane Lakes region, and their distribution suggests that the length of the rupture was around 70 km. The Harvard CMT mechanism solution indicates left-lateral strike-slip movement on a fault plane oriented with strike = 251°, dip = 70°, and rake = 28°. In order to obtain the spatiotemporal slip distribution of a finite rupture model, we have used teleseismic body waves and Kikuchi and Kanamori's method [1]. Rupture velocity was constrained by using the directivity effect determined from a set of waveforms well recorded at regional and teleseismic distances [2]. Finally, we compared a map of aftershocks with the Coulomb stress changes caused by the event in the region [3]. [1] Kikuchi, M., and Kanamori, H., 1982, Inversion of complex body waves: Bull. Seismol. Soc. Am., v. 72, p. 491-506. [2] Caldeira B., Bezzeghoud M., Borges J.F., 2009, DIRDOP: a directivity approach to determining the seismic rupture velocity vector. J. Seismology, DOI 10.1007/s10950-009-9183-x (http://www.springerlink.com/content/xp524g2225628773/) [3] King, G. C. P.

  13. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  14. Physics of Earthquake Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  15. Modelling Analysis of Students' Processes of Generating Scientific Explanatory Hypotheses

    ERIC Educational Resources Information Center

    Park, Jongwon

    2006-01-01

    It has recently been determined that generating an explanatory hypothesis to explain a discrepant event is important for students' conceptual change. The purpose of this study is to investigate how students generate new explanatory hypotheses. To achieve this goal, questions are used to identify students' prior ideas related to electromagnetic…

  16. Catalog of earthquakes along the San Andreas fault system in Central California, July-September 1972

    Wesson, R.L.; Meagher, K.L.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the coast ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period July-September 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970, and 1971 have been prepared by Lee and others (1972b, c, d). Catalogs for the first and second quarters of 1972 have been prepared by Wesson and others (1972a, b). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1254 earthquakes in Central California. Arrival times at 129 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 104 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB stations from more distant events. The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it by describing the seismicity of a portion of central California in much greater detail.

  17. Explosion Generated Seismic Waves and P/S Methods of Discrimination from Earthquakes with Insights from the Nevada Source Physics Experiments

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Ford, S. R.; Pitarka, A.; Pyle, M. L.; Pasyanos, M.; Mellors, R. J.; Dodge, D. A.

    2017-12-01

    The relative amplitudes of seismic P-waves to S-waves are effective at identifying underground explosions among a background of natural earthquakes. These P/S methods appear to work best at frequencies above 2 Hz and at regional distances (>200 km). We illustrate this with a variety of historic nuclear explosion data as well as with the recent DPRK nuclear tests. However, the physical basis for the generation of explosion S-waves, and therefore the predictability of this P/S technique as a function of path, frequency, and event properties such as size, depth, and geology, remains incompletely understood. A goal of current research, such as the Source Physics Experiments (SPE), is to improve our physical understanding of the mechanisms of explosion S-wave generation and advance our ability to numerically model and predict them. The SPE conducted six chemical explosions between 2011 and 2016 in the same borehole in granite in southern Nevada. The explosions were at a variety of depths and sizes, ranging from 0.1 to 5 tons TNT-equivalent yield. The largest were observed at near-regional distances, with P/S ratios comparable to much larger historic nuclear tests. If we control for material property effects, the explosions have very similar P/S ratios independent of yield or magnitude. These results are consistent with explosion S-waves coming mainly from conversion of P- and surface waves, and are inconsistent with source-size based models. A dense sensor deployment for the largest SPE explosion allowed this conversion to be mapped in detail. This is good news for P/S explosion identification, which can work well for very small explosions and may be ultimately limited by S-wave detection thresholds. The SPE also showed explosion P-wave source models need to be updated for small and/or deeply buried cases. We are developing new P- and S-wave explosion models that better match all the empirical data.
Historic nuclear explosion seismic data shows that the media in which
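    At its simplest, the P/S discriminant described above reduces to thresholding a high-frequency amplitude ratio. A minimal sketch (the threshold and amplitudes are illustrative only; operational discriminants are calibrated per station, frequency band, and path):

```python
import math

def p_s_discriminant(p_amp, s_amp, threshold=0.0):
    """Classify an event from its high-frequency P/S amplitude ratio.
    Explosions tend to have larger log10(P/S) than earthquakes; the
    zero threshold here is illustrative, not calibrated."""
    ratio = math.log10(p_amp / s_amp)
    return "explosion-like" if ratio > threshold else "earthquake-like"

print(p_s_discriminant(p_amp=2.0, s_amp=0.5))   # explosion-like
print(p_s_discriminant(p_amp=0.5, s_amp=2.0))   # earthquake-like
```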

  18. Field Experiments Aimed To The Analysis of Flood Generation Processes

    NASA Astrophysics Data System (ADS)

    Carriero, D.; Iacobellis, V.; Oliveto, G.; Romano, N.; Telesca, V.; Fiorentino, M.

    The study of soil moisture dynamics and of the climate-soil-vegetation interaction is essential for the comprehension of possible climatic change phenomena, as well as for the analysis of the occurrence of extreme hydrological events. In this context, the theoretically based distribution of floods recently derived by Fiorentino and Iacobellis ["New insights about the climatic and geologic control on the probability distribution of floods", Water Resources Research, 2001, 37: 721-730] demonstrated, by an application to some Southern Italy basins, that processes at the hillslope scale strongly influence the basin response by means of the different mechanisms of runoff generation produced by various distributions of partial contributing area. This area is considered as a stochastic variable whose pdf position parameter showed strong dependence on the climate, as can be seen in the behavior of the studied basins: in dry zones, where the infiltration excess (Horton) mechanism prevails, the basin water loss parameter decreases as basin area increases and the flood peak source area depends on the permeability of soils; in humid zones, where the saturation excess (Dunne) process prevails, the loss parameter seems independent of the basin area and very sensitive to a simple climatic index, while only a small portion of the area invested by the storm contributes to floods. The purpose of this work is to investigate the consistency of those interpretations by means of field experiments at the hillslope scale, to establish a parameterization accounting for soil physical and hydraulic properties, vegetation characteristics, and land use. The research site is the catchment of River Fiumarella di Corleto, which is located in Basilicata Region, Italy, and has a drainage area of approximately 32 km². The environment has a rather dynamic geomorphology and very interesting features from the soil-landscape modeling viewpoint [Santini A., A. Coppola, N. Romano, and

  19. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the 2010 M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake.
GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  20. Distribution and Characteristics of Repeating Earthquakes in Northern California

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.; Zechar, J. D.; Shaw, B. E.

    2012-12-01

    Repeating earthquakes are playing an increasingly important role in the study of fault processes and behavior, and have the potential to improve hazard assessment, earthquake forecasting, and seismic monitoring capabilities. These events rupture the same fault patch repeatedly, generating virtually identical seismograms. In California, repeating earthquakes have been found predominantly along the creeping section of the central San Andreas Fault, where they are believed to represent failing asperities on an otherwise creeping fault. Here, we use the northern California double-difference catalog of 450,000 precisely located events (1984-2009) and an associated database of 2 billion waveform cross-correlation measurements to systematically search for repeating earthquakes across various tectonic regions. An initial search for pairs of earthquakes with high correlation coefficients and similar magnitudes resulted in 4,610 clusters including a total of over 26,000 earthquakes. A subsequent double-difference re-analysis of these clusters resulted in 1,879 sequences (8,640 events) where a common rupture area can be resolved to a precision of a few tens of meters or less. These repeating earthquake sequences (RES) include between 3 and 24 events each, with magnitudes up to ML=4. We compute precise relative magnitudes between events in each sequence from differential amplitude measurements. Differences between these and standard coda-duration magnitudes have a standard deviation of 0.09. The RES occur throughout northern California, but RES with 10 or more events (6%) only occur along the central San Andreas and Calaveras faults. We are establishing baseline characteristics for each sequence, such as recurrence intervals and their coefficient of variation (CV), in order to compare them across tectonic regions. CVs for these clusters range from 0.002 to 2.6, indicating a range of behavior from periodic occurrence (CV~0) through random occurrence to temporal clustering.
10% of the RES
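    The coefficient of variation used above to separate periodic, random, and clustered recurrence is simply the ratio of the standard deviation to the mean of the inter-event times. A minimal sketch:

```python
import numpy as np

def coefficient_of_variation(event_times):
    """CV of recurrence intervals: std / mean of inter-event times.
    CV ~ 0 -> periodic; CV ~ 1 -> Poisson-like; CV > 1 -> clustered."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return intervals.std() / intervals.mean()

periodic = [0, 10, 20, 30, 40]           # perfectly periodic sequence
clustered = [0, 1, 2, 100, 101, 102]     # two tight bursts far apart
print(coefficient_of_variation(periodic))    # 0.0
print(coefficient_of_variation(clustered))   # > 1, i.e. temporally clustered
```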

  1. The Effects of Generative Learning Strategy Prompts and Metacognitive Feedback on Learners' Self-Regulation, Generation Process, and Achievement

    ERIC Educational Resources Information Center

    Lee, Hyeon Woo

    2008-01-01

    Instructional designers need to understand the internal processes of learning, identify learners' cognitive difficulties with those processes, and create strategies to help learners overcome those difficulties. Generative learning theory, one conception of human learning about cognitive functioning and process, emphasizes that meaningful learning…

  2. 25 April 2015 Gorkha Earthquake in Nepal Himalaya (Part 2)

    NASA Astrophysics Data System (ADS)

    Rao, N. Purnachandra; Burgmann, Roland; Mugnier, Jean-Louis; Gahalaut, Vineet; Pandey, Anand

    2017-06-01

    The response from the geosciences community working on the Himalaya in general, and the 2015 Nepal earthquakes in particular, was overwhelming, and after a rigorous review process, thirteen papers were selected and published in Part 1. We are still left with a few good papers, which are being brought out as Part 2 of the special issue. In the opening article, Jean-Louis Mugnier and colleagues provide a structural geological perspective of the 25 April 2015 Gorkha earthquake and highlight the role of segmentation in generating the Himalayan mega-thrusts. They infer segmentation by stable barriers in the MHT that define barrier-type earthquake families. In yet another interesting piece of work, Pandey and colleagues map the crustal structure across the earthquake volume using a receiver-function approach and infer a 5-km-thick low-velocity layer that connects to the MHT ramp. They are also able to correlate the rupture termination with the highest point of coseismic uplift. The last paper, by Shen et al., highlights the usefulness of the InSAR technique in mapping the coseismic slip distribution, applied to the 25 April 2015 Gorkha earthquake. They infer a low stress drop and corner frequency which, coupled with hybrid modeling, explain the low level of slip heterogeneity and frequency content of the ground motion. We compliment the Journal of Asian Earth Sciences for bringing out the two volumes and hope that these efforts have made a distinct impact on furthering our understanding of seismogenesis in the Himalaya using the very latest data sets.

  3. Earthquakes induced by fluid injection and explosion

    Healy, J.H.; Hamilton, R.M.; Raleigh, C.B.

    1970-01-01

    Earthquakes generated by fluid injection near Denver, Colorado, are compared with earthquakes triggered by nuclear explosion at the Nevada Test Site. Spatial distributions of the earthquakes in both cases are compatible with the hypothesis that variation of fluid pressure in preexisting fractures controls the time distribution of the seismic events in an "aftershock" sequence. We suggest that the fluid pressure changes may also control the distribution in time and space of natural aftershock sequences and of earthquakes that have been reported near large reservoirs. © 1970.

  4. Graphics processing unit (GPU) real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  5. The Alaska earthquake, March 27, 1964: lessons and conclusions

    Eckel, Edwin B.

    1970-01-01

    One of the greatest earthquakes of all time struck south-central Alaska on March 27, 1964. Strong motion lasted longer than for most recorded earthquakes, and more land surface was dislocated, vertically and horizontally, than by any known previous temblor. Never before were so many effects on earth processes and on the works of man available for study by scientists and engineers over so great an area. The seismic vibrations, which directly or indirectly caused most of the damage, were but surface manifestations of a great geologic event: the dislocation of a huge segment of the crust along a deeply buried fault whose nature and even exact location are still subjects for speculation. Not only was the land surface tilted by the great tectonic event beneath it, with resultant seismic sea waves that traversed the entire Pacific, but an enormous mass of land and sea floor moved several tens of feet horizontally toward the Gulf of Alaska. Downslope mass movements of rock, earth, and snow were initiated. Subaqueous slides along lake shores and seacoasts, near-horizontal movements of mobilized soil ("landspreading"), and giant translatory slides in sensitive clay did the most damage and provided the most new knowledge as to the origin, mechanics, and possible means of control or avoidance of such movements. The slopes of most of the deltas that slid in 1964, and that produced destructive local waves, are still as steep or steeper than they were before the earthquake and hence would be unstable or metastable in the event of another great earthquake. Rockslide avalanches provided new evidence that such masses may travel on cushions of compressed air, but a widely held theory that glaciers surge after an earthquake has not been substantiated. Innumerable ground fissures, many of them marked by copious emissions of water, caused much damage in towns and along transportation routes. Vibration also consolidated loose granular materials. In some coastal areas, local

  6. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to identify, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10¹⁹ Pa s and ηM > 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
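    The two-sample Kolmogorov-Smirnov test used above compares distributions via the maximum distance between their empirical CDFs. A minimal sketch of the statistic itself (the rejection step at α = 0.05 additionally requires the critical value for the given sample sizes, omitted here):

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the two empirical CDFs, evaluated at all data points."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    points = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, points, side="right") / len(a)
    cdf_b = np.searchsorted(b, points, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

# Identical samples give D = 0; fully separated samples give D = 1
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_statistic([1, 2, 3], [10, 20, 30]))  # 1.0
```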

  7. In the shadow of 1857-the effect of the great Ft. Tejon earthquake on subsequent earthquakes in southern California

    Harris, R.A.; Simpson, R.W.

    1996-01-01

    The great 1857 Fort Tejon earthquake is the largest earthquake to have hit southern California during the historic period. We investigated whether seismicity patterns following 1857 could be due to static stress changes generated by the 1857 earthquake. When post-1857 earthquakes with unknown focal mechanisms were assigned strike-slip mechanisms with strike and rake determined by the nearest active fault, 13 of the 13 southern California M ≥ 5.5 earthquakes between 1857 and 1907 were encouraged by the 1857 rupture. When post-1857 earthquakes in the Transverse Ranges with unknown focal mechanisms were assigned reverse mechanisms and all other events were assumed strike-slip, 11 of the 13 earthquakes were encouraged by the 1857 earthquake. These results show significant correlations between static stress changes and seismicity patterns. The correlation disappears around 1907, suggesting that tectonic loading began to overwhelm the effect of the 1857 earthquake early in the 20th century.
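    The notion of an earthquake being "encouraged" by a prior rupture is conventionally quantified by the static Coulomb failure stress change resolved on the receiver fault. A minimal sketch (the sign convention and friction value are illustrative; the study's own resolution of the 1857 stress field onto assigned mechanisms is more involved):

```python
def coulomb_stress_change(shear_change, normal_change, friction=0.4):
    """Static Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with normal stress positive in
    unclamping. Positive dCFS brings the fault closer to failure,
    i.e. the earthquake is 'encouraged'. Units: MPa."""
    return shear_change + friction * normal_change

# A receiver fault loaded by +0.1 MPa of shear and 0.05 MPa of unclamping
dcfs = coulomb_stress_change(0.1, 0.05)
print(f"{dcfs:.2f} MPa -> {'encouraged' if dcfs > 0 else 'discouraged'}")
```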

  8. Recovery process of shear wave velocities of volcanic soil in central Mashiki Town after the 2016 Kumamoto earthquake revealed by intermittent measurements of microtremor

    NASA Astrophysics Data System (ADS)

    Hata, Yoshiya; Yoshimi, Masayuki; Goto, Hiroyuki; Hosoya, Takashi; Morikawa, Hitoshi; Kagawa, Takao

    2017-05-01

    An earthquake of JMA magnitude 6.5 (foreshock) hit Kumamoto Prefecture, Japan, at 21:26 JST on April 14, 2016. Subsequently, an earthquake of JMA magnitude 7.3 (main shock) hit Kumamoto and Oita Prefectures at 1:25 JST on April 16, 2016. The two epicenters were located adjacent to central Mashiki Town, and both events caused significantly strong motions. Heavy damage, including the collapse of residential houses, was concentrated in the "Sandwich Area" between Prefectural Route 28 and the Akitsu River. During the main shock, we successfully observed strong motions at site TMP03 in the Sandwich Area. A microtremor measurement was taken at TMP03 simultaneously with the installation of the seismograph there on April 15, 2016, between the foreshock and the main shock. After the main shock, intermittent microtremor measurements at TMP03 continued through December 6, 2016. As a result, the recovery process of the shear wave velocities of the volcanic soil at TMP03 before and after the main shock was revealed by the time history of the peak frequencies of the microtremor H/V spectra. Using results of PS logging tests conducted at a site near TMP03 on July 28, 2016, the applicability of the shear wave velocities to TMP03 was then confirmed based on the similarity between the theoretical and monitored H/V spectra.
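
    The link between the H/V peak frequency and the soil's shear wave velocity can be sketched with the quarter-wavelength relation f0 = Vs / (4H) for a soft layer of thickness H over stiffer material. The layer thickness and frequencies below are hypothetical, chosen only to illustrate how a recovering Vs shows up as a rising peak frequency:

```python
# Quarter-wavelength relation: f0 = Vs / (4 * H)  =>  Vs = 4 * H * f0.
def vs_from_peak_frequency(f0_hz, thickness_m):
    """Shear wave velocity (m/s) implied by an H/V peak frequency."""
    return 4.0 * thickness_m * f0_hz

H = 20.0  # m, assumed thickness of the soft volcanic soil layer
for f0 in (1.5, 1.8, 2.0):  # Hz, peak rising as the soil recovers
    print(f"f0 = {f0:.1f} Hz -> Vs ~ {vs_from_peak_frequency(f0, H):.0f} m/s")
```

    Tracking the H/V peak through time, as in the intermittent measurements above, therefore amounts to tracking the healing of Vs in the shallow layer.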

  9. Revisiting the 1872 Owens Valley, California, Earthquake

    Hough, S.E.; Hutton, K.

    2008-01-01

    The 26 March 1872 Owens Valley earthquake is among the largest historical earthquakes in California. The felt area and maximum fault displacements have long been regarded as comparable to, if not greater than, those of the great San Andreas fault earthquakes of 1857 and 1906, but mapped surface ruptures of the latter two events were 2-3 times longer than that inferred for the 1872 rupture. The preferred magnitude estimate of the Owens Valley earthquake has thus been 7.4, based largely on the geological evidence. Reinterpreting macroseismic accounts of the Owens Valley earthquake, we infer generally lower intensity values than those estimated in earlier studies. Nonetheless, as recognized in the early twentieth century, the effects of this earthquake were still generally more dramatic at regional distances than the macroseismic effects from the 1906 earthquake, with light damage to masonry buildings at (nearest-fault) distances as large as 400 km. Macroseismic observations thus suggest a magnitude greater than that of the 1906 San Francisco earthquake, which appears to be at odds with geological observations. However, while the mapped rupture length of the Owens Valley earthquake is relatively low, the average slip was high. The surface rupture was also complex and extended over multiple fault segments. It was first mapped in detail over a century after the earthquake occurred, and recent evidence suggests it might have been longer than earlier studies indicated. Our preferred magnitude estimate is Mw 7.8-7.9, values that we show are consistent with the geological observations. The results of our study suggest that either the Owens Valley earthquake was larger than the 1906 San Francisco earthquake or that, by virtue of source properties and/or propagation effects, it produced systematically higher ground motions at regional distances. 
The latter possibility implies that some large earthquakes in California will generate significantly larger ground motions than San
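
    The trade-off invoked above between rupture length and average slip follows directly from the definition of seismic moment, M0 = mu * A * D, and Mw = (2/3)(log10 M0 - 9.1) with M0 in N m. The rupture dimensions below are illustrative assumptions, not the paper's preferred values; they show how a rupture roughly 3 times shorter but with higher average slip can reach a comparable magnitude:

```python
import math

def moment_magnitude(length_km, width_km, slip_m, mu_pa=3.0e10):
    """Mw from rupture area, average slip, and rigidity (M0 in N m)."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = mu_pa * area_m2 * slip_m  # seismic moment
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Hypothetical short, high-slip rupture vs. a long, lower-slip rupture:
print(f"Mw ~ {moment_magnitude(140, 20, 7.0):.1f}")  # ~7.8
print(f"Mw ~ {moment_magnitude(450, 12, 4.0):.1f}")  # ~7.8
```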

  10. A Coupled Earthquake-Tsunami Simulation Framework Applied to the Sumatra 2004 Event

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Bader, Michael; Behrens, Jörn; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Wollherr, Stephanie; van Zelst, Iris

    2017-04-01

    Large earthquakes along subduction zone interfaces have generated destructive tsunamis near Chile in 1960, Sumatra in 2004, and northeast Japan in 2011. In order to better understand these extreme events, we have developed tools for physics-based, coupled earthquake-tsunami simulations. This simulation framework is applied to the 2004 Indian Ocean M 9.1-9.3 earthquake and tsunami, a devastating event that resulted in the loss of more than 230,000 lives. The earthquake rupture simulation is performed using an ADER discontinuous Galerkin discretization on an unstructured tetrahedral mesh with the software SeisSol. Advantages of this approach include accurate representation of complex fault and sea floor geometries and a parallelized and efficient workflow in high-performance computing environments. Accurate and efficient representation of the tsunami evolution and inundation at the coast is achieved with an adaptive mesh discretizing the shallow water equations with a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme. With the application of the framework to this historic event, we aim to better understand the involved mechanisms between the dynamic earthquake within the earth's crust, the resulting tsunami wave within the ocean, and the final coastal inundation process. Earthquake model results are constrained by GPS surface displacements and tsunami model results are compared with buoy and inundation data. This research is part of the ASCETE Project, "Advanced Simulation of Coupled Earthquake and Tsunami Events", funded by the Volkswagen Foundation.

  11. Earthquake Source Mechanics

    NASA Astrophysics Data System (ADS)

    The past 2 decades have seen substantial progress in our understanding of the nature of the earthquake faulting process, but increasingly, the subject has become an interdisciplinary one. Thus, although the observation of radiated seismic waves remains the primary tool for studying earthquakes (and has been increasingly focused on extracting the physical processes occurring in the “source”), geological studies have also begun to play a more important role in understanding the faulting process. Additionally, defining the physical underpinning for these phenomena has come to be an important subject in experimental and theoretical rock mechanics. In recognition of this, a Maurice Ewing Symposium was held at Arden House, Harriman, N.Y. (the former home of the great American statesman Averell Harriman), May 20-23, 1985. The purpose of the meeting was to bring together the international community of experimentalists, theoreticians, and observationalists who are engaged in the study of various aspects of earthquake source mechanics. The conference was attended by more than 60 scientists from nine countries (France, Italy, Japan, Poland, China, the United Kingdom, United States, Soviet Union, and the Federal Republic of Germany).

  12. A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji

    The capabilities and complexity of manufacturing systems are increasing, driving the need for an integrated manufacturing environment. The availability of alternative process plans is a key factor for the integration of design, process planning and scheduling. This paper describes an algorithm for the generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts. The algorithm is therefore applicable to very large process plan networks, and it can search wide areas of the network based on user requirements. It can generate alternative process plans and select a suitable one based on the objective functions.
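
    The idea of searching a process plan network without generating it in full can be sketched as a lazy best-first search: nodes are expanded only as the frontier reaches them. The tiny network below (machining states as nodes, processes as weighted edges) is hypothetical and not from the paper:

```python
import heapq

# Hypothetical process plan network: state -> [(process, cost, next_state)].
network = {
    "raw":   [("mill", 3, "faced"), ("cast", 5, "faced")],
    "faced": [("drill", 2, "holed"), ("bore", 4, "holed")],
    "holed": [("finish", 1, "done")],
}

def best_plans(start, goal, k=2):
    """Return up to k cheapest process sequences, expanding nodes lazily."""
    plans, frontier = [], [(0, start, [])]
    while frontier and len(plans) < k:
        cost, state, ops = heapq.heappop(frontier)
        if state == goal:
            plans.append((cost, ops))
            continue
        for process, c, nxt in network.get(state, []):
            heapq.heappush(frontier, (cost + c, nxt, ops + [process]))
    return plans

print(best_plans("raw", "done"))  # cheapest alternative plans first
```

    Because plans are popped in cost order, the search can stop after the first k goal states without ever enumerating the rest of the network.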

  13. New geothermal heat extraction process to deliver clean power generation

    McGrail, Pete

    2017-12-27

    A new method for capturing significantly more heat from low-temperature geothermal resources holds promise for generating virtually pollution-free electrical energy. Scientists at the Department of Energy's Pacific Northwest National Laboratory will determine if their innovative approach can safely and economically extract and convert heat from vast untapped geothermal resources. The goal is to enable power generation from low-temperature geothermal resources at an economical cost. In addition to being a clean energy source without any greenhouse gas emissions, geothermal is also a steady and dependable source of power.

  14. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of events spanning 10,000 years, consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account the probabilistic seismic hazard for Tehran city as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error-weight is also assessed. The methodology could reduce the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
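
    The Monte-Carlo catalogue step described above can be sketched by inverse-transform sampling of a doubly truncated Gutenberg-Richter magnitude distribution. The b-value, magnitude bounds, and catalogue size below are illustrative assumptions, not the study's Tehran parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_magnitudes(n, b=1.0, m_min=4.0, m_max=8.0):
    """Draw n magnitudes from a doubly truncated Gutenberg-Richter law."""
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))  # truncation normalizer
    u = rng.uniform(size=n)
    # Inverse CDF of the truncated exponential magnitude distribution:
    return m_min - np.log(1.0 - u * c) / beta

# A synthetic catalogue of ~84,000 events (magnitudes only; a full
# simulator would also draw location, depth, and fault parameters):
catalogue = sample_magnitudes(84_000)
print(round(float(catalogue.min()), 2), round(float(catalogue.max()), 2))
```

    The scenario-reduction step would then select a small weighted subset of such events whose hazard curves best match those of the full catalogue.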

  15. Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise

    NASA Astrophysics Data System (ADS)

    Scott, Chelsea Phipps

    geometry and kinematics following the application of atmospheric corrections to an event spanned by real InSAR data, the 1992 M5.6 Little Skull Mountain, Nevada, earthquake. Finally, I discuss how the derived workflow could be applied to other tectonic problems, such as solving for interseismic strain accumulation rates in a subduction zone environment. I also study the evolution of the crustal stress field in the South American plate following two recent great earthquakes along the Nazca-South America subduction zone. I show that the 2010 Mw 8.8 Maule, Chile, earthquake very likely triggered several moderate magnitude earthquakes in the Andean volcanic arc and backarc. This suggests that great earthquakes modulate the crustal stress field outside of the immediate aftershock zone and that far-field faults may pose a heightened hazard following large subduction earthquakes. The 2014 Mw 8.1 Pisagua, Chile, earthquake reopened ancient surface cracks that have been preserved in the hyperarid forearc setting of northern Chile for thousands of earthquake cycles. The orientation of cracks reopened in this event reflects the static and likely dynamic stresses generated by the recent earthquake. Coseismic cracks serve as a reliable marker of permanent earthquake deformation and plate boundary behavior persistent over the million-year timescale. This work on great earthquakes suggests that InSAR observations can play a crucial role in furthering our understanding of the crustal mechanics that drive seismic cycle processes in subduction zones.

  16. PROCESS HEAT GENERATION AND CONSUMPTION, 1939 TO 1967

    SciT

    Prehn, W.L. Jr.; Tarrice, R.R.

    A survey and analysis of the generation and use of heat in manufacturing has been completed. The greatest emphasis has been placed on the variety of heat applications in United States manufacturing industries, with some discussion of other important uses. The generation of electricity is excluded from this analysis. The generation of heat through steam production and through direct-firing means is analyzed and described in terms of the major economic factors dictating application and possible growth. These factors include: geography, fuel, industry growth, cost, heat quality, generating unit size, and other contributing elements. Some data are given on similar matters in foreign countries. Only those countries which are important in terms of industrial activity are considered. A projection of demand for industrial heat in the categories studied is shown for the next five years and the next ten years. It is concluded that certain portions of the industrial complex of the world are sufficiently important in terms of the use of heat that further detailed study of the above factors is well justified. (auth)

  17. Strategic Processing and Predictive Inference Generation in L2 Reading

    ERIC Educational Resources Information Center

    Nahatame, Shingo

    2014-01-01

    Predictive inference is the anticipation of the likely consequences of events described in a text. This study investigated predictive inference generation during second language (L2) reading, with a focus on the effects of strategy instructions. In this experiment, Japanese university students read several short narrative passages designed to…

  18. Earthquake source properties from pseudotachylite

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast, paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and the much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006], as has been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of the roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate sized earthquakes of the Gole Larghe fault zone in the Italian Alps, where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. 
More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large, unless fracture energy is routinely greater than existing models allow
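
    The Madariaga source model referenced above links corner frequency to stress drop. A minimal sketch of that arithmetic, with a hypothetical moment and corner frequency rather than Gole Larghe values:

```python
# Madariaga (1976)-style relations: source radius r = k * beta / fc
# (k ~ 0.21 for S waves), and stress drop  d_sigma = (7/16) * M0 / r^3.
def stress_drop_mpa(m0_nm, fc_hz, beta_ms=3500.0, k=0.21):
    """Stress drop in MPa from seismic moment (N m) and corner frequency (Hz)."""
    r = k * beta_ms / fc_hz  # source radius, m
    return (7.0 / 16.0) * m0_nm / r**3 / 1.0e6

# Hypothetical Mw ~5.3 event (M0 = 1e17 N m) with fc = 0.3 Hz:
print(f"{stress_drop_mpa(1.0e17, 0.3):.1f} MPa")  # ~3 MPa, a typical value
```

    The cubed dependence on r is why modest uncertainty in corner frequency translates into large uncertainty in stress drop.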

  19. Permeability, storage and hydraulic diffusivity controlled by earthquakes

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Fulton, P. M.; Xue, L.

    2016-12-01

    Earthquakes can increase permeability in fractured rocks. In the farfield, such permeability increases are attributed to seismic waves and can last for months after the initial earthquake. Laboratory studies suggest that unclogging of fractures by the transient flow driven by seismic waves is a viable mechanism. These dynamic permeability increases may contribute to permeability enhancement in the seismic clouds accompanying hydraulic fracking. Permeability enhancement by seismic waves could potentially be engineered and the experiments suggest the process will be most effective at a preferred frequency. We have recently observed similar processes inside active fault zones after major earthquakes. A borehole observatory in the fault that generated the M9.0 2011 Tohoku earthquake reveals a sequence of temperature pulses during the secondary aftershock sequence of an M7.3 aftershock. The pulses are attributed to fluid advection by a flow through a zone of transiently increased permeability. Directly after the M7.3 earthquake, the newly damaged fault zone is highly susceptible to further permeability enhancement, but ultimately heals within a month and becomes no longer as sensitive. The observation suggests that the newly damaged fault zone is more prone to fluid pulsing than would be expected based on the long-term permeability structure. Even longer term healing is seen inside the fault zone of the 2008 M7.9 Wenchuan earthquake. The competition between damage and healing (or clogging and unclogging) results in dynamically controlled permeability, storage and hydraulic diffusivity. Recent measurements of in situ fault zone architecture at the 1-10 meter scale suggest that active fault zones often have hydraulic diffusivities near 10^-2 m^2/s. This uniformity is true even within the damage zone of the San Andreas fault where permeability and storage increases balance each other to achieve this value of diffusivity over a 400 m wide region. 
We speculate that fault zones

  20. Category Cued Recall Evokes a Generate-Recognize Retrieval Process

    ERIC Educational Resources Information Center

    Hunt, R. Reed; Smith, Rebekah E.; Toth, Jeffrey P.

    2016-01-01

    The experiments reported here were designed to replicate and extend McCabe, Roediger, and Karpicke's (2011) finding that retrieval in category cued recall involves both controlled and automatic processes. The extension entailed identifying whether distinctive encoding affected 1 or both of these 2 processes. The first experiment successfully…

  1. Time Delay Measurements of Key Generation Process on Smart Cards

    DTIC Science & Technology

    2015-03-01

    random number generator is available (Chatterjee & Gupta, 2009). The ECC algorithm will grow in usage as information becomes more and more secure. Figure...Worldwide Mobile Enterprise Security Software 2012–2016 Forecast and Analysis), mobile identity and access management is expected to grow by 27.6 percent...iPad, tablets) as well as 80000 BlackBerry phones. The mobility plan itself will be deployed in three phases over 2014, with the first phase

  2. Postseismic deformation following the 2015 Mw 7.8 Gorkha earthquake and the distribution of brittle and ductile crustal processes beneath Nepal

    NASA Astrophysics Data System (ADS)

    Moore, James; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valere; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Qiang, Qiu

    2017-04-01

    Studies of geodetic data across the earthquake cycle indicate a wide range of mechanisms contribute to cycles of stress buildup and relaxation. Both on-fault rate and state friction and off-fault rheologies can contribute to the observed deformation; in particular, the postseismic transient phase of the earthquake cycle. One problem with many of these models is that there is a wide range of parameter space to be investigated, with each parameter pair possessing its own tradeoffs. This becomes especially problematic when trying to model both on-fault and off-fault deformation simultaneously. The computational time to simulate these processes simultaneously using finite element and spectral methods can restrict parametric investigations. We present a novel approach to simulate on-fault and off-fault deformation simultaneously using analytical Green's functions for distributed deformation at depth [Barbot, Moore and Lambert, 2016]. This allows us to jointly explore dynamic frictional properties on the fault, and the plastic properties of the bulk rocks (including grain size and water distribution) in the lower crust, with low computational cost. These new displacement and stress Green's functions can be used for both forward and inverse modelling of distributed shear, where the calculated strain-rates can be converted to effective viscosities. Here, we draw insight from the postseismic geodetic observations following the 2015 Mw 7.8 Gorkha earthquake. We forward model afterslip using rate and state friction on the megathrust geometry with the two-ramp décollement system presented by Hubbard et al. (pers. comm., 2015) and viscoelastic relaxation using recent experimentally derived flow laws with transient rheology and the thermal structure from [Cattin et al., 2001]. The postseismic deformation brings new insights into the distribution of brittle and ductile crustal processes beneath Nepal
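
    The conversion from strain-rates to effective viscosities mentioned above uses the standard viscous-flow relation eta_eff = stress / (2 * strain-rate). A one-line sketch with hypothetical stress and strain-rate values, not numbers from this study:

```python
def effective_viscosity(stress_pa, strain_rate_per_s):
    """Effective viscosity (Pa s) implied by a stress and a strain rate."""
    return stress_pa / (2.0 * strain_rate_per_s)

# Hypothetical postseismic values: 1 MPa driving stress at 1e-14 1/s:
print(f"{effective_viscosity(1.0e6, 1.0e-14):.1e} Pa s")  # 5.0e+19 Pa s
```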

  3. Initiation process of the Mw 6.2 central Tottori, Japan, earthquake on October 21, 2016: Stress transfer due to its largest foreshock of Mw 4.1

    NASA Astrophysics Data System (ADS)

    Noda, S.; Ellsworth, W. L.

    2017-12-01

    On October 21, 2016, a strike-slip earthquake with Mw 6.2 occurred in the central Tottori prefecture, Japan. It was preceded by a foreshock sequence that began with a Mw 4.1 event, the largest earthquake of the sequence, and lasted about two hours. According to the JMA catalog, the largest foreshock had a focal mechanism similar to the mainshock and was located in the immediate vicinity of the mainshock hypocenter. The goal of this study is to understand the relationship between the foreshock and the initial rupture of the mainshock. We first determine the relative hypocenter distance between the foreshock and mainshock using the P-wave onsets on Hi-net station records. The initiation points of the two events are likely about 100 m apart according to the current results, but could be closer. Within the location uncertainty, they might either be coplanar or on subparallel planes. Next, we obtain the slip-history models from a kinematic inversion method using empirical Green's functions derived from other foreshocks with M 2.2 - 2.4. The Mw 4.1 foreshock and Mw 6.2 mainshock started in a similar way until approximately 0.2 s after their onsets. For the foreshock, the rapid growth stage completed by 0.2 s even though the rupture propagation continued for 0.4 - 0.5 s longer (note that 0.2 s is significantly shorter than the typical source duration of a Mw 4.1 earthquake). On the other hand, the mainshock rupture continued to grow rapidly after 0.2 s. The comparison between the relative hypocenter locations and the slip models shows that the mainshock nucleated within the area strongly affected by both static and dynamic stress changes created by the foreshock. We also find that the mainshock initially propagated away from the foreshock hypocenter. We consider that the stress transfer process is key to understanding the mainshock nucleation as well as its rupture growth process.

  4. Identification of Deep Earthquakes

    DTIC Science & Technology

    2010-09-01

    discriminants that will reliably separate small, crustal earthquakes (magnitudes less than about 4 and depths less than about 40 to 50 km) from small...characteristics on discrimination plots designed to separate nuclear explosions from crustal earthquakes. Thus, reliably flagging these small, deep events is...Further, reliably identifying subcrustal earthquakes will allow us to eliminate deep events (previously misidentified as crustal earthquakes) from

  5. Large earthquakes and creeping faults

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  6. Seismicity and seismic hazard in Sabah, East Malaysia from earthquake and geodetic data

    NASA Astrophysics Data System (ADS)

    Gilligan, A.; Rawlinson, N.; Tongkul, F.; Stephenson, R.

    2017-12-01

    While the levels of seismicity are low in most of Malaysia, the state of Sabah in northern Borneo has moderate levels of seismicity. Notable earthquakes in the region include the 1976 M6.2 Lahad Datu earthquake and the 2015 M6 Ranau earthquake. The recent Ranau earthquake resulted in the deaths of 18 people on Mt Kinabalu, an estimated 100 million RM (US$23 million) of damage to buildings, roads, and infrastructure from shaking, as well as flooding, reduced water quality, and damage to farms from landslides. Over the last 40 years the population of Sabah has increased to over four times what it was in 1976, yet seismic hazard in Sabah remains poorly understood. Using seismic and geodetic data we hope to better quantify the hazards posed by earthquakes in Sabah, and thus help to minimize risk. In order to do this we need to know about the locations of earthquakes, the types of earthquakes that occur, and the faults that are generating them. We use data from 15 MetMalaysia seismic stations currently operating in Sabah to develop a region-specific velocity model from receiver functions and a pre-existing surface wave model. We use this new velocity model to (re)locate earthquakes that occurred in Sabah from 2005-2016, including a large number of aftershocks from the 2015 Ranau earthquake. We use a probabilistic nonlinear earthquake location program to locate the earthquakes and then refine their relative locations using a double difference method. The recorded waveforms are further used to obtain moment tensor solutions for these earthquakes. Earthquake locations and moment tensor solutions are then compared with the locations of faults throughout Sabah. Faults are identified from high-resolution IFSAR images and subsequent fieldwork, with a particular focus on the Lahad Datu and Ranau areas. Used together, these seismic and geodetic data can help us to develop a new seismic hazard model for Sabah, as well as aiding in the delivery of outreach activities regarding seismic hazard

  7. Rupture Processes of the Mw8.3 Sea of Okhotsk Earthquake and Aftershock Sequences from 3-D Back Projection Imaging

    NASA Astrophysics Data System (ADS)

    Jian, P. R.; Hung, S. H.; Meng, L.

    2014-12-01

    On May 24, 2013, the largest deep earthquake ever recorded occurred beneath the southern tip of the Kamchatka Peninsula, where the Pacific Plate subducts underneath the Okhotsk Plate. Previous 2D beamforming back projection (BP) of P-coda waves suggests the mainshock ruptured bilaterally along a horizontal fault plane determined by the global centroid moment tensor solution. On the other hand, the multiple point source inversion of P and SH waveforms argued that the earthquake comprises a sequence of 6 subevents not located on a single plane but actually distributed in a zone that extends 64 km horizontally and 35 km in depth. We then apply a three-dimensional MUSIC BP approach to resolve the rupture processes of the mainshock and two large aftershocks (M6.7) with no a priori setup of preferential orientations of the planar rupture. The maximum pseudo-spectrum of high-frequency P waves in a sequence of time windows recorded by the densely distributed stations of the US and EU arrays is used to image the 3-D temporal and spatial rupture distribution. The resulting image confirms a nearly N-S-striking rupture with two antiparallel stages. The first subhorizontal rupture initially propagates toward the NNE; 18 s later it reverses toward the SSW and concurrently shifts about 35 km deeper, lasting for about 20 s. The rupture lengths of the first NNE-ward and second SSW-ward stages are about 30 km and 85 km, and the estimated rupture velocities are 3 km/s and 4.25 km/s, respectively. Synthetic experiments are undertaken to assess the capability of the 3D MUSIC BP to recover spatio-temporal rupture processes. In addition, high-frequency BP images based on the EU-Array data show that the two M6.7 aftershocks more likely ruptured on vertical fault planes.

  8. Detection of dominant runoff generation processes for catchment classification

    NASA Astrophysics Data System (ADS)

    Gioia, A.; Manfreda, S.; Iacobellis, V.; Fiorentino, M.

    2009-04-01

    The identification of similar hydroclimatic regions in order to reduce the uncertainty of flood prediction in ungauged basins represents one of the most exciting challenges faced by hydrologists in the last few years (e.g., the IAHS Decade on Predictions in Ungauged Basins (PUB) - Sivapalan et al. [2003]). In this context, the investigation of the dominant runoff generation mechanisms may provide a strategy for catchment classification and identification of hydrologically homogeneous groups of basins. In particular, the present study focuses on two classical schemes responsible for runoff production: saturation excess and infiltration excess. Thus, in principle, the occurrence of either mechanism may be detected in the same basin according to the climatic forcing. Here the dynamics of runoff generation are investigated over a set of basins in order to identify the dynamics responsible for the transition between the two schemes and to recognize homogeneous groups of basins. We exploit a basin characterization obtained by means of a theoretical flood probability distribution, which was applied to a broad number of arid and humid river basins in the Southern Italy region, with the aim of describing the effect of different runoff production mechanisms in the generation of ordinary and extraordinary flood events. Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S. and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003-2012: Shaping an exciting future for the hydrological sciences, Hydrol. Sci. J., 48(6), 857-880, 2003.
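
    The two runoff production schemes named above differ in what limits infiltration: rainfall intensity versus available soil storage. A toy sketch with hypothetical rates and capacities:

```python
# Toy contrast of the two classical runoff-generation mechanisms.
# All rates and capacities below are invented illustrative numbers.
def infiltration_excess(rain_mm_h, infil_cap_mm_h):
    """Hortonian runoff: rainfall intensity exceeds infiltration capacity."""
    return max(0.0, rain_mm_h - infil_cap_mm_h)

def saturation_excess(rain_mm, storage_mm, storage_cap_mm):
    """Dunnian runoff: rain falling on already-saturated soil runs off."""
    room = max(0.0, storage_cap_mm - storage_mm)
    return max(0.0, rain_mm - room)

print(infiltration_excess(30.0, 12.0))       # 18.0 mm/h of Hortonian runoff
print(saturation_excess(25.0, 90.0, 100.0))  # 15.0 mm of Dunnian runoff
```

    In a wet (nearly saturated) basin the second mechanism dominates even at low intensities, while in an arid basin intense storms trigger the first; this is the climate-controlled transition the study investigates.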

  9. Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake

    Hayes, Gavin P.

    2011-01-01

    On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.

  10. Identified EM Earthquake Precursors

    NASA Astrophysics Data System (ADS)

    Jones, Kenneth, II; Saxton, Patrick

    2014-05-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and, in turn, to warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock absorbs and confines electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating long-wave generation with sufficient amplitude and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006 to 2011, and outside the NE Texas town of Timpson in February 2013.
The antennae have mobility and observations were noted for

  11. Submarine slope earthquake-induced instability and associated tsunami generation potential along the Hyblean-Malta Escarpment (offshore eastern Sicily, Italy)

    NASA Astrophysics Data System (ADS)

    Ausilia Paparo, Maria; Pagnoni, Gianluca; Zaniboni, Filippo; Tinti, Stefano

    2016-04-01

    The stability analysis of offshore margins is an important step in the assessment of natural hazard: the main challenge is to evaluate potential slope failures, and the consequent occurrence of submarine tsunamigenic landslides, in order to mitigate the potential coastal damage to inhabitants and infrastructure. However, the limited geotechnical knowledge of the underwater soil and the controversial scientific interpretation of the tectonic units often make it difficult to carry out this type of analysis reliably. We select the Hyblean-Malta Escarpment (HME), the main active geological structure offshore eastern Sicily, because the amount of data from historical chronicles, the records of strong earthquakes and tsunamis, and the numerous offshore geological surveys carried out in recent years make the region an excellent setting in which to evaluate slope failures, mass movements triggered by earthquakes, and the consequent tsunamis. We choose several profiles along the HME and analyse their equilibrium conditions using the Minimum Lithostatic Deviation (MLD) method (Tinti and Manucci, 2006, 2008; Paparo et al. 2013), which is based on the limit-equilibrium theory. Considering the morphological and geotechnical features of the offshore slopes, we show that large-earthquake shaking may lead some zones of the HME to instability; we evaluate the expected volumes involved in sliding and compute the associated landslide-tsunami through numerical tsunami simulations. This work was carried out in the frame of the EU Project ASTARTE - Assessment, STrategy And Risk Reduction for Tsunamis in Europe (Grant 603839, 7th FP, ENV.2013.6.4-3).
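    The MLD method itself is beyond a short example, but the underlying limit-equilibrium idea, comparing resisting to driving stresses and checking how seismic loading erodes the safety margin, can be sketched with a pseudo-static infinite-slope model. The formula and parameter values below are a standard textbook stand-in, not the MLD formulation or HME data:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, k=0.0, u=0.0):
    """Pseudo-static infinite-slope factor of safety (illustrative
    stand-in for a full limit-equilibrium analysis).
    c: cohesion [kPa], phi_deg: friction angle, gamma: unit weight [kN/m^3],
    z: depth of the slide plane [m], beta_deg: slope angle,
    k: horizontal seismic coefficient, u: pore pressure [kPa]."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    # Normal and shear stress on the slide plane, with the horizontal
    # pseudo-static force k*W rotated into plane-normal/parallel components
    sigma_n = gamma * z * math.cos(beta) * (math.cos(beta) - k * math.sin(beta))
    tau = gamma * z * math.cos(beta) * (math.sin(beta) + k * math.cos(beta))
    return (c + (sigma_n - u) * math.tan(phi)) / tau

fs_static = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=10.0, beta_deg=20.0)
fs_shaken = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=10.0, beta_deg=20.0, k=0.15)
```

    Even a modest seismic coefficient cuts the factor of safety substantially, which is the mechanism by which earthquake shaking can push an otherwise stable submarine slope toward failure.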

  12. Housing Damage Following Earthquake

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.

  13. Sand Volcano Following Earthquake

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Sand boil or sand volcano measuring 2 m (6.6 ft.) in length erupted in the median of Interstate Highway 80 west of the Bay Bridge toll plaza when ground shaking transformed a loose water-saturated deposit of subsurface sand into a sand-water slurry (liquefaction) in the October 17, 1989, Loma Prieta earthquake. Vented sand contains marine-shell fragments. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: J.C. Tinsley, U.S. Geological Survey)

  14. A next generation processing system for edging and trimming

    A. Lynn Abbott; Daniel L. Schmoldt; Philip A. Araman

    2000-01-01

    This paper describes a prototype scanning system that is being developed for the processing of rough hardwood lumber. The overall goal of the system is to automate the selection of cutting positions for the edges and ends of rough, green lumber. Such edge and trim cuts are typically performed at sawmills in an effort to increase board value prior to sale, and this...

  15. Toward a Generative Model of the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    McMullen, David W.

    Until the rise of cognitive psychology, models of the teaching-learning process (TLP) stressed external rather than internal variables. Models remained general descriptions until control theory introduced explicit system analyses. Cybernetic models emphasize feedback and adaptivity but give little attention to creativity. Research on artificial…

  16. A Process for Reviewing and Evaluating Generated Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2016-01-01

    Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, involve a process that is both time-consuming and expensive because each item is written,…

  17. Generation of Suprathermal Electrons by Collective Processes in Collisional Plasma

    NASA Astrophysics Data System (ADS)

    Tigik, S. F.; Ziebell, L. F.; Yoon, P. H.

    2017-11-01

    The ubiquity of high-energy tails in the charged particle velocity distribution functions (VDFs) observed in space plasmas suggests the existence of an underlying process responsible for taking a fraction of the charged particle population out of thermal equilibrium and redistributing it to suprathermal velocity and energy ranges. The present Letter focuses on a new and fundamental physical explanation for the origin of the suprathermal electron velocity distribution function (EVDF) in a collisional plasma. This process involves a newly discovered electrostatic bremsstrahlung (EB) emission that is effective in a plasma in which binary collisions are present. The steady-state EVDF dictated by such a process corresponds to a Maxwellian core plus a quasi-inverse power-law tail, which is a feature commonly observed in many space plasma environments. In order to demonstrate this, the system of self-consistent particle- and wave-kinetic equations is numerically solved with an initially Maxwellian EVDF and Langmuir wave spectral intensity, a state that does not reflect the presence of the EB process and hence is not in force balance. The EB term subsequently drives the system to a new force-balanced steady state. After a long integration period it is demonstrated that the initial Langmuir fluctuation spectrum is modified, which in turn distorts the initial Maxwellian EVDF into a VDF that resembles the aforementioned core-suprathermal VDF. Such a mechanism may thus be operative at the coronal source region, which is characterized by high collisionality.
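    The steady-state "Maxwellian core plus quasi-inverse-power-law tail" shape described above is often summarized by a kappa distribution, and the contrast with a pure Maxwellian is easy to quantify. A generic illustration (not the Letter's self-consistent kinetic solution; the kappa value and speeds are arbitrary):

```python
import math

def maxwellian(v, vth):
    """1-D Maxwellian VDF (unnormalized): exp(-(v/vth)^2)."""
    return math.exp(-(v / vth) ** 2)

def kappa_dist(v, vth, kappa=3.0):
    """Kappa VDF (unnormalized): Maxwellian-like core with a power-law
    tail ~ v^(-2(kappa+1)), the core-suprathermal shape discussed above."""
    return (1.0 + v ** 2 / (kappa * vth ** 2)) ** (-(kappa + 1.0))

# At several thermal speeds the power-law tail dominates the
# exponentially decaying Maxwellian by orders of magnitude:
ratio = kappa_dist(5.0, 1.0) / maxwellian(5.0, 1.0)
```

    Both forms agree near v = 0; the difference is entirely in how slowly the suprathermal tail decays.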

  18. Low work function, stable compound clusters and generation process

    DOEpatents

    Dinh, Long N.; Balooch, Mehdi; Schildbach, Marcus A.; Hamza, Alex V.; McLean, II, William

    2000-01-01

    Low work function, stable compound clusters are generated by co-evaporation of a solid semiconductor (i.e., Si) and alkali metal (i.e., Cs) elements in an oxygen environment. The compound clusters are easily patterned during deposition on substrate surfaces using a conventional photo-resist technique. The cluster size distribution is narrow, with a peak range of angstroms to nanometers depending on the oxygen pressure and the Si source temperature. Tests have shown that the compound clusters, when deposited on a carbon substrate, exhibit the desired low work function and are stable up to 600 °C. Using the patterned cluster-containing plate as a cathode baseplate and a phosphor-covered faceplate as an anode, one can apply a positive bias to the faceplate to easily extract electrons and obtain illumination.

  19. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
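    Empirical loss models of this kind typically map shaking intensity to a fatality rate through a two-parameter lognormal curve and then sum over the exposed population. A sketch of that structure follows; the theta and beta values and the exposure numbers are illustrative placeholders, not published PAGER coefficients:

```python
import math

def fatality_rate(mmi, theta, beta):
    """Lognormal fatality-rate curve of the general form used in
    empirical earthquake loss models: rate = Phi(ln(mmi/theta)/beta),
    where Phi is the standard normal CDF. theta and beta are
    country-specific fitted parameters."""
    return 0.5 * (1.0 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2.0))))

def expected_fatalities(exposure, theta=12.0, beta=0.2):
    """exposure maps shaking intensity (MMI) to population exposed."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure.items())

# Hypothetical exposure table from a shaking map
est = expected_fatalities({6: 500_000, 7: 200_000, 8: 50_000, 9: 5_000})
```

    The fitted curve is what gets "readily updated when new data become available": each new fatal earthquake adds a data point constraining theta and beta for that country or region.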

  20. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of the earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes at the crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we investigate the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to observe any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
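    The ETAS conditional intensity used for the declustering is a background rate plus magnitude-weighted Omori decays from all past events. A minimal version of that intensity function (the parameter values are illustrative, not fitted to either the field catalog or the AE data):

```python
import math

def etas_intensity(t, events, mu=0.02, K=0.05, alpha=1.2, c=0.01, p=1.1, m0=3.0):
    """ETAS conditional intensity at time t (days):
    lambda(t) = mu + sum over past events of
                K * exp(alpha*(m_i - m0)) * (t - t_i + c)^(-p).
    events is a list of (time, magnitude) pairs; parameter values here
    are placeholders for illustration."""
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return rate

# A mainshock followed by two smaller events
catalog = [(0.0, 6.5), (0.3, 4.2), (1.1, 3.8)]
```

    In stochastic declustering, the ratio of the background term mu to lambda(t) at each event's occurrence time gives the probability that the event is a mainshock rather than a triggered aftershock.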

  1. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many felt earthquakes as possible, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, in which the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. Altogether, we estimate that the number of detected felt earthquakes is around 1 000 per year, compared with the 35 000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process of the earthquakes that matter, the smartphone application itself (to be released in May) and its future evolutions.
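    Flashsourcing rests on flagging abrupt traffic surges against a quiet baseline. A toy detector in the same spirit (the window, threshold, and traffic series are invented for illustration; EMSC's production algorithm is more elaborate and is not described here):

```python
import statistics

def detect_surge(hits, window=5, threshold=3.0):
    """Flag sample indices where traffic exceeds the mean of the
    preceding window by more than `threshold` standard deviations
    (with a floor of 1.0 on the std to avoid zero-variance windows)."""
    alerts = []
    for i in range(window, len(hits)):
        base = hits[i - window:i]
        mu, sd = statistics.mean(base), statistics.pstdev(base)
        if hits[i] > mu + threshold * max(sd, 1.0):
            alerts.append(i)
    return alerts

# Per-minute website hits: quiet baseline, then an eyewitness surge
traffic = [20, 22, 19, 21, 20, 23, 150, 140, 60, 30]
```

    Note the detector only fires at the onset of the surge: once the surge enters the baseline window, the inflated mean and variance suppress further alerts, which is acceptable since only the onset time carries information about the earthquake.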

  2. The U.S. Earthquake Prediction Program

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  3. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60 - 180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, taking into account the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties of the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral-ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as tightly as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.
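    The spectral-ratio technique mentioned above exploits the fact that, for co-located events recorded at the same station, path and site terms cancel in the ratio of their spectra, leaving only source information: the low-frequency plateau of the ratio approaches the seismic-moment ratio, and the two corner frequencies shape the rest. A sketch with Brune-type source spectra and invented spectral parameters:

```python
import numpy as np

def brune_spectrum(f, omega0, fc):
    """Brune omega-square source spectrum: Omega0 / (1 + (f/fc)^2)."""
    return omega0 / (1.0 + (f / fc) ** 2)

f = np.linspace(0.5, 40.0, 200)              # frequency band (Hz)
big = brune_spectrum(f, omega0=100.0, fc=2.0)    # larger event: high moment, low fc
small = brune_spectrum(f, omega0=4.0, fc=8.0)    # co-located smaller event

# Path/site terms common to both events would cancel here, so the
# ratio depends only on source parameters:
ratio = big / small
low_f_ratio = ratio[0]   # tends toward Omega0_big / Omega0_small = 25
```

    Fitting this ratio over frequency yields the relative moment and both corner frequencies without needing an explicit attenuation model, which is why the technique suits dense clusters of co-located repeaters.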

  4. The CREST Simulation Development Process: Training the Next Generation.

    PubMed

    Sweet, Robert M

    2017-04-01

    The challenges of training and assessing endourologic skill have driven the development of new training systems. The Center for Research in Education and Simulation Technologies (CREST) has developed a team and a methodology to facilitate this development process. Backwards design principles were applied. A panel of experts first defined desired clinical and educational outcomes. Outcomes were subsequently linked to learning objectives. Gross task deconstruction was performed, and the primary domain was classified as primarily involving decision-making, psychomotor skill, or communication. A more detailed cognitive task analysis was performed to elicit and prioritize relevant anatomy/tissues, metrics, and errors. Reference anatomy was created using a digital anatomist and a clinician working from a clinical data set. Three-dimensional printing can facilitate this process. When possible, synthetic or virtual tissue behavior and textures were recreated using data derived from human tissue. Embedded sensors/markers and/or computer-based systems were used to facilitate the collection of objective metrics. Verification and validation occurred throughout the engineering development process. Nine endourology-relevant training systems were created by CREST with this approach. Systems include basic laparoscopic skills (BLUS), vesicourethral anastomosis, pyeloplasty, cystoscopic procedures, stent placement, rigid and flexible ureteroscopy, GreenLight PVP (GL Sim), Percutaneous access with C-arm (CAT), Nephrolithotomy (NLM), and a vascular injury model. Mixed modalities have been used, including "smart" physical models, virtual reality, augmented reality, and video. Substantial validity evidence for training and assessment has been collected on systems. An open source manikin-based modular platform is under development by CREST with the Department of Defense that will unify these and other commercial task trainers through the common physiology engine, learning

  5. Building Big Flares: Constraining Generating Processes of Solar Flare Distributions

    NASA Astrophysics Data System (ADS)

    Wyse Jackson, T.; Kashyap, V.; McKillop, S.

    2015-12-01

    We address mechanisms which seek to explain the observed solar flare distribution, dN/dE ~ E^-1.8. We have compiled a comprehensive database, from GOES, NOAA, XRT, and AIA data, of solar flares and their characteristics, covering the year 2013. These datasets allow us to probe how stored magnetic energy is released over the course of an active region's evolution. We fit power laws to flare distributions over various attribute groupings. For instance, we compare flares that occur before and after an active region reaches its maximum area, and show that the corresponding flare distributions are indistinguishable; thus, the processes that lead to magnetic reconnection are similar in both cases. A turnover in the distribution is not detectable at the energies accessible to our study, suggesting that a self-organized critical (SOC) process is a valid mechanism. However, we find changes in the distributions that suggest that the simple picture of an SOC where flares draw energy from an inexhaustible reservoir of stored magnetic energy is incomplete. Following the evolution of the flare distribution over the lifetimes of active regions, we find that the distribution flattens with time, and for larger active regions, and that a single power-law model is insufficient. This implies that flares that occur later in the lifetime of the active region tend towards higher energies. We conclude that the SOC process must have an upper bound. Increasing the scope of the study to include data from other years and more instruments will increase the robustness of these results. This work was supported by the NSF-REU Solar Physics Program at SAO, grant number AGS 1263241, NASA Contract NAS8-03060 to the Chandra X-ray Center, and NASA Hinode/XRT contract NNM07AB07C to SAO.
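    Exponents of distributions like dN/dE ~ E^-1.8 are usually estimated by maximum likelihood rather than by fitting a binned histogram, since binning biases the slope. A self-check of the standard continuous power-law (Hill-type) estimator on synthetic data drawn with a known exponent (synthetic only, not the flare catalog):

```python
import math
import random

def powerlaw_mle(data, xmin):
    """Continuous power-law exponent MLE:
    alpha_hat = 1 + n / sum(ln(x_i / xmin)) over all x_i >= xmin."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Draw from p(x) ~ x^-1.8 for x >= xmin via inverse-transform sampling:
# x = xmin * (1 - U)^(-1/(alpha - 1))
random.seed(1)
alpha_true, xmin = 1.8, 1.0
sample = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(20000)]
alpha_hat = powerlaw_mle(sample, xmin)
```

    On real flare data the same estimator would be applied above an instrument-dependent completeness threshold xmin, and a flattening exponent over an active region's lifetime would show up directly as a drift in alpha_hat.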

  6. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic risk is increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of mega cities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructure, and undertaken with insufficient knowledge of the regional seismicity and seismic hazard. The assessment of seismic hazard and the generation of earthquake scenarios are the first link in the prevention chain and the first step in the evaluation of seismic risk. Earthquake scenarios are intended as a basic input for developing detailed earthquake damage scenarios for cities and can be used in earthquake-safe town and infrastructure planning. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia area, the most populated (more than 1.2 million inhabitants), industrial and cultural region of Bulgaria, which faces considerable earthquake risk. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=9-10 MSK. During the 20th century the strongest event to occur in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK). Almost a century later (95 years), an earthquake of moment magnitude 5.6 (I0=7-8 MSK) hit the city of Sofia on May 22nd, 2012. In the present study, the deterministic scenario event considered is a damaging earthquake with higher probability of occurrence that could affect the city with intensity less than or equal to VIII

  7. Applications of MICP source for next-generation photomask process

    NASA Astrophysics Data System (ADS)

    Kwon, Hyuk-Joo; Chang, Byung-Soo; Choi, Boo-Yeon; Park, Kyung H.; Jeong, Soo-Hong

    2000-07-01

    As photomask critical dimensions extend into the submicron range, requirements on critical dimension uniformity, edge roughness, macro-loading effect, and pattern slope become tighter than before. Fabrication of photomasks relies on the ability to pattern features with anisotropic profiles. To improve critical dimension uniformity, dry etching is one solution, and inductively coupled plasma (ICP) sources have become one of the most promising high-density plasma sources for dry etchers. In this paper, we have utilized a dry etcher system with a multi-pole ICP source for Cr etch and MoSi etch and have investigated critical dimension uniformity, slope, and defects. We present dry etch process data obtained by process optimization of the newly designed dry etcher system. The designed pattern area is 132 x 132 mm^2 with a 23 x 23 matrix of test patterns. The 3-sigma critical dimension uniformity is below 12 nm for 0.8 - 3.0 micrometer features. In most cases, we can obtain zero-defect masks with the system, which is operated by face-down loading.
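    The 3-sigma uniformity figure quoted above is simply three sample standard deviations of the critical dimension measured across the 23 x 23 test matrix. For example (the measurement values below are invented, not the paper's data):

```python
import statistics

def cd_three_sigma(measurements_nm):
    """3-sigma CD uniformity metric over a mask test matrix,
    in the same units as the input measurements."""
    return 3.0 * statistics.stdev(measurements_nm)

# A hypothetical 23 x 23 = 529-site matrix with a few nm of spread
# around an 800 nm target CD
sites = [800.0 + ((i * 7) % 9 - 4) for i in range(529)]
uniformity = cd_three_sigma(sites)
```

    A reported "3-sigma below 12 nm" therefore means the site-to-site standard deviation of the etched CD stays under 4 nm across the whole pattern area.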

  8. High levels of melatonin generated during the brewing process.

    PubMed

    Garcia-Moreno, H; Calvo, J R; Maldonado, M D

    2013-08-01

    Beer is a beverage consumed worldwide. It is produced from cereals (barley or wheat) and contains a wide array of bioactive phytochemicals and nutraceutical compounds. Specifically, high melatonin concentrations have been found in beer. Beers with high alcohol content are those that present the greatest concentrations of melatonin and vice versa. In this study, gel filtration chromatography and ELISA were combined for melatonin determination. We brewed beer to determine, for the first time, the beer production steps in which melatonin appears. We conclude that the barley, which is malted and ground in the early process, and the yeast, during the second fermentation, are the largest contributors to the enrichment of the beer with melatonin. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Experiences with industrial solar process steam generation in Jordan

    NASA Astrophysics Data System (ADS)

    Krüger, Dirk; Berger, Michael; Mokhtar, Marwan; Willwerth, Lisa; Zahler, Christian; Al-Najami, Mahmoud; Hennecke, Klaus

    2017-06-01

    At the Jordanian pharmaceuticals manufacturing company RAM Pharma, a solar process heat supply was constructed by Industrial Solar GmbH in March 2015 and has been in operation since then (Figure 1). The collector field consists of 394 m² of linear Fresnel collectors supplying saturated steam to the steam network at RAM Pharma at about 6 bar gauge. In the frame of the SolSteam project, funded by the German Federal Ministry for Economic Affairs and Energy (BMWi), the installation was modified to introduce an alternative way of separating water and steam by a cyclone. This paper describes the results of experiments with the cyclone and compares its operation with that of a steam drum. The steam production of the solar plant as well as the fuel demand of the steam boiler are continuously monitored, and results are presented in this paper.

  10. Process for control of pollutants generated during coal gasification

    DOEpatents

    Frumerman, Robert; Hooper, Harold M.

    1979-01-01

    The present invention is directed to an improvement in the coal gasification process that effectively eliminates substantially all of the environmental pollutants contained in the producer gas. The raw producer gas is passed through a two-stage water scrubbing arrangement with the tars being condensed essentially water-free in the first stage and lower boiling condensables, including pollutant laden water, being removed in the second stage. The pollutant-laden water is introduced into an evaporator in which about 95 percent of the water is vaporized and introduced as steam into the gas producer. The condensed tars are combusted and the resulting products of combustion are admixed with the pollutant-containing water residue from the evaporator and introduced into the gas producer.

  11. Ionospheric total electron content seismo-perturbation after Japan's March 11, 2011, M=9.0 Tohoku earthquake under a geomagnetic storm; a nonlinear principal component analysis

    NASA Astrophysics Data System (ADS)

    Lin, Jyh-Woei

    2012-10-01

    Nonlinear principal component analysis (NLPCA) is implemented to analyze the spatial pattern of total electron content (TEC) anomalies 3 hours after Japan's Tohoku earthquake, which occurred at 05:46:23 on 11 March 2011 (UTC) (Mw = 9.0). A geomagnetic storm was in progress at the time of the earthquake. NLPCA and TEC data processing were conducted on the global ionospheric map (GIM) for the period 08:30 to 09:30 UTC, about 3 hours after this devastating earthquake and the ensuing tsunami. The analysis results show stark, widespread earthquake-associated TEC anomalies that appear to have been induced by two acoustic gravity waves due to strong shaking (a vertical acoustic wave) and the generation of the tsunami (a horizontal Rayleigh-mode gravity wave). The TEC anomalies roughly fit the initial mainshock and the movement of the tsunami. Observation of the earthquake-associated TEC anomalies does not appear to be affected by the contemporaneous geomagnetic storm.
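    NLPCA generalizes the principal-component decomposition by replacing the linear projection with a neural network, but the linear limiting case already shows the anomaly-extraction idea: strip the dominant background mode from a map time series and inspect the residual. A numpy sketch on entirely synthetic data (the grid, background mode, and anomaly are invented, not GIM TEC values):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 60)                      # 60 map epochs
weights = np.linspace(0.5, 1.5, 30)                # 30 grid cells
background = np.outer(np.sin(2 * np.pi * t), weights)  # smooth background mode
anomaly = np.zeros((60, 30))
anomaly[40:45, 10:15] = 3.0                        # localized "TEC" enhancement
maps = background + anomaly + 0.1 * rng.normal(size=(60, 30))

# Remove the leading (background) principal component; the localized
# anomaly survives in the residual field
centered = maps - maps.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
residual = centered - S[0] * np.outer(U[:, 0], Vt[0])
```

    In NLPCA the rank-1 term S[0] * outer(U[:, 0], Vt[0]) is replaced by a learned nonlinear curve through the data, which can absorb curved background variations that a linear mode cannot.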

  12. Precise hypocenter distribution and earthquake-generating stress in and around the upper-plane seismic belt in the subducting Pacific slab beneath NE Japan

    NASA Astrophysics Data System (ADS)

    Kita, S.; Okada, T.; Nakajima, J.; Matsuzawa, T.; Uchida, N.; Hasegawa, A.

    2007-12-01

    1. Introduction We found an intraslab seismic belt (upper-plane seismic belt) in the upper plane of the double seismic zone within the Pacific slab, running parallel to the plate interface at depths of 70-100 km beneath the forearc area. The location of the deeper limit of this belt appears to correspond to one of the facies boundaries (from jadeite-lawsonite blueschist to lawsonite-amphibole eclogite) in the oceanic crust [Kita et al., 2006, GRL]. In this study, we precisely relocated intraslab earthquakes using travel-time differences calculated by waveform cross-spectrum analysis to obtain a more detailed distribution of the upper-plane seismic belt within the Pacific slab beneath NE Japan. We also discuss the stress field in the slab by examining focal mechanisms of the earthquakes. 2. Data and Method We relocated events at depths of 50-100 km for the period from March 2003 to November 2006 from the JMA earthquake catalog. We applied the double-difference hypocenter location method (DDLM) of Waldhauser and Ellsworth (2000) to the arrival-time data of the events. We use relative arrival times determined both by waveform cross-spectrum analysis and from catalog picks. We also determine focal mechanisms using P-wave polarities. 3. Spatial distribution of relocated hypocenters In the upper portion of the slab crust, seismicity is very active and distributed relatively homogeneously at depths of about 70-100 km parallel to the volcanic front, where the upper-plane seismic belt has been found. In the lower portion of the slab crust and/or the uppermost portion of the slab mantle, seismicity is spatially limited to a few small areas (each about 20 km x 20 km) at depths around 65 km. Two of them correspond to the aftershock areas of the 2003 Miyagi (M7.1) and the 1987 Iwaizumi (M6.6) intraslab earthquakes, respectively.
    Based on the dehydration embrittlement hypothesis, the difference in the spatial distribution of the seismicity in
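
    The double-difference idea behind this relocation can be sketched in a few lines: for each event pair observed at a common station, the datum is the difference between observed and calculated differential travel times, which cancels path effects shared by nearby events. The times below are made-up illustrations, not catalog values.

```python
import numpy as np

# Sketch of the double-difference residual used in hypoDD-style relocation.
# Hypothetical travel times (s) for two events at two stations:
t_obs = {("ev1", "sta1"): 12.84, ("ev2", "sta1"): 12.91,
         ("ev1", "sta2"): 9.30,  ("ev2", "sta2"): 9.41}
t_calc = {("ev1", "sta1"): 12.80, ("ev2", "sta1"): 12.90,
          ("ev1", "sta2"): 9.33,  ("ev2", "sta2"): 9.40}

def dd_residual(ev_i, ev_j, sta):
    """(t_i - t_j)^obs - (t_i - t_j)^calc at one station."""
    obs = t_obs[(ev_i, sta)] - t_obs[(ev_j, sta)]
    calc = t_calc[(ev_i, sta)] - t_calc[(ev_j, sta)]
    return obs - calc

res = [dd_residual("ev1", "ev2", s) for s in ("sta1", "sta2")]
print(res)   # minimizing these residuals adjusts the relative locations
```

    In the actual method these residuals, from both cross-spectrum and catalog picks, form a linear system whose least-squares solution shifts hypocenters relative to each other.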

  13. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m > 6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
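
    The power-law smoothing can be sketched as follows: each simulated on-fault event spreads rate over the whole test region with a kernel that decays with epicentral distance. The 1-D grid and the kernel parameters r0 and q are assumptions for illustration; the paper's method works on a 2-D California grid.

```python
import numpy as np

# Sketch of ETAS-style power-law smoothing of simulated epicenters into
# a spatial rate map: each event contributes (r + r0)**(-q) to every cell.
cells = np.linspace(0.0, 100.0, 101)          # 1-D test region, km (assumed)
sim_epicenters = np.array([20.0, 55.0])       # simulated (on-fault) events
r0, q = 1.0, 1.5                              # smoothing parameters (assumed)

r = np.abs(cells[:, None] - sim_epicenters[None, :])   # cell-event distances
rate = ((r + r0) ** -q).sum(axis=1)
rate /= rate.sum()                             # normalize to a rate map

# An observed off-fault event at 22 km still falls in a high-rate cell,
# unlike nearest-neighbor mapping, which would snap it onto the fault.
obs_cell = int(np.argmin(np.abs(cells - 22.0)))
print(rate[obs_cell] > np.median(rate))
```

    Thresholding such a rate map at successive levels and scoring hits against observed epicenters is what produces the receiver operating characteristic curve described above.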

  14. Development of a borehole stress meter for studying earthquake predictions and rock mechanics, and stress seismograms of the 2011 Tohoku earthquake ( M 9.0)

    NASA Astrophysics Data System (ADS)

    Ishii, Hiroshi; Asai, Yasuhiro

    2015-02-01

    Although precursory signs can occur before an earthquake, it is difficult to observe them with precision, especially on the earth's surface, where artificial noise and other factors complicate signal detection. One possible solution to this problem is to install monitoring instruments in the deep bedrock where earthquakes are likely to begin. When evaluating earthquake occurrence, it is necessary to elucidate the processes of stress accumulation in a medium and its release as a fault (crack) is generated, and to do so, the stress must be observed continuously. However, continuous observations of stress have not yet been implemented in earthquake monitoring programs. Strain is a secondary physical quantity whose variation depends on the elastic coefficients of the medium, and it can yield potentially valuable information as well. This article describes the development of a borehole stress meter that is capable of recording both stress and strain continuously at a depth of about 1 km. Specifically, this paper introduces the design principles of the stress meter as well as its actual structure. It also describes a newly developed calibration procedure and the results obtained to date for stress and strain studies of deep boreholes at three locations in Japan. As examples of the observations, records of stress seismic waveforms generated by the 2011 Tohoku earthquake (M 9.0) are presented. The results demonstrate that the stress meter data have sufficient precision and reliability.

  15. Features on Venus generated by plate boundary processes

    NASA Technical Reports Server (NTRS)

    Mckenzie, Dan; Ford, Peter G.; Johnson, Catherine; Parsons, Barry; Sandwell, David; Saunders, Stephen; Solomon, Sean C.

    1992-01-01

    Various observations suggest that there are processes on Venus that produce features similar to those associated with plate boundaries on earth. Synthetic aperture radar images of Venus, taken with a radar whose wavelength is 12.6 cm, are compared with GLORIA images of active plate boundaries, obtained with a sound source whose wavelength is 23 cm. Features similar to transform faults and to abyssal hills on slow and fast spreading ridges can be recognized within the Artemis region of Venus but are not clearly visible elsewhere. The composition of the basalts measured by the Venera 13 and 14 and the Vega 2 spacecraft corresponds to that expected from adiabatic decompression, like that which occurs beneath spreading ridges on earth. Structures that resemble trenches are widespread on Venus and show the same curvature and asymmetry as they do on earth. These observations suggest that the same simple geophysical models that have been so successfully used to understand the tectonics of earth can also be applied to Venus.

  16. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing, and magnitude of an earthquake by activating a fault via injection and production of fluids in the subsurface. Here, we investigate the relationship of the magnitude and trigger time of an induced earthquake to the well-to-fault distance. The relationship between magnitude and distance is important for determining the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix the injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and the evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. The evolution of the maximum Coulomb stress on the fault provides insights into the effect of distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.
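
    The Coulomb destabilization criterion invoked above can be written as a one-line function. Sign conventions vary between papers; the convention, friction coefficient, and stress values below are assumptions for illustration.

```python
# Sketch of the Coulomb failure criterion used to judge injection-induced
# destabilization: dCFS = d_tau + mu * (d_sigma_n + d_p), with normal
# stress taken positive in extension (unclamping) and d_p the pore-pressure
# change from injection. Values in MPa; numbers are illustrative.
def coulomb_stress_change(d_tau, d_sigma_n, d_p, mu=0.6):
    """Positive dCFS moves the fault toward failure."""
    return d_tau + mu * (d_sigma_n + d_p)

# Pore-pressure diffusion alone can destabilize a fault even when the
# shear stress change is zero, which is why well-to-fault distance
# (through diffusion time) controls the trigger time:
print(coulomb_stress_change(0.0, -0.05, 0.2))
```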

  17. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, locating an earthquake and estimating its magnitude as quickly as possible. Years later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that concerned governmental organizations could decide whether to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) A prompt report of the occurrence of a large earthquake and the major seismic intensities it caused, in about two minutes after the earthquake occurrence. 2) A tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; data telemetry networks via landlines and partly via a satellite communication link; real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude and the database-driven method for quantitative tsunami estimation; and dissemination networks via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  18. Transient triggering of near and distant earthquakes

    Gomberg, J.; Blanpied, M.L.; Beeler, N.M.

    1997-01-01

    We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distant earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, with faster rates hastening instability more rapidly. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i
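
    A minimal sketch of the rate- and state-dependent friction framework used in such spring-slider models, here with the aging law and an imposed velocity step rather than a full elastic coupling; the parameter values are typical laboratory numbers assumed for illustration.

```python
import numpy as np

# Rate-and-state friction with the aging law:
#   mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc),  d(theta)/dt = 1 - v*theta/dc
# A tenfold velocity step is imposed and the state variable is evolved
# with forward-Euler steps until it reaches its new steady state.
a, b, mu0, v0, dc = 0.010, 0.015, 0.6, 1e-6, 1e-5   # dc in m, v in m/s

def evolve(v, theta, dt, n):
    for _ in range(n):
        theta += dt * (1.0 - v * theta / dc)    # aging law
    return theta

theta = dc / v0                  # start at steady state for v0
v1 = 10.0 * v0                   # impose a tenfold velocity step
theta = evolve(v1, theta, dt=1e-3, n=200_000)
mu = mu0 + a * np.log(v1 / v0) + b * np.log(v0 * theta / dc)

# With a - b < 0 (velocity weakening), steady-state friction drops,
# the condition needed for stick-slip instability:
mu_ss = mu0 + (a - b) * np.log(v1 / v0)
print(mu, mu_ss)
```

    In the full clock-advance models, the imposed velocity is replaced by a spring loaded at constant rate plus a transient perturbation, and the instability time is read off from the resulting slip acceleration.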

  19. Triggering of repeating earthquakes in central California

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul

    2014-01-01

    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur repeatedly at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
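
    The Monte Carlo comparison can be sketched as follows: count recurrence intervals that were both perturbed and shorter than the family median, then build a null distribution by randomizing the perturbation times. The event times, perturbation times, and response definition below are synthetic assumptions, not the study's catalog.

```python
import numpy as np

# Sketch of the Monte Carlo test for shortened recurrence intervals.
rng = np.random.default_rng(1)
events = np.cumsum(rng.exponential(100.0, size=60))     # one repeater family
perturb = rng.uniform(events[0], events[-1], size=10)   # perturbation times

intervals = np.diff(events)
median = np.median(intervals)

def shortened_count(perturb_times):
    # An interval "responds" if a perturbation falls inside it and the
    # interval is shorter than the family median.
    idx = np.searchsorted(events, perturb_times) - 1
    idx = idx[(idx >= 0) & (idx < intervals.size)]
    return int(np.sum(intervals[np.unique(idx)] < median))

observed = shortened_count(perturb)
null = [shortened_count(rng.uniform(events[0], events[-1], size=10))
        for _ in range(1000)]
p_value = np.mean([n >= observed for n in null])
print(observed, p_value)
```

    A weak correlation, as reported above, corresponds to an observed count that sits in the upper tail of the randomized null distribution but not far beyond it.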

  20. Laser materials processing of complex components: from reverse engineering via automated beam path generation to short process development cycles

    NASA Astrophysics Data System (ADS)

    Görgl, Richard; Brandstätter, Elmar

    2017-01-01

    The article presents an overview of what is possible nowadays in the field of laser materials processing. The state of the art in the complete process chain is shown, starting with the generation of a specific component's CAD data and continuing with automated motion-path generation for the laser head carried by a CNC or robot system. Application examples from laser cladding and laser-based additive manufacturing are given.

  1. Facilitation of intermediate-depth earthquakes by eclogitization-related stresses and H2O

    NASA Astrophysics Data System (ADS)

    Nakajima, J.; Uchida, N.; Hasegawa, A.; Shiina, T.; Hacker, B. R.; Kirby, S. H.

    2012-12-01

    Generation of intermediate-depth earthquakes is an ongoing enigma because high lithostatic pressures render ordinary dry frictional failure unlikely. A popular hypothesis to solve this conundrum is fluid-related embrittlement (e.g., Kirby et al., 1996; Preston et al., 2003), which is known to work even for dehydration reactions with negative volume change (Jung et al., 2004). One consequence of reaction with the negative volume change is the formation of a paired stress field as a result of strain compatibility across the reaction front (Hacker, 1996; Kirby et al., 1996). Here we analyze waveforms of a tiny seismic cluster in the lower crust of the downgoing Pacific plate at a depth of 155 km and propose new evidence in favor of this mechanism: tensional earthquakes lying 1 km above compressional earthquakes, and earthquakes with highly similar waveforms lying on well-defined planes with complementary rupture areas. The tensional stress is interpreted to be caused by the dimensional mismatch between crust transformed to eclogite and underlying untransformed crust, and the earthquakes are interpreted to be facilitated by fluid produced by eclogitization. These observations provide seismic evidence for the dual roles of volume-change related stresses and fluid-related embrittlement as viable processes for nucleating earthquakes in downgoing oceanic lithosphere.

  2. Correlation between elastic energy density and deep earthquakes distribution

    NASA Astrophysics Data System (ADS)

    Gunawardana, P. M.; Morra, G.

    2017-05-01

    The mechanism at the origin of earthquakes below 30 km remains elusive, as these events cannot be explained by brittle frictional processes. In this work we focus on the global distribution of earthquake frequency vs. depth from ∼50 km to 670 km depth. We develop a numerical model of self-driven subduction by solving the non-homogeneous Stokes equation using the "particle-in-cell" method in combination with a conservative finite difference scheme, here solved for the first time using Python and NumPy only. We show that most of the elastic energy is stored in the slab core and that it is strongly correlated with the earthquake frequency-depth distribution for a wide range of lithosphere and lithosphere-core viscosities. According to our results, we suggest that 1) slab bending at the bottom of the upper mantle causes the peak of the earthquake frequency-depth distribution that is observed at mantle transition depths; 2) the presence of a highly viscous stiff core inside the lithosphere generates an elastic energy distribution that better fits the exponential decay observed at intermediate depths.
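
    The correlated quantity can be sketched for a viscously deforming slab: stress is viscosity times strain rate, and the elastic energy density stored in the stiff core is tau^2/(2G). The depth profile, parameter values, and synthetic earthquake-frequency curve below are assumptions for illustration, not the model's actual solution.

```python
import numpy as np

# Sketch: elastic energy density in a stiff slab core, tau**2 / (2*G),
# with tau = 2 * eta * strain_rate, correlated against a (synthetic)
# earthquake frequency-depth profile peaking near the transition zone.
G = 70e9                                   # shear modulus, Pa (assumed)
depth = np.linspace(50e3, 670e3, 63)       # m
eta_core = 1e23                            # stiff-core viscosity, Pa s
strain_rate = (1e-15 * np.exp(-(depth - 600e3)**2 / (2 * (80e3)**2))
               + 2e-16)                    # bending peak near 600 km

tau = 2.0 * eta_core * strain_rate         # viscous stress, Pa
energy_density = tau**2 / (2.0 * G)        # J / m^3

quake_freq = strain_rate / strain_rate.max()   # synthetic frequency profile
corr = np.corrcoef(energy_density, quake_freq)[0, 1]
print(corr)
```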

  3. Joint inversion of GNSS and teleseismic data for the rupture process of the 2017 M w6.5 Jiuzhaigou, China, earthquake

    NASA Astrophysics Data System (ADS)

    Li, Qi; Tan, Kai; Wang, Dong Zhen; Zhao, Bin; Zhang, Rui; Li, Yu; Qi, Yu Jie

    2018-05-01

    The spatio-temporal slip distribution of the earthquake that occurred on 8 August 2017 in Jiuzhaigou, China, was estimated from teleseismic body waves and near-field Global Navigation Satellite System (GNSS) data (coseismic displacements and high-rate GPS data) based on a finite fault model. Compared with the inversion results from the teleseismic body waves alone, the near-field GNSS data better constrain the rupture area, the maximum slip, the source time function, and the surface rupture. The results show that the maximum slip of the earthquake approaches 1.4 m, the scalar seismic moment is 8.0 × 10^18 N·m (Mw ≈ 6.5), and the centroid depth is 15 km. The slip is dominantly left-lateral strike-slip, and it is initially inferred that the seismogenic fault is the south branch of the Tazang fault or an undetected NW-trending left-lateral strike-slip fault, belonging to one of the tail structures at the easternmost end of the eastern Kunlun fault zone. The earthquake rupture is mainly concentrated at depths of 5-15 km, which results in the complete rupture of the seismic gap left by the previous four earthquakes with magnitudes > 6.0 in 1973 and 1976. Therefore, the possibility of a strong aftershock on the Huya fault is low. The source duration is 30 s and there are two major ruptures. The main rupture occurs in the first 10 s, peaking 4 s after the earthquake onset; the second rupture peak arrives at 17 s. In addition, a Coulomb stress study shows that the epicenter of the earthquake is located in an area where the static Coulomb stress change increased because of the 12 May 2008 Mw 7.9 Wenchuan, China, earthquake. Therefore, the Wenchuan earthquake promoted the occurrence of the 8 August 2017 Jiuzhaigou earthquake.

  5. Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.

    2016-12-01

    During the past three years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction, or Benioff, zone. Such evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has since been extended in time; only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated from those parameters. The results shown, including an animated data video, are a first approximation towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone.
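
    Triangulation from two stations reduces to intersecting two bearing rays. A flat-earth sketch with assumed station coordinates and azimuths (the actual work uses geographic coordinates and measured pulse azimuths):

```python
import numpy as np

# Sketch of azimuth triangulation: each magnetometer provides a bearing
# toward a ULF pulse source; the source lies at the intersection of the
# two bearing lines. Coordinates are km in a local flat-earth frame.
def triangulate(p1, az1, p2, az2):
    """Intersect rays from p1 and p2 with bearings az1, az2 (deg from +y)."""
    d1 = np.array([np.sin(np.radians(az1)), np.cos(np.radians(az1))])
    d2 = np.array([np.sin(np.radians(az2)), np.cos(np.radians(az2))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2):
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Two stations 100 km apart, both pointing toward an offshore source:
src = triangulate((0.0, 0.0), 45.0, (100.0, 0.0), 315.0)
print(src)   # the bearings meet at (50, 50)
```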

  6. Earthquake and Tsunami booklet based on two Indonesia earthquakes

    NASA Astrophysics Data System (ADS)

    Hayashi, Y.; Aci, M.

    2014-12-01

    Many destructive earthquakes occurred in Indonesia during the last decade. These experiences are important lessons for people around the world who live in earthquake and tsunami countries. We are collecting the testimonies of tsunami survivors to clarify successful evacuation processes and the characteristic physical behavior of tsunamis near the coast. We studied two tsunami events: the 2004 Indian Ocean tsunami and the tsunami of the 2010 Mentawai slow earthquake. Many videos and photographs were taken during the 2004 Indian Ocean tsunami disaster, but only at a few restricted locations, so the tsunami behavior elsewhere was unknown. In this study, we tried to collect extensive information about tsunami behavior, covering not only many places but also a wide time range after the strong shaking. In the Mentawai case, the earthquake occurred at night, so there are no vivid photographs. To collect detailed information about the evacuation process, we devised an interview method that includes drawing pictures of the tsunami experience from the scenes of the victims' stories. In the 2004 Aceh case, none of the survivors knew about the tsunami phenomenon: because there had been no large tsunamigenic earthquakes in the Sumatra region for one hundred years, the public had no knowledge of tsunamis. This situation was much improved by the time of the 2010 Mentawai event; TV programs and NGO or governmental public education programs about tsunami evacuation are now widespread in Indonesia, and many people have fundamental knowledge of earthquake and tsunami disasters. We made a drill book based on the victims' stories, with painted scenes of the two events, and used it in a disaster education event for a school committee in West Java. About 80% of the students and teachers evaluated the contents of the drill book as useful for correct understanding.

  7. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation
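
    The simulation approach of topic (i) can be sketched with the inverse discrete Fourier transform: draw phase differences between adjacent frequency components, accumulate them into phase angles, and invert against a target amplitude spectrum. The amplitude model and phase-difference distribution below are simplified assumptions, not the fitted models of the dissertation.

```python
import numpy as np

# Sketch: synthesize an acceleration time history by inverse DFT from a
# target Fourier amplitude spectrum and accumulated phase differences.
rng = np.random.default_rng(2)
n, dt = 1024, 0.01                       # samples, time step (s)
freq = np.fft.rfftfreq(n, dt)

# Assumed smooth amplitude spectrum peaking near 2 Hz:
amp = freq * np.exp(-freq / 2.0)

# Phase differences between adjacent frequencies; their mean sets the
# position of the envelope of the resulting accelerogram (here ~5 s):
dphi = rng.normal(loc=-2 * np.pi * 5.0 * (freq[1] - freq[0]),
                  scale=0.3, size=freq.size)
phase = np.cumsum(dphi)

accel = np.fft.irfft(amp * np.exp(1j * phase), n=n)
print(accel.shape)
```

    Conditioning the spread of the phase differences on the amplitude, as the dissertation does, is what shapes the temporal envelope of the simulated motion.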

  8. Post-Earthquake Debris Management — An Overview

    NASA Astrophysics Data System (ADS)

    Sarkar, Raju

    Every year natural disasters, such as fires, floods, earthquakes, hurricanes, landslides, tsunamis, and tornadoes, challenge various communities of the world. Earthquakes strike with varying degrees of severity and pose both short- and long-term challenges to public service providers. Earthquakes generate shock waves and displace the ground along fault lines. These seismic forces can bring down buildings and bridges in a localized area and damage buildings and other structures in a far wider area. Secondary damage from fires, explosions, and localized flooding from broken water pipes can increase the amount of debris. Earthquake debris includes building materials, personal property, and sediment from landslides. The management of this debris, as well as of the waste generated during reconstruction, can pose significant challenges to national and local capacities. Debris removal is a major component of every post-earthquake recovery operation. Much of the debris generated by an earthquake is not hazardous: soil, building material, and green waste, such as trees and shrubs, make up most of its volume. These wastes not only create significant health problems and a very unpleasant living environment if not disposed of safely and appropriately, but can also impose economic burdens on the reconstruction phase. In practice, most of the debris may be disposed of at landfill sites, reused as construction material, or recycled into useful commodities. Therefore, the debris clearance operation should focus on the geotechnical engineering approach as an important post-earthquake issue, to control the quality of the incoming flow of potential soil materials. In this paper, the importance of an emergency management perspective in this geotechnical approach, taking into account the different criteria related to the operation's execution, is proposed by highlighting the key issues concerning the handling of the construction

  10. Practical Applications for Earthquake Scenarios Using ShakeMap

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, B.; Quitoriano, V.; Goltz, J.

    2001-12-01

    estimates that will substantially improve over empirical relations at these frequencies will require developing cost-effective numerical tools for proper theoretical inclusion of known complex ground motion effects. Current efforts underway must continue in order to obtain site, basin, and deeper crustal structure, and to characterize and test 3D earth models (including attenuation and nonlinearity). In contrast, longer period synthetics (>2 sec) are currently being generated in a deterministic fashion to include 3D and shallow site effects, an improvement on empirical estimates alone. As progress is made, we will naturally incorporate such advances into the ShakeMap scenario earthquake and processing methodology. Our scenarios are currently used heavily in emergency response planning and loss estimation. Primary users include city, county, state and federal government agencies (e.g., the California Office of Emergency Services, FEMA, the County of Los Angeles) as well as emergency response planners and managers for utilities, businesses, and other large organizations. We have found the scenarios are also of fundamental interest to many in the media and the general community interested in the nature of the ground shaking likely experienced in past earthquakes as well as effects of rupture on known faults in the future.

  11. Earthquakes: Predicting the unpredictable?

    Hough, Susan E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are, like other chaotic natural processes, fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  12. Earthquakes: hydrogeochemical precursors

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  13. Initiation and runaway process of Tsaoling landslide, triggered by the 1999 Taiwan Chi-Chi earthquake, as studied by high-velocity friction experiments (Invited)

    NASA Astrophysics Data System (ADS)

    Togo, T.; Shimamoto, T.; Dong, J.; Lee, C.

    2013-12-01

    High-velocity friction experiments in the last two decades have demonstrated dramatic weakening of simulated faults at seismic slip rates on the order of 1 m/s (e.g., Di Toro et al., 2011, Nature). Similar experiments revealed very low friction of landslide materials (0.05-0.2 in friction coefficient) that can cause catastrophic landslides with velocities exceeding even 10 m/s (e.g., Miyamoto et al. (2009) on the 1999 Tsaoling landslide in Taiwan; Yano et al. (2009) on the 1999 Jiufengershan landslide in Taiwan; Ferri et al. (2010, 2011) on the 1963 Vaiont landslide in Italy; Kuo et al. (2011) on the 2009 Hsiaolin landslide in Taiwan). Those studies strongly suggest that common processes operate in fault zones and along the slip surfaces of catastrophic landslides on bedding planes, fractures or joints. As for catastrophic landslides triggered by an earthquake, an important issue to be addressed is how a landslide initiates during seismic ground motion. Thus we have studied the initiation and runaway process of the Tsaoling landslide by idealizing the initial landslide movement during seismic ground motion as an oscillating accelerating/decelerating motion. The Tsaoling landslide is the largest landslide among those triggered by the Chi-Chi earthquake, with a volume of about 130 Mm3. The landslide took place along very planar bedding planes of porous Pliocene sedimentary rocks (mostly siltstone and sandstone), with a dip angle of 14 degrees. A seismic record at a station about 500 m away from the landslide and the testimony of a survivor who slid on top of the landslide mass indicate that the average speed of the landslide reached 20-40 m/s. A simple sliding-block analysis indicates that the kinetic friction coefficient has to be 0.05-0.15 to produce such a high velocity. Moreover, Tang et al. (2009, Eng. Geol.) analyzed the landslide motion with the discrete element method and showed that the landslide mass must have slid nearly as an intact mass, without much
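The sliding-block analysis mentioned in the abstract can be sketched numerically. The 14-degree dip is from the text; the 500 m runout distance and the exact friction values below are illustrative assumptions, not numbers from the study:

```python
import math

def block_velocity(dip_deg, mu_k, runout_m, g=9.81):
    """Speed of a rigid block after sliding runout_m down a plane of dip
    dip_deg against a constant kinetic friction coefficient mu_k."""
    theta = math.radians(dip_deg)
    accel = g * (math.sin(theta) - mu_k * math.cos(theta))  # net downslope acceleration
    if accel <= 0:
        return 0.0  # friction exceeds the driving stress: no runaway
    return math.sqrt(2.0 * accel * runout_m)

# 14-degree bedding plane (from the abstract); 500 m runout is assumed
for mu in (0.05, 0.15, 0.25):
    print(f"mu_k = {mu:.2f}: v = {block_velocity(14.0, mu, 500.0):.0f} m/s")
```

With these assumed numbers, friction coefficients of 0.05-0.15 produce speeds of a few tens of m/s, consistent with the witnessed 20-40 m/s, while at mu_k = 0.25 the block on a 14-degree plane does not accelerate at all.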

  14. Report on the Aseismic Slip, Tremor, and Earthquakes Workshop

    Gomberg, Joan; Roeloffs, Evelyn; Trehu, Anne; Dragert, Herb; Meertens, Charles

    2008-01-01

    This report summarizes the discussions and information presented during the workshop on Aseismic Slip, Tremor, and Earthquakes. Workshop goals included improving coordination among those involved in conducting research related to these phenomena, assessing the implications for earthquake hazard assessment, and identifying ways to capitalize on the education and outreach opportunities presented by these phenomena. Research activities of focus included making, disseminating, and analyzing relevant measurements; the relationships among tremor, aseismic or 'slow-slip', and earthquakes; and discovering the underlying causative physical processes. More than 52 participants contributed to the workshop, held February 25-28, 2008 in Sidney, British Columbia. The workshop was sponsored by the U.S. Geological Survey, the National Science Foundation's Earthscope Program and UNAVCO Consortium, and the Geological Survey of Canada. This report has five parts. In the first part, we integrate the information exchanged at the workshop as it relates to advancing our understanding of earthquake generation and hazard. In the second part, we summarize the ideas and concerns discussed in workshop working groups on Opportunities for Education and Outreach, Data and Instrumentation, User and Public Needs, and Research Coordination. The third part presents summaries of the oral presentations. The oral presentations are grouped as they were at the workshop in the categories of phenomenology, underlying physical processes, and implications for earthquake hazards. The fourth part contains the meeting program and the fifth part lists the workshop participants. References noted in parentheses refer to the authors of presentations made at the workshop, and published references are noted in square brackets and listed in the Reference section.
Appendix A contains abstracts of all participant presentations and posters, which also have been posted online, along with presentations and author contact

  15. High-frequency seismic signals associated with glacial earthquakes in Greenland

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2017-12-01

    Glacial earthquakes are magnitude 5 seismic events generated by iceberg calving at marine-terminating glaciers. They are characterized by teleseismically detectable signals at 35-150 seconds period that arise from the rotation and capsize of gigaton-sized icebergs (e.g., Ekström et al., 2003; Murray et al., 2015). Questions persist regarding the details of this calving process, including whether there are characteristic precursory events such as ice slumps or pervasive crevasse opening before an iceberg rotates away from the glacier. We investigate the high-frequency seismic signals produced before, during, and after glacial earthquakes. We analyze a set of 94 glacial earthquakes that occurred at three of Greenland's major glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier, from 2001 - 2013. We employ data from the GLISN network of broadband seismometers around Greenland and from short-term seismic deployments located close to the glaciers. These data are bandpass filtered to 3 - 10 Hz and trimmed to one-hour windows surrounding known glacial earthquakes. We observe elevated amplitudes of the 3 - 10 Hz signal for 500 - 1500 seconds spanning the time of each glacial earthquake. These durations are long compared to the 60 second glacial-earthquake source. In the majority of cases we observe an increase in the amplitude of the 3 - 10 Hz signal 200 - 600 seconds before the centroid time of the glacial earthquake and sustained high amplitudes for up to 800 seconds after. In some cases, high-amplitude energy in the 3 - 10 Hz band precedes elevated amplitudes in the 35 - 150 s band by 300 seconds. We explore possible causes for these high-frequency signals, and discuss implications for improving understanding of the glacial-earthquake source.
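The filtering-and-windowing workflow described above can be sketched as follows. The synthetic trace, sampling rate, and detection threshold are illustrative assumptions (the study uses real GLISN waveforms, presumably with conventional bandpass filters rather than the FFT brick-wall filter used here for self-containment):

```python
import numpy as np

def band_envelope(trace, fs, fmin=3.0, fmax=10.0):
    """Bandpass trace to fmin-fmax Hz (FFT brick-wall) and return the
    amplitude envelope via the analytic signal."""
    freqs = np.fft.fftfreq(trace.size, d=1 / fs)
    spec = np.fft.fft(trace)
    spec[(np.abs(freqs) < fmin) | (np.abs(freqs) > fmax)] = 0.0
    # analytic signal: zero negative frequencies, double positive ones
    spec[freqs < 0] = 0.0
    spec[freqs > 0] *= 2.0
    return np.abs(np.fft.ifft(spec))

# Synthetic one-hour record at 50 Hz: background noise plus a ~900 s
# burst of 5 Hz energy standing in for a calving episode.
fs = 50.0
t = np.arange(0, 3600, 1 / fs)
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(t.size)
burst = (t > 1500) & (t < 2400)
trace[burst] += np.sin(2 * np.pi * 5.0 * t[burst])

env = band_envelope(trace, fs)
elevated = env > 3 * np.median(env)
print(f"elevated-amplitude duration: {elevated.sum() / fs:.0f} s")
```

Measuring how long the 3-10 Hz envelope stays above a background threshold, as here, is one simple way to quantify the 500-1500 s elevated-amplitude durations reported in the abstract.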

  16. Earthquake Warning Performance in Vallejo for the South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Wurman, G.; Price, M.

    2014-12-01

    In 2002 and 2003, Seismic Warning Systems, Inc. installed first-generation QuakeGuard™ earthquake warning devices at all eight fire stations in Vallejo, CA. These devices are designed to detect the P-wave of an earthquake and initiate predetermined protective actions if the impending shaking is estimated at approximately Modified Mercalli Intensity V or greater. At the Vallejo fire stations the devices were set up to sound an audio alert over the public address system and to command the equipment bay doors to open. In August 2014, after more than 11 years of operating in the fire stations with no false alarms, the five units that were still in use triggered correctly on the MW 6.0 South Napa earthquake, less than 16 km away. The audio alert sounded in all five stations, providing fire fighters with 1.5 to 2.5 seconds of warning before the arrival of the S-wave, and the equipment bay doors opened in three of the stations. In one station the doors were disconnected from the QuakeGuard device, and another station lost power before the doors opened completely. These problems highlight just a small portion of the complexity associated with realizing actionable earthquake warnings. The issues experienced in this earthquake have already been addressed in subsequent QuakeGuard product generations, with downstream connection monitoring and backup power for critical systems. The fact that the fire fighters in Vallejo were afforded even two seconds of warning at these epicentral distances results from the design of the QuakeGuard devices, which focuses on rapid false positive rejection and ground motion estimates. We discuss the performance of the ground motion estimation algorithms, with an emphasis on the accuracy and timeliness of the estimates at close epicentral distances.
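The warning times reported above follow from simple P/S travel-time arithmetic. A minimal sketch, assuming nominal crustal velocities (Vp = 6 km/s, Vs = 3.5 km/s) and a hypothetical 0.5 s on-site detection delay, none of which is stated in the abstract:

```python
def warning_time(dist_km, detect_s=0.5, vp=6.0, vs=3.5):
    """Seconds between alert issuance and S-wave arrival for a source
    dist_km away, assuming the alert is issued detect_s after the
    P wave reaches a sensor at the same site."""
    return dist_km / vs - (dist_km / vp + detect_s)

print(f"~{warning_time(16.0):.1f} s of warning 16 km from the source")
```

With these assumed values the sketch yields roughly 1.4 s at 16 km, the same order as the 1.5-2.5 s the fire fighters received.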

  17. GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.

    2015-12-01

    The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena have started to be considered by the GPS community as a way to mitigate GPS signal degradation over regions of earthquake preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has proved that ionospheric anomalies registered before earthquakes are initiated by processes in the atmospheric boundary layer over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if we consider the final stage of earthquake preparation in a multidimensional space where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland, within the framework of the ISSI and ESA projects) to identify the ionospheric precursors. They are also useful for determining all three parameters necessary for an earthquake forecast: the impending earthquake's epicenter position, expectation time, and magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. Results obtained by the different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.

  18. Earthquake Light

    DTIC Science & Technology

    1985-08-15

    movement, piezoelectricity generated by stress release, etc. Lightning strokes of whatever origin can, of course, be expected occasionally to set fires, as... be enhanced by earth movement: the former, by an elevated rate of release of radioactive gases (e.g., Rn-222) into the air; and the latter, through... the piezoelectric effect, alteration in telluric currents, etc. Changes in both parameters could be generated over extended periods of time through a

  19. Important Earthquake Engineering Resources

    Important Earthquake Engineering Resources (Pacific Earthquake Engineering Research Center, PEER): American Concrete Institute; Consortium of Organized Strong-Motion Observation Systems (COSMOS); Consortium of Universities for Research in Earthquake Engineering

  20. 2016 National Earthquake Conference

    Thank you to our Presenting Sponsor, the California Earthquake Authority. What's New? What's Next? What's Your Role in Building a National Strategy? The National Earthquake Conference (NEC) brings together state government leaders, social science practitioners, and U.S. State and Territorial Earthquake Managers

  1. Can We Predict Earthquakes?

    Johnson, Paul

    2018-01-16

    The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes — and when.

  2. Earthquake and Schools. [Videotape].

    ERIC Educational Resources Information Center

    Federal Emergency Management Agency, Washington, DC.

    Designing schools to make them more earthquake resistant and protect children from the catastrophic collapse of the school building is discussed in this videotape. It reveals that 44 of the 50 U.S. states are vulnerable to earthquake, but most schools are structurally unprepared to take on the stresses that earthquakes exert. The cost to the…

  3. Children's Ideas about Earthquakes

    ERIC Educational Resources Information Center

    Simsek, Canan Lacin

    2007-01-01

    Earthquakes, as natural disasters, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced to that extent. In particular, good training regarding earthquakes received in primary schools is considered…

  4. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  5. CISN ShakeAlert Earthquake Early Warning System Monitoring Tools

    NASA Astrophysics Data System (ADS)

    Henson, I. H.; Allen, R. M.; Neuhauser, D. S.

    2015-12-01

    CISN ShakeAlert is a prototype earthquake early warning system being developed and tested by the California Integrated Seismic Network. The system has recently been expanded to support redundant data processing and communications. It now runs on six machines at three locations with ten Apache ActiveMQ message brokers linking together 18 waveform processors, 12 event association processes and 4 Decision Module alert processes. The system ingests waveform data from about 500 stations and generates many thousands of triggers per day, from which a small portion produce earthquake alerts. We have developed interactive web browser system-monitoring tools that display near real time state-of-health and performance information. This includes station availability, trigger statistics, communication and alert latencies. Connections to regional earthquake catalogs provide a rapid assessment of the Decision Module hypocenter accuracy. Historical performance can be evaluated, including statistics for hypocenter and origin time accuracy and alert time latencies for different time periods, magnitude ranges and geographic regions. For the ElarmS event associator, individual earthquake processing histories can be examined, including details of the transmission and processing latencies associated with individual P-wave triggers. Individual station trigger and latency statistics are available. Detailed information about the ElarmS trigger association process for both alerted events and rejected events is also available. The Google Web Toolkit and Map API have been used to develop interactive web pages that link tabular and geographic information. Statistical analysis is provided by the R-Statistics System linked to a PostgreSQL database.
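The kind of alert-latency statistics such monitoring tools report can be sketched as follows. The event timestamps below are invented for illustration and do not come from the ShakeAlert system (which, per the abstract, stores them in a PostgreSQL database and analyzes them with R):

```python
import statistics

# Hypothetical (origin_time, alert_time) pairs in epoch seconds
events = [(0.0, 6.2), (100.0, 105.1), (250.0, 258.9), (400.0, 404.5), (600.0, 607.3)]

latencies = sorted(alert - origin for origin, alert in events)
print(f"median alert latency: {statistics.median(latencies):.1f} s")
print(f"worst alert latency:  {latencies[-1]:.1f} s")
```

Binning the same latencies by magnitude range, time period, or region, as the dashboard does, is a straightforward extension of this aggregation.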

  6. Operational earthquake forecasting can enhance earthquake preparedness

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).
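The time-dependent probabilities that OEF disseminates are commonly derived from aftershock-rate models. A minimal sketch using the Omori-Utsu law with illustrative parameter values (k, c, and p below are assumptions, not values from the paper):

```python
import math

def omori_rate(t_days, k=0.8, c=0.1, p=1.1):
    """Omori-Utsu aftershock rate (events/day) t_days after a mainshock."""
    return k / (t_days + c) ** p

def prob_event(t0, t1, n_steps=1000, **params):
    """Poisson probability of at least one event in [t0, t1] days, found
    by integrating the Omori-Utsu rate with the trapezoid rule."""
    dt = (t1 - t0) / n_steps
    rates = [omori_rate(t0 + i * dt, **params) for i in range(n_steps + 1)]
    expected = dt * (sum(rates) - 0.5 * (rates[0] + rates[-1]))
    return 1.0 - math.exp(-expected)

print(f"day 1-2 after mainshock:   P = {prob_event(1.0, 2.0):.2f}")
print(f"day 30-31 after mainshock: P = {prob_event(30.0, 31.0):.3f}")
```

The sketch illustrates the abstract's central point: the one-day probability is far higher early in a sequence than a month later, so the hazard is genuinely time-dependent.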

  7. Interactive Visualization to Advance Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn

    2008-04-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations.

  8. Interactive visualization to advance earthquake simulation

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  9. Defining "Acceptable Risk" for Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Tucker, B.

    2001-05-01

    The greatest and most rapidly growing earthquake risk for mortality is in developing countries. Further, earthquake risk management actions of the last 50 years have reduced the average lethality of earthquakes in earthquake-threatened industrialized countries. (This is separate from the trend of the increasing fiscal cost of earthquakes there.) Despite these clear trends, every new earthquake in developing countries is described in the media as a "wake-up" call, announcing the risk these countries face. GeoHazards International (GHI) works at both the community and the policy levels to try to reduce earthquake risk. GHI reduces death and injury by helping vulnerable communities recognize their risk and the methods to manage it: raising awareness of the risk, building local institutions to manage that risk, and strengthening schools to protect and train the community's future generations. At the policy level, GHI, in collaboration with research partners, is examining whether "acceptance" of these large risks by people in these countries and by international aid and development organizations explains the lack of activity in reducing these risks. The goal of this pilot project - the Global Earthquake Safety Initiative (GESI) - is to develop and evaluate a means of measuring the risk and the effectiveness of risk mitigation actions in the world's largest, most vulnerable cities: in short, to develop an earthquake risk index. One application of this index is to compare the risk and the risk mitigation effort of "comparable" cities. By this means, Lima, for example, can compare the risk of its citizens dying due to earthquakes with the risk of citizens in Santiago and Guayaquil. The authorities of Delhi and Islamabad can compare the relative risk from earthquakes of their school children. This index can be used to measure the effectiveness of alternate mitigation projects, to set goals for mitigation projects, and to plot progress meeting those goals.
The preliminary

  10. Infrasound Signal Characteristics from Small Earthquakes

    DTIC Science & Technology

    2011-09-01

    Stephen J. Arrowsmith, J. Mark Hale, Relu Burlacu, Kristine L. Pankow, Brian W. Stump... ABSTRACT: Physical insight into source properties that contribute to the generation of infrasound signals is critical to understanding the... m, with one element being co-located with a seismic station. One of the goals of this project is the recording of infrasound from earthquakes of

  11. How fault geometry controls earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.
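The flat-versus-curved contrast described above can be sketched for a 2-D depth profile using the standard plane-curve curvature formula. The two profiles below are illustrative inventions, not the authors' slab geometries:

```python
import numpy as np

def mean_curvature(x_km, z_km):
    """Mean unsigned curvature (1/km) of a 2-D slab depth profile z(x),
    kappa = |z''| / (1 + z'^2)^(3/2), via numerical differentiation."""
    dz = np.gradient(z_km, x_km)
    d2z = np.gradient(dz, x_km)
    return (np.abs(d2z) / (1.0 + dz ** 2) ** 1.5).mean()

x = np.linspace(0, 200, 201)           # down-dip distance, km
flat = np.tan(np.radians(10)) * x      # planar 10-degree megathrust
curved = 60 * (1 - np.cos(x / 80.0))   # interface that steepens down-dip

print(f"flat interface:   {mean_curvature(x, flat):.5f} 1/km")
print(f"curved interface: {mean_curvature(x, curved):.5f} 1/km")
```

The planar interface has essentially zero curvature everywhere, while the steepening one does not; ranking real megathrusts by such a metric is the spirit of the comparison in the abstract.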

  12. New ideas about the physics of earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Klein, William

    1995-07-01

    It may be no exaggeration to claim that this most recent quadrennium has seen more controversy and thus more progress in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994), and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging that is focused around the development and analysis of large scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis, and seismic zonation.

  13. Rupture process of a multiple main shock sequence: analysis of teleseismic, local and field observations of the Tennant Creek, Australia, earthquakes of January 22, 1988

    Choy, G.L.; Bowman, J.R.

    1990-01-01

    On January 22, 1988, three large intraplate earthquakes (with MS 6.3, 6.4 and 6.7) occurred within a 12-hour period near Tennant Creek, Australia. Broadband displacement and velocity records of body waves from teleseismically recorded data are analyzed to determine source mechanisms, depths, and complexity of rupture of each of the three main shocks. Hypocenters of an additional 150 foreshocks and aftershocks constrained by local arrival time data and field observations of surface rupture are used to complement the source characteristics of the main shocks. The interpretation of the combined data sets suggests that the overall rupture process involved unusually complicated stress release. Rupture characteristics suggest that substantial slow slip occurred on each of the three fault interfaces that was not accompanied by major energy release. Variation of focal depth and the strong increase of moment and radiated energy with each main shock imply that lateral variations of strength were more important than vertical gradients of shear stress in controlling the progression of rupture. -from Authors

  14. Earthquake activity along the Himalayan orogenic belt

    NASA Astrophysics Data System (ADS)

    Bai, L.; Mori, J. J.

    2017-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences, which are located in the western, central, and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma Arc subduction zone is a typical oblique plate convergence zone. The eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  15. Statistical tests of simple earthquake cycle models

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10^19 Pa s and ηM ≳ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
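    The two-sample Kolmogorov-Smirnov statistic underlying this kind of model rejection is simply the maximum gap between two empirical cumulative distribution functions. The sketch below is illustrative only: the function name and the toy samples are assumptions, not taken from the study (which used SciPy-style tooling against real slip-rate data).

    ```python
    import bisect

    def ks_statistic(sample_a, sample_b):
        """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
        difference between the empirical CDFs of the two samples."""
        a, b = sorted(sample_a), sorted(sample_b)

        def ecdf(sorted_sample, x):
            # Fraction of sample values <= x.
            return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

        # The maximum CDF difference occurs at one of the observed points.
        points = sorted(set(a) | set(b))
        return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

    # Toy comparison: model-predicted vs. "observed" normalized slip rates.
    predicted = [0.2, 0.4, 0.5, 0.7, 0.9]
    observed = [0.1, 0.3, 0.5, 0.8, 0.95]
    d = ks_statistic(predicted, observed)  # large D suggests the model misfits
    ```

    In practice D is converted to a p-value (or compared against a critical value at α = 0.05) to decide whether a candidate viscosity model can be rejected.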

  16. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
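    A standard way to quantify a correlation between small events and a periodic (e.g. tidal) stress is a phase-clustering test of the Schuster type: assign each event a phase within the forcing period and ask whether the phases are non-uniform. The sketch below is a hedged illustration of that general technique; the function name and toy catalogs are assumptions, not the authors' model.

    ```python
    import math

    def schuster_p_value(event_times, period):
        """Schuster test for phase clustering of event times relative to a
        periodic stress with the given period. The p-value is the probability
        of a resultant at least this large under uniformly random phases;
        small p suggests the events correlate with the periodic forcing."""
        phases = [2 * math.pi * (t % period) / period for t in event_times]
        # Sum unit phase vectors; clustered phases give a long resultant.
        rx = sum(math.cos(p) for p in phases)
        ry = sum(math.sin(p) for p in phases)
        r_squared = rx * rx + ry * ry
        return math.exp(-r_squared / len(event_times))

    # Toy catalogs (period = 1 day in arbitrary units):
    clustered = [5.0] * 30            # all events at the same tidal phase
    uniform = [i / 12 for i in range(12)]  # events spread evenly in phase
    p_clustered = schuster_p_value(clustered, 1.0)  # very small p
    p_uniform = schuster_p_value(uniform, 1.0)      # p near 1
    ```

    In triggering studies such a p-value is typically tracked over sliding time windows, flagging intervals where p drops below a threshold (e.g. 0.05) as periods of significant tidal correlation.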

  17. Crowdsourced earthquake early warning

    PubMed Central

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, an