Science.gov

Sample records for large scale earthquakes

  1. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well determined source parameters shows that these earthquakes obey a scaling law similar to large interplate earthquakes, in which M0 varies as L^2, or u = αL, where L is rupture length and u is slip. In contrast to interplate earthquakes, for which α ≈ 1 × 10^-5, for the intraplate events α ≈ 6 × 10^-5, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. This implies that intraplate faults have a higher frictional strength than plate boundaries, and hence, that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.
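
    The scaling relation quoted above invites a quick numerical illustration. The sketch below is not from the paper: it evaluates u = αL together with a simple moment estimate M0 = μ·u·L·W for a fault of fixed width, where the rigidity μ and width W are assumed round values, to show how the factor-of-six difference in α maps into slip.

```python
# Illustrative sketch (not the paper's code): slip and moment under the
# scaling u = alpha * L, with M0 = mu * u * L * W for a fixed-width fault.
# MU and W are assumed round values, chosen only for illustration.

MU = 3.0e10   # shear modulus, Pa (assumed)
W = 15e3      # seismogenic width, m (assumed)

def slip_and_moment(rupture_length_m, alpha):
    """Return (mean slip in m, seismic moment in N*m)."""
    u = alpha * rupture_length_m          # u = alpha * L
    m0 = MU * u * rupture_length_m * W    # M0 grows as L^2 for fixed W
    return u, m0

for label, alpha in (("interplate", 1e-5), ("intraplate", 6e-5)):
    u, m0 = slip_and_moment(100e3, alpha)  # a 100 km rupture
    print(f"{label}: u = {u:.1f} m, M0 = {m0:.1e} N*m")
```

    The six-fold slip contrast at fixed rupture length mirrors the six-fold stress-drop contrast reported in the abstract.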

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by USGS, together with the rupture model by Song et al., is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.
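
    As a back-of-envelope check on the quoted problem size, the grid-point count follows directly from the domain dimensions and spacing. The sketch below is rough arithmetic, not WPP's actual discretization; it lands at the same order of magnitude as the quoted ~2.2 billion points (the production grid was presumably trimmed differently).

```python
# Rough grid arithmetic for a 550 km x 250 km x 40 km domain at a
# uniform 125 m spacing; a sanity check, not WPP's exact discretization.

def grid_points(lx_m, ly_m, lz_m, spacing_m):
    nx, ny, nz = (int(l / spacing_m) + 1 for l in (lx_m, ly_m, lz_m))
    return nx * ny * nz

n = grid_points(550e3, 250e3, 40e3, 125.0)
print(f"{n:.2e} grid points")                        # ~2.8e9, same order as quoted
print(f"{n * 4 / 1e9:.0f} GB per float32 3-D field")  # why wavelet compression matters
```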

  3. A geometric frequency-magnitude scaling transition: Measuring b = 1.5 for large earthquakes

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Holliday, James R.; Turcotte, Donald L.; Rundle, John B.

    2012-04-01

    We identify two distinct scaling regimes in the frequency-magnitude distribution of global earthquakes. Specifically, we measure the scaling exponent b = 1.0 for "small" earthquakes with 5.5 < m < 7.6 and b = 1.5 for "large" earthquakes with 7.6 < m < 9.0. This transition, at m_t = 7.6, can be explained by geometric constraints on the rupture. In conjunction with supporting literature, this corroborates theories in favor of fully self-similar and magnitude-independent earthquake physics. We also show that the scaling behavior and abrupt transition between the scaling regimes imply that earthquake ruptures have compact shapes and smooth rupture fronts.
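
    The two-regime Gutenberg-Richter distribution described above is easy to write down. The sketch below is an illustration, not the authors' fit: it evaluates the log cumulative rate with b = 1.0 below m_t = 7.6 and b = 1.5 above, joining the branches at m_t so the distribution is continuous; the normalization constant is arbitrary.

```python
B_SMALL, B_LARGE, M_T = 1.0, 1.5, 7.6   # exponents and transition from the abstract

def log10_cumulative_rate(m, a=8.0):
    """log10 N(>= m) for the two-regime Gutenberg-Richter law.

    'a' is an arbitrary normalization (an assumption, not a fitted value)."""
    if m <= M_T:
        return a - B_SMALL * m
    # continue from the small-event branch so N is continuous at M_T
    return a - B_SMALL * M_T - B_LARGE * (m - M_T)

for m in (6.0, 7.0, 7.6, 8.5, 9.0):
    print(f"m = {m:.1f}  ->  log10 N = {log10_cumulative_rate(m):+.2f}")
```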

  4. Reduce seismic design conservatism through large-scale earthquake experiments

    SciTech Connect

    Tang, H.T.; Stepp, J.C.

    1992-01-01

    For structures founded on soil deposits, the interaction between the soil and the structure caused by incident seismic waves modifies the foundation input motion and the dynamic characteristics of the soil-structure system. As a result, soil-structure interaction (SSI) plays a critical role in the design of nuclear plant structures. Recognizing that experimental validation and quantification are required, two scaled cylindrical reinforced-concrete containment models (1/4-scale and 1/12-scale of typical full-scale reactor containments) were constructed in Lotung, an active seismic region in Taiwan. Forced vibration tests (FVT) were conducted to characterize the dynamic behavior of the soil-structure system. Based on these data, a series of round-robin blind prediction and post-test correlation analyses using various currently available SSI methods was performed.

  5. Earthquake triggering and large-scale geologic storage of carbon dioxide

    PubMed Central

    Zoback, Mark D.; Gorelick, Steven M.

    2012-01-01

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185–5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions. PMID:22711814

  6. Large scale dynamic rupture scenario of the 2004 Sumatra-Andaman megathrust earthquake

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Madden, Elizabeth H.; Wollherr, Stephanie; Gabriel, Alice A.

    2016-04-01

    The Great Sumatra-Andaman earthquake of 26 December 2004 is one of the strongest and most devastating earthquakes in recent history. Most of the damage and the ~230,000 fatalities were caused by the tsunami generated by the Mw 9.1-9.3 event. Various finite-source models of the earthquake have been proposed, but poor near-field observational coverage has led to distinct differences in source characterization. Even the fault dip angle and depth extent are subject to debate. We present a physically realistic dynamic rupture scenario of the earthquake using state-of-the-art numerical methods and seismotectonic data. Due to the lack of near-field observations, our setup is constrained by the overall characteristics of the rupture, including the magnitude, propagation speed, and extent along strike. In addition, we incorporate the detailed geometry of the subducting fault using Slab1.0 to the south and aftershock locations to the north, combined with high-resolution topography and bathymetry data. The possibility of inhomogeneous background stress, resulting from the curved shape of the slab along strike and the large fault dimensions, is discussed. The possible activation of thrust faults splaying off the megathrust in the vicinity of the hypocenter is also investigated. Dynamic simulation of this 1300 to 1500 km rupture is a computational and geophysical challenge. In addition to capturing the large-scale rupture, the simulation must resolve the process zone at the rupture tip, whose characteristic length is comparable to that of smaller earthquakes and which shrinks with propagation distance. Thus, the fault must be finely discretised. Moreover, previously published inversions agree on a rupture duration of ~8 to 10 minutes, suggesting an overall slow rupture speed. Hence, both long temporal scales and large spatial dimensions must be captured. We use SeisSol, a software package based on an ADER-DG scheme solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time.
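
    The quoted duration and extent imply the slow average rupture speed mentioned above; the check below is simple division under the assumption of roughly unilateral propagation along strike.

```python
# Average rupture speed implied by the quoted length and duration
# (assuming roughly unilateral propagation along strike).
for length_km in (1300, 1500):
    for minutes in (8, 10):
        v = length_km / (minutes * 60)
        print(f"L = {length_km} km over {minutes} min -> v ~ {v:.1f} km/s")
```

    Speeds of roughly 2-3 km/s, toward the low end of typical crustal rupture velocities, are consistent with the "overall slow rupture" inferred from the inversions.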

  7. DYNAMIC BEHAVIOR OF CONCRETE GRAVITY DAM ON JOINTED ROCK FOUNDATION DURING LARGE-SCALE EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Kimata, Hiroyuki; Fujita, Yutaka; Horii, Hideyuki; Yazdani, Mahmoud

    Dynamic cracking analysis of a concrete gravity dam during a large-scale earthquake has been carried out, considering the progressive failure of the jointed rock foundation. First, in order to take into account the progressive failure of the rock foundation, a constitutive law for jointed rock is assumed and its validity is evaluated by simulation of a past experimental model. Then, dynamic cracking analysis of a 100-m-high dam model is performed, using the previously proposed approach with tangent stiffness-proportional damping to express the propagation behavior of cracks and the constitutive law of jointed rock. The crack propagation behavior of the dam body and the progressive failure of the jointed rock foundation are investigated.

  8. Optimization and Scalability of a Large-scale Earthquake Simulation Application

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Olsen, K. B.; Hu, Y.; Day, S.; Dalguer, L. A.; Minster, B.; Moore, R.; Zhu, J.; Maechling, P.; Jordan, T.

    2006-12-01

    In 2004, the Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation, called TeraShake. TeraShake propagated seismic waves across a domain of 600 km by 300 km by 80 km at 200 meter resolution and 1.8 billion grid points, among the largest and most detailed earthquake simulations of the southern San Andreas fault. The TeraShake 1 code is based on a 4th order FD Anelastic Wave Propagation Model (AWM), developed by K. Olsen, using a kinematic source description. The enhanced TeraShake 2 then added a new physics-based dynamic rupture component, extending the capability to very-large-scale earthquake simulations. A 100 m resolution was used to generate a physically realistic earthquake source description for the San Andreas fault. The executions of very-large-scale TeraShake 2 simulations with the high-resolution dynamic source used up to 1024 processors on the TeraGrid, adding more than 60 TB of simulation output to the 168 TB SCEC digital library, managed by the SDSC Storage Resource Broker (SRB) at SDSC. The execution of these large simulations requires high levels of expertise and resource coordination. We examine the lessons learned in enabling the execution of the TeraShake application. In particular, we look at the challenges imposed by single-processor optimization of the application performance, optimization of the I/O handling, optimization of the run initialization, and the execution of the data-intensive simulations. The TeraShake code was optimized to improve scalability to 2048 processors, with a parallel efficiency of 84%. Our latest TeraShake simulation sustains 1 Teraflop/s performance, completing a simulation in less than 9 hours on the SDSC DataStar. This is more than 10 times faster than previous TeraShake simulations. Some of the TeraShake production simulations were carried out using grid computing resources, including the execution on NCSA TeraGrid resources, and run-time archiving outputs onto SDSC
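
    The scalability figures quoted above unpack with the standard definitions: parallel efficiency is speedup divided by processor count. A minimal sketch (assumed textbook definitions, illustrative only):

```python
# Parallel efficiency = speedup / processor count, so the quoted 84%
# on 2048 processors corresponds to an effective ~1720x speedup.

def speedup_from_efficiency(efficiency, n_procs):
    return efficiency * n_procs

print(f"{speedup_from_efficiency(0.84, 2048):.0f}x effective speedup")

# 1 Tflop/s sustained over a < 9 hour run bounds the total work:
print(f"< {1e12 * 9 * 3600:.1e} floating-point operations")
```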

  9. Using Speculative Execution to Reduce Communication in a Parallel Large Scale Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Heien, E. M.; Yikilmaz, M. B.; Sachs, M. K.; Rundle, J. B.; Turcotte, D. L.; Kellogg, L. H.

    2011-12-01

    Earthquake simulations on parallel systems can be communication intensive due to local events (rupture waves) which have global effects (stress transfer). These events require global communication to transmit the effects of increased stress to model elements on other computing nodes. We describe a method of using speculative execution in a large scale parallel computation to decrease communication and improve simulation speed. This method exploits the tendency of earthquake ruptures to remain physically localized even though their effects on stress will be felt over long ranges. In this method we assume the stress transfer caused by a rupture remains localized and avoid global communication until the rupture has a high probability of passing to another node. We then calculate the stress state of the system to ensure that the rupture in fact remained localized, proceeding if the assumption was correct or rolling back the calculation otherwise. Using this method we are able to reduce communication frequency by 78%, in turn decreasing communication time by up to 66% and improving simulation speed by up to 45%.
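
    The speculate/verify/roll-back pattern described above can be sketched in a few lines. The toy below is purely schematic (placeholder state and probabilities, not the authors' code): each step snapshots the local state, assumes the rupture's stress transfer stays on-node, and only pays for a global exchange, with rollback, when that assumption fails.

```python
import random

P_LOCAL = 0.9   # assumed chance a rupture's stress transfer stays on-node

def step(stress, rng):
    """One speculative step over a toy per-node stress array."""
    checkpoint = list(stress)                  # cheap local snapshot
    stress[rng.randrange(len(stress))] += 1.0  # local rupture adds stress
    if rng.random() < P_LOCAL:
        return stress, False                   # speculation holds: no global sync
    stress[:] = checkpoint                     # roll back the speculative step
    mean = sum(stress) / len(stress)           # stand-in for global stress transfer
    stress[:] = [s + 0.01 * mean for s in stress]
    return stress, True

rng = random.Random(0)
stress, syncs = [0.0] * 8, 0
for _ in range(1000):
    stress, synced = step(stress, rng)
    syncs += synced
print(f"global synchronizations: {syncs}/1000 steps")  # roughly 10%
```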

  10. Earthquake Source Simulations: A Coupled Numerical Method and Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Ely, G. P.; Xin, Q.; Faerman, M.; Day, S.; Minster, B.; Kremenek, G.; Moore, R.

    2003-12-01

    We investigate a scheme for interfacing Finite-Difference (FD) and Finite-Element (FE) models in order to simulate dynamic earthquake rupture. The more powerful but slower FE method allows for (1) unusual geometries (e.g. dipping and curved faults), (2) nonlinear physics, and (3) finite displacements. These capabilities are computationally expensive and limit the useful size of the problem that can be solved. Large efficiencies are gained by employing FE only where necessary in the near-source region and coupling this with an efficient FD solution for the surrounding medium. Coupling is achieved through setting up an overlapping buffer zone between the domains modeled by the two methods. The buffer zone is handled numerically as a set of mutual offset boundary conditions. This scheme eliminates the effect of the artificial boundaries at the interface and allows energy to propagate in both directions across the boundary. In general it is necessary to interpolate variables between the meshes and time discretizations used for each model, and this can create artifacts that must be controlled. A modular approach has been used in which either of the two component codes can be substituted with another code. We have successfully demonstrated coupling for a simulation between a second-order FD rupture dynamics code and a fourth-order staggered-grid FD code. To be useful, earthquake source models must capture a large range of length and time scales, which is very computationally demanding. This requires that (for current computer technology) codes must utilize parallel processing. Additionally, if large quantities of output data are to be saved, a high performance data management system is desirable. We show results from a large scale rupture dynamics simulation designed to test these capabilities. We use second-order FD with dimensions of 400 x 800 x 800 nodes, run for 3000 time steps. Data were saved for the entire volume for three components of velocity at every time step.

  11. Reconsidering earthquake scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Wech, A.; Creager, K.; Obara, K.; Agnew, D.

    2016-06-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.
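
    The bounded-zone interpretation above has a compact quantitative form under standard assumptions (constant stress drop and rupture speed; a sketch, not the authors' model): moment grows as T^3 while the rupture expands in two dimensions, and as T once it saturates the width of the slip zone and grows in one dimension.

```python
# Two-regime moment-duration scaling implied by bounded slip zones:
# M0 ~ T^3 during unbounded 2-D growth, M0 ~ T after width saturation.
# T_SAT and C are arbitrary normalizations chosen so the branches join.

T_SAT = 10.0   # duration at which rupture feels the zone boundary, s (assumed)
C = 1.0e16     # moment at T_SAT, N*m (assumed)

def moment(duration_s):
    if duration_s <= T_SAT:
        return C * (duration_s / T_SAT) ** 3   # 2-D growth
    return C * (duration_s / T_SAT)            # bounded 1-D growth

for t in (1, 5, 10, 50, 100):
    print(f"T = {t:5.0f} s  ->  M0 ~ {moment(t):.1e} N*m")
```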

  12. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.; Ruppert, S.

    2002-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by analyzing aftershock sequences in the Western U.S. and Turkey using two different techniques. First we examine the observed regional S-wave spectra by fitting with a parametric model (Walter and Taylor, 2002) with and without variable stress drop scaling. Because the aftershock sequences have common stations and paths, we can examine the S-wave spectra of events by size to determine what type of apparent stress scaling, if any, is most consistent with the data. Second we use regional coda envelope techniques (e.g. Mayeda and Walter, 1996; Mayeda et al., 2002) on the same events to directly measure energy and moment. The coda technique corrects for path and site effects using an empirical Green function technique and independent calibration with surface wave derived moments. Our hope is that by carefully analyzing a very large number of events in a consistent manner using two different techniques we can start to resolve this apparent stress scaling issue. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.
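
    For reference, apparent stress is the rigidity-scaled ratio of radiated energy to seismic moment, σ_a = μ·Es/M0; the debate above is whether this ratio is flat or grows with M0. A minimal sketch with an assumed rigidity and illustrative numbers (not values from the study):

```python
MU = 3.0e10  # shear modulus, Pa (assumed crustal value)

def apparent_stress_pa(radiated_energy_j, moment_nm):
    """Apparent stress sigma_a = mu * Es / M0."""
    return MU * radiated_energy_j / moment_nm

# Constant apparent stress means Es/M0 is the same for all sizes; the
# competing result has sigma_a increasing with M0. Example values only.
for es, m0 in ((6.0e11, 1.0e17), (6.0e14, 1.0e20)):
    print(f"M0 = {m0:.0e} N*m -> sigma_a = {apparent_stress_pa(es, m0)/1e6:.2f} MPa")
```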

  13. Large-scale mapping of landslides in the epicentral area of the Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; and (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  14. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Mayeda, K.; Walter, W. R.

    2003-04-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by applying the same methodology to a series of datasets that spans roughly 10 orders in seismic moment, M0. We will summarize recent results using the coda envelope methodology of Mayeda et al. (2003), which provides the most stable source spectral estimates to date. This methodology eliminates the complicating effects of lateral path heterogeneity, source radiation pattern, directivity, and site response (e.g., amplification, f-max and kappa). We find that in tectonically active continental crustal areas the total radiated energy scales as M0^0.25, whereas in regions of relatively younger oceanic crust, the stress drop is generally lower and exhibits a 1-to-1 scaling with moment. In addition to answering a fundamental question in earthquake source dynamics, this study addresses how one would scale small earthquakes in a particular region up to a future, more damaging earthquake. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  15. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  16. Unified scaling law for earthquakes

    PubMed Central

    Christensen, Kim; Danon, Leon; Scanlon, Tim; Bak, Per

    2002-01-01

    We propose and verify a unified scaling law that provides a framework for viewing the probability of the occurrence of earthquakes in a given region and for a given cutoff magnitude. The law shows that earthquakes occur in hierarchical correlated clusters, which overlap with other spatially separated correlated clusters for large enough time periods and areas. For a small enough region and time-scale, only a single correlated group can be sampled. The law links together the Gutenberg–Richter Law, the Omori Law of aftershocks, and the fractal dimensions of the faults. The Omori Law is shown to be the short-time limit of a general hierarchical phenomenon containing the statistics of both “main shocks” and “aftershocks,” indicating that they are created by the same mechanism. PMID:11875203

  17. Simulating Large-Scale Earthquake Dynamic Rupture Scenarios On Natural Fault Zones Using the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2014-05-01

    In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well documented and extensively studied event, we intend to understand the ground motion, including the relevant high-frequency content, generated by complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.

  18. Identification of elastic basin properties by large-scale inverse earthquake wave propagation

    NASA Astrophysics Data System (ADS)

    Epanomeritakis, Ioannis K.

    The importance of the study of earthquake response, from a social and economical standpoint, is a major motivation for the current study. The severe uncertainties involved in the analysis of elastic wave propagation in the interior of the earth increase the difficulty in estimating earthquake impact in seismically active areas. The need for recovery of information about the geological and mechanical properties of underlying soils motivates the attempt to apply inverse analysis to earthquake wave propagation problems. Inversion for elastic properties of soils is formulated as a constrained optimization problem. A series of trial mechanical soil models is tested against a limited-size set of dynamic response measurements, given partial knowledge of the target model and complete information on source characteristics, both temporal and geometric. This inverse analysis gives rise to a powerful method for recovery of a material model that produces the given response. The goal of the current study is the development of a robust and efficient computational inversion methodology for material model identification. Solution methods for gradient-based local optimization combine with robustification and globalization techniques to build an effective inversion framework. A Newton-based approach deals with the complications of the highly nonlinear systems generated in the inversion solution process. Moreover, a key addition to the inversion methodology is the application of regularization techniques for obtaining admissible soil models. Most importantly, the development and use of a multiscale strategy offers globalizing and robustifying advantages to the inversion process. In this study, a collection of results of inversion for different three-dimensional Lame moduli models is presented. The results demonstrate the effectiveness of the inversion methodology proposed and provide evidence for its capabilities. They also show the path for further study of elastic property
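
    The Newton-plus-regularization recipe described above reduces, in its simplest form, to iterating regularized normal equations. The sketch below is a generic Tikhonov-regularized Gauss-Newton loop on a toy linear forward model, not the thesis' elastic-wave operator; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))                  # stand-in forward operator
m_true = rng.normal(size=10)                   # "target" material model
d = G @ m_true + 0.01 * rng.normal(size=20)    # noisy synthetic measurements

beta = 1e-2                                    # Tikhonov weight (assumed)
m = np.zeros(10)                               # initial model guess
for _ in range(5):                             # Gauss-Newton iterations
    r = G @ m - d                              # data residual
    grad = G.T @ r + beta * m                  # gradient of regularized misfit
    H = G.T @ G + beta * np.eye(10)            # Gauss-Newton Hessian
    m -= np.linalg.solve(H, grad)              # Newton step
print(f"relative model error: {np.linalg.norm(m - m_true) / np.linalg.norm(m_true):.3f}")
```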

  19. From M8 to CyberShake: Using Large-Scale Numerical Simulations to Forecast Earthquake Ground Motions (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Cui, Y.; Olsen, K. B.; Graves, R. W.; Maechling, P. J.; Day, S. M.; Callaghan, S.; Milner, K.; Scec/Cme Collaboration

    2010-12-01

    Large earthquakes cannot be reliably and skillfully predicted in terms of their location, time, and magnitude. However, numerical simulations of seismic radiation from complex fault ruptures and wave propagation through 3D crustal structures have now advanced to the point where they can usefully predict the strong ground motions from anticipated earthquake sources. We describe a set of four computational pathways employed by the Southern California Earthquake Center (SCEC) to execute and validate these simulations. The methods are illustrated using the largest earthquakes anticipated on the southern San Andreas fault system. A dramatic example is the recent M8 dynamic-rupture simulation by Y. Cui, K. Olsen et al. (2010) of a magnitude-8 “wall-to-wall” earthquake on the southern San Andreas fault, calculated to seismic frequencies of 2 Hz on a computational grid of 436 billion elements. M8 is the most ambitious earthquake simulation completed to date; the run took 24 hours on 223K cores of the NCCS Jaguar supercomputer, sustaining 220 teraflops. High-performance simulation capabilities have been implemented by SCEC in the CyberShake hazard model for the Los Angeles region. CyberShake computes over 400,000 earthquake simulations, managed through a scientific workflow system, to represent the probabilistic seismic hazard at a particular site up to seismic frequencies of 0.3 Hz. CyberShake shows substantial differences with conventional probabilistic seismic hazard analysis based on empirical ground-motion prediction. At the probability levels appropriate for long-term forecasting, these differences are most significant (and worrisome) in sedimentary basins, where the population is densest and the regional seismic risk is concentrated. The higher basin amplification obtained by CyberShake is due to the strong coupling between rupture directivity and basin-mode excitation. The simulations show that this coupling is enhanced by the tectonic branching structure of the San
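
    The scale of the quoted M8 run can be unpacked with simple arithmetic; the numbers below are just a rough reading of the figures in the abstract, nothing more.

```python
# Rough arithmetic on the quoted M8 figures.
sustained_flops = 220e12          # 220 teraflop/s sustained
runtime_s = 24 * 3600             # 24-hour run
cores = 223e3                     # 223K cores

print(f"total work:      {sustained_flops * runtime_s:.1e} floating-point ops")
print(f"per-core rate:   {sustained_flops / cores / 1e9:.2f} Gflop/s")
print(f"elements/core:   {436e9 / cores:.1e}")
```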

  1. Anthropogenic Triggering of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay up to several years.
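
    The headline number above, triggering by overpressures below 0.1 MPa, follows from how little margin a critically stressed fault has. The check below is a bare effective-stress friction inequality with assumed round numbers drawn from the quoted ranges, not the paper's poroelastic solution: slip initiates once shear stress exceeds μ_f(σ_n − p).

```python
MU_F = 0.6   # static friction coefficient (typical inferred value)

def fails(shear_mpa, normal_mpa, overpressure_mpa):
    """Coulomb check with pore-pressure-reduced effective normal stress."""
    return shear_mpa >= MU_F * (normal_mpa - overpressure_mpa)

tau, sigma_n = 5.95, 10.0   # MPa: a fault already near failure (assumed)
for dp in (0.0, 0.05, 0.10):
    print(f"overpressure {dp:.2f} MPa -> slips: {fails(tau, sigma_n, dp)}")
```

    With these assumed values, a fault loaded to within about 1% of its frictional strength is pushed past failure by less than 0.1 MPa of overpressure, in line with the abstract's argument.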

  2. The Magnitude and Energy of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2003-12-01

    Several magnitudes were introduced to quantify large earthquakes better and more comprehensively than Ms: Mw (moment magnitude; Kanamori, 1977), ME (strain energy magnitude; Purcaru and Berckhemer, 1978), Mt (tsunami magnitude; Abe, 1979), Mm (mantle magnitude; Okal and Talandier, 1985), Me (seismic energy magnitude; Choy and Boatwright, 1995). Although these magnitudes are still subject to different uncertainties, various kinds of earthquakes can now be better understood in terms of combinations of them. They can also be viewed as mappings of basic source parameters: seismic moment, strain energy, seismic energy, stress drop, under certain assumptions or constraints. We studied a set of about 90 large earthquakes (shallow and deeper) that occurred in different tectonic regimes, with more reliable source parameters, and compared them in terms of the above magnitudes. We found large differences between the strain energy (mapped to ME) and seismic energy (mapped to Me), and between ME of events with about the same Mw. This confirms that no 1-to-1 correspondence exists between these magnitudes (Purcaru, 2002). One major cause of differences for "normal" earthquakes is the level of the stress drop over asperities which release and partition the strain energy. We quantify the energetic balance of earthquakes in terms of strain energy Est and its components (fracture (Eg), friction (Ef) and seismic (Es) energy) using an extended Hamilton's principle. The earthquakes are thrust-interplate, strike slip, shallow in-slab, slow/tsunami, deep and continental. The (scaled) strain energy equation we derived is Est/M0 = (1 + e_gs)(Es/M0), where e_gs = Eg/Es, assuming complete stress drop, using the (static) stress drop variability, and that Est and Es are not in a 1-to-1 correspondence. With all uncertainties, our analysis reveals, for a given seismic moment, a large variation of earthquakes in terms of energies, even in the same seismic region. In view of these, for further understanding

  3. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
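
    Test (1) above rests on a standard extreme-value property of the Gutenberg-Richter law: with no upper bound, the largest of N events grows like log10(N)/b above the catalog floor. A sampling sketch under assumed values (b = 1, magnitudes measured relative to the completeness level):

```python
import math, random

B, M_MIN = 1.0, 0.0   # assumed b-value; magnitudes relative to the catalog floor

def gr_magnitude(rng):
    """Inverse-CDF draw from the Gutenberg-Richter (exponential) distribution."""
    return M_MIN - math.log10(rng.random()) / B

rng = random.Random(42)
for n in (10, 100, 1000, 10000):
    m_max = max(gr_magnitude(rng) for _ in range(n))
    expected = M_MIN + math.log10(n) / B
    print(f"N = {n:5d}: expected max ~ {expected:.1f}, sampled {m_max:.1f}")
```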

  4. Large Rock Slope Failures Induced by Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Aydan, Ö.

    2016-06-01

    Recent earthquakes caused many large-scale rock slope failures. The scale and impact of rock slope failures are very large, and the form of failure differs depending upon the geological structures of slopes. First, the author briefly describes some model experiments to investigate the effects of shaking or faulting due to earthquakes on rock slopes. Then, fundamental characteristics of the rock slope failures induced by the earthquakes are described and evaluated according to some empirical and theoretical models. Furthermore, the observations of slope failures in relation to earthquake magnitude and epicenter or hypocenter distance are compared with several empirical relations available in the literature. Some major rock slope failures induced by earthquakes are selected, and the post-failure motions are simulated and compared with observations. In addition, the effects of tsunamis on rock slopes, in view of observations from reconnaissance of the recent mega-earthquakes, are explained and discussed.

  5. Investigation of Large Earthquakes as Critical Phase Transitions

    NASA Astrophysics Data System (ADS)

    Gonzalez-Huizar, H.; Mariani, M. C.; Serpa, L. F.; Beccar-Varela, M. P.; Tweneboah, O. K.

    2015-12-01

    In this work we present some of our results from investigating earthquake sequences, including very large earthquakes, using different stochastic and deterministic critical-phenomena models. With the objective of estimating the magnitude and origin time of large earthquakes based on the preceding seismicity, we investigate the use of several modeling techniques, including the Levy flight, scale-invariant functions, and Ising models. We also developed a stochastic differential equation arising from the superposition of independent Ornstein-Uhlenbeck processes driven by a Gamma(a,b) process. Here we summarize some of the results of applying these techniques to modeling earthquake sequences in different tectonic regions.

  6. Patterns of seismic activity preceding large earthquakes

    NASA Technical Reports Server (NTRS)

    Shaw, Bruce E.; Carlson, J. M.; Langer, J. S.

    1992-01-01

    A mechanical model of seismic faults is employed to investigate the seismic activity that occurs prior to major events. The block-and-spring model dynamically generates a statistical distribution of smaller slipping events that precede large events, and the results satisfy the Gutenberg-Richter law. The scaling behavior during a loading cycle suggests small but systematic variations in space and time, with maximum activity acceleration near the future epicenter. Activity patterns inferred from data on seismicity in California demonstrate a regional aspect; increased activity in certain areas is found to precede major earthquake events. One example is the Loma Prieta earthquake of 1989, which occurred near a fault section associated with increased activity levels.

  7. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  8. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
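
    The deterministic GΔV cap mentioned above converts directly into a magnitude. The sketch below applies M0 = GΔV (the McGarr, JGR 2014 bound quoted in the abstract) with the standard Hanks-Kanamori moment-magnitude conversion; the shear modulus is an assumed round value. It shows the cap passing ~M6 only for very large injected volumes, consistent with the abstract's remark.

```python
import math

G = 3.0e10   # shear modulus, Pa (assumed)

def mcgarr_max_mw(injected_volume_m3):
    """Upper-bound magnitude from M0 = G * dV, via Mw = 2/3 (log10 M0 - 9.05)."""
    m0 = G * injected_volume_m3           # bounding seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

for dv in (1e4, 1e5, 1e6, 1e7):           # injected volume, m^3
    print(f"dV = {dv:.0e} m^3 -> capped Mw ~ {mcgarr_max_mw(dv):.1f}")
```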

  9. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K

    1996-01-01

    This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662

  10. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10^18 Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10^18 Pa s.
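
    As a quick check on the time scales these viscosities imply, the Maxwell relaxation time τ = η/μ comes out at a few years, consistent with a post-seismic transient that is still strong over the five-year observation window. The rigidity below is an assumed round value for the upper mantle, not a number from the paper.

```python
# Maxwell relaxation time tau = eta / mu for the quoted viscosities.
SECONDS_PER_YEAR = 3.156e7
MU = 7.0e10   # upper-mantle shear modulus, Pa (assumed)

def maxwell_time_yr(eta_pa_s):
    return eta_pa_s / MU / SECONDS_PER_YEAR

print(f"asthenosphere (4.75e18 Pa s): {maxwell_time_yr(4.75e18):.1f} yr")  # ~2 yr
print(f"channel       (<1e18 Pa s):  {maxwell_time_yr(1.0e18):.2f} yr")   # ~0.5 yr
```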

  11. Multidimensional scaling visualization of earthquake phenomena

    NASA Astrophysics Data System (ADS)

    Lopes, António M.; Machado, J. A. Tenreiro; Pinto, C. M. A.; Galhano, A. M. S. F.

    2014-01-01

    Earthquakes are associated with negative events, such as large numbers of casualties, destruction of buildings and infrastructure, or the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived to be similar or distinct in some sense are placed nearby or far apart on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn-Engdahl seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data, resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships that are present among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
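
    A minimal MDS example in this spirit, using scikit-learn on a handful of toy events: the space-time dissimilarity below is a crude stand-in for the paper's correlation indices, and the event list is illustrative only.

```python
import numpy as np
from sklearn.manifold import MDS

# toy events: (latitude, longitude, year); illustrative values only
events = np.array([
    [35.0, 139.0, 1923], [38.3, 142.4, 2011], [37.8, -122.4, 1906],
    [34.2, -118.5, 1994], [-36.1, -72.9, 2010], [-36.8, -73.0, 1835],
])

def dissimilarity(a, b, time_scale_yr=100.0):
    d_space = np.hypot(a[0] - b[0], a[1] - b[1])   # degrees, crude metric
    d_time = (a[2] - b[2]) / time_scale_yr         # rescaled years
    return float(np.hypot(d_space, d_time))

n = len(events)
D = np.array([[dissimilarity(events[i], events[j]) for j in range(n)]
              for i in range(n)])

# embed the dissimilarity matrix in 2-D; nearby points = "similar" events
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(np.round(coords, 2))
```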

  12. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan (the western, eastern, and northeastern regions), using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually obtained for clustered events, such as events with foreshocks or sequences of events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake and the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.
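
    The general recipe, small-event rates in and conditional probability out, can be caricatured with a Poisson model. The sketch below is a toy illustration of that idea only (an assumed constant scaling factor between small and large event rates, not the paper's quiescence or activation statistics):

```python
import math

def prob_large_event(small_rate_per_yr, scale_factor, window_yr):
    """P(at least one large event) under a Poisson model whose rate is
    tied to the small-earthquake rate by an assumed scaling factor."""
    lam = small_rate_per_yr * scale_factor * window_yr
    return 1.0 - math.exp(-lam)

# illustrative numbers: 200 small events/yr, 1 large per 1000 small
print(f"P(5 yr window) = {prob_large_event(200.0, 1e-3, 5.0):.2f}")  # ~0.63
```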

  13. Triggering of volcanic activity by large earthquakes

    NASA Astrophysics Data System (ADS)

    Avouris, D.; Carn, S. A.; Waite, G. P.

    2011-12-01

    Statistical analysis of temporal relationships between large earthquakes and volcanic eruptions suggests seismic waves may trigger eruptions even over great distances, although the causative mechanism is not well constrained. In this study the relationship between large earthquakes and subtle changes in volcanic activity was investigated in order to gain greater insight into the relationship between dynamic stress and volcanic response. Daily measurements from the Ozone Monitoring Instrument (OMI), onboard the Aura satellite, provide constraints on volcanic sulfur dioxide (SO2) emission rates as a measure of subtle changes in activity. An SO2 time series was produced from OMI data for thirteen persistently active volcanoes. Seismic surface-wave amplitudes were modeled from the source mechanisms of moment magnitude (Mw) ≥7 earthquakes, and peak dynamic stress (PDS) was calculated. The SO2 time series for each volcano was used to calculate a baseline threshold for comparison with post-earthquake emission. Delay times for an SO2 response following each earthquake at each volcano were analyzed and compared to a random catalog. The delay time analysis was inconclusive. However, an analysis based on the occurrence of large earthquakes showed a response at most volcanoes. Using the PDS calculations as a filtering criterion for the earthquake catalog, the SO2 mass for each volcano was analyzed in 28-day windows centered on the earthquake origin time. If the average SO2 mass after the earthquake was greater than an arbitrary percentage of the pre-earthquake mass, we identified the volcano as having a response to the event. This window analysis provided insight into which types of volcanic activity are more susceptible to triggering by dynamic stress. The volcanoes with lava lakes included in this study, Ambrym, Gaua, Villarrica, and Erta Ale, showed a clear response to dynamic stress while the volcanoes with lava domes, Merapi, Semeru, and Bagana showed no response at all. Perhaps

  17. Afterslip and Viscoelastic Relaxation Model Inferred from the Large Scale Postseismic Deformation Following the 2010 Mw 8.8 Maule Earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Vigny, C.; Klein, E.; Fleitout, L.; Garaud, J. D.

    2015-12-01

    Postseismic deformation following the large subduction earthquake of Maule (Chile, Mw 8.8, February 27th 2010) has been closely monitored with GPS from 70 km up to 2000 km away from the trench. It exhibits a behavior generally similar to that already observed after the Aceh and Tohoku-Oki earthquakes. Vertical uplift is observed on the volcanic arc, and a moderate large-scale subsidence is associated with sizeable horizontal deformation in the far field (500-2000 km from the trench). In addition, near-field data (70-200 km from the trench) feature a rather complex deformation pattern. A 3D FE code (Zebulon Zset) is used to relate these deformations to slip on the plate interface and relaxation in the mantle. The mesh features a spherical shell-portion from the core-mantle boundary to the Earth's surface, extending over more than 60 degrees in latitude and longitude. The overriding and subducting plates are elastic, and the asthenosphere is viscoelastic. A viscoelastic Low Viscosity Channel (LVC) is also introduced along the plate interface. Both the asthenosphere and the channel feature Burgers rheologies, and we invert for their mechanical properties and geometrical characteristics simultaneously with the afterslip distribution. The horizontal deformation pattern requires relaxation both in i) the asthenosphere extending down to 270 km, with a 'long-term' viscosity of the order of 4.8×10^18 Pa s, and ii) the channel, which has to extend from depths of 50 to 150 km with viscosities slightly below 10^18 Pa s, to fit the vertical velocity pattern well (intense and quick uplift over the Cordillera). Aseismic slip on the plate interface, at shallow depth, is necessary to explain all the characteristics of the near-field displacements. We then detect two main patches of high slip, one updip of the coseismic slip distribution in the northernmost part of the rupture zone, and the other one downdip, at the latitude of Constitucion (35°S). We finally study the temporal

  18. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn

    1992-01-01

    The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868 but was not well measured. Witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). The creep rate seems nearly constant over many decades, with short-term variations. The creep rate mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event, as indicated by offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes, which originate below the surficial creeping zone, may now be ≥1.1 m along the southern (1868) segment and ≥1.4 m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large magnitude earthquakes (magnitude 6.8 in the northern (1836?) segment, 6.7 in the southern (1868) segment, and 7.0 for both). Thus despite surficial creep, the fault may be
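
    The slip-deficit argument in this abstract is simple arithmetic; a sketch using the abstract's numbers (9 mm/yr deep rate, 3.5-6.5 mm/yr creep, last rupture in 1868), with the evaluation year taken as the 1992 publication date:

    ```python
    # Slip deficit = (deep slip rate - surface creep rate) * elapsed time.
    deep_rate = 9.0                    # mm/yr, inferred long-term (deep) slip rate
    creep_rate = 5.0                   # mm/yr, a mid-range surface creep rate
    years = 1992 - 1868                # elapsed time since the 1868 rupture

    deficit_m = (deep_rate - creep_rate) * years / 1000.0
    print(f"stored surface slip: {deficit_m:.2f} m")   # ~0.5 m; up to ~0.7 m at 3.5 mm/yr creep
    ```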

  19. Increased correlation range of seismicity before large events manifested by earthquake chains

    NASA Astrophysics Data System (ADS)

    Shebalin, P.

    2006-10-01

    "Earthquake chains" are clusters of moderate-size earthquakes which extend over large distances and are formed by statistically rare pairs of events that are close in space and time ("neighbors"). Earthquake chains are supposed to be precursors of large earthquakes with lead times of a few months. Here we substantiate this hypothesis by mass testing it using a random earthquake catalog. Also, we study stability under variation of parameters and some properties of the chains. We found two invariant parameters: they characterize the spatial and energy scales of earthquake correlation. Both parameters of the chains show good correlation with the magnitudes of the earthquakes they precede. Earthquake chains are known as the first stage of the earthquake prediction algorithm reverse tracing of precursors (RTP) now tested in forward prediction. A discussion of the complete RTP algorithm is outside the scope of this paper, but the results presented here are important to substantiate the RTP approach.

  20. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562

  1. Recurrent slow slip event reveals the interaction with seismic slow earthquakes and disruption from large earthquake

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Moore, Angelyn W.; Owen, Susan

    2015-09-01

    It remains enigmatic how slow slip events (SSEs) interact with other slow seismic events and large distant earthquakes at many subduction zones. Here we model the spatiotemporal slip evolution of the most recent long-term SSE in 2009-2011 in the Bungo Channel region, southwest Japan, using GEONET GPS position time series and a Kalman filter-based, time-dependent slip inversion method. We examine the space-time relationship between the geodetically determined slow slip transient and seismically observed low-frequency earthquakes (LFEs) and very-low-frequency earthquakes (V-LFEs) near the Nankai trough. We find strong but distinct temporal correlations of transient slip with LFEs and with V-LFEs, suggesting that the two classes of events relate differently to the SSE. We also find that the great Tohoku-Oki earthquake appears to disrupt the normal source process of the SSE, probably reflecting large-scale stress redistribution caused by the earthquake. Comparison of the 2009-2011 SSE with others in the same region shows much similarity in slip and moment release, confirming its recurrent nature. Comparison of transient slip with plate coupling shows that slip transients mainly concentrate on the transition zone from the strongly coupled region to the downdip LFEs, with transient slip relieving elastic strain accumulation at transitional depth. The less consistent spatial correlation between the long-term SSE and seismic slow earthquakes, and the susceptibility of these slow earthquakes to various triggering sources including long-term slow slip, suggest caution in using the seismically determined slow earthquakes as a proxy for slow slip.

  2. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near-mean-field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
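
    A minimal sketch of the count-to-probability conversion, assuming a generic Weibull form P = 1 - exp(-(n/N)^beta); the paper's fitted parameters are not given in the abstract, so the values below are illustrative.

    ```python
    import math

    def weibull_probability(n_small, n_expected, beta=1.0):
        """Probability of the next large event given n_small small events counted
        since the last one; n_expected is the mean count per large event
        (e.g. 1000 M>3 events per M>6 event for b = 1)."""
        return 1.0 - math.exp(-(n_small / n_expected) ** beta)

    print(weibull_probability(800, 1000))   # rises toward 1 as the count grows
    ```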

  3. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2015-10-01

    The main objective of this paper was to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for a realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes background and construction principles of the scale and presents some case studies in different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In this case and in unpopulated areas, ESI offers a unique way for assessing a reliable earthquake intensity. Finally, yet importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  5. An Energy Rate Magnitude for Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Newman, A. V.; Convers, J. A.

    2008-12-01

    The ability to rapidly assess the approximate size of very large and destructive earthquakes is important for early hazard mitigation from both strong shaking and potential tsunami generation. Using a methodology to rapidly determine earthquake energy and duration from teleseismic high-frequency energy, we develop an adaptation to approximate the magnitude of a very large earthquake before the full duration of rupture can be measured at available teleseismic stations. We utilize available vertical-component data to analyze the high-frequency energy growth between 0.5 and 2 Hz, minimizing the effect of later arrivals that are mostly attenuated in this range. Because events smaller than M~6.5 rupture quickly, this method is most adequate for larger events, whose rupture duration exceeds ~20 seconds. Using a catalog of about 200 large and great earthquakes, we compare the high-frequency energy rate (Ė_hf) to the total broadband energy (E_bb) to find the relationship log(Ė_hf)/log(E_bb) ≈ 0.85. Hence, combining this relation with the broadband energy magnitude (Me) [Choy and Boatwright, 1995] yields a new high-frequency energy rate magnitude: M_Ė = (2/3) log10(Ė_hf)/0.85 - 2.9. Such an empirical approach can thus be used to obtain a reasonable assessment of an event magnitude from the initial estimate of energy growth, even before the arrival of the full direct-P rupture signal. For large shallow events thus far examined, M_Ė predicts the ultimate Me to within ±0.2 magnitude units. For fast-rupturing deep earthquakes M_Ė overpredicts, while for slow-rupturing tsunami earthquakes M_Ė underpredicts Me, likely due to material strength changes at the source rupture. We will report on the utility of this method in both research mode and in real-time scenarios when data availability is limited. Because the high-frequency energy is clearly discernable in real time, this result suggests that the growth of energy can be used as a good initial indicator of the
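
    The magnitude relation quoted above is directly computable; a sketch, with the energy rate in watts assumed (the abstract does not state units):

    ```python
    import math

    def energy_rate_magnitude(e_hf_rate):
        """High-frequency energy-rate magnitude, M_Edot = (2/3)*log10(Edot_hf)/0.85 - 2.9."""
        return (2.0 / 3.0) * math.log10(e_hf_rate) / 0.85 - 2.9

    print(round(energy_rate_magnitude(1e13), 2))   # illustrative input, ~M 7.3
    ```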

  6. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  7. Mw Dependence of Ionospheric Electron Enhancement Immediately Before Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2015-12-01

    Ionospheric electrons were reported to have increased ~40 minutes before the 2011 Tohoku-oki (Mw 9.0) earthquake, Japan, by observing total electron content (TEC) with GNSS receivers [e.g. Heki and Enomoto, 2013]. They further demonstrated that similar TEC enhancements preceded all the recent earthquakes with Mw of 8.5 or more. Their reality has been repeatedly questioned, due mainly to the ambiguity in the derivation of the reference TEC curves from which anomalies are defined [e.g. Masci et al., 2015]. Here we propose a numerical approach, based on Akaike's Information Criterion, to detect positive breaks (sudden increases of TEC rate) in the vertical TEC time series without using reference curves. We demonstrate that such breaks are detected 20-80 minutes before the ten recent large earthquakes with Mw 7.8-9.2. The size of the breaks was found to depend on the background absolute VTEC and Mw, i.e. Break (TECU/h) = 4.74Mw + 0.13VTEC - 39.86, with a standard deviation of ~1.2 TECU/h. We can invert this equation to Mw = (Break - 0.13VTEC + 39.86)/4.74, which can tell us the Mw of impending earthquakes with an uncertainty of ~0.25. The precursor times were longer for larger earthquakes, ranging from ~80 minutes for the largest (2004 Sumatra-Andaman) to ~21 minutes for the smallest (2015 Nepal). The precursors of intraplate earthquakes (e.g. 2012 Indian Ocean) started significantly earlier than interplate ones. We performed the same analyses during periods without earthquakes, and found that positive breaks comparable to that before the 2011 Tohoku-oki earthquake occur once in 20 hours. They originate from small-amplitude Large-Scale Travelling Ionospheric Disturbances (LSTID), which are excited in the auroral oval and move southward with the velocity of internal gravity waves. This probability is small enough to rule out the possibility that these breaks are fortuitous, but large enough to make it a challenge to apply preseismic TEC enhancements to short-term earthquake prediction.
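
    The inverted regression is simple to apply; a sketch with illustrative inputs:

    ```python
    def mw_from_tec_break(break_rate, vtec):
        """Mw of an impending earthquake from the abstract's inverted regression,
        Mw = (Break - 0.13*VTEC + 39.86) / 4.74 (Break in TECU/h, VTEC in TECU)."""
        return (break_rate - 0.13 * vtec + 39.86) / 4.74

    # A 6 TECU/h break on a 30 TECU background (illustrative values):
    print(round(mw_from_tec_break(6.0, 30.0), 2))   # ~8.85, with ~0.25 uncertainty
    ```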

  8. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  9. Rapid Characterization of Large Earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Barrientos, S. E.; Team, C.

    2015-12-01

    Chile, along 3000 km of its 4200-km-long coast, is regularly affected by very large earthquakes (up to magnitude 9.5) resulting from the convergence and subduction of the Nazca plate beneath the South American plate. These megathrust earthquakes exhibit long rupture regions reaching several hundreds of km, with fault displacements of several tens of meters. Minimum-delay characterization of these giant events to establish their rupture extent and slip distribution is of the utmost importance for rapid estimation of the shaking area and the corresponding tsunamigenic potential, particularly when there are only a few minutes to warn the coastal population for immediate action. The task of rapid evaluation of large earthquakes is accomplished in Chile through a network of sensors being implemented by the National Seismological Center of the University of Chile. The network is composed of approximately one hundred broadband and strong-motion instruments and 130 GNSS devices; all will be connected in real time. Forty units include an optional RTX capability, where satellite orbit and clock corrections are sent to the field device, producing a 1-Hz stream at the 4-cm level. Tests are being conducted to stream the real-time raw data to be later processed at the central facility. Hypocentral locations and magnitudes are estimated after a few minutes by automatic processing software based on wave arrivals; for magnitudes less than 7.0 the rapid estimation works within acceptable bounds. For larger events, we are currently developing automatic detectors and amplitude estimators of displacement from the real-time GNSS streams. This software has been tested for several cases, showing that, for plate-interface events, the minimum magnitude detectability threshold reaches values between 6.2 and 6.5 (1-2 cm coastal displacement), providing an excellent tool for early earthquake characterization from a tsunamigenic perspective.

  10. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, measured a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  11. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  12. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Razafindrakoto, Hoby N. T.; Mai, P. Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran K. S.

    2015-07-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
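
    A minimal sketch of the MDS step, assuming a precomputed pairwise distance matrix between rupture models (here random stand-in data rather than a real misfit metric such as the normalized squared or grey-scale metrics):

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(2)
    models = rng.random((10, 5))          # 10 hypothetical rupture models as feature vectors
    dist = np.linalg.norm(models[:, None] - models[None, :], axis=-1)

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dist)      # spread of points ~ model variability
    print(coords.shape)                   # (10, 2)
    ```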

  13. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and the close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than vertical amplitudes by a factor of 1.8. The recommendation made is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
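
    The recommended channel correction amounts to adding the logarithm of the horizontal-to-vertical factor; a sketch of just that step (the distance and station terms of the full Ml formula are omitted):

    ```python
    import math

    def horizontal_equivalent_log_amp(log_amp_vertical):
        """Approximate the log of the maximum horizontal amplitude from a
        vertical-channel measurement using the average factor of 1.8."""
        return log_amp_vertical + math.log10(1.8)   # adds ~0.26 magnitude units

    print(round(horizontal_equivalent_log_amp(2.0), 2))   # 2.26
    ```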

  14. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying Box-Cox transformation to slip distributions (to create quasi-normal distributed data) supports cube-root transformation, which also implies distinctive non-Gaussian slip

  15. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ~1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ~0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ~6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ~7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ~0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
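
    A minimal sketch of the COPD stacking idea: each offset measurement contributes a PDF, and peaks in the summed curve mark common displacement values. Gaussian shapes and the uncertainties below are assumed; the study derives a uniquely shaped empirical PDF per measurement.

    ```python
    import numpy as np

    def stack_offset_pdfs(offsets, sigmas, grid):
        """Sum per-measurement Gaussian PDFs on a common offset grid (COPD)."""
        copd = np.zeros_like(grid)
        for mu, s in zip(offsets, sigmas):
            copd += np.exp(-0.5 * ((grid - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        return copd

    grid = np.linspace(0.0, 20.0, 501)
    copd = stack_offset_pdfs([3.3, 7.1, 12.8, 16.6], [1.0, 1.0, 0.8, 0.7], grid)
    print(round(grid[np.argmax(copd)], 2))   # 16.6 -- the best-constrained peak is the sharpest
    ```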

  16. Large Earthquake Potential in the Southeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Mora-Paez, H.; Bilham, R. G.; Lafemina, P.; Mattioli, G. S.; Molnar, P. H.; Audemard, F. A.; Perez, O. J.

    2015-12-01

    The axis of rotation describing relative motion of the Caribbean plate with respect to South America lies in Canada near Hudson Bay, such that the Caribbean plate moves nearly due east relative to South America [DeMets et al. 2010]. The plate motion is absorbed largely by pure strike-slip motion along the El Pilar Fault in northeastern Venezuela, but in northwestern Venezuela and northeastern Colombia, the relative motion is distributed over a wide zone that extends from offshore to the northeasterly trending Mérida Andes, with the resolved component of convergence between the Caribbean and South American plates estimated at ~10 mm/yr. Recent densification of GPS networks through COLOVEN and COCONet, including access to private GPS data maintained by Colombia and Venezuela, allowed the development of a new GPS velocity field. The velocity field, processed with JPL's GOA 6.2, JPL non-fiducial final orbit and clock products and VMF tropospheric products, includes over 120 continuous and campaign stations. This new velocity field, along with enhanced seismic reflection profiles and earthquake location analysis, strongly suggests the existence of an active oblique subduction zone. We have also been able to use broadband data from Venezuela to search for slow-slip events as an indicator of an active subduction zone. There are caveats to this hypothesis, however, including the absence of volcanism that is typically concurrent with active subduction zones and a weak historical record of great earthquakes. A single tsunami deposit dated at 1500 years before present has been identified on the southeast Yucatan peninsula. Our simulations indicate its probable origin is within our study area. We present a new GPS-derived velocity field, which has been used to improve a regional block model [based on Mora and LaFemina, 2009-2012], and discuss the earthquake and tsunami hazards implied by this model. Based on the new geodetic constraints and our updated block model, if part of the

  17. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high-magnitude seismic events. In fact, since 1950, all large magnitude earthquakes have been followed by volcanic eruptions in the following year - 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7 and 2011 Japan M9.0 - while at a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005, following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is limited by a number of disparities, with <50% of all volcanoes being monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has, therefore, provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  18. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  19. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many sciences and also for some social activities. The present paper discusses the characteristics of computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) experiments are discussed in this respect; in particular, the Large Hadron Collider (LHC) experiments are analyzed. The Computing Models of the LHC experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC experiments' testing are discussed.

  20. Time-Dependent Earthquake Forecasts on a Global Scale

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Graves, W. R.

    2014-12-01

    We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged ("Poisson") forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. An example is the Working Group on California Earthquake Probabilities, which has been using fault-based models to compute conditional probabilities in California since 1988. Another example is the Epidemic-Type Aftershock Sequence (ETAS) forecast, which is based on the Gutenberg-Richter (GR) magnitude-frequency law, the Omori aftershock law, and Poisson statistics. The method discussed in this talk is based on the observation that GR statistics characterize seismicity for all space and time. Small-magnitude event counts (quake counts) are used as "markers" for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes, one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In previous research, we used conditional Weibull statistics to convert event counts into a temporal probability for a given fixed region. In the present paper, we move beyond a fixed region and develop a method to compute these Natural Time Weibull (NTW) forecasts on a global scale, using an internally consistent method, in regions of arbitrary shape and size. We develop and implement these methods on a modern web-service computing platform, which can be found at www.openhazards.com and www.quakesim.org. We also discuss constraints on the User Interface (UI) that follow from practical considerations of site usability.
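
    The 1000-to-1 counting argument above is just Gutenberg-Richter arithmetic; a one-line check:

    ```python
    # N(>m) ~ 10^(a - b*m): with b = 1, each magnitude unit cuts the count tenfold.
    n_m3 = 1000
    b = 1.0
    n_m6 = n_m3 * 10 ** (-b * (6 - 3))
    print(n_m6)   # 1.0 -- one M>6 event expected per 1000 M>3 events
    ```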

  1. Absence of remotely triggered large earthquakes beyond the mainshock region

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2011-01-01

    Large earthquakes are known to trigger earthquakes elsewhere. Damaging large aftershocks occur close to the mainshock and microearthquakes are triggered by passing seismic waves at significant distances from the mainshock. It is unclear, however, whether bigger, more damaging earthquakes are routinely triggered at distances far from the mainshock, heightening the global seismic hazard after every large earthquake. Here we assemble a catalogue of all possible earthquakes greater than M 5 that might have been triggered by every M 7 or larger mainshock during the past 30 years. We compare the timing of earthquakes greater than M 5 with the temporal and spatial passage of surface waves generated by large earthquakes using a complete worldwide catalogue. Whereas small earthquakes are triggered immediately during the passage of surface waves at all spatial ranges, we find no significant temporal association between surface-wave arrivals and larger earthquakes. We observe a significant increase in the rate of seismic activity at distances confined to within two to three rupture lengths of the mainshock. Thus, we conclude that the regional hazard of larger earthquakes is increased after a mainshock, but the global hazard is not.

  2. Detection of hydrothermal precursors to large Northern California earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes. PMID:17738277

  3. Evidence for a difference in rupture initiation between small and large earthquakes.

    PubMed

    Colombelli, S; Zollo, A; Festa, G; Picozzi, M

    2014-01-01

    The process of earthquake rupture nucleation and propagation has been investigated through laboratory experiments and theoretical modelling, but a limited number of observations exist at the scale of earthquake fault zones. Distinct models have been proposed, and whether the magnitude can be predicted while the rupture is ongoing represents an unsolved question. Here we show that the evolution of P-wave peak displacement with time is informative regarding the early stage of the rupture process and can be used as a proxy for the final size of the rupture. For the analysed earthquake set, we found a rapid initial increase of the peak displacement for small events and a slower growth for large earthquakes. Our results indicate that earthquakes occurring in a region with a large critical slip distance have a greater likelihood of growing into a large rupture than those originating in a region with a smaller slip-weakening distance. PMID:24887597

  4. Earthquake Scaling and Development of Ground Motion Prediction for Earthquake Hazard Mitigation in Taiwan

    NASA Astrophysics Data System (ADS)

    Ma, K.; Yen, Y.

    2011-12-01

    For earthquake hazard mitigation and risk management, an integrated study spanning source-model development to ground-motion prediction is crucial. Simulation of the high-frequency component (>1 Hz) of strong ground motion in the near field has not been well resolved owing to insufficient resolution of velocity structures. Using small events as Green's functions (i.e., the empirical Green's function (EGF) method) circumvents the lack of a precise velocity structure in evaluating the path effect. If an EGF is not available, a stochastic Green's function (SGF) method can be employed. By characterizing the slip models derived from waveform inversion, we directly extract the parameters needed for ground-motion prediction with the EGF or SGF method. The slip models have been investigated from Taiwan's dense strong-motion data and global teleseismic data. In addition, the low-frequency component (<1 Hz) can be obtained numerically by the frequency-wavenumber (FK) method. Thus, broadband strong ground motion can be calculated by a hybrid method that combines a deterministic FK method for the low-frequency simulation with the EGF or SGF method for the high-frequency simulation. Definitive source parameters characterized from the empirical scaling study feed directly into the ground-motion simulation. To give a ground-motion prediction for a scenario earthquake, we compiled the earthquake scaling relationship from the inverted finite-fault models of moderate to large earthquakes in Taiwan. The studies show the significant involvement of the seismogenic depth in the development of rupture width. In addition, several earthquakes on blind faults show distinctly large stress drops, which yield high regional PGA. Based on the developing scaling relationship and the possible high stress drops for earthquakes on blind faults, we further deploy the hybrid method mentioned above to give the simulation of the strong motion in
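
    A minimal sketch of the hybrid combination, assuming matched Butterworth low-pass/high-pass filters at the 1 Hz crossover; the actual studies may use different matching filters, and the traces below are synthetic stand-ins for FK and EGF/SGF synthetics.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def hybrid_broadband(lf_synth, hf_synth, fs, fc=1.0, order=4):
        """Combine deterministic low-frequency and stochastic/EGF high-frequency
        synthetics at a crossover frequency fc (Hz)."""
        sos_lp = butter(order, fc, btype="lowpass", fs=fs, output="sos")
        sos_hp = butter(order, fc, btype="highpass", fs=fs, output="sos")
        return sosfilt(sos_lp, lf_synth) + sosfilt(sos_hp, hf_synth)

    fs = 100.0
    t = np.arange(0.0, 40.0, 1.0 / fs)
    lf = np.sin(2 * np.pi * 0.2 * t)                              # stand-in FK trace
    hf = 0.1 * np.random.default_rng(3).standard_normal(t.size)  # stand-in EGF/SGF trace
    broadband = hybrid_broadband(lf, hf, fs)
    print(broadband.shape)
    ```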

  5. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps permit accounting for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can

  6. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  7. The Effect of Damage on Earthquake Scaling and Forecasting

    NASA Astrophysics Data System (ADS)

    Klein, W.; Serino, C.; Tiampo, K. F.; Rundle, J. B.

    2010-12-01

    We modify simple models of earthquake faults that have Gutenberg-Richter scaling associated with a critical point to include damage. We find that increasing the amount of damage drives the system away from the critical point and decreases the region of scaling for an individual fault. However, the scaling of a collection of faults with a range of damage extends over a large moment range, with an exponent different from that of individual faults without damage. In addition, the data indicate that in fault models with large amounts of damage, accelerated moment release (AMR) is a reliable indicator of a catastrophic event. In models with little or no damage, however, using AMR as an indicator will result in a large number of false positives.

  8. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  9. Global coseismic deformations, GNSS time series analysis, and earthquake scaling laws

    NASA Astrophysics Data System (ADS)

    Métivier, Laurent; Collilieux, Xavier; Lercier, Daphné; Altamimi, Zuheir; Beauducel, François

    2014-12-01

    We investigate how two decades of coseismic deformations affect time series of GPS station coordinates (Global Navigation Satellite System) and what constraints geodetic observations place on earthquake scaling laws. We developed a simple but rapid model for coseismic deformations, assuming different earthquake scaling relations, that we systematically applied to earthquakes with magnitude larger than 4. We found that coseismic displacements accumulated during the last two decades can be larger than 10 m locally and that the cumulative displacement is due not only to large earthquakes but also to the accumulation of many small motions induced by smaller earthquakes. Then, investigating a global network of GPS stations, we demonstrate that a systematic global modeling of coseismic deformations helps greatly to detect discontinuities in GPS coordinate time series, which are still today one of the major sources of error in terrestrial reference frame construction (e.g., the International Terrestrial Reference Frame). We show that numerous discontinuities induced by earthquakes are too small to be visually detected because of seasonal variations and GPS noise that disturb their identification. However, not taking these discontinuities into account has a large impact on station velocity estimation, considering today's precision requirements. Finally, six groups of earthquake scaling laws were tested. Comparisons with our GPS time series analysis for dedicated earthquakes give insights into the consistency of these scaling laws with geodetic observations and the Okada coseismic approach.

  10. The foreshock sequence of large earthquakes: slow slip or cascade triggering?

    NASA Astrophysics Data System (ADS)

    Huang, H.; Meng, L.

    2014-12-01

    Large earthquakes such as the 2011 Mw 9.0 Tohoku-Oki earthquake and the 2014 Mw 8.1 Iquique earthquake are often preceded by foreshock sequences migrating toward the hypocenters of the mainshocks. Understanding the underlying physical processes is crucial for imminent seismic hazard assessment. Some of these foreshock sequences are accompanied by repeating earthquakes, which are thought to be a manifestation of a large-scale background slow slip transient. The alternative interpretation is that the migrating seismicity is simply produced by cascade triggering of mainshock-aftershock sequences following Omori's law. In this case the repeating earthquakes are driven by the afterslip of the moderate to large foreshocks instead of an independent slow slip event. As an initial effort to discriminate between these two hypotheses, we made a detailed analysis of the repeating earthquakes among the foreshock sequences of the 2014 Mw 8.1 Iquique earthquake. We observed that some significant foreshocks (M >= 5.5) are followed by the rapid occurrence of local repeaters, suggesting the contribution of afterslip. However, the repeaters are distributed over a wide area (~40*80 km), which is difficult to drive with only a few moderate to large foreshocks. Furthermore, the estimated repeater-inferred aseismic moment during the foreshock period is at least 3.041e19 Nm (5*5 km grid), which is of the same order as the total seismic moment of all foreshocks (2.251e19 Nm). This comparison again supports the slow-slip model, since the ratio of postseismic to coseismic moment is small in most earthquakes. To estimate the contributions of transient slow slip and cascade triggering in the initiation of large earthquakes, we propose to systematically search for and analyze repeating earthquakes in all foreshock sequences preceding large earthquakes. The next effort will be directed to the long precursory phase of large interplate earthquakes such as the 1999 Mw 7.6 Izmit earthquake and the

  11. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  12. Detection capability of global earthquakes influenced by large intermediate-depth and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2011-12-01

    This study examined the detection capability of the global CMT catalogue immediately after a large intermediate-depth (70 < depth ≤ 300 km) or deep (300 km < depth) earthquake. Iwata [2008, GJI] revealed that the detection capability is remarkably lower than the ordinary level for several hours after the occurrence of a large shallow (depth ≤ 70 km) earthquake. Since the global CMT catalogue plays an important role in studies on global earthquake forecasting and seismicity patterns [e.g., Kagan and Jackson, 2010, Pageoph], this characteristic of the catalogue should be investigated carefully. We stacked global shallow earthquake sequences, taken from the global CMT catalogue from 1977 to 2010, following each large intermediate-depth or deep earthquake. Then, we utilized a statistical model representing the observed magnitude-frequency distribution of earthquakes [e.g., Ringdal, 1975, BSSA; Ogata and Katsura, 1993, GJI]. The applied model is the product of the Gutenberg-Richter law and a detection rate function q(M). Following previous studies, the cumulative distribution function of the normal distribution was used as q(M). This model enables us to estimate μ, the magnitude at which the detection rate of earthquakes is 50 per cent. Finally, a Bayesian approach with a piecewise linear approximation [Iwata, 2008, GJI] was applied to the stacked data to estimate the temporal change of μ. Consequently, we found a significantly lowered detection capability after an intermediate-depth or deep earthquake of magnitude 6.5 or larger. The lowered detection capability lasts for several hours to half a day. During this period of low detection capability, a few per cent of M ≥ 6.0 earthquakes and a few tens of per cent of M ≥ 5.5 earthquakes are missing from the global CMT catalogue, even though the magnitude completeness threshold of the catalogue has been estimated to be around 5.5 [e.g., Kagan, 2003, PEPI].
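
    A sketch of the magnitude-frequency model described above on synthetic data, assuming b = 1 and a fixed detection width sigma; the paper estimates the temporal change of mu with a Bayesian piecewise-linear approach, whereas a simple grid-search maximum likelihood is used here:

      import numpy as np
      from scipy.stats import norm

      # Observed density = Gutenberg-Richter law times q(M) = Phi((M-mu)/sigma).
      rng = np.random.default_rng(0)
      mags = 4.0 + rng.exponential(1.0 / np.log(10.0), 2000)        # b = 1
      kept = mags[rng.random(mags.size) < norm.cdf((mags - 5.4) / 0.2)]

      def log_like(sample, b, mu, sigma):
          beta = b * np.log(10.0)
          grid = np.linspace(4.0, 9.0, 2001)
          dens = np.exp(-beta * (grid - 4.0)) * norm.cdf((grid - mu) / sigma)
          log_z = np.log(np.sum(dens) * (grid[1] - grid[0]))
          return np.sum(-beta * (sample - 4.0)
                        + norm.logcdf((sample - mu) / sigma)) - sample.size * log_z

      mus = np.arange(5.0, 6.0, 0.01)
      ll = [log_like(kept, 1.0, m, 0.2) for m in mus]
      print(f"estimated mu ~ {mus[int(np.argmax(ll))]:.2f}  (true 5.40)")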

  13. The velocity effects of large historical earthquakes in Chinese mainland

    NASA Astrophysics Data System (ADS)

    Tan, Weijie; Dong, Danan; Wu, Bin

    2016-04-01

    Accompanying the collision between the Indian and Eurasian plates, China has experienced many large earthquakes over the past 100 years. These large earthquakes are mainly located along several seismic belts in Tien Shan, the Tibetan Plateau, and Northern China. The postseismic deformation and stress accumulation induced by these historical earthquakes are important for assessing contemporary seismic hazards. The postseismic deformation induced by historical large earthquakes also influences the observed present-day velocity field. The relaxation of the viscoelastic asthenosphere is modeled on a layered, spherically symmetric earth with Maxwell rheology. The layer thicknesses, densities ρ and P-wave velocities Vp are taken from PREM, and the shear moduli are derived from ρ and Vp. The viscosity between the lower crust and upper mantle adopted in this study is 1×10^19 Pa·s. Viscoelastic relaxation contributions due to 34 historical large earthquakes in China from 1900 to 2001 are calculated using the VISCO1D-v3 program developed by Pollitz (1997). We calculated the model-predicted velocity field in China in 2015 caused by these historical large earthquakes. The pattern of the predicted velocity field is consistent with the present movement of the crust, with peak velocities reaching 6 mm/yr. The region of Southwestern China moves northeastwards, and a significant rotation occurs at the edge of the Tibetan Plateau. The velocity field caused by historical large earthquakes provides a basis for isolating the velocity field caused by contemporary tectonic movement from the geodetic observations. It also provides critical information for investigating regional stress accumulation and assessing mid-term to long-term earthquake risk.
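
    As a rough order-of-magnitude check on the adopted viscosity, the Maxwell relaxation time is tau = eta/mu; the shear modulus below is an assumed PREM-like upper-mantle value, not the study's layered model:

      # Characteristic Maxwell relaxation time implied by the adopted viscosity.
      eta = 1.0e19                  # Pa*s, viscosity adopted in the study
      mu = 7.0e10                   # Pa, assumed upper-mantle shear modulus
      tau_years = eta / mu / 3.156e7
      print(f"Maxwell relaxation time ~ {tau_years:.1f} yr")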

  14. Deeper penetration of large earthquakes on seismically quiescent faults.

    PubMed

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard. PMID:27284188

  15. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine whether the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well described by a random process, plus localized aftershocks, and the apparent clustering is due to random variability. Therefore, the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
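
    One of the test classes can be sketched as a simulation-based dispersion test on annual counts against a constant-rate Poisson null; the counts below are hypothetical stand-ins for a declustered catalog:

      import numpy as np

      rng = np.random.default_rng(42)
      counts = np.array([1, 0, 0, 2, 0, 1, 0, 0, 1, 0, 3, 0, 0, 1, 0])
      obs = counts.var() / counts.mean()          # index of dispersion

      sims = rng.poisson(counts.mean(), size=(100000, counts.size))
      means = sims.mean(axis=1)
      ok = means > 0                               # guard against empty rows
      disp = sims[ok].var(axis=1) / means[ok]
      print(f"dispersion {obs:.2f}, one-sided p = {np.mean(disp >= obs):.3f}")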

  16. Deeper penetration of large earthquakes on seismically quiescent faults

    NASA Astrophysics Data System (ADS)

    Jiang, Junle; Lapusta, Nadia

    2016-06-01

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  17. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied toward detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  18. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  19. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide. PMID:22410538

  20. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-07-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite-fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension in order to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
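
    For point-source estimates of the kind integrated here, stress drop is commonly derived from moment and corner frequency through the Eshelby (1957) circular-crack and Brune (1970) source relations; a minimal sketch with assumed values:

      import numpy as np

      # delta_sigma = (7/16) * M0 / r^3, with source radius r = k * beta / fc
      # (k = 0.372 for the Brune model).
      def stress_drop(m0_nm, fc_hz, beta_ms=3500.0, k=0.372):
          r = k * beta_ms / fc_hz               # source radius, m
          return 7.0 / 16.0 * m0_nm / r ** 3    # Pa

      # Example: an Mw 5 event (M0 ~ 4e16 N*m) with a 1 Hz corner frequency.
      m0 = 10 ** (1.5 * 5.0 + 9.1)
      print(f"stress drop ~ {stress_drop(m0, 1.0) / 1e6:.1f} MPa")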

  1. 1/f and the Earthquake Problem: Scaling constraints that facilitate operational earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Yoder, M. R.; Rundle, J. B.; Turcotte, D. L.

    2012-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture-length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as functions of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents. [Figure: record-breaking hazard map of southern California, 2012-08-06; "warm" colors indicate locally elevated hazard.]

  2. 1/f and the Earthquake Problem: Scaling constraints to facilitate operational earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Yoder, M. R.; Rundle, J. B.; Glasscoe, M. T.

    2013-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or '1/f', nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this '1/f problem,' it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture-length, and Omori scaling with various empirical measurements, in combination with a metric quantifying rate trends in local seismicity, to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitudes of earthquakes the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as functions of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents.
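
    The rupture-length constraint at the heart of this framework can be sketched by inverting an empirical length-magnitude regression; the Wells and Coppersmith (1994) strike-slip surface-rupture coefficients below stand in for the papers' own calibration:

      import numpy as np

      # log10(L_km) = A + B * m  (Wells & Coppersmith 1994, strike-slip SRL),
      # inverted to give the largest magnitude a region of linear dimension
      # L can host.
      A, B = -3.55, 0.74

      def magnitude_potential(l_km):
          return (np.log10(l_km) - A) / B

      for l_km in (10.0, 50.0, 200.0):
          print(f"L = {l_km:6.1f} km -> m_max ~ {magnitude_potential(l_km):.1f}")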

  3. Large earthquake processes in the northern Vanuatu subduction zone

    NASA Astrophysics Data System (ADS)

    Cleveland, K. Michael; Ammon, Charles J.; Lay, Thorne

    2014-12-01

    The northern Vanuatu (formerly New Hebrides) subduction zone (11°S to 14°S) has experienced large shallow thrust earthquakes with Mw > 7 in 1966 (MS 7.9, 7.3), 1980 (Mw 7.5, 7.7), 1997 (Mw 7.7), 2009 (Mw 7.7, 7.8, 7.4), and 2013 (Mw 8.0). We analyze seismic data from the latter four earthquake sequences to quantify the rupture processes of these large earthquakes. The 7 October 2009 earthquakes occurred in close spatial proximity over about 1 h in the same region as the July 1980 doublet. Both sequences activated widespread seismicity along the northern Vanuatu subduction zone. The focal mechanisms indicate interplate thrusting, but there are differences in waveforms that establish that the events are not exact repeats. With an epicenter near the 1980 and 2009 events, the 1997 earthquake appears to have been a shallow intraslab rupture below the megathrust, with strong southward directivity favoring a steeply dipping plane. Some triggered interplate thrusting events occurred as part of this sequence. The 1966 doublet ruptured north of the 1980 and 2009 events and also produced widespread aftershock activity. The 2013 earthquake rupture propagated southward from the northern corner of the trench with shallow slip that generated a substantial tsunami. The repeated occurrence of large earthquake doublets along the northern Vanuatu subduction zone is remarkable considering the doublets likely involved overlapping, yet different combinations of asperities. The frequent occurrence of large doublet events and rapid aftershock expansion in this region indicate the presence of small, irregularly spaced asperities along the plate interface.

  4. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes - Semi-Periodicity Analysis of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-07-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by recognizing that not all earthquakes in the region need to belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well-determined confidence bounds. This paper presents improvements on the above-mentioned sequence identification and forecasting method: the influence of earthquake size on the spectral analysis, and its importance in identifying semi-periodic events, are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of non-randomness probability is used; a better estimation of the appropriate upper-limit uncertainties to use in forecasts is introduced; and Bayesian analysis is applied to evaluate the posterior forecast performance. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0 and high non-randomness probability was identified. We compare the results of this analysis with those of the unlabeled point process analysis.
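
    A minimal sketch of the spectral identification idea, as an unlabeled toy version using a Schuster-style phasor sum over trial periods (the paper weights events by size and uses a revised non-randomness estimate; a scan over many periods would also need a multiple-test correction, ignored here). The occurrence times are hypothetical:

      import numpy as np

      times = np.array([1905.3, 1931.8, 1960.1, 1987.5, 2015.2])  # event years

      periods = np.linspace(15.0, 40.0, 500)
      pvals = []
      for T in periods:
          phasor = np.exp(2j * np.pi * times / T).sum()
          # Schuster p-value for n events: p = exp(-|R|^2 / n)
          pvals.append(np.exp(-abs(phasor) ** 2 / times.size))

      best = periods[int(np.argmin(pvals))]
      print(f"most significant period ~ {best:.1f} yr, p ~ {min(pvals):.3f}")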

  5. An earthquake strength scale for the media and the public

    USGS Publications Warehouse

    Johnston, A.C.

    1990-01-01

    A local engineer, E. P. Hailey, pointed this problem out to me shortly after the Loma Prieta earthquake. He felt that three problems limit the usefulness of magnitude in describing an earthquake to the public: (1) most people don't understand that it is not a linear scale; (2) of those who do realize the scale is not linear, very few understand the difference of a factor of ten in ground motion and 32 in energy release between points on the scale; and (3) even those who understand the first two points have trouble putting a given magnitude value into terms they can relate to. In summary, Mr. Hailey wondered why seismologists can't come up with an earthquake scale that doesn't confuse everyone and that conveys a sense of true relative size. Here, then, is an attempt to construct such a scale.
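
    The two conversion factors in point (2) are easy to make concrete; a short sketch:

      # One magnitude unit corresponds to a factor of 10 in ground-motion
      # amplitude and roughly a factor of 32 (10^1.5) in radiated energy.
      def amplitude_ratio(dm):
          return 10.0 ** dm

      def energy_ratio(dm):
          return 10.0 ** (1.5 * dm)

      for dm in (0.5, 1.0, 2.0):
          print(f"dM = {dm}: amplitude x{amplitude_ratio(dm):.1f}, "
                f"energy x{energy_ratio(dm):.1f}")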

  6. Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults

    USGS Publications Warehouse

    Harris, R.A.

    2004-01-01

    Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation, at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of the elements, geometry, material, and stress, can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.

  7. Large intermediate-depth earthquakes and the subduction process

    NASA Astrophysics Data System (ADS)

    Astiz, Luciana; Lay, Thorne; Kanamori, Hiroo

    1988-12-01

    This study provides an overview of intermediate-depth earthquake phenomena, placing emphasis on the larger, tectonically significant events, and exploring the relation of intermediate-depth earthquakes to shallower seismicity. In particular, we examine whether intermediate-depth events reflect the state of interplate coupling at subduction zones, and whether this activity exhibits temporal changes associated with the occurrence of large underthrusting earthquakes. The historical record of large intraplate earthquakes (mB ≥ 7.0) in this century shows that the New Hebrides and Tonga subduction zones have the largest numbers of large intraplate events. Regions associated with bends in the subducted lithosphere also have many large events (e.g., Altiplano and New Ireland). We compiled a catalog of focal mechanisms for events that occurred between 1960 and 1984 with M > 6 and depth between 40 and 200 km. The final catalog includes 335 events with 47 new focal mechanisms, and is probably complete for earthquakes with mB ≥ 6.5. For events with M ≥ 6.5, nearly 48% of the events had no aftershocks and only 15% had more than five aftershocks within one week of the mainshock. Events with more than ten aftershocks are located in regions associated with bends in the subducted slab. Focal mechanism solutions for intermediate-depth earthquakes with M > 6.8 can be grouped into four categories: (1) normal-fault events (44%) and (2) reverse-fault events (33%), both with a strike nearly parallel to the trench axis; (3) normal- or reverse-fault events with a strike significantly oblique to the trench axis (10%); and (4) tear-faulting events (13%). The focal mechanisms of type 1 events occur mainly along strongly or moderately coupled subduction zones where down-dip extensional stress prevails in a gently dipping plate. In contrast, along decoupled subduction zones great normal-fault earthquakes occur at shallow depths (e.g., the 1977 Sumbawa earthquake in Indonesia). Type

  8. Scaling and Nucleation in Models of Earthquake Faults

    SciTech Connect

    Klein, W.; Ferguson, C.; Rundle, J.

    1997-05-01

    We present an analysis of a slider block model of an earthquake fault which indicates the presence of metastable states ending in spinodals. We identify four parameters whose values determine the size and statistical distribution of the "earthquake" events. For values of these parameters consistent with real faults we obtain scaling of events associated not with critical point fluctuations but with the presence of nucleation events. © 1997 The American Physical Society
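
    A slider-block cellular automaton in this family can be sketched in a few lines; this is the standard Olami-Feder-Christensen redistribution rule with periodic boundaries, not the authors' specific four-parameter model:

      import numpy as np

      def ofc_events(n=32, alpha=0.2, steps=5000, seed=1):
          rng = np.random.default_rng(seed)
          f = rng.random((n, n))            # initial stress on each block
          sizes = []
          for _ in range(steps):
              f += 1.0 - f.max()            # uniform drive to next failure
              unstable = f >= 1.0
              size = 0
              while unstable.any():
                  size += int(unstable.sum())
                  give = np.where(unstable, f, 0.0)
                  f[unstable] = 0.0
                  # each toppling block passes alpha*f to its 4 neighbors
                  f += alpha * (np.roll(give, 1, 0) + np.roll(give, -1, 0)
                                + np.roll(give, 1, 1) + np.roll(give, -1, 1))
                  unstable = f >= 1.0
              sizes.append(size)
          return np.array(sizes)

      sizes = ofc_events()
      print(f"largest event: {sizes.max()} blocks; mean: {sizes.mean():.1f}")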

  9. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
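
    The essence of an external penalty function method can be sketched as follows; the toy problem and the BFGS inner solver are illustrative choices, not BIGDOT's implementation:

      import numpy as np
      from scipy.optimize import minimize

      # Minimize f(x) subject to g(x) <= 0 by minimizing
      #   phi(x) = f(x) + r * sum(max(0, g_i(x))^2)
      # for an increasing penalty parameter r.
      def f(x):
          return np.sum((x - 1.0) ** 2)            # objective

      def g(x):
          return np.array([x[0] + x[1] - 1.0])     # constraint: x0 + x1 <= 1

      def phi(x, r):
          return f(x) + r * np.sum(np.maximum(0.0, g(x)) ** 2)

      x = np.zeros(5)
      for r in (1.0, 10.0, 100.0, 1000.0):
          x = minimize(phi, x, args=(r,), method="BFGS").x   # warm start
      print(np.round(x, 3))   # x0, x1 -> 0.5; unconstrained entries -> 1.0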

  10. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations of seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed at the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q⁻¹ and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q⁻¹ and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q⁻¹ and conductivity in the lower crust support the hypothesis that coda Q⁻¹ may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  11. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations of seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed at the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q⁻¹ and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q⁻¹ and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q⁻¹ and conductivity in the lower crust support the hypothesis that coda Q⁻¹ may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  12. Large historical earthquakes and tsunamis in a very active tectonic rift: the Gulf of Corinth, Greece

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Ioanna; Papadopoulos, Gerassimos

    2014-05-01

    The Gulf of Corinth is an active tectonic rift controlled by E-W trending normal faults, with an uplifted footwall in the south and a subsiding hangingwall with antithetic faulting in the north. Regional geodetic extension rates up to about 1.5 cm/yr have been measured, among the highest for tectonic rifts anywhere on Earth, while seismic slip rates up to about 1 cm/yr have been estimated. Large earthquakes with magnitudes M up to about 7 have been historically documented and instrumentally recorded. In this paper we have compiled historical documentation of earthquake and tsunami events occurring in the Corinth Gulf from antiquity up to the present. The completeness of the reported events improves with time, particularly after the 15th century. The majority of tsunamis were caused by earthquake activity, although aseismic landsliding is a relatively frequent agent of tsunami generation in the Corinth Gulf. We focus on better understanding the process of tsunami generation by earthquakes. To this aim we have considered the elliptical rupture zones of all the strong (M ≥ 6.0) historical and instrumental earthquakes known in the Corinth Gulf. We have taken into account rupture zones determined by previous authors. However, magnitudes M of historical earthquakes were recalculated from a set of empirical relationships between M and seismic intensity established for earthquakes occurring in Greece during the instrumental era of seismicity. For this application the macroseismic field of each of the earthquakes was identified and seismic intensities were assigned. Another set of empirical relationships, M/L and M/W, for instrumentally recorded earthquakes in the Mediterranean region was applied to calculate rupture zone dimensions, where L = rupture zone length and W = rupture zone width. The rupture zone positions were decided on the basis of the localities of the highest seismic intensities and co-seismic ground failures, if any, while the orientation of the maximum

  13. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: State of Gujarat, India

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Kossobokov, Vladimir; Parvez, Imtiyaz

    2016-04-01

    The Gujarat state of India is one of the most seismically active intercontinental regions of the world. Historically, it has experienced many damaging earthquakes, including the devastating 1819 Rann of Kutch and 2001 Bhuj earthquakes. The effect of the latter is grossly underestimated by the Global Seismic Hazard Assessment Program (GSHAP). To assess a more adequate earthquake hazard for the state of Gujarat, we apply the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter recurrence relation by taking into account the naturally fractal distribution of earthquake loci. USLE has evident implications, since any estimate of seismic hazard depends on the size of the territory considered and, therefore, may differ dramatically from the actual one when scaled down to the proportion of the area of interest (e.g., of a city) from the enveloping area of investigation. We cross-compare the seismic hazard maps compiled for the same standard regular grid 0.2°×0.2° (i) in terms of design ground acceleration (DGA) based on the neo-deterministic approach, (ii) in terms of probabilistic exceedance of peak ground acceleration (PGA) by GSHAP, and (iii) the one resulting from the USLE application. Finally, we present maps of seismic risk for the state of Gujarat integrating the obtained seismic hazard, population density based on 2011 census data, and a few model assumptions of vulnerability.
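
    A sketch of the USLE downscaling argument, using the generalized recurrence relation log10 N(M, L) = A + B(5 - M) + C log10 L with illustrative coefficients (not the Gujarat estimates):

      import numpy as np

      # N is the expected annual number of events of magnitude M or larger
      # within an area of linear dimension L (km).
      A, B, C = -3.0, 1.0, 1.2

      def annual_rate(m, l_km):
          return 10.0 ** (A + B * (5.0 - m) + C * np.log10(l_km))

      # Downscaling: the same M >= 6 hazard looks very different for a
      # 200-km region and a 20-km city-sized cell.
      for l_km in (200.0, 20.0):
          print(f"L = {l_km:5.0f} km: N(M>=6) ~ {annual_rate(6.0, l_km):.4f}/yr")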

  14. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ≲ 3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (small interhypocentral distances) tend to have very similar focal mechanisms, implying that faults of diverse orientations are not producing earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  15. Afterslip Distribution of Large Earthquakes Using Viscoelastic Media

    NASA Astrophysics Data System (ADS)

    Sato, T.; Higuchi, H.

    2009-12-01

    One of the important parameters in simulations of earthquake generation is the frictional properties of faults. To investigate these frictional properties, many authors have studied the coseismic slip and afterslip distributions of large plate-interface earthquakes using coseismic and postseismic surface deformation from GPS data. Most of these studies used elastic media to obtain the afterslip distribution. However, the effect of viscoelastic relaxation in the asthenosphere on postseismic surface deformation is important (Matsu'ura and Sato, GJI, 1989; Sato and Matsu'ura, GJI, 1992). Therefore, studies using elastic media did not estimate the correct afterslip distribution, because they attributed all postseismic surface deformation to afterslip on the plate interface. We estimate the afterslip distribution of large interplate earthquakes using viscoelastic media. We consider not only viscoelastic responses to coseismic slip but also viscoelastic responses to afterslip. Because many studies suggested that the magnitude of afterslip is comparable to that of coseismic slip, viscoelastic responses to afterslip cannot be neglected. Therefore, surface displacement data include the viscoelastic response to coseismic slip, the viscoelastic response to afterslip occurring from just after the coseismic period up to the present, and the elastic response to present afterslip. We estimate the afterslip distribution for the 2003 Tokachi-oki earthquake, Hokkaido, Japan, using GPS data from GSI, Japan. We use the CAMP model (Hashimoto et al., PAGEOPH, 2004) for the plate interface between the Pacific plate and the North American plate. The viscoelastic results show more clearly that afterslip is distributed in areas where coseismic slip did not occur. The viscoelastic results also show that afterslip concentrates on deeper parts of the plate interface in the area adjoining the 2003 Tokachi-oki earthquake to the east.

  16. The Strain Energy, Seismic Moment and Magnitudes of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2004-12-01

    The strain energy Est, as potential energy, released by an earthquake and the seismic moment Mo are two fundamental physical earthquake parameters. The earthquake rupture process "represents" the release of the accumulated Est. The moment Mo, first obtained in 1966 by Aki, revolutionized the quantification of earthquake size and led to the elimination of the limitations of the conventional magnitudes (originally ML, Richter, 1935): mb, Ms, m, MGR. Both Mo and Est, though not in a 1-to-1 correspondence, are uniform measures of size, although Est is presently less accurate than Mo. Est is partitioned into seismic (Es), fracture (Eg) and frictional (Ef) energy, where Ef is lost as frictional heat. The available energy is Est = Es + Eg (see Aki and Richards (1980) and Kostrov and Das (1988) for fundamentals on Mo and Est). Related to Mo, Est and Es, several modern magnitudes were defined under various assumptions: the moment magnitude Mw (Kanamori, 1977), strain energy magnitude ME (Purcaru and Berckhemer, 1978), tsunami magnitude Mt (Abe, 1979), mantle magnitude Mm (Okal and Talandier, 1987), seismic energy magnitude Me (Choy and Boatwright, 1995; Yanovskaya et al., 1996), and body-wave magnitude Mpw (Tsuboi et al., 1998). The available strain energy is Est = (1/2μ) Δσ Mo, where Δσ is the average stress drop, and the strain energy magnitude is M_E = (2/3)(log Mo + log(Δσ/μ) − 12.1), with log Est = 11.8 + 1.5 M_E. The estimation of Est was modified to include the Mo, Δσ and μ of predominant high-slip zones (asperities) to account for multiple events (Purcaru, 1997): Est = (1/2) Σ_i (1/μ_i) Mo,i Δσ_i, with Σ_i Mo,i = Mo. We derived the energy balance of Est, Es and Eg as Est/Mo = (1 + e(g,s)) Es/Mo, where e(g,s) = Eg/Es. We analyzed a set of about 90 large earthquakes and found that, depending on the goal, these magnitudes quantify the rupture process differently, thus providing complementary means of earthquake characterization. Results for some
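
    The quoted relations are mutually consistent, as a quick numerical check with assumed values shows (the constants 12.1 and 11.8 imply cgs units, hence the dyne-cm conversion below):

      import math

      mu, dsigma = 3.0e10, 3.0e6                   # Pa (assumed values)
      m0 = 1.0e20                                  # N*m (Mw ~ 7.3)
      est = 0.5 * (dsigma / mu) * m0               # Est = (1/2mu)*dsigma*Mo, J
      me = 2.0 / 3.0 * (math.log10(m0 * 1e7)       # Mo in dyne*cm for this form
                        + math.log10(dsigma / mu) - 12.1)
      print(f"Est ~ {est:.1e} J, M_E ~ {me:.2f}")
      # log Est = 11.8 + 1.5*M_E gives Est in erg; 5e22 erg = 5e15 J, matching.
      print(f"log10 Est (erg) from M_E: {11.8 + 1.5 * me:.1f}")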

  17. Fast rupture propagation for large strike-slip earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Mori, Jim; Koketsu, Kazuki

    2016-04-01

    Studying rupture speeds of shallow earthquakes is of broad interest because it has a large effect on the strong near-field shaking that causes damage during earthquakes, and it is an important parameter that reflects stress levels and energy on a slipping fault. However, resolving rupture speed is difficult in standard waveform inversion methods due to limited near-field observations and the tradeoff between rupture speed and fault size for teleseismic observations. Here we applied back-projection methods to estimate the rupture speeds of 15 Mw ≥ 7.8 dip-slip and 8 Mw ≥ 7.5 strike-slip earthquakes for which direct P waves are well recorded in Japan on Hi-net, or in North America on USArray. We found that all strike-slip events had very fast average rupture speeds of 3.0-5.0 km/s, which are near or greater than the local shear wave velocity (supershear). These values are faster than for thrust and normal faulting earthquakes that generally rupture with speeds of 1.0-3.0 km/s.

  18. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  19. Low-frequency source parameters of twelve large earthquakes

    NASA Astrophysics Data System (ADS)

    Harabaglia, Paolo

    1993-06-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupture processes, characterized by continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursory event by Jordan. We model it with a smooth rupture process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  20. Analysis concepts for large telescope structures under earthquake load

    NASA Astrophysics Data System (ADS)

    Koch, Franz

    1997-03-01

    The very large telescope (VLT) of ESO will be placed on Cerro Paranal in the Atacama desert in northern Chile. This site provides excellent conditions for astronomical observations; however, significant seismic activity is likely to occur. The telescope structure and its components have to resist the largest earthquakes expected during their lifetime. Therefore, design specifications and structural analyses have to take into account loads caused by such earthquakes. The present contribution shows some concepts and techniques for the assessment of earthquake-resistant telescope design by the finite element method (FEM). After establishing the general design criteria and the geological and geotechnical characteristics of the site location, the seismic action can be defined. A description of various representations of the seismic action and the procedure to define the commonly used response spectrum are presented in more detail. A brief description of the response spectrum analysis method and of the result evaluation procedure follows. Additionally, some calculation concepts for parts of the entire telescope structure under seismic loads are provided. Finally, a response spectrum analysis of the entire VLT structure performed at ESO is presented to show a practical application of the analysis method and evaluation procedure mentioned above.

  1. Premonitory patterns of seismicity months before a large earthquake: Five case histories in Southern California

    PubMed Central

    Keilis-Borok, V. I.; Shebalin, P. N.; Zaliapin, I. V.

    2002-01-01

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with “intermediate” lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1–7 months five large (M ≥ 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns. PMID:12482945

  2. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  3. Slow earthquakes with duration of about 100 s suggested by the scaling law

    NASA Astrophysics Data System (ADS)

    Ide, S.; Imanishi, K.; Yoshida, Y.

    2007-12-01

    Slow earthquakes in western Japan are considered a group of interplate slip events that obey the scaling law proposed by Ide et al. (2007), in which the seismic moment is proportional to the event duration. However, the population of events in this group is not continuous. In the Nankai slow earthquake zone, we have found deep low-frequency earthquakes (LFE) below 1 s, very low-frequency earthquakes (VLF; Ito et al., 2006) between 20-50 s, and slow slip events (SSE) above a few days. Are there any slow events other than these? If a slow earthquake that satisfies the scaling relation with a duration of about 100 s occurs within the Nankai slow earthquake zone, it is observable at low-noise stations with a vertical broadband sensor only if they are located near the direction of maximum near-field signal. One station that satisfies these conditions is F-net KIS, equipped with STS-1 seismometers and maintained by the National Research Institute for Earth Science and Disaster Prevention, Japan. This station has recorded tremor activity a few times per year since 1996. During most of these episodes, we can detect many large low-frequency signals. Longer events include VLFs, and we can show that some previously reported VLFs are actually part of a longer event. We installed a temporary observation station 15 km from KIS and recorded a sequence of low-frequency tremor during July 17-20, 2007. Although the low-frequency signals are visible at both stations, the amplitudes are quite different, which suggests that we can determine the location and orientation of the source using a small dense array of broadband seismometers. As expected, the moment magnitudes of the 100 s events are around 4, which satisfies the scaling relation for slow earthquakes. The existence of much larger and longer events is implied by the records of KIS, although large low-frequency noise below 3 mHz impedes reliable judgment. The existence of such events suggests that slow earthquakes of any size may occur.
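
    The scaling law referred to above is linear in duration, roughly M0 ≈ T × 10^12-13 N·m/s; a quick check of what it implies for a 100 s event:

      import numpy as np

      for coeff in (1e12, 1e13):
          m0 = coeff * 100.0                      # N*m, for T = 100 s
          mw = (np.log10(m0) - 9.1) / 1.5         # standard Mw definition
          print(f"coeff {coeff:.0e}: M0 = {m0:.1e} N*m -> Mw ~ {mw:.1f}")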

  4. Earthquake magnitude calculation without saturation from the scaling of peak ground displacement

    NASA Astrophysics Data System (ADS)

    Melgar, Diego; Crowell, Brendan W.; Geng, Jianghui; Allen, Richard M.; Bock, Yehuda; Riquelme, Sebastian; Hill, Emma M.; Protti, Marino; Ganas, Athanassios

    2015-07-01

    GPS instruments are noninertial and directly measure displacements with respect to a global reference frame, while inertial sensors are affected by systematic offsets—primarily tilting—that adversely impact integration to displacement. We study the magnitude scaling properties of peak ground displacement (PGD) from high-rate GPS networks at near-source to regional distances (~10-1000 km), from earthquakes between Mw6 and 9. We conclude that real-time GPS seismic waveforms can be used to rapidly determine magnitude, typically within the first minute of rupture initiation and in many cases before the rupture is complete. While slower than earthquake early warning methods that rely on the first few seconds of P wave arrival, our approach does not suffer from the saturation effects experienced with seismic sensors at large magnitudes. Rapid magnitude estimation is useful for generating rapid earthquake source models, tsunami prediction, and ground motion studies that require accurate information on long-period displacements.
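
    A sketch of the scaling inversion, assuming the commonly used functional form log10 PGD = A + B·Mw + C·Mw·log10 R (e.g., Crowell et al., 2013); the coefficients below are illustrative placeholders, not this paper's calibrated values:

      import numpy as np

      A, B, C = -4.434, 1.047, -0.138

      def mw_from_pgd(pgd_cm, r_km):
          # log10(PGD) = A + Mw*(B + C*log10(R))  =>  linear in Mw
          return (np.log10(pgd_cm) - A) / (B + C * np.log10(r_km))

      print(f"PGD = 50 cm at R = 100 km -> Mw ~ {mw_from_pgd(50.0, 100.0):.1f}")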

  5. Failure of self-similarity for large (Mw > 8¼) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Compares teleseismic P-wave records for earthquakes in the magnitude range 6.0-9.5 with synthetics for a self-similar, ω^2 source model and concludes that the energy radiated by very large earthquakes (Mw > 8¼) is not self-similar to that radiated from smaller earthquakes (Mw < 8¼). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of ω^-1.5. This spectral decay rate is consistent with a previously noted tendency of the ω^2 model to overestimate Ms for large earthquakes. -Authors

  6. Earthquake Apparent Stress Scaling for the 1999 Hector Mine Sequence

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.

    2003-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of studies finds evidence that energy release per seismic moment (apparent stress) is constant (e.g., Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Other studies find that the apparent stress increases with magnitude (e.g., Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities and bandwidth, and of determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We try to improve upon earlier results by using consistent techniques over common paths for a wide range of sizes and seismic phases. We have examined about 130 earthquakes from the Hector Mine earthquake sequence in Southern California. These earthquakes range in size from the October 16, 1999 Mw=7.1 mainshock down to ML=3.0 aftershocks into 2000. The mainshock has unclipped Pg and Lg phases at a number of high-quality regional stations (e.g., CMB, ELK, TUC) where we can use the common path to examine apparent stress scaling relations directly. We are careful to avoid any event selection bias that would be related to apparent stress values. We fix each station's path correction using the independent moment and energy estimates for the mainshock. We then use those corrections to determine the seismic energy for each event based on regional Lg spectra. We use a modeling technique (MDAC) based on a modified Brune (1970) spectral shape but without any assumptions of corner-frequency scaling (Walter and Taylor, 2002). We perform a similar analysis using the Pg spectra. We find the energy estimates for the same events are consistent among the Lg estimates, the Pg estimates, and the estimates using the independent regional coda envelope technique (Mayeda and Walter, 1996; Mayeda et al
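
    The quantity under debate is simple to compute once moment and radiated energy are known; a minimal sketch with assumed values:

      # Apparent stress: sigma_a = mu * Es / M0, the rigidity-weighted ratio
      # of radiated energy to seismic moment.
      mu = 3.0e10                      # crustal shear modulus, Pa (assumed)
      es, m0 = 2.0e14, 4.0e19          # illustrative Es (J) and M0 (N*m)
      sigma_a = mu * es / m0
      print(f"apparent stress ~ {sigma_a / 1e6:.2f} MPa")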

  7. The Energetics of Large Shallow and Deep Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2002-05-01

    Large earthquakes occur mostly as complex processes, with inhomogeneities of variable strength and size, and in different tectonic regimes. As a result, the released strain energy (Est) can vary significantly between individual events with about the same seismic moment M0, i.e., there is no 1-to-1 correspondence between them. We quantify the energetic balance of earthquakes in terms of Est and its components (fracture (Eg), friction (Ef) and seismic (Es) energy) for 75 large earthquakes (Mw >= 7) with more accurate source parameters. Based on an extended Hamilton's principle, which considers nonconservative forces and any forces not accounted for in the potential energy function, and assuming complete stress drop, we estimate Est using the approach of Purcaru (EOS, 1997, 78, 481). The events are from thrust-interplate, strike-slip, shallow in-slab, slow/tsunami, deep and continental classes. The energetic balance is determined from Est/M0 = (1 + e(g,s)) Es/M0, where e(g,s) = Eg/Es; Est and Es are not in a 1-to-1 correspondence. In the Est budget: (1) larger Es (i.e., more energetic rupture) is radiated by deep, in-slab, strike-slip and some continental events; (2) the interplate thrust events in subduction zones show a relatively balanced partition of Es and Eg; and (3) small Es, and much larger Eg, is found for slow events and tsunami earthquakes. A reference class for which Es and Eg are comparable is suggested. Our results are consistent with those of other authors (Kikuchi, 1992; Choy and Boatwright, 1995; Newman and Okal, 1998). The average stress drop shows significant variability, even for events within the thrust class. We found that the strong variation of stress drop in localized regions of the rupture area plays a major role in partitioning the released strain energy.

  8. Climate Regime Controls Fluvial Evacuation of Sediment Mobilized by Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, J.; Jin, Z.; Hilton, R. G.; Zhang, F.; Densmore, A. L.; Li, G.; West, A. J.

    2014-12-01

    Large earthquakes in active mountain belts can trigger landslides which mobilize large volumes of clastic sediment. Delivery of this material to river channels may result in aggradation and flooding, while sediment residing on hillslopes may increase the likelihood of subsequent landslides and debris flows. Despite this recognition, the controls on the residence time of coseismic landslide sediment in river catchments remain poorly understood. Here we assess the residence time of fine-grained (<0.25 mm) landslide sediment mobilized by the 2008 Mw 7.9 Wenchuan earthquake, China, using suspended sediment fluxes measured in 16 river catchments from 2006 to 2012. Following the earthquake, suspended sediment flux was elevated by a factor of 3 to 7, consistent with observations of dilution of 10Be concentrations in detrital quartz (West et al., 2014). However, the total 2008-2012 export was much less than the input of fine-grained sediment from coseismic landslides, determined by area-volume scaling and deposit grain-size distributions. Estimates of the residence time of fine-grained sediment in the affected river catchments range from <1 to >100 years at the present export rate. We show that the residence time is proportional to the extent of coseismic landsliding, and inversely proportional to the frequency of intense runoff events. Together with previously reported observations from the 1999 Chi-Chi earthquake in Taiwan, our results demonstrate the importance of climate in setting the length of time that river systems are impacted by large earthquakes. References: West et al., 2014, Earth Planet Sc. Lett., 396, 143-153.
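
    The residence-time estimate described above reduces to a simple mass balance: remaining fine-grained landslide debris divided by the excess annual export. A hedged sketch, with invented numbers purely for illustration:

        # Residence time of fine-grained coseismic sediment at the present
        # export rate. Inputs are hypothetical, not the catchment data used above.

        def residence_time_years(coseismic_input_mt, exported_mt, years_observed):
            annual_excess_export = exported_mt / years_observed   # Mt per year
            remaining = coseismic_input_mt - exported_mt          # Mt still stored
            return remaining / annual_excess_export

        print(residence_time_years(coseismic_input_mt=500.0,   # landslide input, Mt
                                   exported_mt=50.0,           # 2008-2012 export, Mt
                                   years_observed=4.5))        # -> ~40 years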

  9. Recurrence time distributions of large earthquakes in conceptual model studies

    NASA Astrophysics Data System (ADS)

    Zoeller, G.; Hainzl, S.

    2007-12-01

    The recurrence time distribution of large earthquakes in seismically active regions is a crucial ingredient for seismic hazard assessment. However, due to sparse observational data and a lack of knowledge of the precise mechanisms controlling seismicity, this distribution is unknown. In many practical applications of seismic hazard assessment, the Brownian passage time (BPT) distribution (or a different distribution) is fitted to a small number of observed recurrence times. Here, we study various aspects of recurrence time distributions in conceptual models of individual faults and fault networks: First, the dependence of the recurrence time distribution on fault interaction is investigated by means of a network of Brownian relaxation oscillators. Second, the Brownian relaxation oscillator is modified into a model for large earthquakes that also takes the statistics of intermediate events into account in a more appropriate way. This model simulates seismicity in a fault zone consisting of a major fault and some surrounding smaller faults with Gutenberg-Richter type seismicity. This model can be used for more realistic and robust estimations of the real recurrence time distribution in seismic hazard assessment.
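
    For concreteness, the BPT distribution mentioned above has a closed-form density and, being an inverse-Gaussian distribution, closed-form maximum-likelihood estimators. A minimal sketch (the recurrence times are invented):

        import numpy as np

        def bpt_pdf(t, mu, alpha):
            """Brownian passage time density with mean recurrence mu and
            aperiodicity alpha (an inverse Gaussian with lambda = mu/alpha^2)."""
            return (np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3))
                    * np.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t)))

        def bpt_fit(times):
            """Closed-form maximum-likelihood fit to observed recurrence times."""
            times = np.asarray(times, dtype=float)
            mu = times.mean()
            alpha = np.sqrt(mu * np.mean(1.0 / times - 1.0 / mu))
            return mu, alpha

        # Hypothetical recurrence times (years) of large events on one fault:
        mu, alpha = bpt_fit([110.0, 95.0, 140.0, 80.0, 125.0])
        print(f"mu = {mu:.0f} yr, alpha = {alpha:.2f}")
        print(f"density at the mean: {bpt_pdf(mu, mu, alpha):.3f} per yr")

    The small-sample caveat raised in the abstract is visible here: five intervals pin down mu and alpha only loosely.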

  10. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on the idealization of countries that are expected to have similar susceptibility to future earthquake losses, given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for the generation of automated earthquake alerts. These alerts could potentially help rapid-earthquake-response agencies and governments respond more effectively and reduce earthquake fatalities. Fatality estimates are also useful for stimulating earthquake preparedness planning and disaster mitigation. The proposed model has several advantages over other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.

  11. Earthquake!

    ERIC Educational Resources Information Center

    Markle, Sandra

    1987-01-01

    A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)

  13. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  14. Earthquake source parameters and scaling relationships from microseismicity at TauTona Gold Mine, South Africa

    NASA Astrophysics Data System (ADS)

    Moyer, P. A.; Boettcher, M. S.

    2012-12-01

    The issue of earthquake source scaling continues to draw considerable debate within the seismological community. Findings that both support and refute the claim that systematic differences between the source processes of small and large earthquakes may exist motivate the study of how source parameters, such as seismic moment, corner frequency, radiated seismic energy, and apparent stress, scale over a wide range of magnitudes. To address this question, we are conducting a comprehensive examination of earthquake source parameters from microseismicity recorded at the TauTona gold mine in South Africa. At the TauTona gold mine, hundreds to thousands of earthquakes are recorded every day within a few meters to kilometers of seismometers installed at depth throughout the mine. This high rate of seismicity and close proximity to the recording instruments provide the ideal location and dataset to investigate source parameters and scaling relationships for earthquakes with a wide magnitude range of -4 < Mw < 4. We focus our investigation on earthquakes recorded during mining quiet hours, to minimize blasts and rockbursts in our catalog, and on earthquakes that occurred along the Pretorius Fault, the largest fault system running through the mine, to evaluate source parameters of fault zone earthquakes. The mine seismic network operated by the Institute of Mine Seismology (IMS), with a sample rate range of 3 - 2000 Hz, has been enhanced by a tight array of high-quality instruments deployed in the Pretorius Fault Zone at the deepest part of the mine (~3.6 km depth) as part of the Natural Laboratory in South African Mines (NELSAM). The NELSAM network includes 3 strong-motion accelerometers, 5 weak-motion accelerometers, and 3 geophones with a combined sample rate range of 6 - 12 kHz that allows us to reliably constrain corner frequencies of very small earthquakes. We use spectral analysis techniques and an omega-squared source model determined by an Empirical Green
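
    The record above is truncated, but the source parameters it lists are commonly linked by standard Brune-type relations. A hedged sketch of those conversions, with assumed rock properties and an assumed corner-frequency constant k; this does not reproduce the study's actual spectral fitting:

        RHO = 2700.0             # rock density, kg/m^3 (assumed)
        BETA = 3500.0            # shear-wave speed, m/s (assumed)
        MU = RHO * BETA**2       # rigidity, Pa

        def source_radius(fc, k=0.372):
            """Brune-type source radius [m] from corner frequency fc [Hz]."""
            return k * BETA / fc

        def stress_drop(M0, fc):
            """Static stress drop [Pa] of a circular crack: 7 M0 / (16 r^3)."""
            return 7.0 * M0 / (16.0 * source_radius(fc) ** 3)

        def apparent_stress(M0, Er):
            """Apparent stress [Pa]: mu * Er / M0."""
            return MU * Er / M0

        # Illustrative Mw ~ 0 mining-induced event (values hypothetical):
        M0, fc, Er = 1.1e9, 100.0, 2.0e3
        print(f"stress drop ~ {stress_drop(M0, fc) / 1e6:.2f} MPa, "
              f"apparent stress ~ {apparent_stress(M0, Er) / 1e6:.3f} MPa")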

  15. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones are host to temporally and spatially varying seismogenic activity including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measured rupture durations and scaled these by the cube root of seismic moment to calculate the normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results in this study show that large earthquakes just updip from SSE and NVT have slower rupture characteristics than other events along the subduction zone not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will aid in understanding seismogenic behavior along subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.
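
    A minimal sketch of the duration normalization described above, assuming the conventional cube-root-of-moment scaling (duration proportional to M0^(1/3)); the events and values are hypothetical:

        def normalized_duration(duration_s, M0, M0_ref=1e18):
            """Rescale a rupture duration to a reference moment M0_ref so events
            of different size can be compared (self-similar duration ~ M0^(1/3))."""
            return duration_s * (M0_ref / M0) ** (1.0 / 3.0)

        # Hypothetical events: (rupture duration in s, seismic moment in N m)
        for d, m0 in [(12.0, 2.8e17), (35.0, 6.3e18)]:
            print(f"M0 = {m0:.1e} N m -> normalized duration "
                  f"{normalized_duration(d, m0):.1f} s")

    Events with anomalously large normalized durations would stand out as the "slow" ruptures discussed above.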

  16. Earthquake Source Scaling and Wave Propagation in Eastern North America: The Au Sable Forks, NY, Earthquake

    NASA Astrophysics Data System (ADS)

    Viegas, G.; Abercrombie, R.; Baise, L.; Kim, W.

    2005-12-01

    The 2002 M5 Au Sable Forks, NY, earthquake and its aftershocks are the best-recorded sequence in the northeastern USA. We use the local and regional recordings to investigate the characteristics of intraplate seismicity, focusing on source scaling relationships and regional wave propagation. A portable local network of 11 stations recorded 74 aftershocks of M<3.2. We relocate the mainshock and early aftershocks using a master-event technique. We then use the double-difference relocation method, with differential travel times measured from waveform cross-correlation, to relocate the aftershocks recorded by the local network. Both the master-event and double-difference location methods produce consistent results, suggesting complex conjugate faulting during the sequence. We identify a number of highly clustered groups of earthquakes suitable for EGF analysis. We use the EGF method to calculate the stress drop and radiated energy of the larger aftershocks, to determine how they compare to moderate-magnitude earthquakes and whether they differ significantly from interplate earthquakes. We consider the 9 largest aftershocks (M3.7 to M2), which were recorded on the regional network, as potential EGFs for the mainshock, but their focal mechanisms and locations are sufficiently different that we cannot resolve the mainshock source time function well. They are good enough, however, to enable us to place constraints on the shape and duration of the source pulse to use in modeling the regional waveforms. We investigate the crustal structure in New York (Grenville) and New England (Appalachian) through forward modeling of the Au Sable Forks regional broadband records. We compute synthetic records of wave propagation in a layered medium, using published crustal models of the two regions as a starting point. We identify differences between the recorded data and the synthetics for the Grenville and Appalachian regions and improve the crustal models to better fit the recorded

  17. The Evolution of Regional Seismicity Between Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bowman, D.; King, G.

    We describe a simple model that links static stress (Coulomb) modeling to the regional seismicity around a major fault. Unlike conventional Coulomb stress techniques, which calculate stress changes, we model the evolution of the stress field relative to the failure stress. Background seismicity is attributed to inhomogeneities in the stress field, which are created by adding a random field that creates local regions above the failure stress. The inhomogeneous field is chosen such that when these patches fail, the resulting earthquake size distribution follows a Gutenberg-Richter law. Immediately following a large event, the model produces regions of increased seismicity where the overall stress field has been elevated (aftershocks) and regions of reduced seismicity where the stress field has been reduced (stress shadows). The high stress levels in the aftershock regions decrease due to loading following the main event. Combined with the stress shadow from the main event, this results in a broad seismically quiet region of lowered stress around the epicenter. Pre-event seismicity appears as the original stress shadows finally fill as a result of loading. The increase in seismicity initially occurs several fault lengths away from the main fault and moves inward as the event approaches. As a result of this effect, the seismic moment release in the region around the future epicenter increases as the event approaches. Synthetic catalogues generated by this model are virtually indistinguishable from real earthquake sequences in California and Washington.
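
    A toy sketch of the model logic described above (not the authors' code): a stress field tracked relative to failure, a random component supplying the inhomogeneity, slow tectonic loading, and failure of patches that exceed the failure stress. The random field here is generic; the authors tune theirs so that the failures follow a Gutenberg-Richter law.

        import numpy as np

        rng = np.random.default_rng(0)

        N = 256                  # grid of stress patches
        FAILURE = 0.0            # failure stress (relative scale)
        LOADING_RATE = 0.01      # tectonic loading per step

        # Inhomogeneous field: mean stress below failure plus random scatter.
        stress = rng.normal(-1.0, 0.3, size=(N, N))

        for step in range(200):
            stress += LOADING_RATE            # slow tectonic loading
            failing = stress > FAILURE        # patches above failure stress
            n_events = int(failing.sum())     # "background seismicity" this step
            # Failed patches drop in stress and are re-randomized, regenerating
            # the inhomogeneity that sustains the background seismicity.
            stress[failing] = rng.normal(-1.0, 0.3, size=n_events)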

  18. Earthquake Monitoring at Different Scales with Seiscomp3

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Engels, F.

    2013-12-01

    In the last few years, the French National Network of Seismic Survey (BCSF-RENASS) had to modernize its old and aging in-house earthquake monitoring system. After intensive tests of several real-time frameworks, such as EarthWorm and SeisComp3, we finally adopted SeisComp3 in 2012. Our current system runs two pipelines in parallel: the first is tuned at a global scale to monitor world seismicity (for events of magnitude > 5.5) and the second is tuned at a national scale for monitoring metropolitan France. The seismological stations used for the "world" pipeline come mainly from the Global Seismographic Network (GSN), whereas the stations for the "national" pipeline come from the RENASS short-period network and from the RESIF broadband network. More recently, we have started to tune SeisComp3 at a smaller scale to monitor in real time a geothermal project (an R&D program in deep geothermal energy) in the northeastern part of France. Besides the real-time monitoring capabilities of SeisComp3, we have also used a very handy feature to play back a four-month dataset at a local scale for the Rambervillers earthquake (22/02/2003, Ml=5.4), yielding roughly 2000 aftershock detections and locations.

  19. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  20. Stress drop and source scaling of the 2009 April L'Aquila earthquakes

    NASA Astrophysics Data System (ADS)

    Calderoni, Giovanna; Rovelli, Antonio; Singh, Shri Krishna

    2013-01-01

    The empirical Green's function (EGF) technique is applied in the frequency domain to 962 broad-band seismograms (3.3 ≤ MW ≤ 6.1) to determine the stress drop and source scaling of the 2009 April L'Aquila earthquakes. The station distances range from 100 to 250 km from the source. Ground motions of several L'Aquila earthquakes are characterized by large azimuthal variations due to source directivity, even at low magnitudes. Thus, individual-station stress-drop estimates are significantly biased when source directivity is not properly taken into account. To reduce the bias, we use single-station spectral ratios with pairs of earthquakes showing a similar degree of source directivity. The relative merits of constant versus varying stress-drop models are assessed through minimization of misfit in a least-mean-square sense. For this analysis, seismograms of 26 earthquakes occurring within 10 km of the hypocentres of the three strongest shocks are used. We find that a source model in which stress drop increases with earthquake size has the minimum misfit: compared to the best constant stress-drop model, the improvement in fit is of the order of 40 per cent. We also estimate the stress-drop scaling on a larger data set of 64 earthquakes, all of them having an independent estimate of seismic moment and a consistent focal mechanism. An earthquake which shows no directivity is chosen as the EGF event. This analysis confirms the former trend and yields individual-event stress drops very close to 10 MPa at magnitudes MW > 4.5 that decrease to 1 MPa, on average, at the smallest magnitudes. A varying stress-drop scaling of the L'Aquila earthquakes is consistent with results from other studies using EGF techniques but contrasts with the results of authors who used inversion techniques to separate source from site and propagation effects. We find that there is a systematic difference for small events between the results of the two methods, with lower and less scattered values
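
    In the frequency domain, the EGF technique amounts to taking ratios of omega-squared source spectra so that path and site terms cancel at a common station. A minimal sketch with hypothetical moments and corner frequencies:

        import numpy as np

        def omega2_spectrum(f, M0, fc):
            """Omega-squared (Brune-type) displacement source spectrum."""
            return M0 / (1.0 + (f / fc) ** 2)

        def egf_spectral_ratio(f, M0_t, fc_t, M0_e, fc_e):
            """Ratio of a target event's spectrum to a colocated smaller EGF
            event's spectrum; common path and site effects divide out."""
            return omega2_spectrum(f, M0_t, fc_t) / omega2_spectrum(f, M0_e, fc_e)

        f = np.logspace(-1, 1.5, 200)    # 0.1 to ~30 Hz
        ratio = egf_spectral_ratio(f, M0_t=1e18, fc_t=0.3, M0_e=1e15, fc_e=3.0)
        # Low-frequency plateau -> moment ratio (here 1000); the bends in the
        # ratio constrain the two corner frequencies, hence the stress drops.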

  1. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  2. Using DART-recorded Rayleigh waves for rapid CMT and finite fault analyses of large megathrust earthquakes.

    NASA Astrophysics Data System (ADS)

    Thio, H. K.; Polet, J.; Ryan, K. J.

    2015-12-01

    We study the use of long-period Rayleigh waves recorded by DART-type ocean bottom pressure sensors. The determination of an accurate moment and slip distribution after a megathrust subduction zone earthquake is essential for tsunami early warning. The two main reasons why the DART data are of interest to this problem are: (1) contrary to the broadband data used in the early stages of earthquake analysis, the DART data do not saturate for large-magnitude earthquakes, and (2) DART stations are located offshore and thus often fill gaps in the instrumental coverage at local and regional distances. Thus, by including DART-recorded Rayleigh waves in rapid response systems we may be able to gain valuable time in determining the accurate moment estimates and slip distributions needed for tsunami warning and other rapid response products. Large megathrust earthquakes are among the most destructive natural disasters in history but also pose a significant challenge for real-time analysis. The scales involved in such large earthquakes, with ruptures as long as a thousand kilometers and durations of several minutes, are formidable. There are still issues with rapid analysis at short timescales, i.e. minutes after the event, since many of the nearby seismic stations will saturate due to the large ground motions. Also, on the seaward side of megathrust ruptures, the nearest seismic stations are often thousands of kilometers away on oceanic islands. The deployment of DART buoys can fill this gap, since these instruments do not saturate and are located close in on the seaward side of the megathrusts. We are evaluating the use of DART-recorded Rayleigh waves by including them in the dataset used for Centroid Moment Tensor analyses, and by using the near-field DART stations to constrain source finiteness for megathrust earthquakes such as the recent Tohoku, Haida Gwaii and Chile earthquakes.

  3. Local near instantaneously dynamically triggered aftershocks of large earthquakes.

    PubMed

    Fan, Wenyuan; Shearer, Peter M

    2016-09-01

    Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by earthquakes with magnitudes between 7 and 8, within a few fault lengths (approximately 300 kilometers) and during the times that high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks. PMID:27609887

  4. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models.

    PubMed

    Landes, François P; Lippiello, E

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics. PMID:27300821

  5. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models

    NASA Astrophysics Data System (ADS)

    Landes, François P.; Lippiello, E.

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.
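
    For reference, the original (homogeneous) Olami-Feder-Christensen automaton that the study takes as its starting point can be sketched in a few lines; values of alpha below 0.25 make the stress redistribution non-conservative. Grid size and parameters here are illustrative, and the heterogeneous-friction and viscoelastic ingredients the authors add are not included.

        import numpy as np

        rng = np.random.default_rng(1)

        N, F_C, ALPHA = 64, 1.0, 0.2            # grid size, threshold, coupling
        force = rng.uniform(0.0, F_C, (N, N))

        def ofc_event(force):
            """Uniformly drive the lattice to its next failure, relax all
            unstable sites, and return the event size (number of topplings)."""
            force += F_C - force.max()
            size = 0
            while True:
                unstable = np.argwhere(force >= F_C)
                if len(unstable) == 0:
                    return size
                for i, j in unstable:
                    size += 1
                    f, force[i, j] = force[i, j], 0.0
                    for ni, nj in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                        if 0 <= ni < N and 0 <= nj < N:   # open boundaries
                            force[ni, nj] += ALPHA * f

        sizes = [ofc_event(force) for _ in range(2000)]
        # After transients, a histogram of `sizes` approximates a power law.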

  6. Spectral scaling of the aftershocks of the Tocopilla 2007 earthquake in northern Chile

    NASA Astrophysics Data System (ADS)

    Lancieri, M.; Madariaga, R.; Bonilla, F.

    2012-04-01

    We study the scaling of spectral properties of a set of 68 aftershocks of the 2007 November 14 Tocopilla (M 7.8) earthquake in northern Chile. These are all subduction events with similar reverse-faulting focal mechanisms that were recorded by a homogeneous network of continuously recording strong motion instruments. The seismic moment and the corner frequency are obtained assuming that the aftershocks satisfy an omega-squared spectral decay; radiated energy is computed by integrating the squared velocity spectrum corrected for attenuation at high frequencies and for the finite bandwidth effect. Using a graphical approach, we test the scaling of the seismic spectrum, and the scale invariance of the apparent stress drop with earthquake size. To test whether the Tocopilla aftershocks scale with a single parameter, we introduce a non-dimensional number, Cr, that should be constant if earthquakes are self-similar. For the Tocopilla aftershocks, Cr varies by a factor of 2. More interestingly, Cr for the aftershocks is close to 2, the value that is expected for events that are approximately modelled by a circular crack. Thus, in spite of obvious differences in waveforms, the aftershocks of the Tocopilla earthquake are self-similar. The main shock is different because its records contain large near-field waves. Finally, we investigate the scaling of energy release rate, Gc, with slip. We estimated Gc from our previous estimates of the source parameters, assuming a simple circular crack model. We find that Gc values scale with slip, and are in good agreement with those found by Abercrombie and Rice for the Northridge aftershocks.
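
    The finite-bandwidth correction mentioned above matters because the energy integral of the squared velocity spectrum converges slowly at high frequency. A rough sketch of the effect (prefactors from density, wavespeed, and radiation pattern are omitted, so the result is only proportional to radiated energy; all values are illustrative):

        import numpy as np

        def velocity_spectrum(f, M0, fc):
            """Velocity spectrum of an omega-squared source: 2*pi*f * u(f)."""
            return 2.0 * np.pi * f * M0 / (1.0 + (f / fc) ** 2)

        def energy_integral(M0, fc, f1, f2, n=200000):
            f = np.linspace(f1, f2, n)
            df = f[1] - f[0]
            return np.sum(velocity_spectrum(f, M0, fc) ** 2) * df

        M0, fc = 1.0e17, 1.0
        in_band = energy_integral(M0, fc, 0.02, 20.0)     # recorded band
        total = energy_integral(M0, fc, 1e-3, 2000.0)     # ~full band
        print(f"fraction of energy captured in band: {in_band / total:.2f}")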

  7. Earthquake triggering by slow earthquake propagation: the case of the large 2014 slow slip event in Guerrero, Mexico.

    NASA Astrophysics Data System (ADS)

    Radiguet, M.; Perfettini, H.; Cotte, N.; Gualandi, A.; Kostoglodov, V.; Lhomme, T.; Walpersdorf, A.; Campillo, M.; Valette, B.

    2015-12-01

    Since their discovery nearly two decades ago, slow slip events (SSEs) have been shown to play an important role in strain accommodation in subduction zones. Nevertheless, the influence of slow aseismic slip on the nucleation of large earthquakes remains unclear. In this study, we focus on the Guerrero region of the Central American subduction zone in Mexico, where large SSEs have been observed since 1998, with a recurrence period of about 4 years, and produce aseismic slip in the Guerrero seismic gap. We investigate the large 2014 SSE (equivalent Mw = 7.7), which initiated in early 2014 and lasted until the end of October 2014. During this time period, the 18 April Papanoa earthquake (Mw 7.2) occurred on the western limit of the Guerrero gap. We invert the continuous GPS time series using PCAIM (Principal Component Analysis Inversion Method) to assess the space and time evolution of slip on the subduction interface. To focus on the aseismic processes, we correct the cGPS time series for the co-seismic offsets. Our results show that the slow slip event initiated in the Guerrero gap region, as observed during the previous SSEs. The Mw 7.2 Papanoa earthquake occurred on the western limit of the region that was slipping aseismically before the earthquake. After the Papanoa earthquake, the aseismic slip rate increased. This geodetic signal consists of both the ongoing SSE and the postseismic (afterslip) response to the Papanoa earthquake. The majority of the post-earthquake aseismic slip is concentrated downdip from the main earthquake asperity, but significant slip is also observed in the Guerrero gap region. Compared to previous SSEs in this region, the 2014 SSE produced larger aseismic slip, and the maximum slip is located downdip from the main brittle asperity corresponding to the Papanoa earthquake, a region that was not identified as active during the previous SSEs. Since the Mw 7.2 Papanoa earthquake occurred about 2 months after the onset of the

  8. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675

  9. Source Scaling and Ground Motion of the 2008 Wells, Nevada, earthquake sequence

    NASA Astrophysics Data System (ADS)

    Yoo, S.; Dreger, D. S.; Mayeda, K. M.; Walter, W. R.

    2011-12-01

    Dynamic source parameters, such as corner frequency, stress drop, and radiated energy, are among the most critical factors controlling ground motions at higher frequencies (generally greater than 1 Hz), which may cause damage to nearby surface structures. Hence, the scaling relations of these parameters can play an important role in assessing the seismic hazard for regions in which records of ground motions from potentially damaging earthquakes are not available. On February 21, 2008 at 14:16 (UTC), a magnitude 6 earthquake occurred near Wells, Nevada, a region characterized by a low rate of seismicity. For its aftershocks, a marked discrepancy between the observed ground motions and those predicted by empirical ground-motion prediction equations was reported (Petersen et al., 2011). To evaluate and understand these observed ground motions, we investigate the dynamic source parameters and their scaling relations for this earthquake sequence. We estimate the source parameters of the earthquakes using the coda spectral ratio method (Mayeda et al., 2007) and examine the estimates against the observed spectral accelerations at higher frequencies. From the derived source parameters and scaling relation, we compute synthetic ground motions of the earthquakes using a fractal composite source model (e.g., Zeng et al., 1994) and compare these synthetic ground motions with the observed ground motions and with synthetic ground motions obtained from a self-similar source scaling relation. In our preliminary results, we find the stress drops of the aftershocks are systematically 2-5 times lower than the stress drop of the mainshock. This agrees well with the systematic overestimation of the predicted ground motions for the aftershocks. The simulated ground motions from the coda-derived scaling relation explain both the observed weak and strong ground motions better than those from the size-independent stress-drop scaling relation. Assuming that the scale-dependent stress drop is real, at least in some

  10. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  11. A bilinear source-scaling model for M-log a observations of continental earthquakes

    USGS Publications Warehouse

    Hanks, T.C.; Bakun, W.H.

    2002-01-01

    The Wells and Coppersmith (1994) M-log A data set for continental earthquakes (where M is moment magnitude and A is fault area) and the regression lines derived from it are widely used in seismic hazard analysis for estimating M, given A. Their relations are well determined, whether for the full data set of all mechanism types or for the subset of strike-slip earthquakes. Because the coefficient of the log A term is essentially 1 in both their relations, they are equivalent to constant stress-drop scaling, at least for M ≲ 7, where most of the data lie. For M > 7, however, both relations increasingly underestimate the observations with increasing M. This feature, at least for strike-slip earthquakes, is strongly suggestive of L-model scaling at large M. Using constant stress-drop scaling (Δσ = 26.7 bars) for M ≤ 6.63 and L-model scaling (average fault slip ū = αL, where L is fault length and α = 2.19 × 10⁻⁵) at larger M, we obtain the relations M = log A + 3.98 ± 0.03 for A ≤ 537 km², and M = 4/3 log A + 3.07 ± 0.04 for A > 537 km². These prediction equations of our bilinear model fit the Wells and Coppersmith (1994) data set well in their respective ranges of validity, the transition magnitude corresponding to A = 537 km² being M = 6.71.
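
    The two prediction equations above join continuously at the transition area; a small sketch making that explicit, with the coefficients taken straight from the abstract:

        import math

        def moment_magnitude(area_km2):
            """Bilinear M(A) model quoted above:
            M = log A + 3.98        for A <= 537 km^2 (constant stress drop),
            M = (4/3) log A + 3.07  for A >  537 km^2 (L-model scaling)."""
            if area_km2 <= 537.0:
                return math.log10(area_km2) + 3.98
            return (4.0 / 3.0) * math.log10(area_km2) + 3.07

        for A in (100.0, 537.0, 5000.0):
            print(f"A = {A:6.0f} km^2 -> M = {moment_magnitude(A):.2f}")
        # The two branches meet at A = 537 km^2, where M = 6.71.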

  12. A scaling relationship between AE and natural earthquakes

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, N.; Kawakata, H.; Takahashi, N.

    2013-12-01

    seismic moments and the corner frequencies by grid search. The magnitudes of the AE events were estimated to be between -8 and -7. As a result, the relationship between the seismic moment and the corner frequency of the AE events satisfied the same scaling relationship as shown for natural earthquakes. This indicates that AE events in rock samples can be regarded as micro-sized earthquakes. This finding shows the possibility of understanding the processes by which natural earthquakes develop from laboratory experiments.

  13. Unusual behaviour of cows prior to a large earthquake

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano; Freund, Friedemann; Grant, Rachel

    2013-04-01

    Unusual behaviour of domestic cattle before earthquakes has been reported for centuries, and often relates to cattle becoming excited, vocal, aggressive or attempting to break free of tethers and restraints. Cattle have also been reported to move to higher or lower ground before earthquakes. Here, we report unusual movements of domestic cows 2 days prior to the Marche-Umbria (M=6) earthquake in 1997. Cows moved down from their usual summer pastures in the hills and were seen in the streets of a nearby town, a highly unusual occurrence. We discuss this in the context of positive holes and air ionisation as proposed by Freund's unified theory of earthquake precursors.

  14. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    PubMed

    duPont, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998
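
    The synthetic control method mentioned above builds the counterfactual as a convex combination of untreated "donor" cities whose weighted pre-event outcomes best match the treated city. A hedged sketch on randomly generated placeholder data (not the study's panel):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        T_pre, n_donors = 20, 30
        donors = rng.normal(100.0, 10.0, size=(T_pre, n_donors))   # donor outcomes
        treated = donors[:, :5].mean(axis=1) + rng.normal(0, 1, T_pre)

        def pre_event_misfit(w):
            return np.sum((treated - donors @ w) ** 2)

        w0 = np.full(n_donors, 1.0 / n_donors)
        res = minimize(pre_event_misfit, w0,
                       bounds=[(0.0, 1.0)] * n_donors,
                       constraints=[{"type": "eq",
                                     "fun": lambda w: w.sum() - 1.0}])
        synthetic = donors @ res.x   # counterfactual path; extending it with
                                     # post-event donor data estimates the effect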

  15. Nucleation of Laboratory Earthquakes: Observation, Characterization, and Scaling up to the Natural Earthquakes Dimensions

    NASA Astrophysics Data System (ADS)

    Latour, S.; Schubnel, A.; Nielsen, S. B.; Madariaga, R. I.; Vinciguerra, S.

    2013-12-01

    In this work we observe the nucleation phase of in-plane ruptures in the laboratory and characterize its dynamics. We use a laboratory toy model, in which mode II shear ruptures are produced on a pre-cut fault in a plate of polycarbonate. The fault is cut at the critical angle that allows stick-slip behavior under uniaxial loading, so the ruptures are naturally nucleated. The material is birefringent under stress, so the rupture propagation can be followed by ultra-rapid photoelasticity. A network of acoustic sensors and accelerometers is deployed on the plate to measure the radiated wavefield and record laboratory near-field accelerograms. The far-field stress level is also measured using strain gauges. We show that nucleation is composed of two distinct phases, a quasi-static stage and an acceleration stage, followed by dynamic propagation. We propose an empirical model that describes the evolution of rupture length: the quasi-static phase is described by an exponential growth, while the acceleration phase is described by an inverse power law of time. The transition from quasi-static to accelerating rupture is related to the critical nucleation length, which scales inversely with normal stress in accordance with theoretical predictions, and to a critical surfacic power, which may be an intrinsic property of the interface. Finally, we discuss these results in the frame of previous studies and propose a scaling up to natural earthquake dimensions. (Figure caption: three spontaneously nucleated laboratory earthquakes at increasingly higher normal pre-stresses, visualized by photoelasticity; the red curves highlight the position of the rupture tips as a function of time.)
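
    A sketch of the two-stage empirical nucleation model described above: exponential growth in the quasi-static phase, then an inverse power law of time-to-failure in the acceleration phase, with the amplitude chosen so the two stages join continuously. All parameter values are illustrative, not the fitted laboratory values.

        import numpy as np

        def rupture_length(t, L0=1e-3, tau=0.2, t_star=0.8, t_c=1.0, n=1.0):
            """Rupture length vs time: L0*exp(t/tau) for t < t_star, then
            A/(t_c - t)^n, with A chosen so the two stages join at t_star."""
            t = np.asarray(t, dtype=float)
            quasi_static = L0 * np.exp(t / tau)
            A = L0 * np.exp(t_star / tau) * (t_c - t_star) ** n
            accelerating = A / (t_c - t) ** n
            return np.where(t < t_star, quasi_static, accelerating)

        t = np.linspace(0.0, 0.99, 500)
        L = rupture_length(t)   # diverges as t -> t_c (onset of dynamic rupture)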

  16. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  17. A search for long-term periodicities in large earthquakes of southern and coastal central California

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1990-01-01

    It has been occasionally suggested that large earthquakes may follow the 8.85-year and 18.6-year lunar-solar tidal cycles and possibly the approximately 11-year solar activity cycle. From a new study of earthquakes with magnitudes greater than 5.5 in southern and coastal central California during the years 1855-1983, it is concluded that, at least in this selected area of the world, no statistically significant long-term periodicities in earthquake frequency occur. The sample size used is about twice that used in comparable earlier studies of this region, which concentrated on large earthquakes.

  18. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Shuler, Ashley

    2014-05-01

    Rifting episodes accommodate the relative motion of mature divergent plate boundaries with sequences of magma-filled dikes that compensate for the missing volume due to crustal splitting. Two major rifting episodes have been recorded since modern monitoring techniques became available: the 1975-1984 Krafla (Iceland) and the 2005-2010 Manda-Hararo (Ethiopia) dike sequences. The statistical properties of the frequency of dike intrusions during rifting have never been investigated in detail, but it has been suggested that they may resemble earthquake mainshock-aftershock sequences; for example, they start with a large intrusion followed by several events of smaller magnitude. The scaling relationships of earthquakes have, on the contrary, been widely investigated: earthquake sizes have been found to follow a power law, the Gutenberg-Richter relation, from local to global scale, while the decay of aftershocks with time has been found to follow the Omori law. These statistical laws for earthquakes are the basis for hazard evaluation, and the physical mechanisms behind them are the object of wide interest and debate. Here we investigate in detail the statistics of dikes from the Krafla and Manda-Hararo rifting episodes, including their frequency-magnitude distribution, the release of geodetic moment in time, and the correlation between interevent times and intruded volumes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, that the long-term release of geodetic moment is governed by a relationship consistent with the Omori law, and that the intrusions are roughly time-predictable. Magma availability affects, however, the timing of secondary dike intrusions: such timing is longer after large-volume intrusions, contrary to aftershock sequences, where interevent times shorten after large events.
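
    The frequency-magnitude power law mentioned above is usually quantified by a b-value; the standard Aki maximum-likelihood estimator applies equally to earthquake catalogs and, per the study, to dike-intrusion "magnitudes". A sketch on synthetic data:

        import numpy as np

        def b_value(mags, m_min, dm=0.0):
            """Aki/Utsu maximum-likelihood b-value above completeness m_min;
            dm is the magnitude binning width (0 for continuous magnitudes)."""
            mags = np.asarray(mags, dtype=float)
            mags = mags[mags >= m_min]
            return np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))

        rng = np.random.default_rng(2)
        true_b, m_min = 1.0, 3.0
        mags = m_min + rng.exponential(1.0 / (true_b * np.log(10)), size=2000)
        print(f"estimated b = {b_value(mags, m_min):.2f}")   # ~1.0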

  19. Oceanic transform fault earthquake nucleation process and source scaling relations - A numerical modeling study with rate-state friction (Invited)

    NASA Astrophysics Data System (ADS)

    Liu, Y.; McGuire, J. J.; Behn, M. D.

    2013-12-01

    We use a three-dimensional strike-slip fault model in the framework of rate- and state-dependent friction to investigate earthquake behavior and scaling relations on oceanic transform faults (OTFs). Gabbro friction data under hydrothermal conditions are mapped onto OTFs using temperatures from (1) a half-space cooling model, and (2) a thermal model that incorporates a visco-plastic rheology, non-Newtonian viscous flow, and the effects of shear heating and hydrothermal circulation. Without introducing small-scale frictional heterogeneities on the fault, our model predicts that an OTF segment can transition between seismic and aseismic slip over many earthquake cycles, consistent with the multimode hypothesis for OTF ruptures. The average seismic coupling coefficient χ is strongly dependent on the ratio of seismogenic zone width W to earthquake nucleation size h*; χ increases by four orders of magnitude as W/h* increases from ~1 to 2. Specifically, the average χ = 0.15 ± 0.05 derived from global OTF earthquake catalogs can be reached at W/h* ≈ 1.2-1.7. The modeled largest earthquake rupture area is less than the total seismogenic area, and we predict a deficiency of large earthquakes on long transforms, which is also consistent with observations. Earthquake magnitudes and distributions on the Gofar (East Pacific Rise) and Romanche (equatorial Mid-Atlantic) transforms are better predicted using the visco-plastic model than the half-space cooling model. We will also investigate how fault gouge porosity variation during an OTF earthquake nucleation phase may affect the seismic wave velocity structure, in which a drop of up to 3% was observed prior to the 2008 Mw 6 Gofar earthquake.
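
    The half-space cooling model used above gives a simple proxy for the seismogenic zone width W: the depth of a chosen cut-off isotherm at a given lithospheric age. A sketch under assumed values (the isotherm choice, diffusivity, and mantle temperature are illustrative assumptions, not the study's parameters):

        from math import sqrt
        from scipy.special import erfinv

        KAPPA = 1.0e-6             # thermal diffusivity, m^2/s (assumed)
        T_MANTLE = 1300.0          # mantle temperature, deg C (assumed)
        SECONDS_PER_MYR = 3.15e13

        def isotherm_depth_m(T_iso, age_myr):
            """Depth z where T(z) = T_iso, from the half-space cooling profile
            T(z, t) = T_MANTLE * erf(z / (2 sqrt(KAPPA t)))."""
            t = age_myr * SECONDS_PER_MYR
            return 2.0 * sqrt(KAPPA * t) * erfinv(T_iso / T_MANTLE)

        # Seismogenic width proxy: e.g. the 600 C isotherm at several plate ages.
        for age in (0.5, 2.0, 10.0):
            print(f"age {age:4.1f} Myr -> W ~ {isotherm_depth_m(600.0, age)/1e3:.1f} km")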

  20. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions in both the long and the short term. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  1. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  2. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  3. Very short-term earthquake precursors from GPS signal interference: Case studies on moderate and large earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, Yu-Lien; Cheng, Kai-Chien; Wang, Wei-Hau; Yu, Shui-Beih

    2016-04-01

    We set up a GPS network with 17 continuous GPS (CGPS) stations in southwestern Taiwan to monitor real-time crustal deformation. We found that systematic perturbations in GPS signals occurred just a few minutes prior to the occurrence of several moderate and large earthquakes, including the recent 2013 Nantou (ML = 6.5) and Rueisuei (ML = 6.4) earthquakes in Taiwan. The anomalous pseudorange readings were several millimeters higher or lower than those in the background time period. These systematic anomalies were attributed to interference of the GPS L-band signals by electromagnetic emissions (EMs) prior to the mainshocks. The EMs may occur in the form of harmonic or ultra-wide-band radiation and can be generated during the formation of Mode I cracks at the final stage of earthquake nucleation. We estimated the directivity of the likely EM sources by calculating the inner product of the position vector from a GPS station to a given satellite and the vector of anomalous ground motions recorded by the GPS. The results showed that the predominant inner product generally occurred when the satellite was in the direction either toward or away from the epicenter with respect to the GPS network. Our findings suggest that the GPS network may serve as a powerful tool for detecting very short-term earthquake precursors and, presumably, for locating a large earthquake before it occurs.
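
    The directivity test described above is a plain inner product: project the anomalous motion vector onto the unit vector pointing from the station to each satellite. A minimal sketch with invented local east-north-up coordinates (not the network's data):

        import numpy as np

        def directivity_inner_product(sat_enu, station_enu, anomaly_enu):
            """Inner product of the station-to-satellite unit vector with the
            anomalous motion vector; |value| is largest when the satellite lies
            along the anomaly direction (toward or away from the source)."""
            los = np.asarray(sat_enu, dtype=float) - np.asarray(station_enu, dtype=float)
            los /= np.linalg.norm(los)
            return float(np.dot(los, anomaly_enu))

        sat = [1.2e7, 1.8e7, 1.5e7]      # satellite position, m (hypothetical)
        sta = [0.0, 0.0, 0.0]            # station at the local origin
        anom = [0.004, -0.002, 0.001]    # anomalous motion, m (hypothetical)
        print(f"{directivity_inner_product(sat, sta, anom):+.4f}")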

  4. Lotung large-scale seismic test strong motion records

    SciTech Connect

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  5. New model on the relations between surface uplift and erosion caused by large, compressional earthquakes

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2015-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that, above a critical magnitude, earthquakes would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, were not yet considered. A new, seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. In order to assess the earthquake mass balance for various scenarios, we have compared the expected eroded volume with the co-seismic surface uplift computed with Okada's deformation theory. We have found earthquake depth and landscape steepness to be dominant parameters compared to the fault geometry (dip and rake). In contrast with previous studies, we have found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. We have explored the long-term evolution of topography under seismic forcing, with a Gutenberg-Richter distribution or a characteristic earthquake model, on fault systems with different geometries and tectonic styles, such as transpressive or flat-and-ramp geometry, with a thinned or thickened seismogenic layer.

  6. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  7. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bounded by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  8. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    PubMed Central

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual—i.e., the Kobe economy without the earthquake—we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact is found especially in the central areas, which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe and consequent movement of jobs to the metropolitan center of Osaka, which is located immediately to the east of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998
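
    The synthetic control step mentioned above can be illustrated in a few lines: non-negative donor weights summing to one are chosen so the weighted donor cities reproduce the treated city's pre-event trajectory, and the weighted post-event series serves as the counterfactual. The sketch below uses invented data and hypothetical variable names; it is an illustration of the method, not the paper's implementation:

      import numpy as np
      from scipy.optimize import minimize

      def synthetic_control_weights(treated_pre, donors_pre):
          """Donor weights w >= 0 with sum(w) = 1 minimizing the pre-event
          misfit ||treated_pre - donors_pre @ w||^2."""
          n = donors_pre.shape[1]
          loss = lambda w: float(np.sum((treated_pre - donors_pre @ w) ** 2))
          res = minimize(loss, np.full(n, 1.0 / n),
                         bounds=[(0.0, 1.0)] * n,
                         constraints=[{"type": "eq",
                                       "fun": lambda w: np.sum(w) - 1.0}])
          return res.x

      # Hypothetical pre-earthquake series: one treated city, four donors.
      rng = np.random.default_rng(0)
      donors = rng.normal(100.0, 5.0, size=(20, 4)).cumsum(axis=0)
      treated = donors @ np.array([0.5, 0.3, 0.2, 0.0]) + rng.normal(0.0, 1.0, 20)
      w = synthetic_control_weights(treated, donors)
      print(w.round(3))  # post-event counterfactual would be donors_post @ w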

  9. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  10. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, Tom

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7–11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  11. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    NASA Astrophysics Data System (ADS)

    Parsons, Tom

    2002-09-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ˜39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ˜7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
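
    The Omori law rate decay invoked above has the standard modified form n(t) = K/(t + c)^p. A brief sketch of simulating and refitting such a decay (all parameter values illustrative, not from the study):

      import numpy as np
      from scipy.optimize import curve_fit

      def omori_rate(t, K, c, p):
          """Modified Omori law: aftershock rate K / (t + c)**p."""
          return K / (t + c) ** p

      # Hypothetical daily rates over ~10 years following a mainshock.
      t_days = np.arange(1.0, 3650.0)
      rates = omori_rate(t_days, K=120.0, c=0.5, p=1.1)
      rates = rates + np.random.default_rng(1).normal(0.0, 0.2, t_days.size)

      params, _ = curve_fit(omori_rate, t_days, rates, p0=(50.0, 1.0, 1.0))
      print("K=%.1f  c=%.2f  p=%.2f" % tuple(params))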

  12. Scaling and critical phenomena in a cellular automaton slider-block model for earthquakes

    SciTech Connect

    Rundle, J.B. ); Klein, W. )

    1993-07-01

    The dynamics of a general class of two-dimensional cellular automaton slider-block models of earthquake faults is studied as a function of the failure rules that determine slip and the nature of the failure threshold. Scaling properties of clusters of failed sites imply the existence of a mean-field spinodal line in systems with spatially random failure thresholds, whereas spatially uniform failure thresholds produce behavior reminiscent of self-organized critical behavior. This model can describe several classes of faults, ranging from those that only exhibit creep to those that produce large events. 16 refs., 4 figs.
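
    A minimal two-dimensional cellular automaton in the slider-block spirit described above can be written compactly: blocks are loaded uniformly, fail at a threshold, and redistribute part of their stress to nearest neighbours. The grid size, threshold, and redistribution fraction below are illustrative choices, not the authors' failure rules:

      import numpy as np

      def run_slider_block(n=64, steps=2000, alpha=0.2, seed=0):
          """Load a grid of blocks to failure; a failed block drops to zero
          stress and passes a fraction alpha of its stress to each of its
          four neighbours (open boundaries; 4*alpha < 1 is dissipative)."""
          rng = np.random.default_rng(seed)
          stress = rng.uniform(0.0, 1.0, (n, n))
          sizes = []
          for _ in range(steps):
              stress += 1.0 - stress.max()  # uniform loading to next failure
              size = 0
              while True:
                  over = np.argwhere(stress >= 1.0)
                  if len(over) == 0:
                      break
                  size += len(over)
                  for i, j in over:
                      s = stress[i, j]
                      stress[i, j] = 0.0
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          if 0 <= i + di < n and 0 <= j + dj < n:
                              stress[i + di, j + dj] += alpha * s
              sizes.append(size)
          return np.array(sizes)

      sizes = run_slider_block()
      print("events:", sizes.size, "largest:", sizes.max(), "mean:", sizes.mean())

    Event sizes from such toy models span several orders of magnitude, which is what makes the scaling of failed-site clusters measurable in the first place.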

  13. The quest for better quality-of-life - learning from large-scale shaking table tests

    NASA Astrophysics Data System (ADS)

    Nakashima, M.; Sato, E.; Nagae, T.; Kunio, F.; Takahito, I.

    2010-12-01

    Earthquake engineering has its origins in the practice of "learning from actual earthquakes and earthquake damages." That is, we recognize serious problems by witnessing the actual damage to our structures, and then we develop and apply engineering solutions to solve these problems. This tradition in earthquake engineering, i.e., "learning from actual damage," was an obvious engineering response to earthquakes and arose naturally as a practice in a civil and building engineering discipline that traditionally places more emphasis on experience than do other engineering disciplines. But with the rapid progress of urbanization, as society becomes denser, and as the many components that form our society interact with increasing complexity, the potential damage with which earthquakes threaten the society also increases. In such an era, the approach of "learning from actual earthquake damages" becomes unacceptably dangerous and expensive. Among the practical alternatives to the old practice is to "learn from quasi-actual earthquake damages." One tool for experiencing earthquake damages without attendant catastrophe is the large shaking table. E-Defense, the largest one we have, was developed in Japan after the 1995 Hyogoken-Nanbu (Kobe) earthquake. Since its inauguration in 2005, E-Defense has conducted over forty full-scale or large-scale shaking table tests, applied to a variety of structural systems. The tests supply detailed data on actual behavior and collapse of the tested structures, offering the earthquake engineering community opportunities to experience and assess the actual seismic performance of the structures, and to help society prepare for earthquakes. Notably, the data were obtained without having to wait for the aftermaths of actual earthquakes. Earthquake engineering has always been about life safety, but in recent years maintaining the quality of life has also become a critical issue. Quality-of-life concerns include nonstructural

  14. The characteristic of the building damage from historical large earthquakes in Kyoto

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2016-04-01

    The Kyoto city, which is located in the northern part of the Kyoto basin in Japan, has a long history of >1,200 years since the city was initially constructed. The city has been a populated area with many buildings and the center of politics, economy and culture in Japan for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. The Kyoto city has experienced six damaging large earthquakes during the historical period: in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the years elapsed since construction or last repair, in order to more accurately and reliably estimate seismic intensity distributions from historical earthquakes in Kyoto. The obtained seismic intensity map would be helpful for reducing and mitigating disaster from future large earthquakes.

  15. Some Considerations on a Large Landslide at the Left Bank of the Aratozawa Dam Caused by the 2008 Iwate-Miyagi Intraplate Earthquake

    NASA Astrophysics Data System (ADS)

    Aydan, Ömer

    2016-06-01

    The scale and impact of rock slope failures are very large, and the form of failure differs depending upon the geological structure of the slopes. The 2008 Iwate-Miyagi intraplate earthquake induced many large-scale slope failures, despite the magnitude of the earthquake being of intermediate scale. Among these large-scale slope failures, the landslide at the left bank of the Aratozawa Dam site is of great interest to specialists in rock mechanics and rock engineering. Although the slope failure was of planar type, the direction of sliding was, fortunately, towards the sub-valley, so the landslide did not cause great tsunami-like motion of the reservoir fluid. In this study, the author attempts to describe the characteristics of the landslide and of the strong motion and permanent ground displacement induced by the 2008 Iwate-Miyagi intraplate earthquake, which had great effects on the triggering and evolution of the landslide.

  16. Quiet zone within a seismic gap near western Nicaragua: Possible location of a future large earthquake

    USGS Publications Warehouse

    Harlow, D.H.; White, R.A.; Cifuentes, I.L.; Aburto, Q.A.

    1981-01-01

    A 5700-square-kilometer quiet zone occurs in the midst of the locations of more than 4000 earthquakes off the Pacific coast of Nicaragua. The region is indicated by the seismic gap technique to be a likely location for an earthquake of magnitude larger than 7. The quiet zone has existed since at least 1950; the last large earthquake originating from this area occurred in 1898 and was of magnitude 7.5. A rough estimate indicates that the magnitude of an earthquake rupturing the entire quiet zone could be as large as that of the 1898 event. It is not yet possible to forecast a time frame for the occurrence of such an earthquake in the quiet zone. Copyright © 1981 AAAS.

  17. Gravity Wave Disturbances in the F-Region Ionosphere Above Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bruff, Margie

    The direction of propagation, duration and wavelength of gravity waves in the ionosphere above large earthquakes were studied using data from the Super Dual Auroral Radar Network. Ground scatter data were plotted versus range and time to identify gravity waves as alternating focused and de-focused regions of radar power in wave-like patterns. The wave patterns before and after earthquakes were analyzed to determine the directions of propagation and wavelengths. Conditions were examined 48 hours before and after each identified disturbance to exclude waves caused by geomagnetic activity. Gravity waves were found travelling away from the epicenter before all six earthquakes for which data were available, and after four of the six earthquakes. Gravity waves travelled in at least two directions away from the epicenter in all cases, and even stronger patterns were found for two earthquakes. Waves appeared, on average, 4 days before earthquakes, persisting 2-3 hours, and 1-2 days after earthquakes, persisting 4-6 hours. Most wavelengths were between 200-300 km. We show a possible correlation between the magnitude and depth of earthquakes and gravity wave patterns, but study of more earthquakes is required. This study provides a better understanding of the causes of ionospheric gravity wave disturbances and has potential applications for predicting earthquakes.

  18. In Japan, seismic waves slower after rain, large earthquakes

    NASA Astrophysics Data System (ADS)

    Schultz, Colin

    2012-03-01

    An earthquake is first detected by the abrupt jolt of a passing primary wave. Lagging only slightly behind are shear waves, which radiate out from the earthquake's epicenter and are seen at the surface as a rolling wave of vertical motion. Also known as secondary or S waves, shear waves cause the lifting and twisting motions that are particularly effective at collapsing surface structures. Given their capacity to cause damage, making sense of anything that can influence shear wave velocities is important from both theoretical and engineering perspectives.

  19. The AD 365 earthquake: high resolution tsunami inundation for Crete and full scale simulation exercise

    NASA Astrophysics Data System (ADS)

    Kalligeris, N.; Flouri, E.; Okal, E.; Synolakis, C.

    2012-04-01

    In the eastern Mediterranean, historical and archaeological records document major earthquake and tsunami events over the past 2000 years (Ambraseys and Synolakis, 2010). The 1200 km long Hellenic Arc has allegedly caused the strongest reported earthquakes and tsunamis in the region. Among them, the AD 365 and AD 1303 tsunamis have been extensively documented. They are likely due to ruptures of the Central and Eastern segments of the Hellenic Arc, respectively. Both events had widespread impact due to ground shaking and triggered tsunami waves that reportedly affected the entire eastern Mediterranean. The seismic mechanism of the AD 365 earthquake, located in western Crete, has recently been assigned a magnitude ranging from 8.3 to 8.5 by Shaw et al. (2008), using historical, sedimentological, geomorphic and archaeological evidence. Shaw et al. (2008) inferred that such large earthquakes occur in the Arc every 600 to 800 years, the last known being the AD 1303 event. We report on a full-scale simulation exercise that took place in Crete on 24-25 October 2011, based on a scenario sufficiently large to overwhelm the emergency response capability of Greece, necessitating the invocation of the Monitoring and Information Centre (MIC) of the EU and triggering help from other nations. A repeat of the AD 365 earthquake would likely overwhelm the civil defense capacities of Greece. Immediately following rupture initiation, it would cause substantial damage even to well-designed reinforced concrete structures in Crete. Minutes after initiation, the tsunami generated by the rapid displacement of the ocean floor would strike nearby coastal areas, inundating great distances in areas of low topography. The objective of the exercise was to help managers plan search and rescue operations and identify measures useful for inclusion in the coastal resiliency index of Ewing and Synolakis (2011). For the scenario design, the tsunami hazard for the AD 365 event was assessed for

  20. Formulation and Application of a Physically-Based Rupture Probability Model for Large Earthquakes on Subduction Zones: A Case Study of Earthquakes on Nazca Plate

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.

    2014-12-01

    Most time-dependent rupture probability (TDRP) models are designed for a single-mode rupture, i.e., a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historical earthquake data do not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been used successfully to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model that is formulated based on the subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment balance concept that is calibrated against historical earthquake data. This information, in conjunction with estimates of coseismic slip from past earthquakes, is used to formulate time-dependent rupture probability models for cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and the regional and local earthquake history, as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model developed in this study allows estimating rupture probabilities for large earthquakes other than just a single characteristic magnitude earthquake. This provides a general framework for formulating physically
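
    Although the model described above operates on a network of cells, the core time-dependent quantity in any TDRP formulation is a conditional rupture probability over a forecast window given the elapsed quiet time. A generic renewal-model sketch, not the authors' cell-based model, with a lognormal recurrence distribution and hypothetical parameter values:

      import numpy as np
      from scipy.stats import lognorm

      def conditional_rupture_probability(mean_ri, cov, elapsed, window):
          """P(rupture in (elapsed, elapsed+window] | quiet through elapsed)
          for a lognormal renewal model with mean recurrence mean_ri and
          coefficient of variation cov."""
          sigma = np.sqrt(np.log(1.0 + cov**2))
          scale = mean_ri / np.sqrt(1.0 + cov**2)  # exp(mu); preserves the mean
          dist = lognorm(s=sigma, scale=scale)
          return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

      # Hypothetical segment: 150-yr mean recurrence, CoV 0.5, 120 yr elapsed,
      # 30-yr forecast window.
      print(conditional_rupture_probability(150.0, 0.5, 120.0, 30.0))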

  1. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  2. [The dental professional action and aim to the struggle for large earthquake].

    PubMed

    Li, Gang

    2008-06-01

    On May 12, 2008, a magnitude 8 earthquake struck eastern Sichuan Province in China. The quake could be felt as far away as Bangkok, Thailand, Taiwan, Vietnam, Shanghai, and Beijing. Officials reported that at least 69,170 people may have been killed, and local reports indicated over 374,159 injured as of June 16, 2008. A study of the dental profession's response to the large Sichuan earthquake is reported, and the profession's role and aims in responding to large earthquakes are discussed. It is believed that dental professionals must make more specific contributions in quake-hit areas in the future by supplying well-organized services. PMID:18661058

  3. Three-dimensional distribution of ionospheric anomalies prior to three large earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    He, Liming; Heki, Kosuke

    2016-07-01

    Using regional Global Positioning System (GPS) networks, we studied the three-dimensional spatial structure of ionospheric total electron content (TEC) anomalies preceding three recent large earthquakes in Chile, South America: the 2010 Maule (Mw 8.8), the 2014 Iquique (Mw 8.2), and the 2015 Illapel (Mw 8.3) earthquakes. Both positive and negative TEC anomalies, with areal extent dependent on the earthquake magnitudes, appeared simultaneously 20-40 min before the earthquakes. For the two midlatitude earthquakes (2010 Maule and 2015 Illapel), positive anomalies occurred to the north of the epicenters at altitudes of 150-250 km. The negative anomalies occurred farther to the north at higher altitudes of 200-500 km. As a result, the epicenter and the positive and negative anomalies align parallel to the local geomagnetic field, a structure typical of ionospheric anomalies occurring in response to positive surface electric charges.

  4. Preliminary investigation of some large landslides triggered by the 2008 Wenchuan earthquake, Sichuan Province, China

    USGS Publications Warehouse

    Wang, F.; Cheng, Q.; Highland, L.; Miyajima, M.; Wang, Hongfang; Yan, C.

    2009-01-01

    The Ms 8.0 Wenchuan earthquake, or "Great Sichuan Earthquake", occurred at 14:28 local time on 12 May 2008 in Sichuan Province, China. Damage by earthquake-induced landslides was an important part of the total earthquake damage. This report presents preliminary observations on the Hongyan Resort slide located southwest of the main epicenter; shallow mountain-surface failures in Xuankou village of Yingxiu Town; the Jiufengchun slide near Longmenshan Town; the Hongsong Hydro-power Station slide near Hongbai Town; the Xiaojiaqiao slide in Chaping Town; two landslides in Beichuan County-town, which destroyed a large part of the town; and the Donghekou and Shibangou slides in Qingchuan County, which formed the second-largest landslide lake created by this earthquake. The influences of seismic, topographic, geologic, and hydro-geologic conditions are discussed. © 2009 Springer-Verlag.

  5. What controls the location where large earthquakes nucleate along the North Anatolian Fault ?

    NASA Astrophysics Data System (ADS)

    Bouchon, M.; Karabulut, H.; Schmittbuhl, J.; Durand, V.; Marsan, D.; Renard, F.

    2012-12-01

    We review several sets of observations which suggest that the location of the epicenters of the 1939-1999 sequence of large earthquakes along the NAF obeys some mechanical logic. The 1999 Izmit earthquake nucleated in a zone of localized crustal extension oriented N10E (Crampin et al., 1985; Evans et al., 1987), nearly orthogonal to the strike of the NAF, thus releasing the normal stress on the fault in the area and facilitating rupture nucleation. The 1999 Duzce epicenter, located about 25 km from the end of the Izmit rupture, is precisely near the start of a simple linear segment of the fault (Pucci et al., 2006) where supershear rupture occurred (Bouchon et al., 2001; Konca et al., 2010). Aftershock locations of the Izmit earthquake in the region (Gorgun et al., 2009) show that Duzce, at its start, was the first significant Izmit aftershock to occur on this simple segment. The rupture nucleated on the part of this simple segment which had been most loaded in Coulomb stress by the Izmit earthquake. Once rupture of this segment began, it seems logical that the whole segment would break, as its simple geometry suggests that no barrier was present to arrest rupture. Rupture of this segment, in turn, led to the rupture of adjacent segments. Like the Izmit earthquake, the 1943 Tosya and the 1944 Bolu-Gerede earthquakes nucleated near a zone of localized crustal extension. The long-range delayed triggering of extensional clusters observed after the Izmit/Duzce earthquakes (Durand et al., 2010) suggests a possible long-range delayed triggering of the 1943 shock by the 1942 Niksar earthquake. The 1942, 1957 Abant and 1967 Mudurnu earthquake nucleation locations further suggest that, as observed for the Duzce earthquake, previous earthquake ruptures stopped when encountering geometrically complex segments and later ruptures nucleated again past these segments.

  6. The Diversity of Large Earthquakes and Its Implications for Hazard Mitigation

    NASA Astrophysics Data System (ADS)

    Kanamori, Hiroo

    2014-05-01

    With the advent of broadband seismology and GPS, significant diversity in the source radiation spectra of large earthquakes has been clearly demonstrated. This diversity requires different approaches to mitigate hazards. In certain tectonic environments, seismologists can forecast the future occurrence of large earthquakes within a solid scientific framework using the results from seismology and GPS. Such forecasts are critically important for long-term hazard mitigation practices, but because stochastic fracture processes are complex, the forecasts are inevitably subject to large uncertainty, and unexpected events will continue to surprise seismologists. Recent developments in real-time seismology will help seismologists to cope with and prepare for tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.

  7. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  8. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  9. Benefits of Earthquake Early Warning to Large Municipalities (Invited)

    NASA Astrophysics Data System (ADS)

    Featherstone, J.

    2013-12-01

    The City of Los Angeles has been involved in testing the Caltech ShakeAlert earthquake early warning (EQEW) system since February 2012. This system accesses a network of seismic monitors installed throughout California. The system analyzes and processes seismic information and transmits a warning (audible and visual) when an earthquake occurs. In late 2011, the City of Los Angeles Emergency Management Department (EMD) was approached by Caltech regarding EQEW and immediately recognized the value of the system. Simultaneously, EMD was finalizing a report by a multi-discipline team that visited Japan in December 2011, which spoke to the effectiveness of EQEW during the March 11, 2011 earthquake that struck that country. Information collected by the team confirmed that the EQEW systems proved to be very effective in alerting the population of the impending earthquake. The EQEW system in Japan is also tied to mechanical safeguards, such as the stopping of high-speed trains. For a city the size and complexity of Los Angeles, the implementation of a reliable EQEW system will save lives, reduce loss, ensure effective and rapid emergency response, and greatly enhance the ability of the region to recover from a damaging earthquake. The current ShakeAlert system is being tested at several governmental organizations and private businesses in the region. EMD, in cooperation with Caltech, identified several locations internal to the City where the system would have an immediate benefit. These include the staff offices within EMD, the Los Angeles Police Department's Real Time Analysis and Critical Response Division (24 hour crime center), and the Los Angeles Fire Department's Metropolitan Fire Communications (911 Dispatch). All three of these agencies routinely manage the collaboration and coordination of citywide emergency information and response during times of crisis. Having these three key public safety offices connected and included in the

  10. Repeating and not so Repeating Large Earthquakes in the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Singh, S.; Iglesias, A.; Perez-Campos, X.

    2013-12-01

    The rupture areas and recurrence intervals of large earthquakes in the Mexican subduction zone are relatively small, and almost the entire length of the zone has experienced a large (Mw≥7.0) earthquake in the last 100 years (Singh et al., 1981). Several segments have experienced multiple large earthquakes in this time period. However, as the rupture areas of events prior to 1973 are only approximately known, the recurrence periods are uncertain. Large earthquakes occurred in the Ometepec, Guerrero, segment in 1937, 1950, 1982 and 2012 (Singh et al., 1981). In 1982, two earthquakes (Ms 6.9 and Ms 7.0) occurred about 4 hours apart, one apparently downdip from the other (Astiz & Kanamori, 1984; Beroza et al., 1984). The 2012 earthquake, on the other hand, had a magnitude of Mw 7.5 (globalcmt.org), breaking approximately the same area as the 1982 doublet but with a total scalar moment about three times larger than the 1982 doublet combined. It therefore seems that 'repeat earthquakes' in the Ometepec segment are not necessarily very similar to one another. The Central Oaxaca segment broke in large earthquakes in 1928 (Mw 7.7) and 1978 (Mw 7.7). Seismograms for the two events, recorded at the Wiechert seismograph in Uppsala, show remarkable similarity, suggesting that in this area large earthquakes can repeat. The extent to which the near-trench part of the fault plane participates in the ruptures is not well understood. In the Ometepec segment, the updip portion of the plate interface broke during the 25 Feb 1996 earthquake (Mw 7.1), which was a slow earthquake and produced anomalously low PGAs (Iglesias et al., 2003). Historical records indicate that a great tsunamigenic earthquake, M~8.6, occurred in the Oaxaca region in 1787, breaking the Central Oaxaca segment together with several adjacent segments (Suarez & Albini, 2009). Whether the updip portion of the fault broke in this event remains speculative, although it is plausible given the large tsunami. Evidence from the

  11. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  12. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  13. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large-scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  14. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power law than exponential distributions although they exhibit very high b values of ~5 or more. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW<1.5, MW≥2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
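
    The "large, sparse linear system" step has a simple toy analogue: treat each corrected log amplitude as event magnitude plus a station term and solve by sparse least squares. The sketch below invents its data and dimensions and omits the study's 3-D path-correction parameterization:

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.linalg import lsqr

      # Hypothetical: n_ev LFEs observed at n_st stations; each corrected
      # log10 amplitude = event magnitude + station path correction + noise.
      rng = np.random.default_rng(2)
      n_ev, n_st = 200, 12
      true_mag = rng.normal(1.5, 0.4, n_ev)
      true_path = rng.normal(0.0, 0.2, n_st)

      A = lil_matrix((n_ev * n_st, n_ev + n_st))
      b = np.empty(n_ev * n_st)
      k = 0
      for e in range(n_ev):
          for s in range(n_st):
              A[k, e] = 1.0         # magnitude term
              A[k, n_ev + s] = 1.0  # station path-correction term
              b[k] = true_mag[e] + true_path[s] + rng.normal(0.0, 0.05)
              k += 1

      x = lsqr(A.tocsr(), b)[0]
      # A constant trades off between magnitudes and corrections, so report
      # the scatter, which is insensitive to that common offset.
      print("magnitude scatter vs truth:", np.std(x[:n_ev] - true_mag))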

  15. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function ξ(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h⁻¹ Mpc)³, provides a statistical test for the existence of large scale inhomogeneities. An application to several recent three dimensional data sets shows that despite large observational uncertainties over the relevant scales characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  16. Slip zone and energetics of a large earthquake from the Taiwan Chelungpu-fault Drilling Project.

    PubMed

    Ma, Kuo-Fong; Tanaka, Hidemi; Song, Sheng-Rong; Wang, Chien-Ying; Hung, Jih-Hao; Tsai, Yi-Ben; Mori, Jim; Song, Yen-Fang; Yeh, Eh-Chao; Soh, Wonn; Sone, Hiroki; Kuo, Li-Wei; Wu, Hung-Yu

    2006-11-23

    Determining the seismic fracture energy during an earthquake and understanding the associated creation and development of a fault zone requires a combination of both seismological and geological field data. The actual thickness of the zone that slips during the rupture of a large earthquake is not known and is a key seismological parameter in understanding energy dissipation, rupture processes and seismic efficiency. The 1999 magnitude-7.7 earthquake in Chi-Chi, Taiwan, produced large slip (8 to 10 metres) at or near the surface, which is accessible to borehole drilling and provides a rare opportunity to sample a fault that had large slip in a recent earthquake. Here we present the retrieved cores from the Taiwan Chelungpu-fault Drilling Project and identify the main slip zone associated with the Chi-Chi earthquake. The surface fracture energy estimated from grain sizes in the gouge zone of the fault sample was directly compared to the seismic fracture energy determined from near-field seismic data. From the comparison, the contribution of gouge surface energy to the earthquake breakdown work is quantified to be 6 per cent. PMID:17122854

  17. Appearance ratio of earthquake surface rupture - About scaling law for Japanese Intraplate Earthquakes -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Irikura, K.

    2013-12-01

    The appearance ratio of surface rupture has been studied using historical earthquakes (e.g., Takemura, 1998); Kagawa et al. (2004) also evaluated the probability based on numerical simulations of surface displacement. The estimated appearance ratio follows a sigmoid curve and rises sharply between Mj (Japan Meteorological Agency magnitude) = 6.5 and Mj = 7.2. However, historical earthquake records between Mj = 6.5 and 7.2 are very few; some scientists therefore consider that the appearance ratio might jump discontinuously between Mj = 6.5 and 7.2. In this study, we used historical intraplate earthquakes that occurred in and around Japan from the 1891 Nobi earthquake to 2013. In particular, after the Hyogoken Nanbu earthquake, many earthquakes around Mj 6.5 to 7.2 occurred. The result of this study indicates that the appearance ratio increases between Mj = 6.5 and 7.2 not discontinuously but along a logistic curve. Youngs et al. (2003), Petersen et al. (2011) and Moss and Ross (2011) discussed the appearance ratio of surface rupture using historical earthquakes worldwide. Their discussions are based on Mw; we therefore cannot compare results directly because we used Mj. Takemura (1990) proposed a conversion equation, Mw = 0.78Mj + 1.08. More recently, however, the Central Disaster Prevention Council in Japan (2005) derived a conversion equation, Mw = 0.879Mj + 0.536, from a regression line obtained by principal component analysis. Converted in this way, the appearance ratio increases sharply between Mw = 6.3 and 7.0.
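
    The two magnitude conversions quoted above, together with a logistic form for the appearance ratio, are easy to tabulate. In the sketch below the conversion coefficients come from the abstract, while the logistic midpoint and steepness are illustrative placeholders, not fitted values:

      import numpy as np

      def mw_takemura_1990(mj):
          return 0.78 * mj + 1.08

      def mw_cdpc_2005(mj):
          return 0.879 * mj + 0.536

      def appearance_ratio(mw, m50=6.65, k=6.0):
          """Illustrative logistic curve for surface-rupture appearance;
          m50 (50% point) and k (steepness) are hypothetical values."""
          return 1.0 / (1.0 + np.exp(-k * (mw - m50)))

      for mj in (6.5, 6.8, 7.0, 7.2):
          mw = mw_cdpc_2005(mj)
          print(f"Mj {mj:.1f} -> Mw {mw:.2f}, appearance ratio {appearance_ratio(mw):.2f}")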

  18. European Scale Earthquake Data Exchange: ORFEUS-EMSC Joint Initiatives

    NASA Astrophysics Data System (ADS)

    Bossu, R.; van Eck, T.

    2003-04-01

    The European-Mediterranean Seismological Centre (EMSC) and the Observatories and Research Facilities for European Seismology (ORFEUS) are both active international organisations with different co-ordinating roles within European seismology. Both are non-governmental non-profit organisations, which have members/participants in more than 30 countries in Europe and its surroundings. Although different, their activities are complementary, with ORFEUS focusing on broadband waveform data archiving and dissemination and EMSC focusing on seismological parameter data. The main EMSC activities are the alert system for potentially damaging earthquakes, a real time seismicity web page, the production of the Euro-Med. seismological bulletin, and the creation and maintenance of databases related to seismic hazard. All these activities are based on data contributions from seismological Institutes. The EMSC is also involved in a UNESCO programme to promote seismology and data exchange in the Middle-East and Northern Africa. ORFEUS aims at co-ordinating and promoting digital broadband seismology in Europe. To accomplish this, it operates a Data Centre to archive and distribute high quality digital data for research, co-ordinates four working groups and provides services through the Internet. More recently, through an EC-infrastructure project MEREDIAN, it has accomplished added co-ordination of data exchange and archiving between large European national data centres and realised the Virtual European Broadband Seismograph Network (VEBSN). To accomplish higher efficiency and better services to the seismological community, ORFEUS and EMSC have been working towards a closer collaboration. Fruits of this collaboration are the joint EC project EMICES, a common Expression of Interest 'NERIES' submitted June 2002 to the EC, integration of the automatic picks from the VEBSN into the EMSC rapid alert system, and collaboration on common web page developments. Presently, we collaborate in a

  19. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-08-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by realizing that not all earthquakes in the region need belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well determined confidence bounds. This paper presents improvements on the above mentioned sequence identification and forecasting method: the influence of earthquake size on the spectral analysis, and its importance in semi-periodic events identification are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of non-randomness probability is used; a better estimation of appropriate upper limit uncertainties to use in forecasts is introduced; and the use of Bayesian analysis to evaluate the posterior forecast performance is applied. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0, having high non-randomness probability was identified. We compare the results of this analysis with those of the unlabeled point process analysis.
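
    The spectral identification step can be illustrated with a Schuster-style scan: each occurrence time contributes a unit phasor at a trial period, and a resultant far above the random-walk level flags candidate semi-periodicity. The sketch below is unweighted (the paper's labeled point process would weight events by size) and uses invented occurrence years:

      import numpy as np

      def schuster_spectrum(event_times, trial_periods):
          """Resultant length R of unit phasors exp(2*pi*i*t/T) for each
          trial period T; random times give R of order sqrt(N)."""
          t = np.asarray(event_times, dtype=float)
          return np.array([abs(np.exp(2j * np.pi * t / T).sum())
                           for T in trial_periods])

      # Hypothetical occurrence years of M >= 8 events in some region.
      events = np.array([1854.0, 1891.5, 1929.0, 1968.0, 2005.5])
      periods = np.linspace(20.0, 60.0, 400)
      R = schuster_spectrum(events, periods)
      print("best trial period: %.1f yr (R = %.2f, N = %d)"
            % (periods[R.argmax()], R.max(), events.size))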

  20. A Regional Scale Earthquake Simulator for Faults With Rate- and State-Dependent Frictional Properties

    NASA Astrophysics Data System (ADS)

    Richards-Dinger, K.; Dieterich, J.

    2006-12-01

    Long-term (~10,000 year) catalogs of simulated earthquakes can be used to address a host of questions related to both seismic hazard calculations and more fundamental issues of earthquake occurrence and interaction (e.g. Ward [1996], Rundle et al. [2004], Ziv and Rubin [2000, 2003]). The quasi-static models of Ziv and Rubin [2000, 2003] are based on the computational strategy of Dieterich [1995] for efficiently computing large numbers of earthquakes, including the seismic nucleation process on faults with rate- and state-dependent frictional properties. Both Dieterich [1995] and Ziv and Rubin [2000, 2003] considered only single planar faults embedded in a whole-space. Faults in nature are not geometrically flat nor do they exist in isolation but form complex networks. Slip of such networks involves processes and interactions that do not occur in planar fault models and may strongly affect earthquake processes. We are in the process of constructing simulations of earthquake occurrence in complex, regional-scale fault networks whose elements obey rate- and state-dependent frictional laws. The solutions of Okada [1992] for dislocations in an elastic half-space are used to calculate the stress interaction coefficients between the elements. We employ analytic solutions for the nucleation process that include the effects of time-varying normal stress. At the time of this abstract we have conducted initial experiments with a single 100 km x 15 km strike-slip fault which produce power-law magnitude distributions with reasonable b-values. The model is computationally efficient - simulations of 50,000 events on a fault with 1500 elements require about seven minutes on a single 2.5 GHz CPU. The very largest events (which rupture nearly the entire fault) occur quasi-periodically, whereas the entire catalog displays temporal clustering in that its waiting time distribution is a power-law with a slope similar to that observed for actual seismicity in both California and Iceland
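
    The quoted "reasonable b-values" of such simulated catalogs can be checked with the standard Aki (1965) maximum-likelihood estimate. The sketch below applies it to a synthetic Gutenberg-Richter catalog standing in for simulator output:

      import numpy as np

      def b_value_aki(mags, m_c):
          """Aki (1965) maximum-likelihood estimate b = log10(e)/(mean(M) - m_c)
          for continuous magnitudes; for binned data, subtract half a bin
          width from m_c (Utsu correction)."""
          m = np.asarray(mags)
          m = m[m >= m_c]
          return np.log10(np.e) / (m.mean() - m_c)

      # Synthetic catalog: exponential magnitudes above M 3.0 correspond to
      # a Gutenberg-Richter law with b = 1.
      rng = np.random.default_rng(3)
      mags = 3.0 + rng.exponential(scale=1.0 / np.log(10.0), size=50_000)
      print("estimated b:", round(b_value_aki(mags, m_c=3.0), 3))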

  1. Maximum Magnitude and Recurrence Interval for the Large Earthquakes in the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Hu, C.

    2012-12-01

    Maximum magnitude and recurrence interval of the large earthquakes are key parameters for seismic hazard assessment in the central and eastern United States. Determination of these two parameters is quite difficult in the region, however. For example, the estimated maximum magnitudes of the 1811-12 New Madrid sequence are in the range of M6.6 to M8.2, whereas the estimated recurrence intervals are in the range of about 500 to several thousand years. These large variations of maximum magnitude and recurrence interval for the large earthquakes lead to significant variation of estimated seismic hazards in the central and eastern United States. There are several approaches being used to estimate the magnitudes and recurrence intervals, such as historical intensity analysis, geodetic data analysis, and paleo-seismic investigation. We will discuss the approaches that are currently being used to estimate maximum magnitude and recurrence interval of the large earthquakes in the central United States.

  2. Analysis of Luminescence Away from the Epicenter During a Large Earthquake: The Pisco, Peru Mw8 Earthquake

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Lira, J. A.

    2011-12-01

    The Mw 8.0 earthquake in Pisco, Peru, of August 15, 2007, caused severe damage, with a toll of 513 people dead, 2,291 wounded, 76,000 houses and buildings seriously damaged, and 431,000 people affected overall. Co-seismic luminescence was reported by thousands of people along the central coast of Peru and especially in Lima, 150 km from the epicenter, this being the first large nighttime earthquake in about 100 years in a highly populated area. Pictures and videos of the lights are available; however, those obtained so far had little information on the timing and direction of the reported lights. Two important videos are analyzed. The first, from a fixed security camera, is used to determine the differential time correlation between the timing of the recorded lights and the ground acceleration registered by a three-axis accelerometer 500 m away, and very good results have been observed. This evidence contains important color, shape and timing information, which is shown to be highly time-correlated with the arrival of the seismic waves. Furthermore, the origin of the lights is the top of a hilly island about 6 km off the coast of Lima where, according to a written chronicle, lights were seen exactly 21 days before the mega earthquake of October 28, 1746. That event was the largest ever to happen in Peru and produced a tsunami that washed over the port of Callao and reached up to 5 km inland. The second video, from another security camera in a different location, has been further analyzed in order to determine more exactly the direction of the lights, and this new evidence will be presented. The fact that a notoriously large and well-documented co-seismic luminous phenomenon was video-recorded more than 150 km from the epicenter during a very large earthquake is emphasized, together with historical documented evidence of pre-seismic luminous activity on the same island during a mega earthquake of enormous proportions in Lima. Both previously mentioned videos

  3. Giant seismites and megablock uplift in the East African Rift: Evidence for large magnitude Late Pleistocene earthquakes

    NASA Astrophysics Data System (ADS)

    Hilbert-Wolf, Hannah; Roberts, Eric

    2015-04-01

    Due to rapid population growth and urbanization of many parts of East Africa, it is increasingly important to quantify the risk and possible destruction from large-magnitude earthquakes along the tectonically active East African Rift System. However, because comprehensive instrumental seismic monitoring, historical records, and fault trench investigations are limited for this region, the sedimentary record provides important archives of seismicity in the form of preserved soft-sediment deformation features (seismites). Extensive, previously undescribed seismites of centimeter- to dekameter-scale were identified by our team in alluvial and lacustrine facies of the Late Quaternary-Recent Lake Beds Succession in the Rukwa Rift Basin, of the Western Branch of the East African Rift System. We document the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania, primarily exposed at two, correlative outcrop localities ~35 km apart. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced breccia megablocks, some in excess of 20 m-wide. The megablock complex is comprised of (1) a 5m-tall by 20m-wide injected body of volcanic ash and silt that hydraulically displaced (2) an equally sized, semi-consolidated, volcaniclastic megablock; both of which are intruded by (3) a clastic injection dyke. Evidence for breaching at the surface and for the fluidization of cobbles demonstrates the susceptibility of the substrate in this region to significant deformation via seismicity. Thirty-five km to the north, dekameter-scale asymmetrical/recumbent folds occur in a 3 m-thick, flat lying lake floor unit of the Lake Beds Succession. In between and surrounding these two unique sites, smaller-scale seismites are expressed, including flame structures; cm- to m-scale folded beds; ball-and-pillow structures; syn-sedimentary faults; sand injection features; and m-dkm-scale

  4. Seismic sequences, swarms, and large earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Amato, Alessandro; Piana Agostinetti, Nicola; Selvaggi, Giulio; Mele, Franco

    2016-04-01

    In recent years, particularly after the 2009 L'Aquila earthquake and the 2012 Emilia sequence, the issue of earthquake predictability has been at the center of discussion in Italy, not only within the scientific community but also in the courtrooms and in the media. Among the noxious effects of the L'Aquila trial was an increase in scaremongering and false alerts during earthquake sequences and swarms, culminating in a groundless one-night evacuation in northern Tuscany in 2013. We have analyzed Italian seismicity of the last decades in order to determine the rate of seismic sequences and investigate some of their characteristics, including frequencies, minimum/maximum durations, maximum magnitudes, and main shock timing. Selecting only sequences with an equivalent magnitude of 3.5 or above, we find an average of 30 sequences per year. Although the examined parameters are extremely variable, we could set some bounds, useful for obtaining quantitative estimates of ongoing activity. In addition, the historical catalogue is rich in complex sequences in which one main shock is followed, seconds, days or months later, by another event of similar or higher magnitude. We also analysed the Italian CPTI11 catalogue (Rovida et al., 2011) between 1950 and 2006 to highlight the foreshock-mainshock event couples suggested to exist in previous studies (e.g. six couples, Marzocchi and Zhuang, 2011). Moreover, to investigate the probability of having random foreshock-mainshock couples over the investigated period, we produced 1000 synthetic catalogues, randomly redistributing in time the events that occurred in that period. Preliminary results indicate that: (1) all but one of the so-called foreshock-mainshock pairs found in Marzocchi and Zhuang (2011) fall inside previously well-known and studied seismic sequences (Belice, Friuli and Umbria-Marche), meaning that the suggested foreshocks are also aftershocks; and (2) due to the high rate of the Italian
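
    The synthetic-catalogue test sketched in the abstract reduces to a simple Monte Carlo: keep the observed magnitudes, redistribute origin times uniformly over the study period, and count how often foreshock-mainshock couples arise by chance. The sketch below is time-only (it ignores any spatial window), and its 10-day coupling window is an illustrative choice, not a parameter from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def count_couples(times, mags, dt_days=10.0):
            """Count events followed within dt_days by a larger event."""
            order = np.argsort(times)
            t, m = times[order], mags[order]
            n = 0
            for i in range(len(t)):
                j = i + 1
                while j < len(t) and t[j] - t[i] <= dt_days:
                    if m[j] > m[i]:
                        n += 1
                        break              # count each candidate foreshock once
                    j += 1
            return n

        def chance_couples(mags, span_days, n_catalogues=1000):
            """Distribution of couple counts in catalogues with shuffled times."""
            return np.array([count_couples(rng.uniform(0.0, span_days, len(mags)), mags)
                             for _ in range(n_catalogues)])

    Comparing the observed couple count against this null distribution gives the probability that apparent foreshock-mainshock pairs are random coincidences.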

  5. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models. PMID:23152938
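
    The alarm bookkeeping implied by the abstract can be illustrated with a toy: bin seismicity into 0.04° × 0.04° cells and accumulate a daily score per cell from recent small events, flagging cells whose score exceeds a threshold. The decay constants and the probability cap below are arbitrary placeholders; the actual model is built on the foreshock spatial-organization statistic described in the paper.

        import numpy as np
        from collections import defaultdict

        CELL_DEG = 0.04                     # cell size used in the study

        def cell_of(lon, lat):
            return (int(np.floor(lon / CELL_DEG)), int(np.floor(lat / CELL_DEG)))

        def daily_probability(events, day, lookback_days=30.0, halflife_days=5.0):
            """Toy per-cell daily probability from recent small events.

            events: iterable of (t_day, lon, lat, mag) tuples; scores decay
            exponentially with event age and are capped at probability 1.
            """
            score = defaultdict(float)
            for t, lon, lat, mag in events:
                if 0.0 <= day - t <= lookback_days:
                    score[cell_of(lon, lat)] += 2.0 ** (-(day - t) / halflife_days)
            return {c: min(1.0, 0.05 * s) for c, s in score.items()}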

  6. Constraining depth range of S wave velocity decrease after large earthquakes near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Wu, Chunquan; Delorey, Andrew; Brenguier, Florent; Hadziioannou, Celine; Daub, Eric G.; Johnson, Paul

    2016-06-01

    We use noise correlation and surface wave inversion to measure the S wave velocity changes at different depths near Parkfield, California, after the 2003 San Simeon and 2004 Parkfield earthquakes. We process continuous seismic recordings from 13 stations to obtain the noise cross-correlation functions and measure the Rayleigh wave phase velocity changes over six frequency bands. We then invert the Rayleigh wave phase velocity changes using a series of sensitivity kernels to obtain the S wave velocity changes at different depths. Our results indicate that the S wave velocity decreases caused by the San Simeon earthquake are relatively small (~0.02%) and extend to depths of at least 2.3 km. The S wave velocity decreases caused by the Parkfield earthquake are larger (~0.2%) and extend to depths of at least 1.2 km. Our observations can be best explained by material damage and healing resulting mainly from the dynamic stress perturbations of the two large earthquakes.
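
    The depth inversion described here is linear: the fractional phase-velocity change in each frequency band is a kernel-weighted average of the fractional S-velocity changes in depth layers, so the layer values can be recovered by damped least squares. A minimal sketch, with the kernel matrix left as an input (computing real Rayleigh-wave kernels requires a velocity model):

        import numpy as np

        def invert_dvs(dc_over_c, K, damping=0.1):
            """Solve dc/c = K @ (dVs/Vs) for fractional S-velocity changes.

            dc_over_c : (n_bands,) observed phase-velocity changes
            K         : (n_bands, n_layers) depth sensitivity kernels
            damping   : Tikhonov regularization weight
            """
            n_layers = K.shape[1]
            A = np.vstack([K, damping * np.eye(n_layers)])
            b = np.concatenate([dc_over_c, np.zeros(n_layers)])
            dvs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return dvs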

  7. Irregular recurrence of large earthquakes along the San Andreas fault: evidence from trees.

    PubMed

    Jacoby, G C; Sheppard, P R; Sieh, K E

    1988-07-01

    Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the "San Juan Capistrano" earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics. PMID:17841050

  8. The characteristics of quasistatic electric field perturbations observed by DEMETER satellite before large earthquakes

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Shen, X.; Zhao, S.; Yao, Lu; Ouyang, X.; Qian, J.

    2014-01-01

    This paper presents new results from processing the ULF electric field (DC-15 Hz) observed by the DEMETER satellite (h = 660-710 km). Typical perturbations in the quasistatic electric field were first identified around some large earthquakes in 2010. Then 27 earthquakes in two seismic regions, Indonesia (equatorial latitudes) and Chile (middle latitudes), were selected for quasistatic electric field analysis. Three-component electric field data related to the earthquakes were collected along all up-orbits (local nighttime) within 2000 km of the epicenters during a 9-day window, from 7 days before to 1 day after each event; in total, 57 perturbations were found. The results show that the amplitude of the quasistatic electric field perturbations in the upper ionosphere varies from 1.5 to 16 mV/m, mostly below 10 mV/m. The perturbations were mainly located directly over the epicentral area or at the ends of seismic faults delineated by series of earthquakes, where electromagnetic emissions may easily form during the preparation and development of seismic sequences. Among all 27 cases, 10 earthquakes showed perturbations occurring just one day before the event, which demonstrates a close correlation in the time domain between the quasistatic electric field in the ionosphere and large earthquakes. Finally, combined with in situ observations of plasma parameters, the coupling mechanism of the quasistatic electric field across the different geospheres is discussed.
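
    The data selection described (nighttime semi-orbits within 2000 km of an epicenter, from 7 days before to 1 day after each event) is a plain space-time filter. A sketch under assumed input structures, using a haversine great-circle distance:

        import numpy as np

        R_EARTH_KM = 6371.0

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometers; accepts NumPy arrays."""
            p1, p2 = np.radians(lat1), np.radians(lat2)
            a = (np.sin((p2 - p1) / 2) ** 2
                 + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
            return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

        def select_semi_orbits(orbits, eq_day, eq_lat, eq_lon,
                               max_km=2000.0, days_before=7.0, days_after=1.0):
            """Keep semi-orbits whose ground track enters the space-time window.

            orbits: iterable of (t_day, lat_track, lon_track) per semi-orbit,
            where lat_track/lon_track are arrays along the ground track.
            """
            keep = []
            for t, lats, lons in orbits:
                if -days_before <= t - eq_day <= days_after:
                    if haversine_km(lats, lons, eq_lat, eq_lon).min() <= max_km:
                        keep.append((t, lats, lons))
            return keep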

  9. Analysis of ground response data at Lotung large-scale soil- structure interaction experiment site

    SciTech Connect

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/12-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural and ground responses have been recorded for a total of 18 earthquakes between October 1985, when installation of the downhole instruments was completed, and November 1986. These data cover earthquakes with magnitudes ranging from ML 4.5 to ML 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques that approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  10. An earthquake in Japan caused large waves in Norwegian fjords

    NASA Astrophysics Data System (ADS)

    Schult, Colin

    2013-08-01

    Early on a winter morning a few years ago, many residents of western Norway who lived or worked along the shores of the nation's fjords were startled to see the calm morning waters suddenly begin to rise and fall. Starting at around 7:15 A.M. local time and continuing for nearly 3 hours, waves up to 1.5 meters high coursed through the previously still fjord waters. The scene was captured by security cameras and by people with cell phones, reported to local media, and investigated by a local newspaper. Drawing on this footage, and using a computational model and observations from a nearby seismic station, Bondevik et al. identified the cause of the waves—the powerful magnitude 9.0 Tohoku earthquake that hit off the coast of Japan half an hour earlier.

  11. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    The landslide disasters during Typhoon Morakot in 2009 caused heavy casualties; because of the casualty numbers, the event is classified as a large-scale landslide disaster. It also showed that surveys of large-scale landslide potential remain insufficient and are badly needed. Large-scale landslide potential analysis indicates where attention should be focused, even though such areas are difficult to distinguish. Accordingly, the authors reviewed the methods used in Hong Kong, Italy, Japan and Switzerland to clarify the assessment methodology. The objects of the analysis are areas susceptible to rock slides and dip-slope failures, together with the major landslide areas defined from historical records. Three levels of scale, from national down to the slopeland, prove necessary: basin, catchment, and slope. In total, ten spots were classified as having high large-scale landslide potential at the basin scale. In this paper the authors therefore focus on the catchment scale and employ a risk matrix to classify the potential, as sketched after this abstract. Two main indexes are used: the protected objects (constructions and transportation facilities) and the large-scale landslide susceptibility ratio, the latter based on data for major landslide areas, dip slopes, and rock-slide areas. In total, 1,040 catchments were assessed and classified into three levels, high, medium, and low, with proportions of 11%, 51%, and 38%, respectively. The high-level catchments are those with a high proportion of protected objects or high large-scale landslide susceptibility. The results provide base material for the slopeland authorities when considering slopeland management and further investigation.
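
    The two-index classification described above is a standard risk matrix. A minimal sketch, with illustrative cut points (the study's actual thresholds are not given in the abstract):

        def catchment_risk(protected_score, susceptibility_ratio, cuts=(0.33, 0.66)):
            """Classify a catchment as 'low', 'medium', or 'high' with a 3x3 matrix.

            Both indexes are assumed normalized to [0, 1]; the cut points are
            illustrative placeholders.
            """
            def band(x):
                return 0 if x < cuts[0] else (1 if x < cuts[1] else 2)
            matrix = [["low",    "low",    "medium"],
                      ["low",    "medium", "high"],
                      ["medium", "high",   "high"]]
            return matrix[band(susceptibility_ratio)][band(protected_score)]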

  12. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  13. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 CAL yr. BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely pre-historic, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (earthquake not associated with a known fault) for normal faults in the Intermountain West.

  14. Variation of the scaling characteristics of temporal and spatial distribution of earthquakes in Caucasus

    NASA Astrophysics Data System (ADS)

    Matcharashvili, T.; Chelidze, T.; Javakhishvili, Z.; Zhukova, N.

    2016-05-01

    In the present study we investigate the variation of long-range correlation features in the temporal and spatial distribution of earthquakes in the Caucasus. Scaling exponents of interearthquake time intervals (waiting times) and interearthquake distances were calculated by the method of Detrended Fluctuation Analysis (DFA). Scaling exponent values were calculated for windows of 500 consecutive events as well as for 5-year-long sliding windows. Exponents calculated for different windows vary over a wide range, indicating behavior that shifts between antipersistent and persistent types. In the overwhelming majority of cases the scaling exponents indicate persistent behavior in both the temporal and the spatial distribution of earthquakes. Exponents close to 0.5, or antipersistent values, were obtained for the periods when the strongest regional earthquakes occurred. We also observe a slow trend in the variation of the long-range correlation features over the considered time period.
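
    Detrended Fluctuation Analysis as used here can be stated compactly: integrate the demeaned series, split the profile into boxes of size n, subtract a local linear fit in each box, and regress log F(n) on log n; the slope is the scaling exponent (about 0.5 for uncorrelated data, above 0.5 persistent, below 0.5 antipersistent). A minimal DFA-1 implementation:

        import numpy as np

        def dfa_exponent(x, box_sizes=(4, 8, 16, 32, 64, 128)):
            """Return the DFA-1 scaling exponent of a 1-D series x."""
            y = np.cumsum(np.asarray(x) - np.mean(x))              # integrated profile
            fluctuations = []
            for n in box_sizes:
                rms = []
                for i in range(len(y) // n):
                    seg = y[i * n:(i + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                    rms.append(np.mean((seg - trend) ** 2))
                fluctuations.append(np.sqrt(np.mean(rms)))
            slope, _ = np.polyfit(np.log(box_sizes), np.log(fluctuations), 1)
            return slope

    Applied to windows of 500 consecutive waiting times or interevent distances, this yields one exponent per window, which is the quantity tracked through time in the study.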

  15. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from the ones reviewed elsewhere. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  16. Large LOCA-earthquake event combination probability assessment - Load Combination Program Project I summary report

    SciTech Connect

    Lu, S.; Streit, R.D.; Chou, C.K.

    1980-12-10

    This report summarizes work performed to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture.
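
    The structure of such a simulation, a deterministic growth law driven by stochastic inputs, can be caricatured in a few lines. Everything numeric below (Paris-law constants, crack-size distribution, wall thickness, cycle counts) is a made-up placeholder; the report's model, loads, and leak-detection treatment are far more detailed.

        import numpy as np

        rng = np.random.default_rng(1)

        def through_wall_fraction(n_trials=20_000, wall_m=0.06, cycles=1e6,
                                  C=1e-12, m=3.0, dK_mean=8.0, dK_sd=2.0):
            """Fraction of sampled cracks that grow through the wall.

            Growth follows a crude Paris-type law da/dN = C * (dK * sqrt(a))**m,
            integrated in coarse load blocks; units are notional.
            """
            a0 = rng.lognormal(mean=np.log(0.002), sigma=0.5, size=n_trials)
            dK = rng.normal(dK_mean, dK_sd, size=n_trials).clip(min=0.1)
            ruptured = 0
            for a, k in zip(a0, dK):
                for _ in range(20):                     # 20 coarse load blocks
                    a += (cycles / 20) * C * (k * np.sqrt(a)) ** m
                    if a >= wall_m:
                        ruptured += 1
                        break
            return ruptured / n_trials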

  17. Typical Scenario of Preparation, Implementation, and Aftershock Sequence of a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, Mikhail

    2016-04-01

    We construct and examine a typical scenario for the occurrence of a large earthquake. The Harvard GCMT seismic moment catalog was used to construct the generalized space-time vicinity of large earthquakes (LEGV) and to investigate the behavior of seismicity within it. The LEGV is composed of earthquakes falling into the zone of influence of any of a considerable number (100, 300, or 1,000) of the largest earthquakes. The construction enlarges the available statistics, diminishes the strong random component, and thus reveals the typical features of pre- and post-shock seismic activity in greater detail. With the LEGV, the character of the fore- and aftershock cascades could be examined in more detail than would be possible otherwise. We also show that the mean earthquake magnitude tends to increase, while the b-values, mean mb/mw ratios, apparent stress values, and mean depth tend to decrease. The amplitudes of all these anomalies grow, as the logarithm of the time interval to the event, on approach to the moment of the generalized large earthquake (GLE). Most of the anomalies agree well with a common scenario of developing instability. Besides such precursors of a common character, one earthquake-specific precursor was found: the decrease in mean earthquake depth during large earthquake preparation, which probably testifies to the involvement of deep fluids in the process. The typical features of developing shear instability revealed in the LEGV agree well with results of laboratory acoustic emission (AE) studies. The majority of the anomalies appear to be of secondary character, connected mainly with an increase in mean earthquake magnitude in the LEGV. The increase in mean magnitude was shown to be connected mainly with a decrease in the proportion of moderate-size events (Mw 5.0 - 5.5) in the closer GLE vicinity. We believe that this deficit of moderate-size events hardly can be

  18. The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine whether the law also holds for 0 < Ms ≤ 5.5, using earthquakes occurring in different regions. Comparing the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to the conclusion that the law remains valid for earthquakes with 0 < Ms ≤ 5.5.
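
    For reference, the Gutenberg-Richter (1956) relation under test is log10 Es = 1.5 Ms + 11.8 with Es in ergs, equivalently log10 Es = 1.5 Ms + 4.8 with Es in joules:

        def radiated_energy_joules(Ms):
            """Gutenberg-Richter (1956): log10(Es [J]) = 1.5 * Ms + 4.8."""
            return 10.0 ** (1.5 * Ms + 4.8)

        # Each unit increase in Ms multiplies Es by 10**1.5, about 31.6:
        print(radiated_energy_joules(5.5) / radiated_energy_joules(4.5))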

  19. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  20. W phase source inversion using high-rate regional GPS data for large earthquakes

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F.; Melgar, D.; Benavente, R.; Geng, J.; Barrientos, S.; Campos, J.

    2016-04-01

    W phase moment tensor inversion has proven to be a reliable method for rapid characterization of large earthquakes. For global purposes it is used at the United States Geological Survey, Pacific Tsunami Warning Center, and Institut de Physique du Globe de Strasbourg. These implementations provide moment tensors within 30-60 min after the origin time of moderate and large worldwide earthquakes. Currently, the method relies on broadband seismometers, which clip in the near field. To ameliorate this, we extend the algorithm to regional records from high-rate GPS data and retrospectively apply it to six large earthquakes that occurred in the past 5 years in areas with relatively dense station coverage. These events show that the solutions could potentially be available 4-5 min from origin time. Continuously improving GPS station availability and real-time positioning solutions will provide significant enhancements to the algorithm.

  1. Post-earthquake analysis and data correlation for the 1/4-scale containment model of the Lotung experiment

    SciTech Connect

    Tseng, W.S.; Lihanand, K.; Ostadan, F.; Tuann, S.Y.

    1991-10-01

    This report presents the results of post-prediction earthquake response data analyses performed to identify the test system parameters for the 1/4-scale containment model of the Large-Scale Seismic Test (LSST) in Lotung, Taiwan, and the results of post-prediction analytical earthquake parametric studies conducted to evaluate the applicability of four soil-structure interaction (SSI) analysis methods frequently applied in the US nuclear industry. The four methods evaluated were: (1) the soil-spring method; (2) the CLASSI continuum halfspace substructuring method; (3) the SASSI finite element substructuring method; and (4) the FLUSH finite element direct method. Earthquake response data recorded on the containment and internal structure (steam generator and piping) for four earthquake events (LSST06, LSST07, LSST12, and LSST16), with peak ground accelerations ranging from 0.04 g to 0.21 g, were analyzed. The containment SSI system and internal structure system frequencies, and the associated modal damping ratios consistent with the ground shaking intensity of each event, were identified. These results, along with the site soil parameters identified from separate free-field soil response data analyses, were used as the basis for refining the blind-prediction SSI analysis models for each of the four analysis methods evaluated. 12 refs., 5 figs.

  2. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V.; Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  3. Large Subduction Earthquakes along the fossil MOHO in Alpine Corsica: what was the role of fluids?

    NASA Astrophysics Data System (ADS)

    Andersen, Torgeir B.; Deseta, Natalie; Silkoset, Petter; Austrheim, Håkon; Ashwal, Lewis D.

    2014-05-01

    Intermediate-depth subduction earthquakes abruptly release vast amounts of energy to the crust and mantle lithosphere. The products of such drastic deformation events can only rarely be observed in the field because they are mostly lost permanently to subduction. We present new observations of deformation products formed by large fossil subduction earthquakes in Alpine Corsica. These were formed by a few very large and numerous small intermediate-depth earthquakes along the exhumed palaeo-Moho in the Alpine Liguro-Piemontese basin, which together with the 'schistes lustrés' complex experienced blueschist- to lawsonite-eclogite facies metamorphism during the Alpine subduction. The abrupt release of energy resulted in localized shear heating that completely melted both gabbro and peridotite along the Moho. The large volumes of melt generated by at most a few very large earthquakes along the Moho can be studied in the fault- and injection-vein breccia complex preserved in a segment along the Moho fault. The energy required for wholesale melting of a large volume of peridotite per m² of fault plane, combined with estimates of stress drops, shows that a few large earthquakes took place along the Moho of the subducting plate. Since these fault rocks represent intra-plate seismicity, we suggest they formed along the lower seismogenic zone, by analogy with present-day subduction. As demonstrated in previous work by our research team (detailed petrography and EBSD), there is no evidence for prograde dehydration reactions leading up to the co-seismic slip events. Instead, we show that local crystal-plastic deformation in olivine and shear heating were more significant for the runaway co-seismic failure than solid-state dehydration reaction weakening. We therefore disregard dehydration embrittlement as a weakening mechanism for these events, and suggest that shear heating may be the most important weakening mechanism for intermediate-depth earthquakes.

  4. Coseismic and postseismic wave velocity changes caused by large crustal earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Wegler, Ulrich; Shiomi, Katsuhiko; Nakahara, Hisashi

    2014-05-01

    Using Passive Image Interferometry (PII), we analyzed coseismic and postseismic changes of seismic wave velocities caused by the following earthquakes in Japan between 2004 and 2011: the 2005 Fukuoka (Mw 6.6), 2007 Noto Hantō (Mw 6.6) and 2008 Iwate-Miyagi Nairiku (Mw 6.9) earthquakes, three earthquakes in Niigata Prefecture (2004 Mid-Niigata, Mw 6.8; 2007 Chūetsu Offshore, Mw 6.6; 2011 Nagano/Niigata, Mw 6.2), and the 2011 Tohoku earthquake (Mw 9.0) as observed in the four regions of the other earthquakes. The ambient noise time series used for each earthquake spanned from at least half a year before that earthquake until three months after the Tohoku earthquake. Cross-correlations and single-station cross-correlations of several years of ambient seismic noise, recorded mainly by Hi-net sensors in the areas surrounding the respective earthquakes, were calculated in different frequency ranges between 0.125 and 4.0 Hz. Between 10 and 20 seismometers were used in each area, and cross-correlations were calculated for all possible station pairs. Using a simple tomography algorithm, the resulting velocity variations can be reprojected onto the actual station locations. The cross-correlation and single-station cross-correlation techniques give compatible results, the former being more reliable below 0.5 Hz and the latter at higher frequencies. Our analysis yields significant coseismic velocity drops for all analyzed earthquakes; the drops are strongest close to the fault zones and exceed 1% at some stations. The coseismic velocity drops are larger at higher frequencies and recover on a time scale of several years, although not completely within our observation time. Velocity drops are also visible in all areas at the time of the Tohoku earthquake. Furthermore, we measured seasonal velocity variations of the order of 0.1% in all areas which are, at least for
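
    One standard way to extract a relative velocity change dv/v from such noise correlation functions is the stretching method: the current correlation is resampled on a stretched time axis by a trial factor and compared with a reference stack, and the factor maximizing the correlation coefficient is taken as dv/v. A minimal sketch (the grid and sign convention are ours; the study's actual processing may differ in detail):

        import numpy as np

        def stretching_dvv(ref, cur, dt, eps_max=0.01, n_eps=201):
            """Estimate dv/v by stretching `cur` to best match `ref`.

            A homogeneous velocity increase dv/v = eps compresses travel times
            by a factor (1 - eps), so trial eps values are tested on a grid.
            """
            t = np.arange(len(ref)) * dt
            best_eps, best_cc = 0.0, -np.inf
            for eps in np.linspace(-eps_max, eps_max, n_eps):
                stretched = np.interp(t * (1.0 + eps), t, cur)   # resample cur
                cc = np.corrcoef(ref, stretched)[0, 1]
                if cc > best_cc:
                    best_eps, best_cc = eps, cc
            return best_eps, best_cc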

  5. Introduction and Overview: Counseling Psychologists' Roles, Training, and Research Contributions to Large-Scale Disasters

    ERIC Educational Resources Information Center

    Jacobs, Sue C.; Leach, Mark M.; Gerstein, Lawrence H.

    2011-01-01

    Counseling psychologists have responded to many disasters, including the Haiti earthquake, the 2001 terrorist attacks in the United States, and Hurricane Katrina. However, as a profession, their responses have been localized and nonsystematic. In this first of four articles in this contribution, "Counseling Psychology and Large-Scale Disasters,…

  6. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  7. Observations of large earthquakes in the Mexican subduction zone over 110 years

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, Vala; Krishna Singh, Shri; Martínez-Peláez, Liliana; Garza-Girón, Ricardo; Lund, Björn; Ji, Chen

    2016-04-01

    Fault slip during an earthquake is observed to be highly heterogeneous, with areas of large slip interspersed with areas of smaller or even no slip. The cause of this heterogeneity is debated. One hypothesis is that the frictional properties on the fault are heterogeneous: the parts of the rupture surface that slip most during earthquakes are coupled more strongly, whereas the areas in between and around them creep continuously or episodically. The creeping areas can partly release strain energy through aseismic slip during the interseismic period, resulting in relatively lower prestress than on the coupled areas. This would lead to subsequent earthquakes having large slip in the same places, i.e. persistent asperities. A second hypothesis is that in the absence of creeping sections, the prestress is governed mainly by the cumulative stress change associated with previous earthquakes. Assuming homogeneous frictional properties on the fault, larger prestress results in larger slip, so the next earthquake may have large slip where there was little or no slip in the previous one, which translates to non-persistent asperities. Studies of the earthquake cycle are hampered by the short time period for which the high-quality broadband seismological and accelerographic records needed for detailed studies of slip distributions are available. The earthquake cycle in the Mexican subduction zone is relatively short, with about 30 years between large events in many places. We are therefore entering a period for which we have good records for two subsequent events occurring in the same segment of the subduction zone. In this study we compare seismograms of subsequent earthquakes in the Mexican subduction zone rupturing the same patch, recorded either on the Wiechert seismograph or on a modern broadband seismometer located in Uppsala, Sweden. The Wiechert seismograph is unique in the sense that it recorded continuously for more than 80 years

  8. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  9. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
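
    ARPACK is also the backend of SciPy's sparse eigensolvers, so its usage pattern (a few eigenpairs of a large sparse matrix via implicitly restarted Arnoldi iterations) can be illustrated without writing Fortran:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigs   # wraps ARPACK

        # A large sparse nonsymmetric matrix: a 1-D convection-diffusion stencil
        n = 10_000
        A = sp.diags([-1.0, 2.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")

        # Six eigenvalues of largest magnitude and the corresponding eigenvectors
        vals, vecs = eigs(A, k=6, which="LM")
        print(np.sort_complex(vals))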

  10. Precursory measure of interoccurrence time associated with large earthquakes in the Burridge-Knopoff model

    SciTech Connect

    Hasumi, Tomohiro

    2008-11-13

    We studied the statistical properties of interoccurrence times, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics fall into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of the interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime; the fitting accuracy of this distribution worsens as the system moves from the subcritical toward the supercritical state. Because the critical phase of a natural fault system changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
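
    The survivor-function fit referred to here uses a Zipf-Mandelbrot form; one convenient normalization is P(τ > t) = ((t + c)/c)^(-p), so that P(0) = 1. A minimal least-squares fit of that form to empirical interoccurrence times (our parameterization; the paper's may differ in detail):

        import numpy as np
        from scipy.optimize import curve_fit

        def zm_survivor(t, c, p):
            """Zipf-Mandelbrot-type survivor function with P(0) = 1."""
            return ((t + c) / c) ** (-p)

        def fit_survivor(intervals):
            """Fit the Zipf-Mandelbrot form to interoccurrence times."""
            t = np.sort(np.asarray(intervals, dtype=float))
            emp = 1.0 - np.arange(1, len(t) + 1) / (len(t) + 1)   # empirical P(tau > t)
            (c, p), _ = curve_fit(zm_survivor, t, emp,
                                  p0=(np.median(t), 1.0),
                                  bounds=([1e-9, 1e-3], [np.inf, np.inf]))
            return c, p

    A degradation of the goodness of fit of this form over successive time windows would be the precursory measure the authors propose.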

  11. Basin-scale transport of heat and fluid induced by earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Yuen; Wang, Lee-Ping; Manga, Michael; Wang, Chung-Ho; Chen, Chieh-Hung

    2013-08-01

    Large earthquakes are known to cause widespread changes in groundwater flow, yet their relation to subsurface transport is unknown. Here we report systematic changes in groundwater temperature after the 1999 Mw7.6 Chi-Chi earthquake in central Taiwan, documented by a dense network of monitoring wells over a large (17,000 km2) alluvial fan near the epicenter. Analysis of the data reveals a hitherto unknown system of earthquake-triggered basin-wide groundwater flow, which scavenges geothermal heat from depths, changing groundwater temperature across the basin. The newly identified earthquake-triggered groundwater flow may have significant implications for postseismic groundwater supply and quality, contaminant transport, underground repository safety, and hydrocarbon production.

  12. Relay chatter and operator response after a large earthquake: An improved PRA methodology with case studies

    SciTech Connect

    Budnitz, R.J.; Lambert, H.E.; Hill, E.E.

    1987-08-01

    The purpose of this project has been to develop and demonstrate improvements in the PRA methodology used for analyzing earthquake-induced accidents at nuclear power reactors. Specifically, the project addresses methodological weaknesses in the PRA systems analysis used for studying post-earthquake relay chatter and for quantifying human response under high stress. An improved PRA methodology for relay-chatter analysis is developed, and its use is demonstrated through analysis of the Zion-1 and LaSalle-2 reactors as case studies. This demonstration analysis is intended to show that the methodology can be applied in actual cases, and the numerical values of core-damage frequency are not realistic. The analysis relies on SSMRP-based methodologies and data bases. For both Zion-1 and LaSalle-2, assuming that loss of offsite power (LOSP) occurs after a large earthquake and that there are no operator recovery actions, the analysis finds very many combinations (Boolean minimal cut sets) involving chatter of three or four relays and/or pressure switch contacts. The analysis finds that the number of min-cut-set combinations is so large that there is a very high likelihood (of the order of unity) that at least one combination will occur after earthquake-caused LOSP. This conclusion depends in detail on the fragility curves and response assumptions used for chatter. Core-damage frequencies are calculated, but they are probably pessimistic because assuming zero credit for operator recovery is pessimistic. The project has also developed an improved PRA methodology for quantifying operator error under high-stress conditions such as after a large earthquake. Single-operator and multiple-operator error rates are developed, and a case study involving an 8-step procedure (establishing feed-and-bleed in a PWR after an earthquake-initiated accident) is used to demonstrate the methodology.

  13. Rapid Reoccurrence of Large Earthquakes due to Depth Segmentation of the Seismogenic Crust

    NASA Astrophysics Data System (ADS)

    Elliott, J. R.; Parsons, B. E.; Jackson, J. A.; Shan, X.; Sloan, R.; Walker, R. T.

    2010-12-01

    The Mw 6.3 November 2008 and Mw 6.3 August 2009 thrust-fault earthquakes occurred in almost the same location within the North Qaidam thrust system, south of the Qilian Shan/Nan Shan thrust belt and on the northern margin of the Qaidam basin, NE Tibet. This fold-and-thrust belt is the result of the ongoing northward convergence of India with Eurasia, with the rate of NE-SW convergence across it of approximately 10 mm/yr. We measured the coseismic displacements due to each earthquake by constructing radar interferograms using a combination of SAR ENVISAT acquisitions spanning each event separately. For each earthquake, we utilised two look directions on ascending and descending satellite passes, and derived fault and slip models using both look directions simultaneously. The models suggest that the two earthquakes occurred on a near coplanar fault that was segmented in depth, resulting in the arrested rupture of the initial deeper segment of the fault, and only allowing the failure of the upper portion of the crust ten months later. The depth at which the segmentation occurs is approximately coincident with the intersection of the down-dip projection of a range-bounding thrust fault. This suggests that where either an interacting fault geometry or lithological properties allow only part of the seismogenic layer to rupture, the occurrence of a large earthquake does not necessarily result in a reduction of the immediate seismic hazard. Such a geometry may have prevented the failure of the lower part of the seismogenic layer during the 2003 Bam earthquake (Jackson et al., 2006), representing a continuing seismic hazard despite the occurrence of the earthquake.

  14. Complex Nucleation Process of Large North Chile Earthquakes, Implications for Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Meneses, G.; Sobiesiak, M.; Madariaga, R. I.

    2014-12-01

    We studied the nucleation process of Northern Chile events, including the large Tocopilla 2007 Mw 7.8 and Iquique 2014 Mw 8.1 earthquakes, as well as the background seismicity recorded from 2011 to 2013 by the ILN temporary network and the IPOC and CSN permanent networks. We built a catalogue of 393 events starting from the CSN catalogue, which is complete above Mw 3.0 in Northern Chile, relocating each event and computing its moment magnitude. We also computed the Early Warning (EW) parameters Pd, Pv, τc and IV2 for each event, including 13 earthquakes of Mw > 6.0 that occurred between 2007 and 2012, and added part of the seismicity from the March-April 2014 period. We find that Pd, Pv and IV2 are good estimators of magnitude for interplate thrust and intraplate intermediate-depth events with Mw between 4.0 and 6.0; for larger events, however, the EW parameters saturate. The Tocopilla 2007 and Iquique 2014 earthquake sequences were studied in detail. Almost all events with Mw > 6.0 present precursory signals, with the largest amplitudes occurring several seconds after the first P wave arrival. The recent Mw 8.1 Iquique 2014 earthquake was preceded by low-amplitude P waves for 20 s before the main asperity broke. Magnitude estimation improves if longer P wave windows are used for the EW parameters; during the Iquique earthquake, however, there was a practical limit because the first S waves arrived before the P waves from the main rupture. The 4 s P-wave Pd parameter gave Mw 7.1 for the Mw 8.1 Iquique 2014 earthquake and Mw 7.5 for the Mw 7.8 Tocopilla 2007 earthquake.
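
    Pd, as used in such earthquake early warning work, is the peak absolute displacement in a short window (here 4 s) after the P arrival on a displacement trace, and magnitude follows from a regression of log Pd against distance. A sketch of both steps; the regression coefficients below are placeholders, not values calibrated for Chile:

        import numpy as np

        def pd_parameter(disp, fs, p_pick_s, window_s=4.0):
            """Peak |displacement| in the first window_s seconds after the P pick."""
            i0 = int(p_pick_s * fs)
            return np.max(np.abs(disp[i0:i0 + int(window_s * fs)]))

        def magnitude_from_pd(pd_cm, dist_km, a=1.0, b=1.2, c=1.5):
            """Invert a generic regression log10(Pd) = a + b*M - c*log10(R) for M.

            a, b, c are placeholder coefficients; real implementations
            calibrate them regionally.
            """
            return (np.log10(pd_cm) - a + c * np.log10(dist_km)) / b

    The saturation the authors report corresponds to Pd ceasing to grow with magnitude once the rupture outlasts the short P-wave window.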

  15. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  16. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. The award had two major goals: first, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters; and second, to begin to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply it to surveys as the data became available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  17. Active structural growth in central Taiwan in relationship to large earthquakes and pore-fluid pressures

    NASA Astrophysics Data System (ADS)

    Yue, Li-Fan

    Central Taiwan is subject to substantial long-term earthquake risk, with a population of five million and two disastrous earthquakes in the last century, the 1935 ML = 7.1 Tuntzuchiao and 1999 Mw = 7.6 Chi-Chi earthquakes. Rich data from these earthquakes, combined with the substantial surface and subsurface data accumulated from petroleum exploration, form the basis for these studies of the growth of structures in successive large earthquakes and their relationship to pore-fluid pressures. Chapter 1 documents the structural context of the bedding-parallel Chelungpu thrust that slipped in the Chi-Chi earthquake, showing for this richly instrumented earthquake the close geometric relationships between the complex 3D fault shape and the heterogeneous coseismic displacements constrained by geodesy and seismology. Chapter 2 studies the accumulation of deformation over successive large earthquakes through flights of fluvial terraces deposited over the Chelungpu and adjacent Changhua thrusts, which record deformation on a timescale of tens of thousands of years. These two structures, involving the same stratigraphic sequence, show fundamentally different kinematics of deformation, with correspondingly contrasting hanging-wall structural geometries. The heights and shapes of the deformed terraces allow testing of existing theories of fault-related folding. Terrace dating constrains a combined shortening rate of 37 mm/yr, 45% of the total Taiwan plate-tectonic rate, and indicates a substantial earthquake risk for the Changhua thrust. Chapter 3 addresses the long-standing problem of the mechanics of long, thin thrust sheets, such as the Chelungpu and Changhua thrusts in western Taiwan, by presenting a natural test of the classic Hubbert-Rubey hypothesis, which holds that ambient excess pore-fluid pressure substantially reduces the effective fault friction, allowing the thrusts to move. Pore-fluid pressure data obtained from 76 wells

  18. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults and is underlain by the currently locked, west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a comprehensive study of Wellington's earthquake risk; its objective is to position Wellington to become more resilient, through an encompassing study of the likelihood of large earthquakes and of the effects and impacts of these earthquakes on people and the built environment. As part of the "It's Our Fault" project, we are estimating ground motions from potential large plate-boundary earthquakes, and we present the latest simulation results in terms of response spectra and acceleration time histories. First we characterize the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. We then consider a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. The study also includes simulations of large historical subduction events from around the world translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan, earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground
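
    EXSIM belongs to the stochastic-simulation family, in which windowed Gaussian noise is given the amplitude spectrum of an omega-squared source with path and site attenuation. A single point-source caricature of that idea (all constants are illustrative; EXSIM itself handles finite faults, subsources, and calibrated regional attenuation):

        import numpy as np

        def stochastic_waveform_shape(m0_Nm, stress_bars, dist_km, fs=50.0,
                                      dur_s=40.0, beta_kms=3.5, kappa=0.04, q0=200.0):
            """Normalized stochastic acceleration shape (Boore-style point source)."""
            n = int(dur_s * fs)
            f = np.fft.rfftfreq(n, 1.0 / fs)
            f[0] = 1e-3                                   # avoid divide-by-zero
            # Brune corner frequency (M0 in dyne-cm, stress drop in bars)
            fc = 4.9e6 * beta_kms * (stress_bars / (m0_Nm * 1e7)) ** (1.0 / 3.0)
            source = (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2)   # omega-squared accel.
            path = np.exp(-np.pi * f * dist_km / (q0 * beta_kms)) / dist_km
            site = np.exp(-np.pi * kappa * f)
            target = source * path * site
            noise = np.random.default_rng(0).standard_normal(n) * np.hanning(n)
            spec = np.fft.rfft(noise)
            spec *= target / np.clip(np.abs(spec), 1e-20, None)   # impose target spectrum
            acc = np.fft.irfft(spec, n)
            return acc / np.max(np.abs(acc))              # shape only, no absolute units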

  19. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  20. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  1. Detection of large prehistoric earthquakes in the Pacific Northwest by microfossil analysis.

    PubMed

    Mathewes, R W; Clague, J J

    1994-04-29

    Geologic and palynological evidence for rapid sea level change approximately 3400 and approximately 2000 carbon-14 years ago (3600 and 1900 calendar years ago) has been found at sites up to 110 kilometers apart in southwestern British Columbia. Submergence on southern Vancouver Island and slight emergence on the mainland during the older event are consistent with a great (magnitude M ≥ 8) earthquake on the Cascadia subduction zone. The younger event is characterized by submergence throughout the region and may also record a plate-boundary earthquake or a very large crustal or intraplate earthquake. Microfossil analysis can detect small amounts of coseismic uplift and subsidence that leave little or no lithostratigraphic signature. PMID:17737954

  2. One-Way Markov Process Approach to Repeat Times of Large Earthquakes in Faults

    NASA Astrophysics Data System (ADS)

    Tejedor, Alejandro; Gomez, Javier B.; Pacheco, Amalio F.

    2012-11-01

    One of the uses of Markov chains is the simulation of the seismic cycle in a fault, i.e. as a renewal model for the repetition of its characteristic earthquakes. This representation is consistent with Reid's elastic rebound theory. We propose a general one-way Markovian model in which the waiting-time distribution, its first moments, coefficient of variation, and functions of error and alarm (related to the predictability of the model) can be obtained analytically. The fact that in any one-way Markov cycle the coefficient of variation of the corresponding distribution of cycle lengths is always lower than one concurs with observations of large earthquakes on seismic faults. The waiting-time distribution of one of the limits of this model is the negative binomial distribution; as an application, we use it to fit the Parkfield earthquake series on the San Andreas fault, California.
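
    As a sanity check on the negative-binomial application, the Parkfield characteristic-earthquake years (1857, 1881, 1901, 1922, 1934, 1966, 2004) can be fit by the method of moments. This is not the authors' fitting procedure, but it reproduces the qualitative point that the coefficient of variation of the cycle lengths is below one:

        import numpy as np

        years = np.array([1857, 1881, 1901, 1922, 1934, 1966, 2004])
        intervals = np.diff(years)                       # interevent times, years

        # Method-of-moments fit of NB(r, p): mean = r(1-p)/p, var = r(1-p)/p**2,
        # treating each calendar year as one trial of the renewal process
        mean, var = intervals.mean(), intervals.var(ddof=1)
        p = mean / var                                   # needs var > mean
        r = mean * p / (1.0 - p)
        cv = intervals.std(ddof=1) / mean
        print(f"mean={mean:.1f} yr, CV={cv:.2f}, r={r:.2f}, p={p:.2f}")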

  3. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the east-west trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to analyzing the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We modelled wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.

  4. Particle precipitation prior to large earthquakes of both the Sumatra and Philippine Regions: A statistical analysis

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano

    2015-12-01

    A study of the statistical correlation between low L-shell electrons precipitating into the atmosphere and strong earthquakes is presented. More than 11 years of data from the Medium Energy Protons Electrons Detector on the NOAA-15 Sun-synchronous polar-orbiting satellite were analysed. Electron fluxes were analysed using a set of adiabatic coordinates, and significant electron counting rate fluctuations were identified during geomagnetically quiet periods. Electron counting rates were compared to earthquakes by defining a seismic-event L-shell, obtained by radially projecting the epicentre positions to a given altitude towards the zenith. Counting rates in each satellite semi-orbit were grouped with strong seismic events whose L-shell coordinates were close to each other. NOAA-15 electron data from July 1998 to December 2011 were compared with nearly 1800 earthquakes of magnitude 6 or larger occurring worldwide. When considering 30-100 keV precipitating electrons detected by the vertical NOAA-15 telescope and earthquake epicentre projections at altitudes greater than 1300 km, a significant correlation appeared, with electron precipitation detected 2-3 h prior to large events in the Sumatra and Philippine regions. This is in physical agreement with the different correlation times obtained in past studies that considered particles of higher energies. The discussion of satellite orbits and detectors given here is useful for future satellite missions for earthquake mitigation.

  5. Basin-scale transport of heat and fluid induced by earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, C.; Manga, M.; Wang, L.; Chen, C.

    2013-12-01

    Large earthquakes are known to cause widespread changes in groundwater flow at distances of thousands of kilometers from the epicenter, yet their relation to subsurface transport is unknown. Since groundwater flow is effective in transporting subsurface heat, studies of earthquake-induced changes in groundwater temperature may be useful for better understanding earthquake-induced heat transport. Here we report systematic changes in groundwater temperature after the 1999 Mw 7.6 Chi-Chi earthquake in central Taiwan, recorded by a dense network of monitoring wells over a large (1,800 km2) alluvial fan near the epicenter. The data document a clear trend from negative changes (temperature decrease) near the upper rim of the fan close to the ruptured fault to positive changes (temperature increase) near the coast. Analysis of the data reveals a hitherto unknown system of earthquake-triggered, basin-wide groundwater flow, which scavenges geothermal heat from depth, changing groundwater temperature across the basin. The newly identified earthquake-triggered groundwater flow may have significant implications for post-seismic groundwater supply and quality, contaminant transport, underground repository safety, and hydrocarbon production.

  6. Evidence for earthquake triggering of large landslides in coastal Oregon, USA

    USGS Publications Warehouse

    Schulz, W.H.; Galloway, S.L.; Higgins, J.D.

    2012-01-01

    Landslides are ubiquitous along the Oregon coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement rates, and total movement suggest that many are at least several hundreds of years old. The offshore Cascadia subduction zone produces great earthquakes every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or earthquake-induced ground motion using field observations and modeling of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement rates suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia earthquake was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support earthquake ground motion as their triggering mechanism. Limit-equilibrium slope-stability modeling suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. Modeling suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone earthquake would trigger formation of the rockslides. Displacement modeling following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff
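
    The Newmark displacement approach cited above treats the landslide as a rigid block that slides whenever ground acceleration exceeds the slide's critical (yield) acceleration, and double-integrates the excess. A minimal sketch follows; the acceleration history and critical acceleration are illustrative stand-ins, not the paper's site-specific inputs.

        import numpy as np

        def newmark_displacement(acc, dt, a_crit):
            """Rigid sliding-block (Newmark) displacement for one record.

            acc    : ground acceleration history (m/s^2)
            dt     : sample interval (s)
            a_crit : critical (yield) acceleration of the slide (m/s^2)
            """
            v = d = 0.0                       # sliding velocity and displacement
            for a in acc:
                if a > a_crit or v > 0.0:     # block starts or continues sliding
                    v = max(v + (a - a_crit) * dt, 0.0)   # one-directional sliding
                    d += v * dt
            return d

        # Illustrative input: a decaying 0.5 Hz pulse train, a_crit = 1 m/s^2
        t = np.arange(0.0, 40.0, 0.01)
        acc = 3.0 * np.sin(2 * np.pi * 0.5 * t) * np.exp(-t / 15.0)
        print(f"Newmark displacement: {newmark_displacement(acc, 0.01, 1.0):.2f} m")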

  7. “PLAFKER RULE OF THUMB” RELOADED: EXPERIMENTAL INSIGHTS INTO THE SCALING AND VARIABILITY OF LOCAL TSUNAMIS TRIGGERED BY GREAT SUBDUCTION MEGATHRUST EARTHQUAKES

    NASA Astrophysics Data System (ADS)

    Rosenau, M.; Nerlich, R.; Brune, S.; Oncken, O.

    2009-12-01

    along accretionary margins. Three of the top five tsunami hotspots we identify had giant earthquakes in recent decades (Chile 1960, Alaska 1964, Sumatra-Andaman 2004), and one (Sumatra-Mentawai) started in 2005 to release strain in a possibly moderate mode of sequential large earthquakes. This leaves Cascadia as the major active tsunami hotspot in the focus of tsunami hazard assessment. (Figure caption: Visualization of preliminary versions of the experimentally derived scaling laws for peak nearshore tsunami height (PNTH) as functions of forearc slope and peak earthquake slip (left panel) and moment magnitude (right panel). Note that wave breaking is not yet considered; this renders the extreme peaks of >20 m unrealistic.)

  8. Long-period ocean-bottom motions in the source areas of large subduction earthquakes.

    PubMed

    Nakamura, Takeshi; Takenaka, Hiroshi; Okamoto, Taro; Ohori, Michihiro; Tsuboi, Seiji

    2015-01-01

    Long-period ground motions in plain and basin areas on land can cause large-scale, severe damage to structures and buildings and have been widely investigated for disaster prevention and mitigation. However, such motions in ocean-bottom areas are poorly studied because of their relative insignificance in uninhabited areas and the lack of ocean-bottom strong-motion data. Here, we report on evidence for the development of long-period (10-20 s) motions using deep ocean-bottom data. The waveforms and spectrograms demonstrate prolonged and amplified motions that are inconsistent with attenuation patterns of ground motions on land. Simulated waveforms reproducing observed ocean-bottom data demonstrate substantial contributions of thick low-velocity sediment layers to development of these motions. This development, which could affect magnitude estimates and finite fault slip modelling because of its critical period ranges on their estimations, may be common in the source areas of subduction earthquakes where thick, low-velocity sediment layers are present. PMID:26617193
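
    The prolonged 10-20 s motions described here are exactly the kind of feature a spectrogram makes visible. The sketch below builds a synthetic trace (a short 1 Hz burst followed by slowly decaying 0.07 Hz energy, standing in for real ocean-bottom data) and measures when energy in the 10-20 s period band peaks; sampling rate and amplitudes are assumptions.

        import numpy as np
        from scipy import signal

        fs = 10.0                                  # samples per second (assumed)
        t = np.arange(0.0, 600.0, 1.0 / fs)
        trace = (np.exp(-((t - 60) / 10) ** 2) * np.sin(2 * np.pi * 1.0 * t)               # body waves
                 + 0.5 * np.exp(-((t - 250) / 120) ** 2) * np.sin(2 * np.pi * 0.07 * t))   # long-period coda

        f, tt, Sxx = signal.spectrogram(trace, fs=fs, nperseg=512, noverlap=384)
        band = (f >= 0.05) & (f <= 0.1)            # the 10-20 s period band
        t_peak = tt[np.argmax(Sxx[band].sum(axis=0))]
        print(f"10-20 s energy peaks at t = {t_peak:.0f} s, long after the 1 Hz burst")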

  10. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  11. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  12. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  13. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  15. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  16. How Large can Mexican Subduction Earthquakes be? Evidence of a Very Large Event in 1787 (M~8.5)

    NASA Astrophysics Data System (ADS)

    Suarez, G.

    2007-05-01

    A sequence of very strong earthquakes occurred from 28 March to 18 April, 1787. The first earthquake, on 28 March, appears to be the largest of the sequence; it was followed by three strong events on 29 and 30 March and 3 April, and strong aftershocks continued to be reported until 18 April. The event of 28 March was strongly felt and caused damage in Mexico City, where several buildings were reported to have suffered. The strongest effects, however, were observed on the southeastern coast of Guerrero and Oaxaca. Intensities greater than 8 (MMI) were observed along the coast over a distance of about 400 km. The towns of Ometepec, Jamiltepec and Tehuantepec reported strong damage to local churches and other apparently well-constructed buildings. In contrast to the low intensities observed during the coastal Oaxaca earthquakes of 1965, 1968 and 1978, Oaxaca City reported damage equivalent to intensity 8 to 9 on 28 March, 1787. An unusual effect of this earthquake on the Mexican subduction zone was the presence of a very large tsunami. Three different sources report that in the area known as the Barra de Alotengo (16.2N, 98.2W) the sea retreated for a distance of about one Spanish league (4.1 km). A large wave came back and invaded land for approximately 1.5 leagues (6.2 km). Several local ranchers were swept away by the incoming wave. Along the coast near the town of Tehuantepec, about 400 km to the southeast of Alotengo, a tsunami was also reported to have stranded fish and shellfish inland; in this case no description of the distance penetrated by the tsunami is reported. It is also described that in Acapulco, some 200 km to the northwest of Alotengo, a strong wave was observed and the sea remained agitated for a whole day. Assuming that the subduction zone ruptured from somewhere near Alotengo to the coast of Tehuantepec, the resulting fault length is about 400 to 450 km. This large fault rupture contrasts with the seismic cycle of the Oaxaca coast observed during this century where

  18. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event have resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined, at ±10 yr. Since ~1300 A.D. the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models, rather than the assumption of a random process, to forecast earthquakes, at least for the southern Hayward fault.
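
    The quoted 30-yr probability can be reproduced to first order with a conditional renewal calculation. A sketch assuming a lognormal recurrence model built from the mean RI of 161 yr and CV of 0.40, conditioned on the time elapsed since 1868 (the paper's own renewal model and elapsed-time convention may differ, so this only lands in the same range as the quoted 29%):

        import numpy as np
        from scipy.stats import lognorm

        mean_ri, cv = 161.0, 0.40              # from the trench chronology
        elapsed = 2010 - 1868                  # years since the 1868 rupture
        window = 30.0                          # forecast window (yr)

        sigma = np.sqrt(np.log(1.0 + cv**2))   # lognormal shape parameter from the CV
        mu = np.log(mean_ri) - 0.5 * sigma**2  # set so that E[T] = mean_ri
        dist = lognorm(s=sigma, scale=np.exp(mu))

        # P(rupture within `window` yr | no rupture for `elapsed` yr)
        p = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
        print(f"conditional 30-yr probability ~ {p:.0%}")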

  19. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide.

    PubMed

    Pollitz, Fred F; Stein, Ross S; Sevilgen, Volkan; Bürgmann, Roland

    2012-10-11

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified, with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≤ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure. PMID:23023131
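
    The 10^-7 strain threshold can be tied to observable ground motion: for a travelling surface wave, peak dynamic shear strain is approximately the peak particle velocity divided by the phase velocity. A back-of-the-envelope check, assuming a Love-wave phase velocity of 4.5 km/s (a typical value, not taken from the paper):

        # Dynamic shear strain of a travelling surface wave: strain ~ v_peak / c
        c = 4500.0                  # assumed Love-wave phase velocity, m/s
        threshold = 1e-7            # dynamic strain threshold from the study
        v_needed = threshold * c    # particle velocity that produces this strain
        print(f"strain 1e-7 <-> ~{v_needed * 1e3:.2f} mm/s peak particle velocity")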

  20. Spectral Decay Characteristics in High Frequency Range of Observed Records from Crustal Large Earthquakes (Part 2)

    NASA Astrophysics Data System (ADS)

    Tsurugi, M.; Kagawa, T.; Irikura, K.

    2012-12-01

    Spectral decay characteristics in the high frequency range of observed records from large crustal earthquakes that occurred in Japan are examined. Clarifying the spectral decay characteristics in the high frequency range is very important for strong ground motion prediction for engineering purposes. The authors examined the high-frequency spectral decay characteristics of observed records from three events, the 2003 Miyagi-Ken Hokubu earthquake (Mw 6.1), the 2005 Fukuoka-Ken Seiho-oki earthquake (Mw 6.6), and the 2008 Iwate-Miyagi Nairiku earthquake (Mw 6.9), in a previous study [Tsurugi et al. (2010)]. The target earthquakes in this study are the two events below.
    * EQ No.1 - Origin time: 2011/04/11 17:16; hypocenter: east of Fukushima pref.; Mj 7.0, Mw 6.6; fault type: normal fault.
    * EQ No.2 - Origin time: 2011/03/15 22:31; hypocenter: east of Shizuoka pref.; Mj 6.4, Mw 5.9; fault type: strike slip fault.
    The borehole data of each event are used in the analysis. A Butterworth-type high-cut filter with cut-off frequency fmax and power coefficient of high-frequency decay s [Boore (1983)] is assumed to express the high-cut frequency characteristics of the ground motions. The four parameters, seismic moment, corner frequency, cut-off frequency and its power coefficient of high-frequency decay, are estimated by comparing observed spectra at rock sites with theoretical spectra. The theoretical spectra are calculated from omega-squared source characteristics convolved with propagation-path effects and high-cut filter shapes. As a result, the fmax values of the records are estimated as 8.0 Hz for EQ No.1 and 8.5 Hz for EQ No.2. These values are almost the same as those of other large crustal earthquakes in Japan. The power coefficients s are estimated as 0.78 for EQ No.1 and 1.65 for EQ No.2. The value for EQ No.2 is notably larger than those of other large crustal earthquakes. It seems that the value of the power coefficient, s
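
    A common parameterization of this setup combines an omega-square source spectrum with the Boore (1983) high-cut filter P(f) = [1 + (f/fmax)^(2s)]^(-1/2). The sketch below uses the fmax and s estimates quoted above, but the seismic moments and corner frequencies are placeholders, since the abstract does not give them.

        import numpy as np

        def acc_source_spectrum(f, m0, fc, fmax, s):
            """Omega-square acceleration source spectrum shaped by a
            Boore (1983)-type high-cut (fmax) filter."""
            omega_sq = m0 * (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2)  # omega-square model
            high_cut = (1 + (f / fmax) ** (2 * s)) ** -0.5              # fmax filter
            return omega_sq * high_cut

        f = np.logspace(-1, 1.5, 200)   # 0.1-30 Hz
        eq1 = acc_source_spectrum(f, m0=1e19, fc=0.2, fmax=8.0, s=0.78)  # EQ No.1 (m0, fc assumed)
        eq2 = acc_source_spectrum(f, m0=1e18, fc=0.5, fmax=8.5, s=1.65)  # EQ No.2 (m0, fc assumed)

        i8, i20 = np.argmin(np.abs(f - 8)), np.argmin(np.abs(f - 20))
        for name, spec in (("EQ No.1", eq1), ("EQ No.2", eq2)):
            # larger s -> steeper decay above fmax -> smaller 20 Hz / 8 Hz ratio
            print(name, "20 Hz / 8 Hz amplitude ratio:", round(spec[i20] / spec[i8], 3))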

  1. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using a selected host with the necessary posttranslational modifications is one of the key successes of modern biotechnology. This methodology allows the industrial production of proteins that are otherwise produced only in small quantities. However, the separation and purification of these proteins from the fermentation media constitute a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as separation and concentration of the target protein are achieved simultaneously, with subsequent removal and recycling of the polymer. The ease of scale-up combined with the high partition coefficients obtained allows their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to determine mass transfer coefficients, the hydrodynamic parameters of hold-up, drop size, and velocity, back mixing in the phases, and flooding performance, required for column design, is also provided. PMID:11876297

  2. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  3. Bibliographical search for reliable seismic moments of large earthquakes during 1900-1979 to compute MW in the ISC-GEM Global Instrumental Reference Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Lee, William H. K.; Engdahl, E. Robert

    2015-02-01

    Moment magnitude (MW) determinations from the online GCMT Catalogue of seismic moment tensor solutions (GCMT Catalog, 2011) have provided the bulk of MW values in the ISC-GEM Global Instrumental Reference Earthquake Catalogue (1900-2009) for almost all moderate-to-large earthquakes occurring after 1975. This paper describes an effort to determine MW of large earthquakes that occurred prior to the start of the digital seismograph era, based on credible assessments of thousands of seismic moment (M0) values published in the scientific literature by hundreds of individual authors. MW computed from the published M0 values (for a time period more than twice that of the digital era) are preferable to proxy MW values, especially for earthquakes with MW greater than about 8.5, for which MS is known to be underestimated or "saturated". After examining 1,123 papers, we compile a database of seismic moments and related information for 1,003 earthquakes with published M0 values, of which 967 were included in the ISC-GEM Catalogue. The remaining 36 earthquakes were not included in the Catalogue due to difficulties in their relocation because of inadequate arrival time information. However, 5 of these earthquakes with bibliographic M0 (and thus MW) are included in the Catalogue's Appendix. A search for reliable seismic moments was not successful for earthquakes prior to 1904. For each of the 967 earthquakes a "preferred" seismic moment value (if there is more than one) was selected and its uncertainty was estimated according to the data and method used. We used the IASPEI formula (IASPEI, 2005) to compute direct moment magnitudes (MW[M0]) based on the seismic moments (M0), and assigned their errors based on the uncertainties of M0. From 1900 to 1979, there are 129 great or near great earthquakes (MW ⩾ 7.75) - the bibliographic search provided direct MW values for 86 of these events (or 67%), the GCMT Catalog provided direct MW values for 8 events (or 6%), and the remaining 35
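
    The IASPEI (2005) formula mentioned here is the standard conversion from seismic moment to moment magnitude; with M0 in N·m it reads MW = (2/3)(log10 M0 − 9.1). A minimal implementation:

        import math

        def mw_from_m0(m0_newton_metres):
            """Moment magnitude from seismic moment (IASPEI, 2005):
            MW = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
            return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

        # Example: a published M0 of 4e21 N*m maps to MW ~ 8.3
        print(f"MW = {mw_from_m0(4e21):.2f}")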

  4. Scaling of Seismic Moment with Recurrence Interval for Small Repeating Earthquakes Simulated on Rate-and-State Faults

    NASA Astrophysics Data System (ADS)

    Chen, T.; Lapusta, N.

    2006-12-01

    Observations suggest that the recurrence time T and seismic moment M0 of small repeating earthquakes in Parkfield scale as T ∝ M_0^{0.17} (Nadeau and Johnson, 1998). However, a simple conceptual model of these earthquakes as circular ruptures, with stress drop independent of the seismic moment and slip proportional to the recurrence time T, results in T ∝ M_0^{1/3}. Several explanations for this discrepancy have been proposed. Nadeau and Johnson (1998) suggested that stress drop depends on the seismic moment and is much higher for small events than typical estimates based on seismic spectra. Sammis and Rice (2001) modeled repeating earthquakes at the border between large locked and creeping patches to get T ∝ M_0^{1/6} and reasonable stress drops. Beeler et al. (2001) considered a fixed-area patch governed by a conceptual law that incorporated strain-hardening and showed that aseismic slip on the patch can explain the observed scaling relation. In this study, we provide an alternative physical basis, grounded in laboratory-derived rate and state friction laws, for the idea of Beeler et al. (2001) that much of the overall slip at the sites of small repeating earthquakes may be accumulated aseismically. We simulate repeating events in a 3D model of a strike-slip fault embedded in an elastic space and governed by rate and state friction laws. The fault has a small circular patch (2-20 m in diameter) with steady-state rate-weakening properties, with the rest of the fault governed by steady-state rate strengthening. The simulated fault segment is 40 m by 40 m, with periodic boundary conditions. We use values of rate and state parameters typical of laboratory experiments, with characteristic slip of order several microns. The model incorporates tectonic-like loading equivalent to a plate rate of 23 mm/year and all dynamic effects during unstable sliding. Our simulations use the 3D methodology of Liu and Lapusta (AGU, 2005) and fully resolve all aspects of
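
    The 1/3 exponent in the simple conceptual model follows from standard circular-crack scaling relations (textbook relations, not taken from this paper): with rigidity \mu, mean slip \bar{u}, rupture radius r and stress drop \Delta\sigma,

        M_0 = \mu \bar{u} A, \qquad \bar{u} \propto (\Delta\sigma/\mu)\, r, \qquad A = \pi r^2 \;\Rightarrow\; M_0 \propto \Delta\sigma\, r^3,

    and, if the slip deficit is recovered at the constant plate rate V_{pl},

        T = \bar{u}/V_{pl} \propto r \propto M_0^{1/3} \qquad (\Delta\sigma\ \text{constant}),

    which is the T ∝ M_0^{1/3} baseline that the observed exponent of 0.17 falls well below.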

  5. Finding the Shadows: Local Variations in the Stress Field due to Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Latimer, C.; Tiampo, K.; Rundle, J.

    2009-05-01

    Stress shadows, regions of static stress decrease associated with a large magnitude earthquake, have typically been described through several characteristics or parameters such as location, duration, and size. These features can provide information about the physics of the earthquake itself, as static stress changes depend on the following parameters: the regional stress orientations, the coefficient of friction, and the depth of interest (King et al., 1994). Areas of stress decrease, associated with a decrease in the seismicity rate, while potentially stable in nature, have been difficult to identify in regions with high rates of background seismicity (Felzer and Brodsky, 2005; Hardebeck et al., 1998). In order to obtain information about these stress shadows, we determine their characteristics using the Pattern Informatics (PI) method (Tiampo et al., 2002; Tiampo et al., 2006). The PI method is an objective measure of seismicity rate changes that can be used to locate areas of increases and/or decreases relative to the regional background rate. The latter defines the stress shadows for the earthquake of interest, as seismicity rate changes and stress changes are related (Dieterich et al., 1992; Tiampo et al., 2006). Using the data from the PI method, we invert for the parameters of the modeled half-space with a genetic algorithm inversion technique. Stress changes are calculated using Coulomb stress change theory (King et al., 1994), and the Coulomb 3 program is used as the forward model (Lin and Stein, 2004; Toda et al., 2005). Changes in the regional stress orientation (using PI results from before and after the earthquake) are of the greatest interest, as orientation is the main factor controlling the pattern of the Coulomb stress changes resulting from any given earthquake. Changes in the orientation can lead to conclusions about the local stress field around the earthquake and fault. The depth of interest and the coefficient of friction both
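
    The Coulomb stress change referred to throughout (King et al., 1994) is ΔCFF = Δτ + μ′ Δσ_n, with Δτ the shear stress change resolved in the slip direction, Δσ_n the normal stress change (positive for unclamping), and μ′ an effective friction coefficient. A one-function sketch with illustrative numbers:

        def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
            """Coulomb failure stress change, dCFF = d_tau + mu_eff * d_sigma_n.
            d_tau: shear stress change along slip (bar); d_sigma_n: normal stress
            change, positive = unclamping (bar); mu_eff: effective friction."""
            return d_tau + mu_eff * d_sigma_n

        # A receiver fault with -0.5 bar shear change and +0.2 bar unclamping
        print(coulomb_stress_change(-0.5, 0.2), "bar")   # -0.42 -> inside a stress shadow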

  6. The Scaling of the Slip Weakening Distance (Dc) With Final Slip During Dynamic Earthquake Rupture

    NASA Astrophysics Data System (ADS)

    Tinti, E.; Fukuyama, E.; Cocco, M.; Piatanesi, A.

    2005-12-01

    Several numerical approaches have recently been proposed to retrieve the evolution of dynamic traction during earthquake propagation on extended faults. Although many studies have shown that the shear traction evolution as a function of time and/or slip may be complex, they all reveal an evident dynamic weakening behavior during faulting. The main dynamic parameters describing traction evolution are the yield stress, the residual kinetic stress level, and the characteristic slip weakening distance Dc. Recent investigations of real data yield estimates of large Dc values on the fault plane and a correlation between Dc and the final slip. In this study, we focus our attention on the characteristic slip weakening distance Dc and on its variability over the fault plane. Different physical mechanisms have been proposed to explain the origin of Dc, some of which consider this parameter a scale dependent quantity. We have computed the rupture history from several spontaneous dynamic models, imposing a slip weakening law with prescribed Dc distributions on the fault plane. These synthetic models provide the slip velocity evolution during the earthquake rupture. We have therefore generated a set of slip velocity models by fitting the "true" slip velocity time histories with an analytical source time function. To this goal we use the Yoffe function [Tinti et al., 2005], which is dynamically consistent and allows a flexible parameterization. We use these slip velocity histories as a boundary condition on the fault plane to compute the traction evolution, and we estimate the Dc values from the traction versus slip curves. We then compare the inferred Dc values with those of the original dynamic models and find that the Dc estimates are very sensitive to the adopted slip velocity function. Despite the resolution problem that limits the estimate of Dc from kinematic earthquake models, and the tradeoff that exists between Dc and strength excess, we show that to
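
    The slip weakening law imposed in such models is commonly the linear form, in which traction drops from the yield stress to the residual stress over the distance Dc; the area between the weakening curve and the residual level, G = (τ_y − τ_r) Dc / 2, is the breakdown (fracture) energy. A minimal sketch with illustrative values:

        import numpy as np

        def slip_weakening_traction(slip, tau_y, tau_r, dc):
            """Linear slip-weakening law: traction falls linearly from the yield
            stress tau_y to the residual stress tau_r over the distance dc."""
            slip = np.asarray(slip, dtype=float)
            return tau_y - (tau_y - tau_r) * np.minimum(slip, dc) / dc

        tau_y, tau_r, dc = 30e6, 10e6, 0.8           # Pa, Pa, m (illustrative)
        print(slip_weakening_traction([0.0, 0.4, 0.8, 1.6], tau_y, tau_r, dc))
        print("breakdown energy G =", 0.5 * (tau_y - tau_r) * dc, "J/m^2")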

  7. The north-northwest aftershock pattern of the June 28, 1992 Landers earthquake and the probability of large earthquakes in Indian Wells Valley

    SciTech Connect

    Roquemore, G.R. . Dept. of Geosciences); Simila, G.A. . Dept. of Geological Sciences)

    1993-04-01

    Immediately following the June 28, 1992 Landers earthquake, a strong north-northwest pattern of aftershocks and triggered earthquakes developed. The most intense pattern developed between the north end of the primary rupture on the Emerson fault and southern Owens Valley. The trend of seismicity cuts through the east-west trending Garlock fault at a high angle; the Garlock fault has no apparent effect on the trend or pattern. Within the aftershock zone, south of the Garlock fault, the Calico and Blackwater faults provide the most likely pathway for the Mojave shear zone into Indian Wells and Owens Valleys. In Indian Wells Valley the seismically active Little Lake fault aligns well with the Blackwater fault to the south and the southern Owens Valley fault zone to the north. Several recent research papers suggest that optimum Coulomb failure stress changes caused by the Landers earthquake have enhanced the probability of earthquakes within the north-northwest trending aftershock zone. This increase has greater significance when the presumed optimum Coulomb failure stress changes caused by the 1872 Owens Valley earthquake and its effects on Indian Wells Valley are considered. Indian Wells Valley and the Coso volcanic field may have received two significant stress increases from earthquakes of magnitude 7.5 or greater in the last 120 years. If these two earthquakes increased the shear stress on faults in the Indian Wells/Coso areas, the most likely site for the next large earthquake within the Mojave shear zone may be there. The rate of seismicity within Indian Wells Valley has increased since 1980, including a magnitude 5.0 earthquake in 1982.

  8. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  9. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  10. Rotation change in the orientation of the center-of-figure frame caused by large earthquakes

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangcun; Sun, Wenke; Jin, Shuanggen; Sun, Heping; Xu, Jianqiao

    2016-05-01

    A method to estimate the rotation change in the orientation of the center-of-figure (CF) frame caused by earthquakes is proposed for the first time. This method uses point dislocation theory for a spherical, non-rotating, perfectly elastic and isotropic (SNREI) Earth. The rotation change in the orientation is related solely to the toroidal displacements of degree one induced by the vertical dip slip dislocation; the spheroidal displacements induced by an earthquake make no contribution. The effects of two recent large earthquakes, the 2004 Sumatra and the 2011 Tohoku-Oki events, are studied. The results show that the Sumatra and Tohoku-Oki earthquakes each caused the CF frame to rotate by at least tens of μas (micro-arc-seconds). Although the visible co-seismic displacements are identified and removed from the coordinate time series, the rotation change due to unidentified displacements and errors in their removal is non-negligible. Therefore, the rotation change in the orientation of the CF frame due to seismic deformation should be taken into account in future reference frame and geodesy applications.

  11. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area of the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies of it have been available so far. Furthermore, the seismological literature (from 1979 until 2000) offered only a partial summary, based mainly on Arabic sources; the resulting picture of the area of major effects was very incomplete, making the derived seismic parameters unreliable. This earthquake is in fact one of the most highly documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and the third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (the major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, Vulgar French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study addressed the question of whether there was just one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.

  13. Tectonic stress field of China inferred from a large number of small earthquakes

    NASA Astrophysics Data System (ADS)

    1992-07-01

    Mean principal stress axes were inferred for China using 9621 P wave first motion polarity readings from 5054 small earthquakes (1 ≤ ML ≤ 5). The area studied was divided into 76 subregions. The mean P (compressive), B (intermediate), and T (relatively tensional) axes corresponding to composite focal mechanism solutions of multiple earthquakes in each subregion were determined by a grid test over all possible orientations of the P, B, and T axes with a step of 5° or 10°. In order to obtain a relatively homogeneous sampling in space, we avoided using readings from spatially and temporally clustered earthquakes, and we rechecked most of the polarity readings by inspection of the original seismograms. The focal mechanism solutions of large (M ≥ 6) individual earthquakes are also presented for comparison. The results indicate an overall radial pattern of present-day maximum horizontal stress orientations throughout the continental area of China. This pattern is thought to be closely related to indentor models of the continental collision between the Indian and Eurasian plates.
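
    The grid test described here scores trial mechanisms against observed first-motion polarities. The toy sketch below grids only the P-axis orientation (the study grids all three axes at 5°-10° steps), uses the standard double-couple P-wave radiation pattern, and runs on synthetic polarities; every input is hypothetical.

        import numpy as np

        def sph(az_deg, pl_deg):
            """Unit vector from azimuth and plunge (degrees), N-E-Down frame."""
            az, pl = np.radians(az_deg), np.radians(pl_deg)
            return np.array([np.cos(pl) * np.cos(az), np.cos(pl) * np.sin(az), np.sin(pl)])

        def first_motion(rays, p, t):
            # Double couple from P/T axes: fault normal n ~ t+p, slip d ~ t-p;
            # P-wave radiation pattern ~ 2 (r.n)(r.d)
            n = (t + p) / np.linalg.norm(t + p)
            d = (t - p) / np.linalg.norm(t - p)
            return np.sign(2 * (rays @ n) * (rays @ d))

        rng = np.random.default_rng(1)
        rays = rng.normal(size=(300, 3))
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)    # take-off directions

        obs = first_motion(rays, sph(30, 5), sph(120, 0))      # synthetic "observations"

        # Grid test: trial P axes, with the T axis kept horizontal and orthogonal
        best = max(
            ((np.mean(first_motion(rays, sph(paz, ppl), sph(paz + 90, 0)) == obs), paz, ppl)
             for paz in range(0, 180, 10) for ppl in range(0, 90, 10)),
            key=lambda x: x[0],
        )
        print(f"best polarity fit {best[0]:.0%} at P-axis azimuth {best[1]}, plunge {best[2]}")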

  14. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chrondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  15. Documenting large earthquakes similar to the 2011 Tohoku-oki earthquake from sediments deposited in the Japan Trench over the past 1500 years

    NASA Astrophysics Data System (ADS)

    Ikehara, Ken; Kanamatsu, Toshiya; Nagahashi, Yoshitaka; Strasser, Michael; Fink, Hiske; Usami, Kazuko; Irino, Tomohisa; Wefer, Gerold

    2016-07-01

    The 2011 Tohoku-oki earthquake and tsunami was the most destructive geohazard in Japanese history. However, little is known of the past recurrence of large earthquakes along the Japan Trench. Deep-sea turbidites are potential candidates for understanding the history of such earthquakes. Core samples were collected from three thick turbidite units on the Japan Trench floor near the epicenter of the 2011 event. The uppermost unit (Unit TT1) consists of amalgamated diatomaceous mud (30-60 cm thick) deposited by turbidity currents triggered by shallow subsurface instability on the lower trench slope associated with strong ground motion during the 2011 Tohoku-oki earthquake. The older thick turbidite units (Units TT2 and TT3) also consist of several amalgamated subunits, with thick sand layers in their lower parts. The sedimentological characteristics and the tectonic and bathymetric settings of the Japan Trench floor indicate that these turbidites also originated from two older large earthquakes, potentially similar to the 2011 Tohoku-oki earthquake. A thin tephra layer between Units TT2 and TT3 constrains the age of these earthquakes. Geochemical analysis of volcanic glass shards within the tephra layer indicates that it correlates with the Towada-a tephra (AD 915) from the Towada volcano in northeastern Japan. The stratigraphy of the Japan Trench turbidites resembles that of onshore tsunami deposits on the Sendai and Ishinomaki plains, indicating that the cored uppermost succession of the Japan Trench comprises a 1500-yr record that includes the sedimentary fingerprint of the historical Jogan earthquake of AD 869.

  16. Large-scale polarimetry of large optical galaxies

    NASA Astrophysics Data System (ADS)

    Sholomitskii, G. B.; Maslov, I. A.; Vitrichenko, E. A.

    1999-11-01

    We present preliminary results of wide-field visual CCD polarimetry for large optical galaxies obtained through a concentric multisector radial-tangential polaroid analyzer mounted at the intermediate focus of a Zeiss-1000 telescope. The mean degree of tangential polarization in a 13-arcmin field, determined by processing images with imprinted "orthogonal" sectors, ranges from several percent (M 82) and 0.51% (the spirals M 51, M 81) down to lower values for elliptical galaxies (M 49, M 87). It is emphasized that the parameters of large-scale polarization can be properly determined by using physical models for the galaxies; inclination and azimuthal dependences of the degree of polarization are given for spirals.

  17. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    NASA Astrophysics Data System (ADS)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    rays from the hypocenter around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary of the low- and high-V zone. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated the strain and resulted in the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. Reference Matsubara, M. and K. Obara (2011) The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667. Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004) Recent progress of seismic observation networks in Japan-Hi-net, F-net, K-NET and KiK-net, Research News Earth Planets Space, 56, xv-xxviii.

  18. The SCEC-USGS Dynamic Earthquake Rupture Code Comparison Exercise - Simulations of Large Earthquakes and Strong Ground Motions

    NASA Astrophysics Data System (ADS)

    Harris, R.

    2015-12-01

    I summarize the progress by the Southern California Earthquake Center (SCEC) and U.S. Geological Survey (USGS) Dynamic Rupture Code Comparison Group, that examines if the results produced by multiple researchers' earthquake simulation codes agree with each other when computing benchmark scenarios of dynamically propagating earthquake ruptures. These types of computer simulations have no analytical solutions with which to compare, so we use qualitative and quantitative inter-code comparisons to check if they are operating satisfactorily. To date we have tested the codes against benchmark exercises that incorporate a range of features, including single and multiple planar faults, single rough faults, slip-weakening, rate-state, and thermal pressurization friction, elastic and visco-plastic off-fault behavior, complete stress drops that lead to extreme ground motion, heterogeneous initial stresses, and heterogeneous material (rock) structure. Our goal is reproducibility, and we focus on the types of earthquake-simulation assumptions that have been or will be used in basic studies of earthquake physics, or in direct applications to specific earthquake hazard problems. Our group's goals are to make sure that when our earthquake-simulation codes simulate these types of earthquake scenarios along with the resulting simulated strong ground shaking, that the codes are operating as expected. For more introductory information about our group and our work, please see our group's overview papers, Harris et al., Seismological Research Letters, 2009, and Harris et al., Seismological Research Letters, 2011, along with our website, scecdata.usc.edu/cvws.

  19. The transpressive tectonics and large earthquake distribution along the plate boundary in North Africa

    NASA Astrophysics Data System (ADS)

    Meghraoui, Mustapha; Pondrelli, Silvia

    2010-05-01

    The Tell Atlas and Rif Mountains of northern Africa have been the site of several large and moderate seismic events in recent decades. However, the thrust and fold system of NW Algeria experienced the largest earthquakes of the last centuries along the Africa-Eurasia plate boundary. This shallow seismic activity was very often associated with surface faulting and deformation, as for the Mw 7.3 El Asnam (10/10/1980) and the Mw 6.8 Zemmouri-Boumerdes (21/05/2003) earthquakes. We study the active tectonics along the plate boundary in North Africa using the seismicity database, individual large and moderate earthquakes, seismic moment tensor summation, geodetic measurements (GPS and InSAR), and the structure and kinematics of active faults. Neotectonic structures and significant seismicity (Mw > 5) indicate that coeval east-west trending right-lateral faulting and NE-SW thrust-related folding result from oblique convergence at the plate boundary. A simple block-tectonics model suggests that transpression and block rotation govern the mechanics of the Africa-Eurasia plate boundary in the Tell Atlas and Rif Mountains. The tectonic restraining bend of NW Algeria, combined with the ~5 mm/yr convergence between Africa and Eurasia, accounts for the large seismic activity on the thrust and fold system of the Tell Atlas and the relatively passive deformation along the adjacent sections of the plate boundary.

  20. Rock magnetism constrain response thickness on earth surface to large earthquake: Evidence from the Bajiaomiao Outcrop of the Wenchuan Earthquake Rupture Zone, China

    NASA Astrophysics Data System (ADS)

    Liu, D.; Li, H.; Lee, T.; Song, S.; Sun, Z.; Wang, X.; Chou, Y.; Chevalier, M.; Si, J.; Wang, H.

    2013-12-01

    The 2008 Mw 7.9 Wenchuan earthquake ruptured along two fault zones, the Yingxiu-Beichuan and the Anxian-Guanxian fault zones. The Wenchuan earthquake Fault Scientific Drilling project (WFSD), funded by the Chinese government, drilled five holes close to the two seismic fault zones. Fault gouge of various thicknesses was found in the drill holes and at surface outcrops. In general, a single large earthquake creates a fault gouge layer several centimeters thick, i.e. repeated large earthquakes must have taken place in the Longmen Shan region in order to accumulate the amount of gouge observed here. Rock magnetism is an economical, easily accessible and non-destructive method for deciphering the magnetic mineral assemblage produced during large-earthquake slip, which can give us more information about the dynamics of this intracontinental earthquake, thanks to the occurrence of many large earthquakes as well as to the thick fault gouge present here. The Bajiaomiao outcrop, crossing the Yingxiu-Beichuan seismic fault rupture zone, consists of fault breccia and gouge in the hanging wall and Quaternary conglomerate in the footwall. Samples from the hanging wall of this outcrop were used to study rock magnetic properties. Based on in-situ magnetic susceptibility measurements, high magnetic susceptibility values were found in the fault gouge, possibly induced by newly formed ferrimagnetic minerals. We applied other rock magnetic methods (such as isothermal remanent magnetization (IRM) and high-temperature thermomagnetic (K-T) analysis) to the samples from the Bajiaomiao outcrop. The IRM results show that magnetite is present in the gouge and fault breccia of the hanging wall of the Yingxiu-Beichuan seismic fault rupture zone. Based on the K-T results, magnetite and other ferromagnetic minerals exist in the gouge and fault breccia; the <2 cm thick gouge closest to the fault rupture zone contains magnetite as its only magnetic mineral. This <2 cm gouge was most

  1. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  2. Chronology of historical tsunamis in Mexico and its relation to large earthquakes along the subduction zone

    NASA Astrophysics Data System (ADS)

    Suarez, G.; Mortera, C.

    2013-05-01

    The chronology of historical earthquakes along the subduction zone in Mexico spans a period of approximately 400 years. Although the population density along the coast of Mexico has always been low relative to that of central Mexico, several reports of large subduction earthquakes include references to tsunamis invading the southern coast of Mexico. Here we present a chronology of historical tsunamis affecting the Pacific coast of Mexico and compare it with the historical record of subduction events and with the existing Mexican and worldwide catalogs of tsunamis in the Pacific basin. Due to the geographical orientation of the Pacific coast of Mexico, tsunamis generated on the other subduction zones of the Pacific have not had damaging effects in the country. Among the tsunamis generated by local earthquakes, the largest by far is the one produced by the earthquake of 28 March 1787. The reported tsunami inundated an area reaching over 6 km inland, along a stretch of coast extending for over 450 km. In the last 100 years, two large tsunamis have been reported along the Pacific coast of Mexico. On 22 June 1932 a tsunami with reported wave heights of up to 11 m hit the coast of Jalisco and Colima. The town of Cuyutlan was heavily damaged and approximately 50 people lost their lives due to the impact of the tsunami. This unusual tsunami was generated by an aftershock (M 6.9) of the large 3 June 1932 event (M 8.1); the main shock of 3 June did not produce a perceptible tsunami. It has been proposed that the 22 June event was a tsunami earthquake generated on the shallow part of the subduction zone. On 16 November 1925 an unusual tsunami was reported in the town of Zihuatanejo in the state of Guerrero, Mexico. No earthquake on the Pacific rim occurred at the same time as this tsunami, and the historical record of hurricanes and tropical storms does not list the presence of a meteorological disturbance that

  3. Potential for Large Transpressional Earthquakes along the Santa Cruz-Catalina Ridge, California Continental Borderland

    NASA Astrophysics Data System (ADS)

    Legg, M.; Kohler, M. D.; Weeraratne, D. S.; Castillo, C. M.

    2015-12-01

    Transpressional fault systems comprise networks of high-angle strike-slip and more gently-dipping oblique-slip faults. Large oblique-slip earthquakes may involve complex ruptures of multiple faults with both strike-slip and dip-slip. Geophysical data including high-resolution multibeam bathymetry maps, multichannel seismic reflection (MCS) profiles, and relocated seismicity catalogs enable detailed mapping of the 3-D structure of seismogenic fault systems offshore in the California Continental Borderland. Seafloor morphology along the San Clemente fault system displays numerous features associated with active strike-slip faulting including scarps, linear ridges and valleys, and offset channels. Detailed maps of the seafloor faulting have been produced along more than 400 km of the fault zone. Interpretation of fault geometry has been extended to shallow crustal depths using 2-D MCS profiles and to seismogenic depths using catalogs of relocated southern California seismicity. We examine the 3-D fault character along the transpressional Santa Cruz-Catalina Ridge (SCCR) section of the fault system to investigate the potential for large earthquakes involving multi-fault ruptures. The 1981 Santa Barbara Island (M6.0) earthquake was a right-slip event on a vertical fault zone along the northeast flank of the SCCR. Aftershock hypocenters define at least three sub-parallel high-angle fault surfaces that lie beneath a hillside valley. Mainshock rupture for this moderate earthquake appears to have been bilateral, initiating at a small discontinuity in the fault geometry (~5-km pressure ridge) near Kidney Bank. The rupture terminated to the southeast at a significant releasing step-over or bend and to the northeast within a small (~10-km) restraining bend. An aftershock cluster occurred beyond the southeast asperity along the East San Clemente fault. Active transpression is manifest by reverse-slip earthquakes located in the region adjacent to the principal displacement zone

  4. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  5. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Aamir, Ali; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  6. Hayward Fault: A 50-km-long Locked Patch Regulates Its Large Earthquake Cycle (Invited)

    NASA Astrophysics Data System (ADS)

    Lienkaemper, J. J.; Simpson, R. W.; Williams, P. L.; McFarland, F. S.; Caskey, S. J.

    2010-12-01

    We have documented a chronology of 11 paleoearthquakes on the southern Hayward fault (HS) preceding the Mw6.8, 1868 earthquake. These large earthquakes were both regular and frequent, as indicated by a 0.40 coefficient of variation and mean recurrence interval (MRI) of 161 ± 65 yr (1σ of recurrence intervals). Furthermore, the Oxcal-modeled probability distribution for the average interval resembles a Gaussian rather than a more irregular Brownian passage time distribution. Our revised 3D-modeling of subsurface creep, using newly updated long-term creep rates, now suggests there is only one ~50-km-long locked patch (instead of two), confined laterally between two large patches of deep creep (≥9 km), with an extent consistent with evidence for the 1868 rupture. This locked patch and the fault’s lowest rates of surface creep are approximately centered on HS’s largest bend and a large gabbro body, particularly where the gabbro forms both east and west faces of the fault. We suggest that this locked patch serves as a mechanical capacitor, limiting earthquake size and frequency. The moment accumulation over 161 yr summed on all locked elements of the model reaches Mw6.79, but if half of the moment stored in the creeping elements were to fail dynamically, Mw could reach 6.91. The paleoearthquake histories for nearby faults of the San Francisco Bay region appear to indicate less regular and frequent earthquakes, possibly because most lack the high proportion (40-60%) of aseismic release found on the Hayward fault. The northernmost Hayward fault and Rodgers Creek fault (RCF) appear to rupture only half as frequently as the HS and are separated from the HS by a creep buffer and 5-km wide releasing bend respectively, both tending to limit through-going ruptures. The paleoseismic record allows multi-segment, Hayward fault-RCF ruptures, but does not require it. The 1868 HS rupture preceded the 1906 multi-segmented San Andreas fault (SAF) rupture, perhaps because the HS
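
    The regularity statistics quoted above (mean recurrence interval and coefficient of variation of the intervals) are simple to compute from an event chronology. A minimal Python sketch, using hypothetical event dates rather than the study's paleoseismic data:

        import numpy as np

        # Hypothetical paleoearthquake dates (years CE); stand-ins for the
        # dated Hayward fault chronology, not the actual data.
        event_years = np.array([300, 470, 620, 800, 950, 1100,
                                1260, 1420, 1580, 1725, 1868])

        intervals = np.diff(event_years)       # recurrence intervals (yr)
        mri = intervals.mean()                 # mean recurrence interval
        cov = intervals.std(ddof=1) / mri      # coefficient of variation

        print(f"MRI = {mri:.0f} yr, COV = {cov:.2f}")

    A low COV (the study reports 0.40) is what marks the recurrence as relatively regular; a Poisson process would give a COV near 1.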

  7. Precision Measurement of Large Scale Structure

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2001-01-01

    The purpose of this grant was to develop and to start to apply new precision methods for measuring the power spectrum and redshift distortions from the anticipated new generation of large redshift surveys. A highlight of work completed during the award period was the application of the new methods developed by the PI to measure the real space power spectrum and redshift distortions of the IRAS PSCz survey, published in January 2000. New features of the measurement include: (1) measurement of power over an unprecedentedly broad range of scales, 4.5 decades in wavenumber, from 0.01 to 300 h/Mpc; (2) at linear scales, not one but three power spectra are measured, the galaxy-galaxy, galaxy-velocity, and velocity-velocity power spectra; (3) at linear scales each of the three power spectra is decorrelated within itself, and disentangled from the other two power spectra (the situation is analogous to disentangling scalar and tensor modes in the Cosmic Microwave Background); and (4) at nonlinear scales the measurement extracts not only the real space power spectrum, but also the full line-of-sight pairwise velocity distribution in redshift space.

  8. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  9. Global large deep-focus earthquakes: Source process and cascading failure of shear instability as a unified physical mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wen, Lianxing

    2015-08-01

    We apply a multiple source inversion method to systematically study the source processes of 25 large deep-focus (depth >400 km) earthquakes with Mw > 7.0 from 1994 to 2012, based on waveform modeling of P, pP, SH and sSH wave data. The earthquakes are classified into three categories based on spatial distributions and focal mechanisms of the inferred sub-events: 1) category one, with non-planar distribution and variable focal mechanisms of sub-events, represented by the 1994 Mw 8.2 Bolivia earthquake and the 2013 Mw 8.3 Okhotsk earthquake; 2) category two, with planar distribution but focal mechanisms inconsistent with the plane, including eighteen earthquakes; and 3) category three, with planar distribution and focal mechanisms consistent with the plane, including six earthquakes. We discuss possible physical mechanisms for earthquakes in each category in the context of plane rupture, transformational faulting and shear thermal instability. We suggest that the inferred source processes of large deep-focus earthquakes can be best interpreted by cascading failure of shear thermal instabilities in pre-existing weak zones, with the perturbation of stress generated by one shear instability triggering another, and with focal mechanisms of the sub-events controlled by orientations of the pre-existing weak zones. The proposed mechanism can also explain the great variability of focal mechanisms, the presence of large values of CLVD (Compensated Linear Vector Dipole) and the super-shear rupture of deep-focus earthquakes observed in previous studies. In addition, our results suggest the existence of the scaling relations seismic moment ∼ (source duration)^3 and moment ∼ (source dimension)^3 in large deep-focus earthquakes.
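
    The two relations in the closing sentence are the classic signature of self-similar rupture. Writing them out (a standard rearrangement, not quoted from the paper):

        M_0 \propto \tau^3  and  M_0 \propto L^3  \Rightarrow  \Delta\sigma \sim M_0 / L^3 \approx \mathrm{const},

    so that source duration grows as \tau \propto M_0^{1/3} and the static stress drop \Delta\sigma is independent of earthquake size.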

  10. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376
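
    For context, such dimension estimates build on the standard correlation-sum approach, sketched below (a generic Grassberger-Procaccia estimate; the boundary-effect normalization introduced in the paper is not reproduced here):

        import numpy as np

        def correlation_dimension(pts, radii):
            """Correlation sum C(r) and its log-log slope as a crude
            dimension estimate. Boundary and finite-size effects bias
            this estimate, which is the problem the paper addresses."""
            n = len(pts)
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            dists = d[np.triu_indices(n, k=1)]
            c = np.array([np.mean(dists < r) for r in radii])
            slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
            return c, slope

        # Random points in the unit cube should give a slope near 3.
        pts = np.random.default_rng(0).random((1500, 3))
        _, dim = correlation_dimension(pts, np.logspace(-1.5, -0.5, 12))
        print(f"estimated dimension ~ {dim:.2f}")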

  11. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg², the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg². The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  12. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  13. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Marriage, Tobias; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Eimer, J.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    Some of the most compelling inflation models predict a background of primordial gravitational waves (PGW) detectable by their imprint of a curl-like "B-mode" pattern in the polarization of the Cosmic Microwave Background (CMB). The Cosmology Large Angular Scale Surveyor (CLASS) is a novel array of telescopes to measure the B-mode signature of the PGW. By targeting the largest angular scales (>2°) with a multifrequency array, novel polarization modulation and detectors optimized for both control of systematics and sensitivity, CLASS sets itself apart in the field of CMB polarization surveys and opens an exciting new discovery space for the PGW and inflation. This poster presents an overview of the CLASS project.

  14. Scaling relations for large Martian valleys

    NASA Astrophysics Data System (ADS)

    Som, Sanjoy M.; Montgomery, David R.; Greenberg, Harvey M.

    2009-02-01

    The dendritic morphology of Martian valley networks, particularly in the Noachian highlands, has long been argued to imply a warmer, wetter early Martian climate, but the character and extent of this period remains controversial. We analyzed scaling relations for the 10 large valley systems incised in terrain of various ages, resolvable using the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Imaging System (THEMIS). Four of the valleys originate in point sources with negligible contributions from tributaries, three are very poorly dissected with a few large tributaries separated by long uninterrupted trunks, and three exhibit the dendritic, branching morphology typical of terrestrial channel networks. We generated width-area and slope-area relationships for each because these relations are identified as either theoretically predicted or robust terrestrial empiricisms for graded precipitation-fed, perennial channels. We also generated distance-area relationships (Hack's law) because they similarly represent robust characteristics of terrestrial channels (whether perennial or ephemeral). We find that the studied Martian valleys, even the dendritic ones, do not satisfy those empiricisms. On Mars, the width-area scaling exponent b of -0.7 to 4.7 contrasts with values of 0.3-0.6 typical of terrestrial channels; the slope-area scaling exponent θ ranges from -25.6 to 5.5, whereas values of 0.3-0.5 are typical on Earth; the length-area (Hack's law) exponent n ranges from 0.47 to 19.2, while values of 0.5-0.6 are found on Earth. None of the valleys analyzed satisfy all three relations typical of terrestrial perennial channels. As such, our analysis supports the hypotheses that ephemeral and/or immature channel morphologies provide the closest terrestrial analogs to the dendritic networks on Mars, and point source discharges provide terrestrial analogs best suited to describe the other large Martian valleys.
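
    For reference, the three empiricisms being tested are conventionally written as power laws in contributing drainage area A (standard geomorphic notation, not quoted from the paper):

        W \propto A^{b},   S \propto A^{-\theta},   L \propto A^{n},

    where W is channel width, S is local slope, and L is stream length measured from the divide (Hack's law); the terrestrial ranges quoted above (b ≈ 0.3-0.6, θ ≈ 0.3-0.5, n ≈ 0.5-0.6) are the exponents of these fits.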

  15. Fast Estimate of Rupture Process of Large Earthquakes via Real Time Hi-net Data

    NASA Astrophysics Data System (ADS)

    Wang, D.; Kawakatsu, H.; Mori, J. J.

    2014-12-01

    We developed a real time system based on the Hi-net seismic array that can offer fast and reliable source information, for example, source extent and rupture velocity, for earthquakes that occur at distances of roughly 30°-85° with respect to the array center. We perform a continuous grid search on a Hi-net real time data stream to identify possible source locations (following Nishida, K., Kawakatsu, H., and S. Obara, 2008). Earthquakes that occur outside the bright area of the array (30°-85° with respect to the array center) will be ignored. Once a large seismic event is identified successfully, back-projection will be implemented to trace the source propagation and energy radiation. Results from extended global GRiD-MT and real time W phase inversion will be combined for better identification of large seismic events. The time required is mainly due to the travel time from the epicenter to the array stations, so we can obtain results within 6 to 13 min depending on the epicentral distance. This system can offer fast and robust estimates of earthquake source information, which will be useful for disaster mitigation, such as tsunami evacuation, emergency rescue, and aftershock hazard evaluation.
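
    At its core, the back-projection step is a shift-and-stack operation over a grid of trial sources. A minimal sketch of that kernel (hypothetical inputs and travel-time table, not the Hi-net implementation):

        import numpy as np

        def back_project(waveforms, t_travel, dt, win):
            """Stack squared amplitudes after aligning each trace on its
            predicted travel time to each trial source location.

            waveforms : (n_sta, n_samp) band-passed traces
            t_travel  : (n_grid, n_sta) predicted travel times (s)
            dt        : sample interval (s); win : stack length (samples)
            """
            n_grid, n_sta = t_travel.shape
            energy = np.zeros(n_grid)
            for g in range(n_grid):
                stack = np.zeros(win)
                for s in range(n_sta):
                    # assumes the window stays inside the trace (no bounds checks)
                    i0 = int(round(t_travel[g, s] / dt))
                    stack += waveforms[s, i0:i0 + win] ** 2
                energy[g] = stack.sum()
            return energy  # peaks mark likely source locations

    Sliding the stack window through time turns the trajectory of the peak into an image of rupture propagation, which is what the system traces for large events.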

  16. The Validity and Reliability Work of the Scale That Determines the Level of the Trauma after the Earthquake

    ERIC Educational Resources Information Center

    Tanhan, Fuat; Kayri, Murat

    2013-01-01

    In this study, the aim was to develop a short, comprehensible, easily applicable scale, appropriate to cultural characteristics, for evaluating mental trauma related to earthquakes. The universe of the research consisted of all individuals living under the effects of the earthquakes which occurred in Tabanli Village on 23.10.2011 and…

  17. Global and along-strike variations of source duration and scaling for intermediate-depth and deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Poli, Piero; Prieto, German

    2014-12-01

    The systematic behavior of earthquake rupture as a function of earthquake magnitude and/or tectonic setting is a key to our understanding of the physical mechanisms involved during earthquake rupture. Geophysical evidence suggests that although deep earthquakes (both intermediate-depth and deep-focus) appear similar to shallow ones, the mechanism involved is different. In particular, the magnitude and depth dependence of scaled duration, a measure of earthquake rupture duration, has led to controversy over what controls deep earthquake behavior. Here we estimate scaled source durations for 600 intermediate-depth and deep-focus earthquakes recorded at teleseismic distances and show deviation from self-similar scaling. No depth dependence is observed, which we interpret as indicating little difference between intermediate-depth and deep-focus earthquake mechanisms. The data show no correlation between durations and plate age or thermal parameters, suggesting that the thermal properties of the plate have little effect on source durations. We nevertheless report differences in average source duration and scaling between subduction zones, and along-strike variations of source durations that more closely resemble the geometry of subduction (flat or steep subduction) rather than plate age.
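
    "Scaled duration" here denotes the usual normalization that removes the expected cube-root dependence of duration on seismic moment, so that events of different size can be compared directly (a standard definition, stated for clarity):

        \tau_s = \tau \, (M_0^{\mathrm{ref}} / M_0)^{1/3}.

    Under self-similar scaling \tau_s is independent of M_0, so any residual magnitude or depth trend in \tau_s is a deviation from self-similarity of the kind the study measures.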

  18. Patterns of Seismicity Characterizing the Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Yoder, M. R.; Holliday, J. R.; Schultz, K.; Wilson, J. M.; Donnellan, A.; Grant Ludwig, L.

    2015-12-01

    A number of methods to calculate probabilities of major earthquakes have recently been proposed. Most of these methods depend upon understanding patterns of small earthquakes preceding the large events. For example, the Natural Time Weibull method for earthquake forecasting (see www.openhazards.com) is based on the assumption that large earthquakes complete the Gutenberg-Richter scaling relation defined by the smallest earthquakes. Here we examine the scaling patterns of small earthquakes occurring between cycles of large earthquakes. For example, in the region of California-Nevada between longitudes 130 to 114 degrees West and latitudes 32 to 45 degrees North, we find 79 earthquakes having magnitudes M ≥ 6 during the time interval 1933-present, culminating with the most recent event, the M6.0 Napa, California earthquake of August 24, 2014. Thus we have 78 complete cycles of large earthquakes in this region. After compiling and stacking the smaller events occurring between the large events, we find a characteristic pattern of scaling for the smaller events. This pattern shows a scaling relation for the smallest earthquakes up to about M3, with a departure from that scaling for earthquakes of roughly M4.5-6; b-values of the small-magnitude scaling line are 0.85 for the entire interval 1933-present. Extrapolation of the small-magnitude scaling line indicates that the average cycle tends to be completed by a large earthquake having M~6.4. In addition, statistics indicate that departure of the successive earthquake cycles from their average pattern can be characterized by Coefficients of Variability and other measures. We discuss these ideas and apply them not only to California, but also to other seismically active areas in the world.
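
    The extrapolation argument, namely that a cycle is "completed" at the magnitude where the small-event scaling line predicts one event per cycle, can be sketched in a few lines of Python (illustrative numbers, not the catalog fit):

        # Gutenberg-Richter: log10 N(>=M) = a - b*M for one average cycle.
        b = 0.85    # slope of the small-magnitude scaling line (from the abstract)
        a = 5.4     # hypothetical per-cycle intercept, for illustration only

        # Magnitude at which the line predicts exactly one event per cycle:
        M_complete = a / b
        print(f"cycle-completing magnitude ~ M{M_complete:.1f}")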

  19. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  20. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  1. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  2. Nonthermal Components in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco

    2004-12-01

    I address the issue of nonthermal processes in the large scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to γ-ray and describe the processes that are thought be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of γ-ray astronomy for the progress in the field. Non detections at these photon energies have already allowed us important conclusions. Future observations will tell us more about the physics of the intracluster medium, shocks dissipation and CR acceleration.

  3. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
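
    A one-dimensional analog illustrates how such band gaps arise: for an infinite periodic bilayer, the Bloch dispersion relation is available in closed form, and frequencies where |cos(qΛ)| would exceed 1 lie in a gap. A sketch with rough concrete/steel parameters (a textbook 1-D model, not the authors' FDTD calculation):

        import numpy as np

        # Assumed layer properties: density (kg/m^3), wave speed (m/s), thickness (m)
        rho1, c1, d1 = 2400.0, 3500.0, 2.0    # concrete
        rho2, c2, d2 = 7800.0, 5900.0, 0.5    # steel
        Z1, Z2 = rho1 * c1, rho2 * c2         # acoustic impedances

        freqs = np.linspace(0.1, 100.0, 2000)   # Hz
        k1 = 2 * np.pi * freqs / c1
        k2 = 2 * np.pi * freqs / c2

        # 1-D periodic-bilayer dispersion: cos(q*(d1+d2)) equals the rhs below;
        # |rhs| > 1 means no real Bloch wavenumber, i.e. a band gap.
        rhs = (np.cos(k1 * d1) * np.cos(k2 * d2)
               - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(k1 * d1) * np.sin(k2 * d2))

        in_gap = np.abs(rhs) > 1.0
        edges = np.flatnonzero(np.diff(in_gap.astype(int)))
        print("approximate band-gap edges (Hz):", np.round(freqs[edges], 1))

    Whether a gap overlaps the low-frequency band relevant for seismic waves depends on the layer dimensions, which is the tuning the abstract refers to.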

  4. Fault Interactions and Large Complex Earthquakes in the Los Angeles Area

    USGS Publications Warehouse

    Anderson, G.; Aagaard, B.; Hudnut, K.

    2003-01-01

    Faults in complex tectonic environments interact in various ways, including triggered rupture of one fault by another, that may increase seismic hazard in the surrounding region. We model static and dynamic fault interactions between the strike-slip and thrust fault systems in southern California. We find that rupture of the Sierra Madre-Cucamonga thrust fault system is unlikely to trigger rupture of the San Andreas or San Jacinto strike-slip faults. However, a large northern San Jacinto fault earthquake could trigger a cascading rupture of the Sierra Madre-Cucamonga system, potentially causing a moment magnitude 7.5 to 7.8 earthquake on the edge of the Los Angeles metropolitan region.

  5. Irreversible thermodynamic model for accelerated moment release and atmospheric radon concentration prior to large earthquakes

    NASA Astrophysics Data System (ADS)

    Kawada, Y.; Nagahama, H.; Omori, Y.; Yasuoka, Y.; Shinogi, M.

    2006-12-01

    Accelerated moment release, defined by the rate of cumulative Benioff strain following a power-law time-to-failure relation, often precedes large earthquakes. This temporal seismicity pattern is investigated in terms of an irreversible thermodynamic model. The model is regulated by the Helmholtz free energy defined by the macroscopic stress-strain relation and internal state variables (generalized coordinates). Damage and damage evolution are represented by the internal state variables. In this condition, each of a huge number of internal state variables has its own specific relaxation time, while their collective time evolution shows temporal power-law behavior. The irreversible thermodynamic model reduces to a fiber-bundle model and to an experimentally based constitutive law of rocks, and predicts the form of accelerated moment release. Based on the model, we also discuss the increase in atmospheric radon concentration prior to the 1995 Kobe earthquake.
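
    The power-law time-to-failure relation referred to above is conventionally written in the Bufe-Varnes form (stated here for the reader, not quoted from the abstract):

        \Omega(t) = \sum_{t_i \le t} \sqrt{E_i}, \qquad \Omega(t) = A - B\,(t_f - t)^m,

    where \Omega is the cumulative Benioff strain summed over events of energy E_i, t_f is the mainshock time, and 0 < m < 1 with B > 0 gives accelerating release as t approaches t_f.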

  6. Fault interactions and large complex earthquakes in the Los Angeles area.

    PubMed

    Anderson, Greg; Aagaard, Brad; Hudnut, Ken

    2003-12-12

    Faults in complex tectonic environments interact in various ways, including triggered rupture of one fault by another, that may increase seismic hazard in the surrounding region. We model static and dynamic fault interactions between the strike-slip and thrust fault systems in southern California. We find that rupture of the Sierra Madre-Cucamonga thrust fault system is unlikely to trigger rupture of the San Andreas or San Jacinto strike-slip faults. However, a large northern San Jacinto fault earthquake could trigger a cascading rupture of the Sierra Madre-Cucamonga system, potentially causing a moment magnitude 7.5 to 7.8 earthquake on the edge of the Los Angeles metropolitan region. PMID:14671298

  7. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
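
    The scale of the deficit follows from a simple Gutenberg-Richter extrapolation; a back-of-the-envelope Python sketch (placeholder numbers, not the catalog values):

        b = 1.0          # assumed b-value for the nest
        n_obs = 30_000   # newly detected events down to some magnitude m1
        dm = 1.0         # tests show events ~1 unit smaller remain detectable

        # GR predicts N(>=m) ~ 10**(-b*m), so lowering the threshold by dm
        # should multiply the count by 10**(b*dm).
        n_expected = n_obs * 10 ** (b * dm)
        print(f"expected ~{n_expected:,.0f} events, observed ~{n_obs:,}")

    The gap between expected and observed counts is the population of "missing" small events that the abstract attributes to a weakening mechanism rather than to detectability.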

  8. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  9. Earthquake Interactions at Different Scales: an Example from Eastern California and Western Nevada, USA.

    NASA Astrophysics Data System (ADS)

    Verdecchia, A.; Carena, S.

    2015-12-01

    Earthquakes in diffuse plate boundaries occur in spatially and temporally complex patterns. The region east of the Sierra Nevada that encompasses the northern Eastern California Shear Zone (ECSZ), Walker Lane (WL), and the westernmost part of the Basin and Range province (B&R) is one such boundary. In order to better understand the relationship between moderate to major earthquakes in this area, we modeled the evolution of coseismic, postseismic and interseismic Coulomb stress changes (∆CFS) in this region at two different spatio-temporal scales. In the first example we examined seven historical and instrumental Mw ≥ 6 earthquakes that struck the region around Owens Valley (northern ECSZ) in the last 150 years. In the second example we expanded our study area to all of the northern ECSZ, WL and western B&R, examining seventeen paleoseismological and historical major surface-rupturing earthquakes (Mw ≥ 6.5) that occurred in the last 1400 years. We show that in both cases the majority of the studied events (100% in the first case and 80% in the second) are located in areas of combined coseismic and postseismic positive ∆CFS. This relationship is robust, as shown by control tests with random earthquake sequences. We also show that the White Mountain fault has accumulated up to 30 bars of total ∆CFS (coseismic + postseismic + interseismic) in the last 150 years, and the Hunter Mountain, Fish Lake Valley, Black Mountain, and Pyramid Lake faults have accumulated 40, 45, 54 and 37 bars, respectively, in the last 1400 years. Such values are comparable to the average stress drop in a major earthquake, and all these faults may therefore be close to failure.
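
    The quantity tracked throughout is the Coulomb failure stress change, with the standard definition (given for the reader; μ′ is the effective friction coefficient):

        \Delta \mathrm{CFS} = \Delta\tau_s + \mu' \, \Delta\sigma_n,

    where \Delta\tau_s is the shear stress change resolved onto the receiver fault in its slip direction and \Delta\sigma_n is the normal stress change (positive for unclamping); positive \Delta \mathrm{CFS} moves a fault closer to failure, which is why the accumulated values quoted above are compared with typical earthquake stress drops.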

  10. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  11. Foreshock patterns preceding large earthquakes in the subduction zone of Chile

    NASA Astrophysics Data System (ADS)

    Minadakis, George; Papadopoulos, Gerassimos A.

    2016-04-01

    Some of the largest earthquakes in the globe occur in the subduction zone of Chile. Therefore, it is of particular interest to investigate foreshock patterns preceding such earthquakes. Foreshocks in Chile were recognized as early as 1960. In fact, the giant (Mw9.5) earthquake of 22 May 1960, which was the largest ever instrumentally recorded, was preceded by 45 foreshocks in a time period of 33 h before the mainshock, while 250 aftershocks were recorded in a 33 h time period after the mainshock. Four foreshocks were bigger than magnitude 7.0, including a magnitude 7.9 on May 21 that caused severe damage in the Concepcion area. More recently, Brodsky and Lay (2014) and Bedford et al. (2015) reported on foreshock activity before the 1 April 2014 large earthquake (Mw8.2). However, 3-D foreshock patterns in space, time and size have not been studied in depth so far. Since such studies require good seismic catalogues to be available, we have investigated 3-D foreshock patterns only before the recent, very large mainshocks occurring on 27 February 2010 (Mw8.8), 1 April 2014 (Mw8.2) and 16 September 2015 (Mw8.4). Although our analysis does not depend on an a priori definition of short-term foreshocks, our interest focuses on the short-term time frame, that is, on the last 5-6 months before the mainshock. The analysis of the 2014 event showed an excellent foreshock sequence consisting of an early, weak foreshock stage lasting for about 1.8 months and a main, strong precursory foreshock stage that evolved in the last 18 days before the mainshock. During the strong foreshock period the seismicity concentrated around the mainshock epicenter in a critical area of about 65 km, mainly along the trench domain to the south of the mainshock epicenter. At the same time, the activity rate increased dramatically, the b-value dropped and the mean magnitude increased significantly, while the level of seismic energy released also increased. In view of these highly significant seismicity
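
    The b-value drop reported for the strong foreshock stage is usually measured with the Aki maximum-likelihood estimator; a minimal sketch on synthetic magnitudes (the completeness magnitude Mc and bin width are assumptions):

        import numpy as np

        def aki_b_value(mags, mc, dm=0.1):
            """Aki (1965) maximum-likelihood b-value for magnitudes >= mc,
            with the usual half-bin correction for binned magnitudes."""
            m = np.asarray(mags)
            m = m[m >= mc]
            return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

        # Synthetic catalog with a true b-value of 1.0, binned to 0.1 units.
        rng = np.random.default_rng(0)
        mags = np.round(2.0 + rng.exponential(1 / np.log(10), 2000), 1)
        print(f"b ~ {aki_b_value(mags, mc=2.0):.2f}")   # close to 1.0

    Tracking this estimate in a sliding time window is what reveals a b-value drop and the accompanying rise in mean magnitude during the precursory stage.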

  12. Stress changes, focal mechanisms, and earthquake scaling laws for the 2000 dike at Miyakejima (Japan)

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Cesca, Simone; Aoki, Yosuke

    2015-06-01

    Faulting processes in volcanic areas result from a complex interaction of pressurized fluid-filled cracks and conduits with the host rock and local and regional tectonic setting. Often, volcanic seismicity is difficult to decipher in terms of the physical processes involved, and there is a need for models relating the mechanics of volcanic sources to observations. Here we use focal mechanism data of the energetic swarm induced by the 2000 dike intrusion at Miyakejima (Izu Archipelago, Japan), to study the relation between the 3-D dike-induced stresses and the characteristics of the seismicity. We perform a clustering analysis on the focal mechanism (FM) solutions and relate them to the dike stress field and to the scaling relationships of the earthquakes. We find that the strike and rake angles of the FMs are strongly correlated and cluster on bands in a strike-rake plot. We suggest that this is consistent with optimally oriented faults according to the expected pattern of Coulomb stress changes. We calculate the frequency-size distribution of the clustered sets finding that focal mechanisms with a large strike-slip component are consistent with the Gutenberg-Richter relation with a b value of about 1. Conversely, events with large normal faulting components deviate from the Gutenberg-Richter distribution with a marked roll-off on its right-hand tail, suggesting a lack of large-magnitude events (Mw > 5.5). This may result from the interplay of the limited thickness and lower rock strength of the layer of rock above the dike, where normal faulting is expected, and lower stress levels linked to the faulting style and low confining pressure.

  13. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.

  14. Large-scale Globally Propagating Coronal Waves

    NASA Astrophysics Data System (ADS)

    Warmuth, Alexander

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  15. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  16. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus effectively represents a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^{1/3}, suggesting every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, scaling as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^{-γ}, where γ = 2 - β is slightly above one. The latter result is consistent with Hi-C data on real human interphase chromosomes, and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  17. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  18. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-02-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  19. Scaling relations of moment magnitude, local magnitude, and duration magnitude for earthquakes originated in northeast India

    NASA Astrophysics Data System (ADS)

    Bora, Dipok K.

    2016-06-01

    In this study, we aim to improve the scaling between the moment magnitude (MW), local magnitude (ML), and duration magnitude (MD) for 162 earthquakes in the Shillong-Mikir plateau and its adjoining region of northeast India by extending the MW estimates to lower-magnitude earthquakes using spectral analysis of P-waves from vertical-component seismograms. The MW-ML and MW-MD relationships are determined by linear regression analysis. It is found that MW values can be considered consistent with ML and MD within 0.1 and 0.2 magnitude units, respectively, in 90 % of the cases. The scaling relationships investigated comply well with similar relationships in other regions of the world and in other seismogenic areas of the northeast India region.
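
    The regression step is straightforward; a Python sketch with synthetic magnitude pairs standing in for the 162-event data set (the coefficients below are placeholders, not the paper's results):

        import numpy as np

        rng = np.random.default_rng(1)
        ml = rng.uniform(2.5, 5.5, 162)                   # local magnitudes
        mw = 0.7 * ml + 1.0 + rng.normal(0, 0.15, 162)    # assumed relation + scatter

        slope, intercept = np.polyfit(ml, mw, 1)          # ordinary least squares
        print(f"MW = {slope:.2f} * ML + {intercept:.2f}")

    When both magnitude types carry comparable errors, orthogonal (general) regression is often preferred over ordinary least squares, which attributes all of the scatter to MW.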

  20. Impact of a Large San Andreas Fault Earthquake on Tall Buildings in Southern California

    NASA Astrophysics Data System (ADS)

    Krishnan, S.; Ji, C.; Komatitsch, D.; Tromp, J.

    2004-12-01

    In 1857, an earthquake of magnitude 7.9 occurred on the San Andreas fault, starting at Parkfield and rupturing in a southeasterly direction for more than 300 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. The strong shaking in the basins due to this earthquake would have had a significant long-period content (2-8 s). If such motions were to happen today, they could have a serious impact on tall buildings in Southern California. In order to study the effects of large San Andreas fault earthquakes on tall buildings in Southern California, we use the finite source of the magnitude 7.9 2001 Denali fault earthquake in Alaska and map it onto the San Andreas fault with the rupture originating at Parkfield and proceeding southward over a distance of 290 km. Using the SPECFEM3D spectral element seismic wave propagation code, we simulate a Denali-like earthquake on the San Andreas fault and compute ground motions at sites located on a grid with a 2.5-5.0 km spacing in the greater Southern California region. We subsequently analyze 3D structural models of an existing tall steel building designed in 1984 as well as one designed according to the current building code (Uniform Building Code, 1997) subjected to the computed ground motion. We use a sophisticated nonlinear building analysis program, FRAME3D, that has the ability to simulate damage in buildings due to three-component ground motion. We summarize the performance of these structural models on contour maps of carefully selected structural performance indices. This study could benefit the city in laying out emergency response strategies in the event of an earthquake on the San Andreas fault, in undertaking appropriate retrofit measures for tall buildings, and in formulating zoning regulations for new construction. In addition, the study would provide risk data associated with existing and new construction to insurance companies, real estate developers, and

  1. No evidence of unusually large postseismic deformation in Andaman region immediately after 2004 Sumatra-Andaman earthquake

    NASA Astrophysics Data System (ADS)

    Gahalaut, V. K.; Catherine, J. K.; Jade, Sridevi; Gireesh, R.; Gupta, D. C.; Narsaiah, M.; Ambikapathy, A.; Bansal, A.; Chadha, R. K.

    2008-05-01

    Static offsets due to the 26 December 2004 Sumatra-Andaman earthquake have been reported from campaign-mode GPS measurements in the Andaman-Nicobar region. However, these measurements contain contributions from postseismic deformation that must have occurred in the 16-25 day period between the earthquake and the measurements. We analyse these data, together with tide gauge measurements of coseismic deformation, a longer time series of postseismic deformation from GPS measurements at Port Blair in the South Andaman, and aftershocks, to suggest that postseismic displacement of no more than 7 cm occurred in the 16-25 days following the earthquake in the South Andaman and probably elsewhere in the Andaman-Nicobar region. Earlier, this contribution was estimated to be as large as 1 m in the Andaman region, which implied that the magnitude of the earthquake based on these campaign-mode measurements should be decreased. We suggest an Mw of 9.23 for this earthquake.

  2. Is an unusual large enhancement of ionospheric electron density linked with the 2008 great Wenchuan earthquake?

    NASA Astrophysics Data System (ADS)

    Zhao, Biqiang; Wang, Min; Yu, Tao; Wan, Weixing; Lei, Jiuhou; Liu, Libo; Ning, Baiqi

    2008-11-01

    On 12 May 2008 at 0628 UT a major earthquake of Ms = 8.0 struck Wenchuan County (31.0°N, 103.4°E) in southwest China. The maximum ionospheric electron density at the F2 peak (NmF2) showed an unusually large enhancement during the afternoon-sunset sector at the Chinese ionosondes over Wuhan (30.5°N, 114.4°E) and Xiamen (24.4°N, 123.9°E), which are close to the earthquake epicenter. The average increase at these two stations, on the geomagnetically quiet day of 9 May (Kp ≤ 2), 3 days prior to the earthquake, is about a factor of 2 relative to the median value of 1-12 May, whereas the increase was much less significant over Yamagawa (31.2°N, 130.6°E) and Okinawa (26.7°N, 128.2°E) in Japan. Combining the data from a network of 58 global positioning system receivers around China and the global ionospheric map, the variations of the total electron content reveal that the region where the enhancement persisted for a long period lies within longitudes 90°-130°E. Our results suggest that this abnormal enhancement is most probably a seismo-ionospheric signature.

  3. Study of the Seismic Cycle of large Earthquakes in central Peru: Lima Region

    NASA Astrophysics Data System (ADS)

    Norabuena, E. O.; Quiroz, W.; Dixon, T. H.

    2009-12-01

    Since historical times, the Peruvian subduction zone has been the source of large and destructive earthquakes. The most damaging one occurred on 30 May 1970 offshore of Peru's northern city of Chimbote, with a death toll of 70,000 people and several hundred million US dollars in property damage. More recently, three contiguous plate-interface segments in southern Peru completed their seismic cycle, generating the 1996 Nazca (Mw 7.1), the 2001 Atico-Arequipa (Mw 8.4) and the 2007 Pisco (Mw 7.9) earthquakes. GPS measurements obtained between 1994 and 2001 by IGP-CIW and University of Miami-RSMAS on the central Andes of Peru and Bolivia were used to estimate their coseismic displacements and the late stage of interseismic strain accumulation. Here, however, we focus on the central Peru-Lima region, which, with about 9,000,000 inhabitants, is located over a locked plate interface that has not broken in magnitude Mw 8 earthquakes since May 1940, September 1966 and October 1974. We use a network of 11 GPS monuments to estimate the interseismic velocity field, infer spatial variations of interplate coupling, and examine their relation to the background seismicity of the region.

  4. Investigating viscoelastic postseismic deformation due to large earthquakes in East Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    Sunbul, Fatih; Nalbant, Suleyman S.; Simão, Nuno M.; Steacy, Sandy

    2016-03-01

    We investigate postseismic viscoelastic flow in the lower crust and upper mantle due to large 19th- and 20th-century earthquakes in eastern Turkey. Three possible rheological models are used in the viscoelastic postseismic deformation analysis to assess the extent to which these events influence the velocity fields at GPS sites in the region. Our models show that the postseismic signal currently contributes to the observed deformation in the eastern part of the North Anatolian fault and the northern and middle parts of the East Anatolian Fault Zone, primarily due to the long-lasting effect of the Ms 7.9 1939 earthquake. None of the postseismic displacement generated by the Ms 7.5 1822 earthquake, the earliest and second-largest event in the calculations, exceeds the observed error range at the GPS stations. Our results demonstrate that a postseismic signal can be identified in the region and could contribute 3-25% of the observed GPS measurements.

  5. Estimating high frequency energy radiation of large earthquakes by image deconvolution back-projection

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2016-09-01

    High frequency energy radiation of large earthquakes is key to evaluating shaking damage and is an important source characteristic for understanding rupture dynamics. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP), to retrieve the high frequency energy radiation of seismic sources by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a point source. The array response, which spreads energy both in space and time, is evaluated using data from a smaller reference earthquake that can be assumed to be a point source. A synthetic test of the method shows that the spatial and temporal resolution of the source is much better than that of the conventional back-projection method. We applied the new method to the 2001 Mw 7.8 Kunlun earthquake using data recorded by Hi-net in Japan. The new method resolves a sharp image of the high frequency energy radiation, with a significant portion of supershear rupture.
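
    As a rough illustration of the deconvolution idea (not the authors' implementation), the sketch below treats the back-projection image as the true energy distribution convolved with a point-source array response, and inverts it with a non-negativity constraint; the grid size, Gaussian response and toy source are all invented for the example.

      import numpy as np
      from scipy.optimize import nnls

      # 1-D toy of image-deconvolution back-projection (IDBP): the observed
      # back-projection image equals the true radiated-energy distribution
      # convolved with the array response of a point source (here estimated
      # as if from a small reference event).
      n = 200
      x_true = np.zeros(n)
      x_true[[50, 90, 140]] = [1.0, 0.6, 0.3]       # hypothetical energy bursts

      t = np.arange(-25, 26)
      psf = np.exp(-0.5 * (t / 6.0) ** 2)           # toy array response (PSF)
      psf /= psf.sum()

      observed = np.convolve(x_true, psf, mode="same")   # "back-projection image"

      # Build the convolution operator G so that observed = G @ x, then invert
      # with non-negativity (radiated energy cannot be negative).
      G = np.empty((n, n))
      for i in range(n):
          impulse = np.zeros(n)
          impulse[i] = 1.0
          G[:, i] = np.convolve(impulse, psf, mode="same")

      x_est, _ = nnls(G, observed)
      print("recovered peaks near:", np.argsort(x_est)[-3:])   # ~ 50, 90, 140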

  6. Why did the 2014 Ludian, Yunnan, China Ms 6.5 earthquake trigger an unusually large landslide?

    NASA Astrophysics Data System (ADS)

    Chang, Z. F.; Chen, X. L.; An, X. W.; Cui, J. W.

    2015-01-01

    The 3 August 2014 Ludian, China Ms 6.5 earthquake spawned a mass of severe landslides. The biggest of them occurred at Hongshiyan near the epicenter; with a volume of 1200 × 10⁴ m³ it blocked the Niulanjiang River, creating a large dammed lake. Post-event field investigations yielded detailed data on the following aspects: rock structure of the landslides, lithology and geometry of the dam, and composition and grain sizes of the debris avalanches. Based on these data, this work further analyzes the geology and topography of the Hongshiyan area and explores the mechanism behind the occurrence of such an unusually big landslide at this location. Our analysis suggests the following conditions are responsible for this catastrophic event: (1) during the Ms 6.5 earthquake, the particular terrain and site conditions led to abnormally strong ground shaking; (2) Hongshiyan lies near an active fault, where intense crustal deformation has resulted in rock fracturing and weathering; (3) intense river incision increased the topographic relief, producing steep slopes and scarps; and (4) combined structures, including unloading fissures, high-angle joints and low-angle bedding along the river, as well as a hard-upper, soft-lower structure on the slopes. It is the joint action of these conditions that triggered such a seldom-seen landslide during a moderate-sized earthquake.

  7. Analysis of ground response data at Lotung large-scale soil-structure interaction experiment site. Final report

    SciTech Connect

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/12-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural and ground responses have been recorded for a number of earthquakes (a total of 18 in the period of October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include earthquakes with magnitudes ranging from ML 4.5 to ML 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  8. Spontaneous, large stick-slip events in rotary-shear experiments as analogous to earthquake rupture

    NASA Astrophysics Data System (ADS)

    Zu, Ximeng; Reches, Zeev

    2015-04-01

    Experimental stick-slips are commonly envisioned as laboratory analogues of the spontaneous fault slip that occurs during natural earthquakes (Brace & Byerlee, 1966). However, typical experimental stick-slips are tiny events with slip distances of up to a few tens of microns. To close the gap between such events and natural earthquakes, we developed a new method that produces spontaneous stick-slips with large displacements on our rotary-shear apparatus (Reches & Lockner, 2010). In this method, the controlling program continuously calculates the real-time power density (PD = slip velocity times shear stress) of the experimental fault. A feedback loop then modifies the slip velocity to match the real-time PD to the requested PD. In this method, stick-slips occur spontaneously, while slip velocity and duration are not controlled by the operator. We present a series of tens of stick-slip events along granite and diorite experimental faults with 0.0001-1.3 m of total slip and slip velocities up to 0.45 m/s. Depending on the magnitude of the requested PD, we recognized three types of events: (1) stick-slips with a nucleation phase that initiates ~0.1 s before the main slip, characterized by a transient increase of shear stress, normal stress, and fault dilation; (2) events resembling slip-pulse behavior, with abrupt acceleration, intense dynamic weakening and subsequent strength recovery; and (3) small creep events during quasi-continuous, low-velocity slip with tiny changes of stress and dilation. The energy-displacement catalog of type (1) and (2) events shows good agreement with previous slip-pulse experiments and natural earthquakes (Chang et al., 2012). The present experiments indicate that power-density control is a promising experimental approach for earthquake simulations.
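
    A conceptual sketch of the power-density feedback described above: the controller repeatedly computes PD = slip velocity × shear stress and nudges the commanded velocity until the measured PD matches the request. The friction law and all numbers below are invented toys to show the control idea, not the apparatus software.

      def shear_stress(velocity: float) -> float:
          """Toy velocity-weakening friction: stress drops as slip accelerates."""
          tau_static, weakening = 5.0e6, 0.8      # Pa, s/m (illustrative)
          return tau_static / (1.0 + weakening * velocity)

      def run_pd_control(pd_requested: float, steps: int = 2000,
                         gain: float = 3.0e-9) -> float:
          """Proportional feedback on slip velocity; returns settled velocity."""
          velocity = 1.0e-4                       # m/s, initial slow creep
          for _ in range(steps):
              pd_measured = velocity * shear_stress(velocity)   # W/m^2
              # Raise velocity if measured PD is below the requested PD.
              velocity = max(velocity + gain * (pd_requested - pd_measured), 0.0)
          return velocity

      # Requesting 1 MW/m^2 settles near 0.24 m/s for this toy friction law.
      print(f"settled slip velocity: {run_pd_control(1.0e6):.3f} m/s")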

  9. Neotectonic architecture of Taiwan and its implications for future large earthquakes

    NASA Astrophysics Data System (ADS)

    Shyu, J. Bruce H.; Sieh, Kerry; Chen, Yue-Gau; Liu, Char-Shine

    2005-08-01

    The disastrous effects of the 1999 Chi-Chi earthquake in Taiwan demonstrated an urgent need for better knowledge of the island's potential earthquake sources. Toward this end, we have prepared a neotectonic map of Taiwan. The map and related cross sections are based upon structural and geomorphic expression of active faults and folds both in the field and on shaded relief maps prepared from a 40-m resolution digital elevation model, augmented by geodetic and seismologic data. The active tandem suturing and tandem disengagement of a volcanic arc and a continental sliver to and from the Eurasian continental margin have created two neotectonic belts in Taiwan. In the southern part of the orogen both belts are in the final stage of consuming oceanic crust. Collision and suturing occur in the middle part of both belts, and postcollisional collapse and extension dominate the island's northern and northeastern flanks. Both belts consist of several distinct neotectonic domains. Seven domains (Kaoping, Chiayi, Taichung, Miaoli, Hsinchu, Ilan, and Taipei) constitute the western belt, and four domains (Lutao-Lanyu, Taitung, Hualien, and Ryukyu) make up the eastern belt. Each domain is defined by a distinct suite of active structures. For example, the Chelungpu fault (source of the 1999 earthquake) and its western neighbor, the Changhua fault, are the principal components of the Taichung Domain, whereas both its neighboring domains, the Chiayi and Miaoli Domains, are dominated by major blind faults. In most of the domains the size of the principal active fault is large enough to produce future earthquakes with magnitudes in the mid-7 range.

  10. The California Post-Earthquake Information Clearinghouse: A Plan to Learn From the Next Large California Earthquake

    NASA Astrophysics Data System (ADS)

    Loyd, R.; Walter, S.; Fenton, J.; Tubbesing, S.; Greene, M.

    2008-12-01

    In the rush to remove debris after a damaging earthquake, perishable data related to a wide range of impacts on the physical, built and social environments can be lost. The California Post-Earthquake Information Clearinghouse is intended to prevent this data loss by supporting the earth scientists, engineers, and social and policy researchers who will conduct fieldwork in the affected areas in the hours and days following the earthquake to study these effects. First called for by Governor Ronald Reagan following the destructive M6.5 San Fernando earthquake in 1971, the concept of the Clearinghouse has since been incorporated into the response plans of the National Earthquake Hazard Reduction Program (USGS Circular 1242). This presentation is intended to acquaint scientists with the purpose, functions, and services of the Clearinghouse. Typically, the Clearinghouse is set up in the vicinity of the earthquake within 24 hours of the mainshock and is maintained for several days to several weeks. It provides a location where field researchers can assemble to share and discuss their observations, plan and coordinate subsequent field work, and communicate significant findings directly to the emergency responders and to the public through press conferences. As the immediate response effort winds down, the Clearinghouse will ensure that collected data are archived and made available through "lessons learned" reports and publications that follow significant earthquakes. Participants in the quarterly meetings of the Clearinghouse include representatives from state and federal agencies, universities, NGOs and other private groups. Overall management of the Clearinghouse is delegated to the agencies represented by the authors above.

  11. A Record of the in-Lake and Upland Response to Large Earthquakes, Lake Quinault, Washington

    NASA Astrophysics Data System (ADS)

    Leithold, E. L.; Wegmann, K. W.; Bohnenstiehl, D. R.; Smith, S. A.

    2014-12-01

    Lake Quinault, located at the foot of the Olympic Mountains in western Washington, has served as a trap for sediment delivered from the steep, landslide-prone terrain of the Upper Quinault River catchment since its formation between 20,000 and 29,000 years ago. High resolution seismic reflection and sedimentological data reveal a record of both the in-lake and upland response to large earthquakes that have impacted the region during that period. The sedimentary infill of Lake Quinault is dominated by deposition during river floods, which delivered both abundant siliciclastic sediment and plant debris to the lake bottom. Minor episodes of soft-sediment deformation at the lake margins are recorded and, based on a preliminary age model, may be related to known earthquakes, including the well documented 1700 AD Cascadia megathrust event. By far the most dramatic event in the middle-late Holocene record of Lake Quinault, however, is the lateral spreading and degassing of sediments on its gentle western slopes during an event ca. 1300 years ago. Abundant gas chimneys are visible in seismic stratigraphic profiles from this part of the lake. Several of these gas chimneys extend from the limit of seismic penetration at 15-20 m depth in the lake bed upward to the lake bottom, where they terminate at mounds with evidence for active venting. Most of the gas chimneys, however, end abruptly around 2.5 m beneath the lake floor and are overlain by parallel, continuous reflectors. Piston cores show soft-sediment deformation at this level, and abrupt shifts in density, magnetic susceptibility, flood layer thickness, particle size, color, and inorganic geochemistry. We interpret these shifts to mark the contact between sediments that experienced shaking and degassing during a strong earthquake event and overlying sediments that have not experienced comparable seismicity. The earthquake evidently strongly affected the Upper Quinault River catchment, causing increased sediment input to the lake.

  12. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with raytracing software based on the lens shape. We achieved a good match between experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  13. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta", are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³, proton fractions 0.05

  14. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  15. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  16. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with systems theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  17. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  18. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  19. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  20. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  1. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
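
    To make the SQP framework concrete, here is a textbook full-matrix illustration of the equality-constrained SQP (Newton-KKT) step on a tiny dense problem; it deliberately omits the reduced-Hessian quasi-Newton approximation, sparse null-space basis and partial QP solves that the abstract describes, and the test problem is invented.

      import numpy as np

      # Minimal SQP iteration for: min f(x) subject to c(x) = 0,
      # solving the KKT system of the local QP subproblem at each step.
      def f(x):      return x[0]**2 + 2*x[1]**2
      def grad_f(x): return np.array([2*x[0], 4*x[1]])
      def hess_f(x): return np.diag([2.0, 4.0])
      def c(x):      return np.array([x[0] + x[1] - 1.0])   # single constraint
      def jac_c(x):  return np.array([[1.0, 1.0]])

      x = np.array([3.0, -2.0])
      for it in range(20):
          g, H, A, r = grad_f(x), hess_f(x), jac_c(x), c(x)
          # KKT system: [H A^T; A 0] [p; lam] = [-g; -r]
          K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
          sol = np.linalg.solve(K, np.concatenate([-g, -r]))
          p = sol[:2]                       # search direction for x
          x = x + p
          if np.linalg.norm(p) < 1e-10:
              break

      print("solution:", x)                 # -> [2/3, 1/3] for this test problem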

  2. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  3. Development of a facility for large-scale testing of multistory buildings

    NASA Technical Reports Server (NTRS)

    Abrams, D. P.

    1980-01-01

    Current experimental research pertaining to the response of structures subjected to lateral loads such as strong winds or earthquake motions is limited to either tests of single structural components or small-scale multistory structures. The feasibility of developing a facility where large-scale multistory structures could be loaded to failure is discussed. The test facility would consist of a series of hydraulic actuators mounted on reaction frames currently in use at the Marshall Space Flight Center for structural testing of spacecraft components. The actuators would be controlled by signals computed from an on-line analysis of measured data. This method of loading could be used to simulate the inertial forces resisted by a structure behaving in the nonlinear range of response when subjected to motion at the base or to impulses along its height, as would occur during a strong earthquake or wind.

  4. Investigating source scaling of earthquake clusters using the TCDP borehole seismometers in Taiwan

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Song, T.; Ma, K.

    2013-12-01

    Earthquake source scaling and its self-similarity have been at the center of understanding earthquake nucleation and growth for the past several decades. Earthquake clusters are a special class of events located in extreme proximity. Not only do they provide an optimal way to separate the effect of path attenuation from the earthquake source, but their source characteristics and scaling may also offer insights into the stress conditions and heterogeneity of frictional properties on the fault. Using the Taiwan Chelungpu-fault Drilling Project (TCDP) borehole seismometer array, 287 clusters with magnitudes ranging from Mw 0.5 to Mw 2.0 were identified by cross-correlating 35 months of seismic records. Typically, each cluster has 2-13 events with high waveform similarity (correlation coefficient > 0.8 on all three components) and comparable P wave durations. This observation is distinct from observations for other local earthquakes, where the P wave duration has a strong dependence on earthquake magnitude, as expected from a circular crack model with constant stress drop. To better understand the source parameters and scaling of these unusual events in the clusters, we calculated the P wave spectra with the multi-taper technique to minimize spectral aliasing and electric noise at 60 Hz. Since the events within a given cluster are extremely close to each other, we take advantage of the spectral ratio method to isolate the source spectrum and remove the site effect and path attenuation. We compute spectral ratios for all event pairs within a given cluster and fit them with the theoretical ω⁻² source spectrum with γ=2 using the Simplex method. Typically, the P wave quality factor Qp is estimated at about 140, which is about a factor of 2 smaller than regional estimates. Furthermore, approximately 70% of the calculated spectral ratios are not consistent with a constant corner frequency across event pairs. To obtain a self-consistent estimate of corner frequency and moment ratio across all
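
    The spectral-ratio fit can be sketched as follows: for co-located events the path and site terms cancel in the ratio, leaving the ratio of two ω⁻² (Brune-type, γ=2) source spectra, whose parameters are recovered with the Nelder-Mead simplex. The synthetic "observed" ratio and all parameter values below are invented for illustration, not taken from the study.

      import numpy as np
      from scipy.optimize import minimize

      def omega2_ratio(f, moment_ratio, fc1, fc2):
          """Ratio of two omega-squared (gamma=2) source spectra."""
          return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

      rng = np.random.default_rng(1)
      f = np.logspace(0, 2, 80)                          # 1-100 Hz
      true = omega2_ratio(f, moment_ratio=8.0, fc1=5.0, fc2=15.0)
      observed = true * np.exp(0.05 * rng.standard_normal(f.size))

      def misfit(params):
          mr, fc1, fc2 = params
          if mr <= 0 or fc1 <= 0 or fc2 <= 0:            # keep parameters physical
              return 1e30
          resid = np.log(observed) - np.log(omega2_ratio(f, mr, fc1, fc2))
          return np.sum(resid ** 2)

      # Nelder-Mead is the downhill-simplex method named in the abstract.
      res = minimize(misfit, x0=[1.0, 2.0, 20.0], method="Nelder-Mead")
      print("moment ratio, fc1, fc2:", res.x)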

  5. Vertical stress transfer after large subduction zone earthquakes: 2007 Tocopilla /North Chile case study

    NASA Astrophysics Data System (ADS)

    Eggert, S.; Sobiesiak, M.; Victor, P.

    2011-12-01

    Large interplate subduction zone earthquakes occur on fault planes within the seismogenic interface which, in the case of Northern Chile, usually start to break at the down-dip end of the coupled interface, propagating towards the trench. Although the rupture is a horizontally oriented process, some vertical connectivity between the interface and the upper crust should be expected. We study two clusters of aftershock seismicity from the Mw 7.7, 2007 Tocopilla earthquake in Northern Chile. Both clusters appear to align along vertical profiles in the upper crust above the main shock rupture plane. The first cluster has a rather dissipative character at the up-dip limit of the rupture plane in the offshore area around the Peninsula of Mejillones; it developed in the early stage of the aftershock sequence. The second cluster lies above the pronounced aftershock sequence of a secondary large Mw 6.9 slab-push event on 16 December 2007; this type of compressional event can occur after large thrust earthquakes. A comparison of the epicentral distribution of the crustal events belonging to the aftershock sequence suggests a possible relation to the Cerro Fortuna Fault in the Coastal Cordillera, a subsidiary fault strand of the major Atacama Fault Zone. We compute the Coulomb stress change on the respective faults of both clusters to see where slip is promoted or inhibited by the slip on the subduction interface. We then combine these results with the spatial and temporal aftershock distribution, focal mechanism solutions, b-value mappings and geological evidence to understand the process behind the ascending seismicity clusters and their relation to the main shock of the major Tocopilla event.
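
    For reference, the static Coulomb stress change used in this kind of analysis is conventionally ΔCFS = Δτ + μ′Δσn: the shear-stress change resolved in the receiver-fault slip direction plus the effective friction coefficient times the normal-stress change (unclamping positive). A minimal helper with illustrative numbers, not the study's modeled values:

      def coulomb_stress_change(d_tau: float, d_sigma_n: float,
                                mu_eff: float = 0.4) -> float:
          """Static Coulomb failure stress change in Pa (positive promotes slip)."""
          return d_tau + mu_eff * d_sigma_n

      # e.g. 0.05 MPa of shear loading plus 0.02 MPa of unclamping:
      print(f"dCFS = {coulomb_stress_change(5e4, 2e4) / 1e6:.3f} MPa")  # 0.058 MPa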

  6. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    NASA Astrophysics Data System (ADS)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW is computed for 679 earthquakes in Switzerland with a minimum of ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress drop and the Wood-Anderson filter. For instance, it is shown that the stress drop controls the saturation of the ML scale, with low stress drops (e.g., 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
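
    A sketch of a bootstrap + L1 polynomial fit for an ML-MW scaling relation. This uses a plain vertical-residual L1 misfit rather than the orthogonal L1 minimization of the study, and a synthetic catalogue with invented coefficients in place of the 679 Swiss events.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      ml = rng.uniform(1.3, 5.0, 300)                    # synthetic catalogue
      mw = 0.05 * ml**2 + 0.6 * ml + 0.6 + 0.15 * rng.standard_normal(ml.size)

      def l1_polyfit(x, y, degree=2):
          """Polynomial fit minimizing the sum of absolute residuals (L1)."""
          cost = lambda c: np.sum(np.abs(np.polyval(c, x) - y))
          return minimize(cost, np.polyfit(x, y, degree),
                          method="Nelder-Mead").x

      coeffs = []
      for _ in range(200):                               # bootstrap resampling
          idx = rng.integers(0, ml.size, ml.size)
          coeffs.append(l1_polyfit(ml[idx], mw[idx]))
      coeffs = np.asarray(coeffs)

      print("MW(ML) polynomial:", coeffs.mean(axis=0), "+/-", coeffs.std(axis=0))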

  7. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size to close to 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150 °C, coming quite close to typical operating conditions of up to 125 °C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
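
    The temperature extrapolation behind such conclusions rests on Arrhenius acceleration, as in Black's classical electromigration lifetime model MTTF = A·j⁻ⁿ·exp(Ea/kT). A minimal sketch using the early-mode Ea = 0.83 eV quoted above; the test lifetime and temperatures are invented numbers, not the paper's data.

      import numpy as np

      K_B = 8.617e-5                     # Boltzmann constant, eV/K

      def scale_lifetime(mttf_test: float, t_test_c: float, t_use_c: float,
                         ea_ev: float = 0.83) -> float:
          """Arrhenius scaling of median time-to-failure between two
          temperatures (same current density j, so the j^-n term cancels)."""
          t_test = t_test_c + 273.15
          t_use = t_use_c + 273.15
          return mttf_test * np.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_test))

      # e.g. a 100 h median lifetime at 300 C stress, extrapolated to 125 C use:
      print(f"{scale_lifetime(100.0, 300.0, 125.0):.3g} h")   # ~1.6e5 h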

  8. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipoles. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ.

  9. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural responses. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  11. On the generation of large amplitude spiky solitons by ultralow frequency earthquake emission in the Van Allen radiation belt

    SciTech Connect

    Mofiz, U. A.

    2006-08-15

    The parametric coupling between earthquake-emitted circularly polarized electromagnetic radiation and ponderomotively driven ion-acoustic perturbations in the Van Allen radiation belt is considered. A cubic nonlinear Schrödinger equation for the modulated radiation envelope is derived and then solved analytically. For ultralow frequency earthquake emissions, large amplitude spiky supersonic bright solitons or subsonic dark solitons are found to be generated in the Van Allen radiation belt, the detection of which could be a tool for the prediction of a massive earthquake that may follow.
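
    For reference, a cubic nonlinear Schrödinger equation of this kind has the generic envelope form (written in LaTeX; P and Q stand for the dispersion and nonlinearity coefficients, whose specific plasma expressions are derived in the paper, not reproduced here):

      i\,\frac{\partial a}{\partial t} + P\,\frac{\partial^2 a}{\partial \xi^2} + Q\,|a|^2 a = 0,

    with bright (spiky) soliton solutions when PQ > 0 and dark solitons when PQ < 0, matching the two cases reported above.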

  12. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on a 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), nine smaller (5 × 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  13. Forecasting earthquake-induced landslides at the territorial scale by means of PBEE approaches

    NASA Astrophysics Data System (ADS)

    Berni, N.; Fanelli, G.; Ponziani, F.; Salciarini, D.; Stelluti, M.; Tamagnini, C.

    2012-04-01

    Models for predicting earthquake-induced landslide susceptibility on a regional scale are the main tools used by Civil Protection Agencies to issue warning alarms after seismic events and to evaluate possible seismic hazard conditions for different earthquake scenarios. We present a model for susceptibility analysis based on a deterministic approach that subdivides the study area into a finite number of cells, assumes for each cell a simplified infinite-slope model, and considers the earthquake shaking as the landslide triggering factor. In this case, the stability conditions of the slopes are related both to the slope features (in terms of mechanical properties, geometrical and topographical settings and pore pressure regime) and to the earthquake characteristics (in terms of intensity, duration and frequency). Therefore, for a territorial analysis, the proposed method determines the limit conditions of the slope, given the seismic input, soil strength parameters, slope and depth of slip surface, and groundwater conditions for every cell in the study area. The procedure is ideally suited for implementation on a GIS platform, in which the relevant information is stored for each cell. The seismic response of the slopes is analyzed by means of Newmark's permanent displacement method, as sketched below. In Newmark's approach, seismic slope stability is measured in terms of the ratio of the permanent displacement accumulated during the earthquake to the maximum allowable one, depending - in principle - on the definition of a tolerable damage level. The computed permanent displacement depends critically on the actual slope stability conditions, quantified by the critical acceleration, i.e., the seismic acceleration bringing the slope to a state of (instantaneous) limit equilibrium. This methodology is applied in a study of shallow earthquake-induced landslides in central Italy. The triggering seismic input is defined in terms of synthetic accelerograms, constructed from the response
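
    The permanent-displacement calculation at the heart of this approach can be sketched with a rigid-block Newmark integration: slip accumulates only while the ground acceleration exceeds the critical (yield) acceleration, and halts when the relative velocity returns to zero. The sliding is one-way (no up-slope slip) and the accelerogram below is a toy, not one of the synthetic records used in the study.

      import numpy as np

      def newmark_displacement(acc, dt, a_crit):
          """Rigid-block Newmark integration; acc and a_crit in m/s^2."""
          vel, disp = 0.0, 0.0
          for a in acc:
              if vel > 0.0 or a > a_crit:
                  vel += (a - a_crit) * dt   # block decelerated by a_crit
                  vel = max(vel, 0.0)        # sliding stops; no back-slip
                  disp += vel * dt
          return disp

      dt = 0.01
      t = np.arange(0.0, 10.0, dt)
      acc = 2.5 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)  # toy record

      print(f"permanent displacement: {newmark_displacement(acc, dt, 1.0):.3f} m")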

  14. Evidence for Holocene Slip and Large Earthquakes on the Yammouneh Fault (Lebanon)

    NASA Astrophysics Data System (ADS)

    Daëron, M.; Tapponnier, P.; Jacques, E.; Elias, A.; King, G.; Sursock, A.; Gèze, R.; Charbel, A.

    2001-12-01

    Throughout history, the eastern shore of the Mediterranean has been repeatedly shaken by large earthquakes, but no such earthquake has occurred in the past 165 years. Although the most active seismogenic structure of the region is the Levant fault system (LFS), which forms the boundary between the African and Arabian plates, there is still no consensus on its present slip rate, its segmentation, the exact size of the earthquakes it can generate, or their recurrence times. The N-S trending, left-lateral Yammouneh fault is generally considered to be the main active strand of the LFS in Lebanon, although it has recently been suggested that most of the slip on the LFS occurs on the Roum fault and that the Yammouneh strand is inactive (1). To address such issues, we have undertaken a systematic study of geomorphic traces of Holocene faulting in Lebanon. Along the Yammouneh fault, stream channels and alluvial fans appear to have been offset by 40 to 80 m in the last 10-14 ka. The smallest visible offsets range from 5 to 10 m and were probably caused by one or two earthquakes. We also excavated a trench across the Yammouneh fault, where it cuts the Yammouneh pull-apart basin. Up to five seismic breaks can be observed in the upper 5 m of the trench, confirming that this segment of the LFS is very active. The most recent break reaches 60 cm below ground, and the oldest one is found at a depth of 3-4 m. These breaks differ in both size and style, with varying components of dip slip, and may involve earthquakes with different rupture parameters and magnitudes. The disturbed sediments are finely laminated lacustrine marls and clays, rich in fragments of charcoal and shells, which we collected for dating. More dating is in progress in a deeper (9 m) trench, to assess the record of climate change in the basin sequence. (1) Butler et al., 1997, Transcurrent fault activity on the Dead Sea Transform in Lebanon and its implications

  15. Testing a time-domain regional moment tensor inversion program for large worldwide earthquakes

    NASA Astrophysics Data System (ADS)

    Richter, G.; Hoffmann, M.; Hanka, W.; Saul, J.

    2009-04-01

    After obtaining an accurate source location and magnitude estimate for a large earthquake, the direction of plate movement is the next important piece of information for reliable hazard assessment. For this purpose, rapid moment tensor inversions are necessary. In this study, the time-domain moment tensor inversion program by Dreger (2001) is tested. This program for regional moment tensor solutions is applied to seismic data from regional stations of the GEOFON net and international cooperating partner networks (InaTEWS, IRIS, GEOFON Extended Virtual Network) to obtain moment tensor solutions for large earthquakes worldwide. The motivation of the study is to have rapid information on the plate motion direction for the verification of the tsunami generation hazard of earthquakes. A special interest lies in the application to the Indonesian archipelago, to integrate the program into the German-Indonesian Tsunami Early Warning System (GITEWS). Performing the inversion on a single CPU of an ordinary PC, most solutions are achieved within half an hour of origin time. The program starts automatically for large earthquakes detected by the seismic analysis tool SeisComP3 (Hanka et al., 2008). Data from seismic stations in the distance range up to 2000 km are selected, prepared and quality controlled. The program first searches for the best automatic solution by varying the source depth. Testing different station combinations for the inversion makes it possible to assess the stability of the solution. For further optimization of the solution, interactive selection of the available stations is facilitated. The results of over 200 events are compared with centroid moment tensor solutions from the Global CMT Project, MedNet/INGV and NEID to evaluate the accuracy of the results. The inversion in the time domain is sensitive to uncertainties in the velocity model and in the source location. These resolution limits are visible in the waveform fits. Another reason for misfits is strong structural inhomogeneities

  16. Multi-Scale Structure and Earthquake Properties in the San Jacinto Fault Zone Area

    NASA Astrophysics Data System (ADS)

    Ben-Zion, Y.

    2014-12-01

    I review multi-scale multi-signal seismological results on structure and earthquake properties within and around the San Jacinto Fault Zone (SJFZ) in southern California. The results are based on data of the southern California and ANZA networks covering scales from a few km to over 100 km, additional near-fault seismometers and linear arrays with instrument spacing 25-50 m that cross the SJFZ at several locations, and a dense rectangular array with >1100 vertical-component nodes separated by 10-30 m centered on the fault. The structural studies utilize earthquake data to image the seismogenic sections and ambient noise to image the shallower structures. The earthquake studies use waveform inversions and additional time domain and spectral methods. We observe pronounced damage regions with low seismic velocities and anomalous Vp/Vs ratios around the fault, and clear velocity contrasts across various sections. The damage zones and velocity contrasts produce fault zone trapped and head waves at various locations, along with time delays, anisotropy and other signals. The damage zones follow a flower-shape with depth; in places with velocity contrast they are offset to the stiffer side at depth as expected for bimaterial ruptures with persistent propagation direction. Analysis of PGV and PGA indicates clear persistent directivity at given fault sections and overall motion amplification within several km around the fault. Clear temporal changes of velocities, probably involving primarily the shallow material, are observed in response to seasonal, earthquake and other loadings. Full source tensor properties of M>4 earthquakes in the complex trifurcation area include statistically-robust small isotropic component, likely reflecting dynamic generation of rock damage in the source volumes. The dense fault zone instruments record seismic "noise" at frequencies >200 Hz that can be used for imaging and monitoring the shallow material with high space and time details, and

  17. Large submarine earthquakes occurred worldwide, 1 year period (June 2013 to June 2014), - contribution to the understanding of tsunamigenic potential

    NASA Astrophysics Data System (ADS)

    Omira, R.; Vales, D.; Marreiros, C.; Carrilho, F.

    2015-03-01

    This paper is a contribution to a better understanding of the tsunamigenic potential of large submarine earthquakes. Here, we analyse the tsunamigenic potential of large earthquakes that occurred worldwide with magnitudes around Mw 7.0 and greater during a period of 1 year, from June 2013 to June 2014. The analysis involves earthquake model evaluation, tsunami numerical modelling, and analysis of sensor records in order to confirm whether or not a tsunami was generated following the occurrence of an earthquake. We also investigate and discuss the sensitivity of tsunami generation to the earthquake parameters recognized to control tsunami occurrence, including the earthquake magnitude, focal mechanism and fault rupture depth. A total of 23 events, with magnitudes ranging from Mw 6.7 to Mw 8.1 and hypocenter depths varying from 10 up to 585 km, have been analyzed in this study. Among them, 52% are thrust faults, 35% are strike-slip faults, and 13% are normal faults. Most of the analyzed events occurred in the Pacific Ocean. This study shows that about 39% of the analyzed earthquakes caused tsunamis that were recorded by different sensors with wave amplitudes varying from a few centimetres to about 2 m. Some of them caused inundation of low-lying coastal areas and significant damage in harbours. On the other hand, tsunami numerical modeling shows that some of the events considered non-tsunamigenic might have triggered small tsunamis that were not recorded due to the absence of sensors in the near-field areas. We also find that tsunami generation depends mainly on the earthquake focal mechanism and other parameters such as the earthquake hypocenter depth and magnitude. The results of this study can help in the compilation of tsunami catalogs.

  18. Hidden Earthquakes.

    ERIC Educational Resources Information Center

    Stein, Ross S.; Yeats, Robert S.

    1989-01-01

    Points out that large earthquakes can take place not only on faults that cut the earth's surface but also on blind faults under folded terrain. Describes four examples of fold earthquakes. Discusses the fold earthquakes using several diagrams and pictures. (YP)

  19. The Richter scale: its development and use for determining earthquake source parameters

    USGS Publications Warehouse

    Boore, D.M.

    1989-01-01

    The ML scale, introduced by Richter in 1935, is the antecedent of every magnitude scale in use today. The scale is defined such that a magnitude-3 earthquake recorded on a Wood-Anderson torsion seismometer at a distance of 100 km would write a record with a peak excursion of 1 mm. To be useful, some means are needed to correct recordings to the standard distance of 100 km. Richter provides a table of correction values, which he terms -log Ao, the latest of which is contained in his 1958 textbook. A new analysis of over 9000 readings from almost 1000 earthquakes in the southern California region was recently completed to redetermine the -log Ao values. Although some systematic differences were found between this analysis and Richter's values (such that using Richter's values would lead to under- and overestimates of ML at distances less than 40 km and greater than 200 km, respectively), the accuracy of his values is remarkable in view of the small number of data used in their determination. Richter's corrections for the distance attenuation of the peak amplitudes on Wood-Anderson seismographs apply only to the southern California region, of course, and should not be used in other areas without first checking to make sure that they are applicable. Often in the past this has not been done, but recently a number of papers have been published determining the corrections for other areas. If there are significant differences in the attenuation within 100 km between regions, then the definition of the magnitude at 100 km could lead to difficulty in comparing the sizes of earthquakes in various parts of the world. To alleviate this, it is proposed that the scale be defined such that a magnitude 3 corresponds to 10 mm of motion at 17 km. This is consistent both with Richter's definition of ML at 100 km and with the newly determined distance corrections in the southern California region. Aside from the obvious (and original) use as a means of cataloguing earthquakes according
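
    Numerically, the definition reads ML = log10(A) + (-log Ao(Δ)) for peak Wood-Anderson amplitude A in mm. As an illustration, the sketch below uses the revised southern California distance correction of Hutton and Boore (1987), which reproduces both the 1 mm at 100 km anchor and the proposed 10 mm at 17 km anchor; other regions require their own -log Ao calibration, as the abstract stresses.

      import numpy as np

      def ml_socal(amp_mm: float, dist_km: float) -> float:
          """ML from peak Wood-Anderson amplitude (mm) at distance (km),
          using the Hutton & Boore (1987) southern California -log Ao."""
          minus_log_a0 = (1.110 * np.log10(dist_km / 100.0)
                          + 0.00189 * (dist_km - 100.0) + 3.0)
          return np.log10(amp_mm) + minus_log_a0

      print(f"{ml_socal(1.0, 100.0):.2f}")   # 1 mm at 100 km  -> 3.00
      print(f"{ml_socal(10.0, 17.0):.2f}")   # 10 mm at 17 km  -> ~3.0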

  20. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

    For Gaussian primordial fluctuations the relationship between galaxy and matter overdensities, bias, is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly realistic) sets of assumptions. In the simplest toy model, galaxies are created locally and linearly biased at a single formation time and subsequently move with the dark matter (no velocity bias), conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than the present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects of nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these
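
    For reference, a schematic form of the second-order nonlocal bias expansion discussed above (coefficient conventions vary between papers, so this is indicative rather than the paper's exact notation):

```latex
% Second-order bias expansion with the nonlocal Galileon term;
% gamma_2 multiplies the field whose signature is the quadrupole.
\delta_g = b_1\,\delta
         + \frac{b_2}{2}\left(\delta^2 - \langle\delta^2\rangle\right)
         + \gamma_2\,\mathcal{G}_2(\Phi_v) + \cdots,
\qquad
\mathcal{G}_2(\Phi) \equiv \left(\nabla_{ij}\Phi\right)^2 - \left(\nabla^2\Phi\right)^2
```

    Here Φv denotes the velocity potential; the γ2 term is the leading nonlocal contribution.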

  1. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects for which closed-form solutions are difficult or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large-scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
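
    A toy population generator in the spirit described above: agents as attribute sets drawn from (possibly correlated) distributions, plus a relation graph. All names, distributions, and parameters are invented for this sketch:

```python
# Toy population generation: attributes per agent, then an
# Erdos-Renyi style acquaintance graph over the agents.
import random

random.seed(42)

def make_agents(n):
    agents = []
    for i in range(n):
        height = random.gauss(170, 10)            # cm
        agents.append({
            "id": i,
            "height_cm": height,
            "weight_kg": 0.6 * height - 40 + random.gauss(0, 5),  # correlated
            "morale": random.uniform(0, 1),
            "role": random.choices(["benign", "known", "covert"],
                                   weights=[0.9, 0.05, 0.05])[0],
        })
    return agents

def make_relations(agents, p=0.01):
    """Acquaintance graph as an edge list; each pair linked with prob p."""
    edges = []
    for a in agents:
        for b in agents:
            if a["id"] < b["id"] and random.random() < p:
                edges.append((a["id"], b["id"]))
    return edges

agents = make_agents(200)
edges = make_relations(agents)
print(len(agents), "agents,", len(edges), "relations")
```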

  2. Normalized rupture potential for small and large earthquakes along the Pacific Plate off Japan

    NASA Astrophysics Data System (ADS)

    Tormann, Thessa; Wiemer, Stefan; Enescu, Bogdan; Woessner, Jochen

    2016-07-01

    We combine temporal variability in local seismic activity rates and size distributions to estimate the evolution of a Gutenberg-Richter-based metric, the normalized rupture potential (NRP), comparing differences between smaller and larger earthquakes. For the Pacific Plate off Japan, we study complex spatial patterns and how they evolved over the last 18 years, as well as more detailed temporal characteristics in a simplified spatial selection, i.e., inside and outside the high-slip zone of the 2011 M9 Tohoku earthquake. We resolve significant changes: in particular, an immediate NRP increase for large events prior to the Tohoku event in what became the high-slip patch, followed by a very rapid decrease inside this high-stress-release area coupled with a lasting increase of NRP in the immediate surroundings. Even in the center of the Tohoku rupture, the NRP for large magnitudes has not dropped below the 12-year average and is not significantly different from conditions a decade before the M9 event.
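
    The abstract does not give the exact NRP formula, but its Gutenberg-Richter ingredient can be sketched: with log10 N(>=m) = a - b*m, changes in activity rate and b-value translate directly into changes in the extrapolated rate of large events. A small illustration with assumed values:

```python
# Gutenberg-Richter ingredient behind rate-and-b-value metrics such as
# the NRP. The exact NRP definition is not given in the abstract, so
# this only illustrates the underlying scaling; a and b are assumed.
def gr_rate(a: float, b: float, m: float) -> float:
    """Expected number of events per unit time with magnitude >= m."""
    return 10.0 ** (a - b * m)

# Lower b (relatively more large events) raises the extrapolated rate
# of M >= 7 earthquakes even at a fixed total activity level:
for b in (0.8, 1.0, 1.2):
    a = 4.0 + b * 4.0     # normalise so N(>=4) is the same for each b
    print(f"b={b}: N(>=7) = {gr_rate(a, b, 7.0):.2f} per year")
```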

  3. Large Earthquakes at the Ibero-Maghrebian Region: Basis for an EEWS

    NASA Astrophysics Data System (ADS)

    Buforn, Elisa; Udías, Agustín; Pro, Carmen

    2015-09-01

    Large earthquakes (Mw > 6, Imax > VIII) occur in the Ibero-Maghrebian region, extending from a point (12ºW) southwest of Cape St. Vincent to Tunisia, with different characteristics depending on their location, and cause considerable damage and casualties. Seismic activity in this region is associated with the boundary between the lithospheric plates of Eurasia and Africa, which extends from the Azores Islands to Tunisia. The boundary at Cape St. Vincent, which has a clear oceanic nature in the westernmost part, experiences a transition from an oceanic to a continental boundary, with the interaction of the southern border of the Iberian Peninsula, the northern border of Africa, and the Alboran basin between them, corresponding to a wide area of deformation. Further to the east, the plate boundary recovers its oceanic nature following the northern coast of Algeria and Tunisia. The region has been divided into four zones with different seismic characteristics. From west to east, large earthquake occurrence, focal depth, total seismic moment tensor, and average seismic slip velocities for each zone along the region show the differences in seismic release of deformation. This must be taken into account in developing an EEWS for the region.

  4. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large-scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms, and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model-independent cosmological constraints. We compute our results for a representative deep, wide spectroscopic survey, and they show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.

  5. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture, and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging, as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  6. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  7. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security for the local populations.

  8. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environments of the cell, it also acts as a support for complex and specialized molecular machinery, important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model of self-assembled lipid membranes with implicit solvent and soft-core potentials, recently developed by us, we simulated large-scale red-blood-cell bilayers with dimensions ~10^-1 μm^2, with explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  9. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample changes back to its brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  10. Lotung large-scale seismic test strong motion records. Volume 1, General description: Final report

    SciTech Connect

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  11. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  12. Characterizing Mega-Earthquake Related Tsunami on Subduction Zones without Large Historical Events

    NASA Astrophysics Data System (ADS)

    Williams, C. R.; Lee, R.; Astill, S.; Farahani, R.; Wilson, P. S.; Mohammed, F.

    2014-12-01

    Due to recent large tsunami events (e.g., Chile 2010 and Japan 2011), the insurance industry is very aware of the importance of managing its exposure to tsunami risk. There are currently few tools available to help establish policies for managing and pricing tsunami risk globally. As a starting point and to help address this issue, Risk Management Solutions Inc. (RMS) is developing a global suite of tsunami inundation footprints. This dataset will include both representations of historical events and a series of M9 scenarios on subduction zones that have not historically generated mega-earthquakes. The latter set is included to address concerns about the completeness of the historical record for mega-earthquakes. This concern stems from the fact that the Tohoku, Japan earthquake was considerably larger than anything observed in the historical record. Characterizing the source and rupture pattern for subduction zones without historical events is a poorly constrained process. In many cases, the subduction zones can be segmented based on changes in the characteristics of the subducting slab or major ridge systems. For this project, the unit sources from the NOAA propagation database are utilized to leverage the basin-wide modeling included in this dataset. The length of the rupture is characterized based on subduction zone segmentation, and the slip per unit source can be determined from the event magnitude (i.e., M9) and moment balancing. As these events have not occurred historically, there is little to constrain the slip distribution. Sensitivity tests on the potential rupture pattern have been undertaken, comparing uniform slip to higher-shallow-slip and tapered slip models. Subduction zones examined include the Makran Trench, the Lesser Antilles and the Hikurangi Trench. The ultimate goal is to create a series of tsunami footprints to help insurers understand their exposures at risk to tsunami inundation around the world.
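
    The moment-balancing step mentioned above can be sketched as follows; the rigidity and rupture dimensions are assumed round numbers, not values from the RMS project:

```python
# Moment balancing sketch: given a target magnitude and a rupture
# area, back out the average slip from M0 = mu * A * D.
MU = 4.0e10              # rigidity in Pa, a typical subduction-zone value

def moment_from_mw(mw: float) -> float:
    """Seismic moment in N*m from moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.05)

def average_slip(mw: float, length_km: float, width_km: float) -> float:
    area = (length_km * 1e3) * (width_km * 1e3)   # rupture area, m^2
    return moment_from_mw(mw) / (MU * area)       # average slip, m

# An M9 rupture 500 km long and 150 km wide implies roughly 12 m of slip:
print(f"{average_slip(9.0, 500, 150):.1f} m of average slip")
```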

  13. Resolution and Trade-offs in Finite Fault Inversions for Large Earthquakes Using Teleseismic Signals (Invited)

    NASA Astrophysics Data System (ADS)

    Lay, T.; Ammon, C. J.

    2010-12-01

    An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics into joint inversions. Some differences can also be attributed to choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed, but not fully described, with single model solutions being posted or published, with limited insight into solution confidence being conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking and some aspects of that challenge will be addressed.

  14. Structural Architecture of the Western Transverse Ranges and Potential for Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Levy, Y.; Rockwell, T. K.; Driscoll, N. W.; Shaw, J. H.; Kent, G. M.; Ucarkus, G.

    2015-12-01

    Understanding the subsurface structure of the Western Transverse Ranges (WTR) is critical to assessing the seismic potential of the large thrust faults comprising this fold-and-thrust belt. Several models have been advanced over the years, building on new data and understanding of thrust belt architecture, but none of these efforts have incorporated the full range of data, including the style and rates of late Quaternary deformation in conjunction with surface geology, subsurface well data and offshore seismic data. In our models, we suggest that the nearly continuous backbone with continuous stratigraphy of the Santa Ynez Mountains is explained by a large anticlinorium over a deep structural ramp, and that the current thrust front is defined by the southward-vergent Pitas Point-Ventura fault. The Ventura Avenue anticline and trend is an actively deforming fault-propagation fold over the partially blind Pitas Point-Ventura fault. Details of how this fault is resolved to the surface are not well constrained, but any deformation model must account for the several back-thrusts that ride in the hanging wall of the thrust sheet, as well as the localized subsidence in Carpinteria and offshore Santa Barbara. Our preliminary starting model is a modification of a recently published model that invokes ramp-flat structure, with a deep ramp under the Santa Ynez Mountains, a shallower "flat" with considerable complexity in the hanging wall, and a frontal ramp comprising the San Cayetano and Pitas Point thrusts. With the inferred deep ramp under the Santa Ynez Range, this model implies that large earthquakes may extend the entire length of the anticlinorium from Point Conception to the eastern Ventura Basin, suggesting that the potential for a large earthquake is significantly higher than previously assumed.

  15. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
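
    A minimal adjoint-sensitivity sketch in the spirit of the methods surveyed above, for a steady linear system A(p)u = b with objective J(u) = c^T u; the matrices are small assumed examples:

```python
# Adjoint sensitivity for A(p) u = b with J(u) = c^T u. The adjoint
# solve A^T lam = c gives dJ/dp = -lam^T (dA/dp) u at the cost of one
# extra linear solve, independent of the number of parameters.
import numpy as np

p = 2.0
A = np.array([[4.0 + p, 1.0], [1.0, 3.0]])
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # only A[0,0] depends on p
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])                     # J(u) = u[0] + u[1]

u = np.linalg.solve(A, b)                    # forward solve
lam = np.linalg.solve(A.T, c)                # adjoint solve
dJ_dp = -lam @ dA_dp @ u

# Finite-difference check of the adjoint gradient:
eps = 1e-6
u_eps = np.linalg.solve(A + eps * dA_dp, b)
print(dJ_dp, (c @ u_eps - c @ u) / eps)      # the two values should agree
```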

  16. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetic and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian finite element model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime, one minute, even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
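
    A small-scale sketch of the ILU-preconditioned GMRES setup discussed above, using SciPy on a stand-in sparse system (the convergence breakdown the authors describe only appears at vastly larger problem sizes):

```python
# ILU-preconditioned GMRES on a small sparse stand-in system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# 1D Laplacian as a stand-in sparse SPD matrix.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, fill_factor=10)           # incomplete LU factorisation
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, maxiter=200)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```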

  17. Modeling Recent Large Earthquakes Using the 3-D Global Wave Field

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, V.; Kanamori, H.; Tromp, J.

    2003-04-01

    We use the spectral-element method (SEM) to accurately compute waveforms at periods of 40 s and longer for three recent large earthquakes using 3D Earth models and finite source models. The Mw 7.6, Jan 26, 2001, Bhuj, India event had a small rupture area and is well modeled at long periods with a point source. We use this event as a calibration event to investigate the effects of 3D Earth models on the waveforms. The Mw 7.9, Nov 14, 2001, Kunlun, China, event exhibits large directivity (an asymmetry in the radiation pattern) even at periods longer than 200 s. We used the source time function determined by Kikuchi and Yamanaka (2001) and the overall pattern of slip distribution determined by Lin et al. to guide the waveform modeling. The large directivity is consistent with a long fault, at least 300 km, and an average rupture speed of 3±0.3 km/s. The directivity at long periods is not sensitive to variations in the rupture speed along strike as long as the average rupture speed is constant; thus, local variations in rupture speed cannot be ruled out. The rupture speed is a key parameter for estimating the fracture energy of earthquakes. The Mw 8.1, March 25, 1998, event near the Balleny Islands on the Antarctic Plate exhibits large directivity in long-period surface waves, similar to the Kunlun event. Many slip models have been obtained from body waves for this earthquake (Kuge et al. (1999), Nettles et al. (1999), Antolik et al. (2000), Henry et al. (2000) and Tsuboi et al. (2000)). We used the slip model from Henry et al. to compute SEM waveforms for this event. The synthetic waveforms show a good fit to the data at periods from 40-200 s, but the amplitude and directivity at longer periods are significantly smaller than observed. Henry et al. suggest that this event comprised two subevents, with one triggering the other at a distance of 100 km. To explain the observed directivity, however, a significant amount of slip is required between the two subevents.

  18. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can build. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA institutions, and U.S. contractors, and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and able to support ongoing assembly (assemblable) to arrive at the assembly-complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft and continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and the lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) The specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems rather than completed subsystems. 2) The approach to stage specifications (each stage consists of the launched module added to the current on-orbit spacecraft); specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) The verification approach, which, due to the schedule constraints, is primarily analysis supported by testing; specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned: where can we improve this complex system design and integration task?

  19. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure the pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward-propagating burns, produced rapid, acceleratory, turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward-propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample, in the path of the plume, to act as a heat sink and more efficiently dissipate the heat from the combustion event. This proved an effective means of chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  20. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
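
    A toy drive-response illustration of complete synchronization between two Boolean networks; the three-node update rules are invented for this sketch and do not reproduce the paper's aggregation algorithm:

```python
# Drive network u and response network x, coupled through u. Two of
# the response nodes copy the drive; the third evolves from the
# response's own state, yet the whole network still locks onto the
# drive after a short transient, for every initial condition pair.
from itertools import product

def drive(u):
    u1, u2, u3 = u
    return (u2, u1, u1 and u2)

def response(x, u):
    x1, x2, _ = x
    u1, u2, _ = u
    return (u2, u1, x1 and x2)   # one node keeps internal dynamics

synced = True
for u0 in product([False, True], repeat=3):
    for x0 in product([False, True], repeat=3):
        u, x = u0, x0
        for _ in range(3):                  # short transient
            u, x = drive(u), response(x, u)
        if u != x:
            synced = False
print("complete synchronization:", synced)  # True for this toy system
```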

  1. The Viscoelastic Effect of Triggered Earthquakes in Various Tectonic Regions On a Global Scale

    NASA Astrophysics Data System (ADS)

    Sunbul, F.

    2015-12-01

    The relation between static stress changes and earthquake triggering has important implications for seismic hazard analysis. Considering the long time differences between triggered events, viscoelastic stress transfer plays an important role in stress accumulation along faults. Developing a better understanding of triggering effects may contribute to improved quantification of seismic hazard in tectonically active regions. Parsons (2002) computed the difference between the rate of earthquakes occurring in regions where shear stress increased and in those where the shear stress decreased, on a global scale. He found that 61% of the earthquakes occurred in regions with a shear stress increase, while 39% of events occurred in areas of shear stress decrease. Here, we test whether the inclusion of viscoelastic stress transfer affects the results obtained by Parsons (2002) for static stress transfer. For this systematic analysis, we use the Global Centroid Moment Tensor (CMT) catalog, selecting 289 Ms>7 main shocks with their ~40,500 aftershocks located within ±2° circles over 5-year periods. For the viscoelastic post-seismic calculations, we adapt 12 different published rheological models for 5 different tectonic regions. In order to minimise the uncertainties in this CMT catalog, we simultaneously use the Frohlich and Davis (1999) statistical approach. Our results show that 5590 aftershocks were triggered by the 289 Ms>7 earthquakes: 3419 of them are associated with a calculated shear stress increase, while 2171 are associated with a shear stress decrease. The summation of viscoelastic stress shows that, of the 5840 events, 3530 are associated with shear stress increases and 2312 with shear stress decreases. This result shows an average 4.5% increase in total; the rates of increase in positive and negative areas are 3.2% and 6.5%, respectively. Therefore, over long time periods viscoelastic relaxation represents a considerable contribution to the total stress on

  2. Large Historical Tsunamigenic Earthquakes in Italy: The Neglected Tsunami Research Point of View

    NASA Astrophysics Data System (ADS)

    Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.

    2015-12-01

    It is known that tsunamis are rather rare events, especially when compared to earthquakes, and the Italian coasts are no exception. Nonetheless, a striking piece of evidence is that 6 of the 10 earthquakes that occurred in the last thousand years in Italy with equivalent moment magnitude equal to or larger than 7 were accompanied by destructive or heavily damaging tsunamis. If we extend the lower limit of the equivalent moment magnitude down to 6.5 the percentage decreases (around 40%), but it is still significant. Famous events like those that occurred on 30 July 1627 in Gargano, on 11 January 1693 in eastern Sicily, and on 28 December 1908 in the Messina Straits are part of this list: they were all characterized by maximum run-ups of several meters (13 m for the 1908 tsunami), significant maximum inundation distances, and large (although not precisely quantifiable) numbers of victims. Further evidence provided in the last decade by paleo-tsunami deposit analyses helps to better characterize the tsunami impact and confirms that none of the cited events can be reduced to local or secondary effects. Proper analysis and simulation of available tsunami data would then appear to be an obvious part of the correct definition of the sources responsible for the largest Italian tsunamigenic earthquakes, in a process in which different datasets analyzed by different disciplines must be reconciled rather than put into contrast with each other. Unfortunately, macroseismic, seismic, and geological/geomorphological observations and data are typically assigned much heavier weights, and inland faults are often given larger credit than offshore ones, even when tsunami simulations provide evidence that they are not at all capable of justifying the observed tsunami effects. Tsunami generation is instead imputed a priori to submarine landslides that are only supposed, and sometimes even non-existent. We try to summarize the tsunami research point of view on the largest Italian historical tsunamigenic

  3. Comparative analysis of the tsunami and large earthquake occurrence in the Pacific.

    NASA Astrophysics Data System (ADS)

    Levin, Boris; Sasorova, Elena

    2014-05-01

    Data on tsunami events from 1900 to 2012 with M>=7.5, tsunami intensity I>=1, tectonic origin, and validity level V=4 were extracted from two tsunami databases: the Expert Tsunami Data Base for the Pacific (ETDB/PAC), Novosibirsk, Russia (http://tsun.sscc.ru/htdbpac), and the Tsunami Event and Runup Database at NOAA (www.tsunami.noaa.gov/observations_data). The total number of chosen events was 108. The temporal distributions of the tsunamigenic earthquake (TEQ) epicenters and the distributions of the energy released by the TEQ were calculated separately for the entire Pacific region, for the Southern Hemisphere (SH), and for the Northern Hemisphere (NH), as well as for a number of sub-regions of the Pacific: Japan, Central America, South America, Alaska, the Aleutian arc and the Kuril-Kamchatka arc. Next, we used two subsets of the worldwide NEIC earthquake (EQ) catalog (USGS/NEIC from 1973 up to 2012 and Significant Worldwide Earthquakes (2150 B.C. - 1994 A.D.)). The total number of chosen events was 615. A preliminary standardization of magnitudes was performed. The temporal EQ distributions were calculated separately for the entire Pacific region, for the SH, for the NH, and for eighteen latitudinal belts: 90°-80°N, 80°-70°N, 70°-60°N, 60°-50°N and so on (the size of each belt is 10°). In both cases (for the seismic events and for the TEQ), the entire observation period was divided into several five-year intervals. We also calculated two-dimensional spatio-temporal distributions of the EQ (TEQ) density and the released-energy density. A comparative analysis of the obtained distributions (for the large EQ and for the TEQ) was carried out. It was found that the latitudinal distributions of the energy density for the great EQ and for the TEQ are completely different. The analysis showed periodic changes of the seismic activity in different time intervals. According to our estimations the periodic
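
    The latitude-belt and five-year-interval bookkeeping described above amounts to a two-dimensional histogram; a sketch with an invented toy catalogue:

```python
# Bin event epicentres into 10-degree latitude belts and five-year
# windows and count events per cell. The toy catalogue is invented.
import numpy as np

# columns: year, latitude (deg), magnitude
catalog = np.array([
    [1952.4,  52.8, 9.0],
    [1960.5, -38.2, 9.5],
    [1964.2,  61.1, 9.2],
    [2011.2,  38.3, 9.1],
])

lat_edges = np.arange(-90, 91, 10)       # eighteen 10-degree belts
year_edges = np.arange(1900, 2016, 5)    # five-year intervals

counts, _, _ = np.histogram2d(catalog[:, 1], catalog[:, 0],
                              bins=[lat_edges, year_edges])
belt, window = np.nonzero(counts)
for b, w in zip(belt, window):
    lo, hi = int(lat_edges[b]), int(lat_edges[b + 1])
    y0 = int(year_edges[w])
    print(f"belt {lo}..{hi} deg, {y0}-{y0 + 5}: {int(counts[b, w])} event(s)")
```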

  4. Large, pre-digital earthquakes of the Bonin-Mariana subduction zone, 1930-1974

    NASA Astrophysics Data System (ADS)

    Okal, Emile A.; Reymond, Dominique; Hongsresawat, Sutatcha

    2013-02-01

    The Bonin-Mariana subduction zone is the end-member example of a decoupled system, as described by Uyeda and Kanamori (1979), with no interplate thrust solutions of moments greater than 8 × 10^25 dyn cm known in the CMT catalog, although a number of earthquakes are reported with assigned magnitudes around or above 7, both during the WWSSN period and the historical pre-1962 era. We present a systematic study of these events, including relocation and inversion of moment tensors. We obtain 15 new moment tensor solutions, featuring a wide variety of focal mechanisms both in the fore-arc and the outer rise, and most importantly a shallow-dipping interplate thrust mechanism with a moment of 4 × 10^27 dyn cm for the event of 28 December 1940 at a location 175 km East of Pagan. Our results show that the modern CMT catalog still undersamples the seismicity of the Mariana arc, which is thus not immune to relatively large, albeit rare, interplate thrust events, with moments 40 times that of the largest Global-CMT solution. Frequency-magnitude relations would then suggest a return time of 320 years for a magnitude-8 interplate thrust faulting earthquake in the Bonin-Mariana system.
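
    The quoted ~320-year return time is a Gutenberg-Richter style extrapolation; a sketch of that arithmetic with an assumed b-value and reference rate, chosen only so the answer lands near the quoted figure (not the authors' actual calibration):

```python
# Gutenberg-Richter return-time extrapolation with assumed parameters.
b = 1.0
m_ref, n_ref = 7.0, 0.032   # assumed: one M>=7 interplate event per ~31 yr

def return_time(m: float) -> float:
    rate = n_ref * 10.0 ** (-b * (m - m_ref))   # events per year
    return 1.0 / rate

print(f"M8 return time: {return_time(8.0):.0f} years")   # ~313 years
```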

  5. Seismic imaging of structural heterogeneity in Earth's mantle: evidence for large-scale mantle flow.

    PubMed

    Ritsema, J; Van Heijst, H J

    2000-01-01

    Systematic analyses of earthquake-generated seismic waves have resulted in models of three-dimensional elastic wavespeed structure in Earth's mantle. This paper describes the development and the dominant characteristics of one of the most recently developed models. This model is based on seismic wave travel times and wave shapes from over 100,000 ground motion recordings of earthquakes that occurred between 1980 and 1998. It shows signatures of plate tectonic processes to a depth of about 1,200 km in the mantle, and it demonstrates the presence of large-scale structure throughout the lower 2,000 km of the mantle. Seismological analyses make it increasingly convincing that geologic processes shaping Earth's surface are intimately linked to physical processes in the deep mantle. PMID:11077479

  6. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include:
    * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N²) to compute, for O(N) data points.
    * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N²) for the 2-point correlation, O(N³) for the 3-point correlation, etc.
    * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N²).
    * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N²).
    * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N²).
    * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N²).
    (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
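
    As an illustration of the tree-based alternative to these naive O(N²) scans, the AllNN operation in the list above can be done with a kd-tree in roughly O(N log N); a sketch on random data:

```python
# Tree-based all-nearest-neighbors: a kd-tree replaces the naive
# O(N^2) all-pairs scan. The datasets here are random illustrations.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
queries = rng.random((10_000, 3))     # query dataset
refs = rng.random((10_000, 3))        # reference dataset

tree = cKDTree(refs)
dist, idx = tree.query(queries, k=3)  # 3 nearest references per query
print(dist.shape, idx.shape)          # (10000, 3) each
```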

  7. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    NASA Astrophysics Data System (ADS)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  8. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  9. Large-Scale Processing of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Finn, John; Sridhar, K. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1998-01-01

    Scale-up difficulties and high energy costs are two of the more important factors that limit the availability of various types of nanotube carbon. While several approaches are known for producing nanotube carbon, the high-powered reactors typically produce nanotubes at rates measured in only grams per hour and operate at temperatures in excess of 1000 C. These scale-up and energy challenges must be overcome before nanotube carbon can become practical for high-consumption structural and mechanical applications. This presentation examines the issues associated with using various nanotube production methods at larger scales, and discusses research being performed at NASA Ames Research Center on carbon nanotube reactor technology.

  10. Modifications of the ionosphere prior to large earthquakes: report from the Ionosphere Precursor Study Group

    NASA Astrophysics Data System (ADS)

    Oyama, K.-I.; Devi, M.; Ryu, K.; Chen, C. H.; Liu, J. Y.; Liu, H.; Bankov, L.; Kodama, T.

    2016-12-01

    The current status of ionospheric precursor studies associated with large earthquakes (EQ) is summarized in this report. It is a joint endeavor of the "Ionosphere Precursor Study Task Group," which was formed with the support of the Mitsubishi Foundation in 2014-2015. The group promotes the study of ionosphere precursors (IP) to EQs and aims to prepare for a future EQ dedicated satellite constellation, which is essential to obtain the global morphology of IPs and hence demonstrate whether the ionosphere can be used for short-term EQ predictions. Following a review of the recent IP studies, the problems and specific research areas that emerged from the one-year project are described. Planned or launched satellite missions dedicated (or suitable) for EQ studies are also mentioned.

  11. GPS Seismology: Using Precise Point Positioning for Resolving Surface Wave Displacements from Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Dragert, H.; Henton, J. A.; Lahaye, F.; Kouba, J.; Larson, K. M.; Rogers, G. C.

    2010-12-01

    High-rate continuous GPS data can provide direct, high-quality measurements of surface wave displacements generated by large earthquakes (Larson et al., 2003; Bock et al., 2004; Larson, 2009). To achieve high precision, differential positioning is often used in the GPS analysis strategy with distant reference stations held fixed. In this presentation, we examine the use of the Precise Point Positioning (PPP) technique to estimate epoch-by-epoch positions at single stations. Specifically, we use the PPP software developed by Natural Resources Canada (Heroux and Kouba, 2001) to analyze high-rate (5 Hz) GPS data collected at stations of the Plate Boundary Observatory (PBO) in southern California at the time of the M7.2 El Mayor-Cucapah Earthquake of April 4, 2010. The hypocenter for this earthquake was located in northern Baja California, approximately 50 km south of Mexicali on the US-Mexico border, at a depth of ~10 km. Large horizontal displacements were observed at a number of PBO GPS sites, with the largest peak-to-peak displacements exceeding 90 cm in the east-west component for 10-sec period waves observed at El Centro, CA (P496), located about 70 km northeast of the epicenter. The PPP technique clearly resolved surface waves with 1 to 2 cm amplitudes at sites more than 800 km away from the epicenter, illustrating that surface waves eventually reach even distant reference sites within the period of interest and can thereby introduce artifacts for differential GPS positioning. Fine-tuning of PPP methodology revealed the following: 1) Since the quality of a PPP solution will not be optimal until the carrier phase ambiguities have converged (tens of minutes), it is best to begin the analyses well before the arrival of seismic waves. To reduce computations, the data for this convergence period need not be high-rate; 2) The use of 5-second precise satellite clock sampling instead of the nominal 30-second clock sampling minimized clock interpolation errors and

  12. Large scale structure from viscous dark matter

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale km for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale km, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  13. Quaternary normal faulting in southeastern Sicily (Italy): a seismic source for the 1693 large earthquake

    NASA Astrophysics Data System (ADS)

    Bianca, Marcello; Monaco, Carmelo; Tortorici, Luigi; Cernobori, Licio

    1999-11-01

    We present geological and morphological data, combined with an analysis of seismic reflection lines across the Ionian offshore zone and information on historical earthquakes, in order to yield new constraints on active faulting in southeastern Sicily. This region, one of the most seismically active of the Mediterranean, is affected by WNW-ESE regional extension producing normal faulting of the southern edge of the Siculo-Calabrian rift zone. Our data describe two systems of Quaternary normal faults, characterized by different ages and related to distinct tectonic processes. The older NW-SE-trending normal fault segments developed up to ~400 kyr ago and, striking perpendicular to the main front of the Maghrebian thrust belt, bound the small basins occurring along the eastern coast of the Hyblean Plateau. The younger fault system is represented by prominent NNW-SSE-trending normal fault segments and extends along the Ionian offshore zone following the NE-SW-trending Avola and Rosolini-Ispica normal faults. These faults are characterized by vertical slip rates of 0.7-3.3 mm yr⁻¹ and might be associated with the large seismic events of January 1693. We suggest that the main shock of the January 1693 earthquakes (M~7) could be related to a 45 km long normal fault with a right-lateral component of motion. A long-term net slip rate of about 3.7 mm yr⁻¹ is calculated, and a recurrence interval of about 550±50 yr is proposed for large events similar to that of January 1693.
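
    As a consistency check on the quoted numbers (plain arithmetic, not a result from the paper), the slip per event implied by the stated net slip rate and recurrence interval is:

```latex
% Average slip per event from the long-term rate and recurrence:
u \simeq \dot{s}\,T \approx 3.7\ \mathrm{mm\,yr^{-1}} \times 550\ \mathrm{yr} \approx 2.0\ \mathrm{m}
```

    This is a plausible average coseismic slip for an M~7 normal-faulting event on a 45 km long fault.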

  14. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
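
    The Reynolds number criterion quoted above is straightforward to evaluate; the sketch below checks what a small jet needs to approach Re ~ 5 x 10^6. The exhaust speed and diameter are assumed illustrative values, not data from the studies reviewed:

        rho, mu_air = 1.2, 1.8e-5   # air density (kg/m^3) and dynamic viscosity (Pa s)
        V, D = 400.0, 0.15          # exhaust speed (m/s) and exit diameter (m), assumed
        Re = rho * V * D / mu_air   # jet exit Reynolds number
        print(f"Re = {Re:.1e}")     # ~4e6, near the quoted broad-band noise threshold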

  15. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  16. Missing Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Hough, S. E.; Martin, S.

    2013-12-01

    The occurrence of three earthquakes with Mw greater than 8.8, and six earthquakes larger than Mw 8.5, since 2004 has raised interest in the long-term rate of great earthquakes. Past studies have focused on rates since 1900, which roughly marks the start of the instrumental era. Yet substantial information is available for earthquakes prior to 1900. A re-examination of the catalog of global historical earthquakes reveals a paucity of Mw ≥ 8.5 events during the 18th and 19th centuries compared to the rate during the instrumental era (Hough, 2013, JGR), suggesting that the magnitudes of some documented historical earthquakes have been underestimated, with approximately half of all Mw ≥ 8.5 earthquakes missing or underestimated in the 19th century. Very large (Mw ≥ 8.5) magnitudes have traditionally been estimated for historical earthquakes only from tsunami observations, given the tautological assumption that all such earthquakes generate significant tsunamis. Magnitudes would therefore tend to be underestimated for deep megathrust earthquakes that generated relatively small tsunamis, deep earthquakes within continental collision zones, earthquakes that produced tsunamis that were not documented, outer rise events, and strike-slip earthquakes such as the 11 April 2012 Sumatra event. We further show that, where magnitudes of historical earthquakes are estimated from earthquake intensities using the Bakun and Wentworth (1997, BSSA) method, magnitudes of great earthquakes can be significantly underestimated. Candidate 'missing' great 19th century earthquakes include the 1843 Lesser Antilles earthquake, which recent studies suggest was significantly larger than initial estimates (Feuillet et al., 2012, JGR; Hough, 2013), and an 1841 Kamchatka event, for which Mw 9 was estimated by Gusev and Shumilina (2004, Izv. Phys. Solid Ear.). We consider cumulative moment release rates during the 19th century compared to those during the 20th and 21st centuries, using both the Hough

  17. Development of Multi-Parameter Borehole System to Evaluate the Expected Large Earthquake in the Marmara Sea, Turkey

    NASA Astrophysics Data System (ADS)

    Ozel, Oguz; Guralp, Cansun; Parolai, Stefano; Bouchon, Michel; Karabulut, Hayrullah; Aktar, Mustafa; Meral Ozel, Nurcan

    2014-05-01

    The Istanbul-Marmara region of northwestern Turkey, with a population of more than 15 million, faces a high probability of being exposed to a hazardous earthquake. The 1999 Izmit earthquake in Turkey is one of the best recorded in the world. For the first time, researchers from CNRS and Kandilli Observatory (Istanbul) observed that the earthquake was preceded by a preparatory phase that lasted 44 minutes before the rupture of the fault. This phase, which was characterized by a distinctive seismic signal, corresponds to slow slip at depth along the fault. Detecting it in other earthquakes might make it possible to predict some types of earthquakes several tens of minutes before fault rupture. In an attempt to understand where and when large earthquakes will occur, and the physics of the source process prior to large earthquakes, we proposed to install multi-parameter borehole instruments in the western part of the Marmara Sea in the frame of an EU project called MARSITE. This system, together with a surrounding small-aperture surface array, is planned to be capable of recording small deformations and tiny seismic signals near the active seismic zone of the North Anatolian Fault passing through the Marmara Sea, which should enable us to address these issues. The objective is to design and build a multi-parameter borehole system for observing slow deformation, low-frequency noise or tremors, and high-frequency signals near the epicentral area of the expected Marmara earthquake. Furthermore, the system also aims to identify the presence of repeating earthquakes and rupture nucleation, to measure continuously the evolution of the state of stress and stress transfer from east to west with high-resolution data, and to estimate the near-surface geology effects masking the source-related information. The proposed location of the borehole system is right on the Ganos Fault and in a low ambient noise environment in Gazikoy at the western end of the North Anatolian Fault in the Marmara Sea, where the

  18. States of local stresses in the Sea of Marmara through the analysis of large numbers of small earthquakes

    NASA Astrophysics Data System (ADS)

    Korkusuz Öztürk, Yasemin; Meral Özel, Nurcan; Özbakir, Ali Değer

    2015-12-01

    We invert for the present-day states of stress in five apparent earthquake clusters along the northern branch of the North Anatolian Fault in the Sea of Marmara. Because the center of the Sea of Marmara is prone to a devastating earthquake within a seismic gap between these selected clusters, a careful understanding of the stress and strain characteristics of the region is essential. We use high-quality P and S phases and P-wave first-motion polarities (FMPs) from 398 earthquakes with ML ≥ 1.5, requiring at least 10 FMPs and at most one inconsistent station, obtained from a total of 105 seismic stations, including 5 continuous OBSs. We report here on large numbers of simultaneously determined individual fault plane solutions (FPSs) and orientations of principal stress axes, which previously have not been determined with any confidence for the basins of the Sea of Marmara and prominent fault branches. We find NE-SW trending transtensional stress structures, predominantly in the earthquake clusters of the Eastern Tekirdağ Basin, Eastern Çınarcık Basin, Yalova and Gemlik areas. We infer that dextral strike-slip deformation exists in the Eastern Ganos Offshore cluster. Furthermore, we analyze FPSs of four ML ≥ 4.0 earthquakes that occurred in seismically quiet regions after the 1999 Izmit earthquake. The stress tensor solutions we obtained from clusters of small events correlate with the FPSs of these moderate-size events, demonstrating the effectiveness of small earthquakes in the derivation of states of local stress. Consequently, our analyses of seismicity and large numbers of FPSs using the densest seismic network of Turkey contribute to a better understanding of the present states of stress and the seismotectonics of the Sea of Marmara.

  1. Acoustic Emission Patterns and the Transition to Ductility in Sub-Micron Scale Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Ghaffari, H.; Xia, K.; Young, R.

    2013-12-01

    We report the observation of a transition from the brittle to the ductile regime in precursor events from different rock materials (Granite, Sandstone, Basalt, and Gypsum) and polymers (PMMA, PTFE and CR-39). Acoustic emission patterns associated with sub-micron scale laboratory earthquakes are mapped into network parameter spaces (functional damage networks). The sub-classes hold nearly constant timescales, indicating dependency of the sub-phases on the mechanism governing the previous evolutionary phase, i.e., deformation and failure of asperities. Based on our findings, we propose that the signature of the non-linear elastic zone around a crack tip is mapped into the details of the evolutionary phases, supporting the formation of a strongly weak zone in the vicinity of crack tips. Moreover, we recognize sub-micron to micron ruptures with signatures of 'stiffening' in the deformation phase of acoustic waveforms. We propose that the latter rupture fronts carry critical rupture extensions, including possible dislocations faster than the shear wave speed. Using 'template super-shear waveforms' and their network characteristics, we show that the acoustic emission signals are possible super-shear or intersonic events. Ref. [1] Ghaffari, H. O., and R. P. Young. "Acoustic-Friction Networks and the Evolution of Precursor Rupture Fronts in Laboratory Earthquakes." Scientific Reports 3 (2013). [2] Xia, Kaiwen, Ares J. Rosakis, and Hiroo Kanamori. "Laboratory earthquakes: The sub-Rayleigh-to-supershear rupture transition." Science 303.5665 (2004): 1859-1861. [3] Mello, M., et al. "Identifying the unique ground motion signatures of supershear earthquakes: Theory and experiments." Tectonophysics 493.3 (2010): 297-326. [4] Gumbsch, Peter, and Huajian Gao. "Dislocations faster than the speed of sound." Science 283.5404 (1999): 965-968. [5] Livne, Ariel, et al. "The near-tip fields of fast cracks." Science 327.5971 (2010): 1359-1363. [6] Rycroft, Chris H., and Eran Bouchbinder

  2. Practical guidelines to select and scale earthquake records for nonlinear response history analysis of structures

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2010-01-01

    Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-'mode' inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-'mode' pushover analysis. Appropriate for first-'mode' dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of the second-'mode' SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
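
    The core of the MPS idea can be sketched in a few lines: find the scale factor at which the peak deformation of a first-'mode' elastoplastic single-degree-of-freedom system matches a target value. The sketch below uses simple semi-implicit Euler time stepping and bisection; the period, damping, yield level and input motion are illustrative assumptions, not the paper's models:

        import numpy as np

        def peak_sdf_deformation(ag, dt, period=1.0, zeta=0.05, fy=1.0):
            """Peak |u| of a unit-mass elastic-perfectly-plastic SDF system."""
            wn = 2.0 * np.pi / period
            k, c = wn ** 2, 2.0 * zeta * wn
            u = np.zeros(len(ag))
            v, up = 0.0, 0.0                             # velocity, plastic offset
            for i in range(len(ag) - 1):
                fs = np.clip(k * (u[i] - up), -fy, fy)   # yielding spring force
                up = u[i] - fs / k                       # return-mapping update
                acc = -ag[i] - c * v - fs                # unit-mass equation of motion
                v += acc * dt
                u[i + 1] = u[i] + v * dt
            return np.abs(u).max()

        def mps_scale(ag, dt, target, lo=0.1, hi=10.0, tol=1e-3):
            """Bisect on the scale factor so the SDF deformation hits the target."""
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                d = peak_sdf_deformation(mid * ag, dt)
                if abs(d - target) < tol:
                    break
                lo, hi = (mid, hi) if d < target else (lo, mid)
            return mid

        dt = 0.02
        t = np.arange(0.0, 10.0, dt)
        ag = 3.0 * np.sin(2.0 * np.pi * t) * np.exp(-0.2 * t)   # toy ground motion, m/s^2
        print(f"scale factor: {mps_scale(ag, dt, target=0.5):.3f}")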

  3. Slip in the 1857 and earlier large earthquakes along the Carrizo Plain, San Andreas Fault.

    PubMed

    Zielke, Olaf; Arrowsmith, J Ramón; Grant Ludwig, Lisa; Akçiz, Sinan O

    2010-02-26

    The moment magnitude (Mw) 7.9 Fort Tejon earthquake of 1857, with an approximately 350-kilometer-long surface rupture, was the most recent major earthquake along the south-central San Andreas Fault, California. Based on previous measurements of its surface slip distribution, rupture along the approximately 60-kilometer-long Carrizo segment was thought to control the recurrence of 1857-like earthquakes. New high-resolution topographic data show that the average slip along the Carrizo segment during the 1857 event was 5.3 +/- 1.4 meters, eliminating the core assumption for a linkage between Carrizo segment rupture and recurrence of major earthquakes along the south-central San Andreas Fault. Earthquake slip along the Carrizo segment may recur in earthquake clusters with cumulative slip of approximately 5 meters. PMID:20093436

  4. Real or virtual large-scale structure?

    PubMed Central

    Evrard, August E.

    1999-01-01

    Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own. PMID:10200243

  5. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  6. The large normal-faulting Mariana Earthquake of April 5, 1990 in uncoupled subduction zone

    NASA Astrophysics Data System (ADS)

    Yoshida, Yasuhiro; Satake, Kenji; Abe, Katsuyuki

    1992-02-01

    A large, Ms = 7.5, shallow earthquake occurred beneath the Mariana trench on April 5, 1990. From the relocated aftershock distribution, the fault area is estimated to be 70 × 40 km^2. A tsunami observed on the Japanese islands verifies that the depth of the main shock is shallow. For waveform analysis, we use long-period surface waves and body waves recorded at global networks of GDSN, IRIS, GEOSCOPE and ERIOS. The centroid moment tensor (CMT) solution from surface waves indicates normal faulting on a fault whose strike is parallel to the local axis of the Mariana trench, with the tension axis perpendicular to it. The seismic moment is 1.4 × 10^20 Nm (1.4 × 10^27 dyn·cm), which gives Mw = 7.3. Far-field P and SH waves from 13 stations are used to determine the source time function. Since the sea around the epicentral region is about 5 km deep, body waveforms are contaminated with water reverberations. The inversion results in a source time function with a predominantly single event with a duration of 10 sec, a seismic moment of 2.1 × 10^20 Nm, and a focal mechanism given by strike = 198°, dip = 48°, slip = 90°. The short duration indicates a small area of rupture. The location of the main shock with respect to the aftershock area suggests that the nodal plane dipping to the west is preferred for the fault plane. The local stress drop of the single subevent is estimated to be 150 MPa (1.5 kbar). The Mariana earthquake is considered to have occurred in an uncoupled region, in response to the gravitational pull caused by the downgoing Pacific plate.
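
    The quoted magnitudes follow from the standard moment-magnitude relation; a minimal check (the 9.1 constant is the usual Kanamori convention for M0 in N·m):

        import math

        def moment_magnitude(m0_nm):
            return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

        print(f"{moment_magnitude(1.4e20):.1f}")   # ~7.4 by this convention; reported as 7.3
        print(f"{moment_magnitude(2.1e20):.1f}")   # ~7.5 for the body-wave moment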

  7. Seismic hazard assessment based on the Unified Scaling Law for Earthquakes: the Greater Caucasus

    NASA Astrophysics Data System (ADS)

    Nekrasova, A.; Kossobokov, V. G.

    2015-12-01

    Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers and the public, of the three components of Risk, i.e., Hazard, Exposure, and Vulnerability. Contemporary Science is responsible for not coping with the challenging changes of Exposures and their Vulnerability inflicted by growing population, its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists owe Society for this lack of knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A - B•(M-6) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. The parameters A, B, and C of USLE are used to estimate, first, the expected maximum magnitude in a time interval at a seismically prone cell of a uniform grid that covers the region of interest, and then the corresponding expected ground shaking parameters, including macro-seismic intensity. After rigorous testing against the available seismic evidence from the past (e.g., the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks (e.g., those based on the density of exposed population). The methodology of seismic hazard and risk assessment based on USLE is illustrated by application to the seismic region of the Greater Caucasus.
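
    The USLE expression above is simple to evaluate once A, B, and C are known for a cell; a minimal sketch (the coefficient values here are illustrative placeholders, not the study's estimates):

        import math

        def usle_annual_rate(M, L, A, B, C):
            """Expected annual number of magnitude-M events within linear size L."""
            return 10.0 ** (A - B * (M - 6.0) + C * math.log10(L))

        # Illustrative coefficients: A = 0.0, B = 0.9, C = 1.2, cell size L = 1 degree
        for M in (5.0, 6.0, 7.0):
            print(M, usle_annual_rate(M, L=1.0, A=0.0, B=0.9, C=1.2))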

  8. Light propagation and large-scale inhomogeneities

    SciTech Connect

    Brouzakis, Nikolaos; Tetradis, Nikolaos; Tzavara, Eleftheria E-mail: ntetrad@phys.uoa.gr

    2008-04-15

    We consider the effect on the propagation of light of inhomogeneities with sizes of order 10 Mpc or larger. The Universe is approximated through a variation of the Swiss-cheese model. The spherical inhomogeneities are void-like, with central underdensities surrounded by compensating overdense shells. We study the propagation of light in this background, assuming that the source and the observer occupy random positions, so that each beam travels through several inhomogeneities at random angles. The distribution of luminosity distances for sources with the same redshift is asymmetric, with a peak at a value larger than the average one. The width of the distribution and the location of the maximum increase with increasing redshift and length scale of the inhomogeneities. We compute the induced dispersion and bias of cosmological parameters derived from the supernova data. They are too small to explain the perceived acceleration without dark energy, even when the length scale of the inhomogeneities is comparable to the horizon distance. Moreover, the dispersion and bias induced by gravitational lensing at the scales of galaxies or clusters of galaxies are larger by at least an order of magnitude.

  9. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
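
    A present-day analogue of the Lanczos-based approach is available off the shelf: scipy's svds (ARPACK, a Lanczos-type iteration) returns the k largest singular triplets of a sparse matrix without ever forming it densely. A minimal sketch with a random sparse stand-in for a term-document matrix:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import svds

        A = sp.random(5000, 2000, density=1e-3, format="csr", random_state=0)

        # k largest singular values with the corresponding left/right singular vectors
        u, s, vt = svds(A, k=6)
        print(np.sort(s)[::-1])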

  10. Combining Seismic Arrays to Image Detailed Rupture Properties of Large Earthquakes: Evidence for Frequent Triggering of Multiple Faults

    NASA Astrophysics Data System (ADS)

    Ishii, M.; Kiser, E.

    2010-12-01

    Imaging detailed rupture characteristics using the back-projection method, which time-reverses waveforms to their source, has become feasible in recent years due to the availability of data from large aperture arrays with dense station coverage. In contrast to conventional techniques, this method can quickly and indiscriminately provide the spatio-temporal details of rupture propagation. Though many studies have utilized the back-projection method with a single regional array, the limited azimuthal coverage often leads to skewed resolution. In this study, we enhance the imaging power by combining data from two arrays, i.e., the Transportable Array (TA) in the United States and the High Sensitivity Seismographic Network (Hi-net) in Japan. This approach suppresses artifacts and achieves good lateral resolution by improving distance and azimuthal coverage while maintaining waveform coherence. We investigate four large events using this method: the August 15, 2007 Pisco, Peru earthquake, the September 12, 2007 Southern Sumatra earthquake, the September 29, 2009 Samoa Islands earthquake, and the February 27, 2010 Maule, Chile earthquake. In every case, except the Samoa Islands event, the distance of one of the arrays from the epicenter requires us to use the direct P wave and core phases in the back-projection. One of the common features of the rupture characteristics obtained from the back-projection analysis is spatio-temporal rupture discontinuities, or discrete subevents. Both the size of the gaps and the timing between subevents suggest that multiple segments are involved during giant earthquakes, and that they trigger slip on other faults. For example, the 2009 Samoa Islands event began with a rupture propagating north for about 15 seconds followed by a much larger rupture that originated 30 km northwest of the terminus of the first event and propagated back toward the southeast. The involvement of multiple rupture segments with different slip characteristics
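
    The essence of back-projection is easy to state in code: shift each station's record by the travel time predicted from a candidate source cell and stack, so that cells near the true source stack coherently. The toy sketch below uses a constant-velocity travel time and random noise as a stand-in for real P waveforms; everything in it is an illustrative assumption:

        import numpy as np

        fs = 20.0                                   # samples per second
        nsta, nt = 50, 2000
        rng = np.random.default_rng(1)
        waveforms = rng.normal(size=(nsta, nt))     # stand-in for teleseismic P waves
        sta_xy = rng.uniform(-50.0, 50.0, size=(nsta, 2))   # station coordinates, km
        v = 6.0                                     # assumed P-wave speed, km/s

        def stack_power(cell_xy):
            """Coherence measure of the time-shifted stack for one source cell."""
            shifted = []
            for s in range(nsta):
                tt = np.hypot(*(sta_xy[s] - cell_xy)) / v    # travel time, s
                shifted.append(np.roll(waveforms[s], -int(round(tt * fs))))
            stack = np.mean(shifted, axis=0)
            return float(np.sum(stack ** 2))

        for cell in ([-10.0, 0.0], [0.0, 0.0], [10.0, 5.0]):   # candidate cells, km
            print(cell, f"{stack_power(np.array(cell)):.2f}")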

  11. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs of mostly the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  12. Near-Source Recordings of Small and Large Earthquakes: Magnitude Predictability only for Medium and Small Events

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2015-12-01

    The feasibility of Earthquake Early Warning (EEW) applications has revived the discussion on whether earthquake rupture development follows deterministic principles or not. If it does, it may be possible to predict final earthquake magnitudes while the rupture is still developing. EEW magnitude estimation schemes, most of which are based on 3-4 seconds of near-source p-wave data, have been shown to work well for small to moderate size earthquakes. In this magnitude range, the time window used is larger than the source durations of the events. Whether the magnitude estimation schemes also work for events in which the source duration exceeds the estimation time window, however, remains debated. In our study we have compiled an extensive high-quality data set of near-source seismic recordings. We search for waveform features that could be diagnostic of final event magnitudes in a predictive sense. We find that the onsets of large (M7+) events are statistically indistinguishable from those of medium sized events (M5.5-M7). Significant differences arise only once the medium size events terminate. This observation suggests that EEW-relevant magnitude estimates are largely observational, rather than predictive, and that whether a medium size event becomes a large one is not determined at the rupture onset. As a consequence, early magnitude estimates for large events are minimum estimates, a fact that has to be taken into account in EEW alert messaging and response design.
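
    The estimation schemes under discussion typically take a regression form such as M = a·log10(Pd) + b·log10(R) + c, with Pd the peak displacement in the first 3-4 s of the P wave and R the hypocentral distance. The coefficients below are purely hypothetical placeholders (real schemes fit them to regional data); the abstract's point is that for M7+ ruptures such an estimate saturates into a minimum magnitude:

        import math

        def pd_magnitude(pd_cm, dist_km, a=1.3, b=1.4, c=5.0):   # hypothetical a, b, c
            return a * math.log10(pd_cm) + b * math.log10(dist_km) + c

        print(f"{pd_magnitude(0.5, 60.0):.1f}")   # one illustrative evaluation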

  13. Large-Scale Organizational Performance Improvement.

    ERIC Educational Resources Information Center

    Pilotto, Rudy; Young, Jonathan O'Donnell

    1999-01-01

    Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…

  14. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  15. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  16. Millennial-scale record of landslides in the Andes consistent with earthquake trigger

    NASA Astrophysics Data System (ADS)

    McPhillips, Devin; Bierman, Paul R.; Rood, Dylan H.

    2014-12-01

    Geologic records of landslide activity offer rare glimpses into landscapes evolving under the influence of tectonics and climate. Because the deposits of individual landslides are unlikely to be preserved, landslide activity in the geologic past is often reconstructed by extrapolating from historic landslide inventories. Landslide deposits have been interpreted as palaeoclimate proxies relating to changes in precipitation, although earthquakes can also trigger landslides. Here we measure cosmogenic 10Be concentrations in individual cobbles from the modern Quebrada Veladera river channel and an adjacent terrace in Peru and calculate erosion rates. We find, in conjunction with a 10Be production model, that the 10Be concentrations of each cobble population record erosion integrated over thousands of years and are consistent with a landslide origin for the cobbles. The distribution of 10Be concentrations in terrace cobbles produced during the relatively wet climate before about 16,000 years ago is indistinguishable from the distribution in river channel cobbles produced during the drier climate of the past few thousand years. This suggests that the amount of erosion from landslides has not changed in response to climatic changes. Instead, our integrated, millennial-scale record of landslides implies that earthquakes may be the primary landslide trigger in the arid foothills of Peru.
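
    The standard simplification behind such erosion-rate estimates (Lal, 1991) is that at steady state the surface concentration is N = P / (lambda + rho*eps/Lambda), so the erosion rate is eps = (Lambda/rho)·(P/N - lambda). The numbers below are illustrative assumptions, not the study's measurements:

        import math

        P = 4.0                          # 10Be production rate, atoms/g/yr (assumed SLHL value)
        lam = math.log(2.0) / 1.387e6    # 10Be decay constant, 1/yr
        Lam = 160.0                      # attenuation length, g/cm^2
        rho = 2.7                        # rock density, g/cm^3
        N = 1.5e5                        # measured concentration, atoms/g (assumed)

        eps_cm_per_yr = (Lam / rho) * (P / N - lam)
        print(f"erosion rate ~ {eps_cm_per_yr * 1e4:.1f} m/Myr")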

  17. LDRD LW Project Final Report:Resolving the Earthquake Source Scaling Problem

    SciTech Connect

    Mayeda, K; Felker, S; Gok, R; O'Boyle, J; Walter, W R; Ruppert, S

    2004-02-10

    The scaling behavior of basic earthquake source parameters such as the energy release per unit area of fault slip, quantitatively measured as the apparent stress, is currently in dispute. There are compelling studies that show apparent stress is constant over a wide range of moments (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001, Ide et al. 2003). Other equally compelling studies find the apparent stress increases with moment (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001; Richardson and Jordan, 2002). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities, bandwidth and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. As one part of our LDRD project we convened a one-day workshop on July 24, 2003 in Livermore to review the current state of knowledge on this topic and discuss possible methods of resolution with many of the world's foremost experts.
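
    The disputed quantity is simple to define: apparent stress is the radiated energy per unit seismic moment, scaled by rigidity, sigma_a = mu·Er/M0, so constant apparent stress is equivalent to Er growing linearly with M0. A minimal numerical example with assumed values:

        mu = 3.0e10        # crustal rigidity, Pa
        Er = 1.0e13        # radiated seismic energy, J (assumed)
        M0 = 1.0e18        # seismic moment, N·m (assumed, roughly Mw 5.9)
        sigma_a = mu * Er / M0
        print(f"apparent stress: {sigma_a / 1e6:.2f} MPa")   # 0.30 MPa for these numbers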

  18. The Unified Scaling Law for Earthquakes in the Friuli Venezia Giulia Region

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Peresan, Antonella; Magrin, Andrea; Kossobokov, Vladimir

    2016-04-01

    The parameters of the Unified Scaling Law for Earthquakes (USLE) in the northeastern part of Italy, namely in the Friuli Venezia Giulia Region (FVG) and its surroundings, have been studied. For this purpose, the updated and revised bulletins compiled at the National Institute of Oceanography and Experimental Geophysics, Centre of Seismological Research (OGS catalogue) have been used. In particular, we considered all magnitude 2.0 or larger earthquakes, which occurred in the time span 1994-2013 and within the territory of homogeneous completeness identified for the OGS data. The USLE parameters A, B and C have been evaluated at each of about 300 seismically active cells of 1/16°×1/16° size. The parameter A corresponds to the logarithmic estimate of seismic activity at magnitude 3.5, normalized to the unit area of 1°×1° and to the unit time of one year. The obtained values of the parameter A range between -0.9 and 0.2; these values correspond to an average occurrence rate for magnitude 3.5 earthquakes that varies from one event in 8 years to one event every 7.5 months. The values of the coefficient of magnitude balance, parameter B, concentrate in the interval from just above 0.5 to 1.0. The fractal dimension of the earthquake epicenter locus, parameter C, spreads from 0.6 to 1.3. The obtained values of A, B, and C have been used to characterize the seismic hazard and risk for the territory under investigation, based on estimates of N(M) at each of the analysed cells. In fact, it has been shown that long-term estimates of the USLE coefficients permit the definition of seismic hazard maps in rather traditional terms of maximum expected magnitude, macroseismic intensity or other ground shaking parameters that can be derived from the computed magnitudes. Accordingly, preliminary estimates of the seismic hazard for the FVG region have been computed, at the level of 10% exceedance in 50 years, from the corresponding magnitude assessment based on the USLE.

  19. Seismic hazard and risks based on the Unified Scaling Law for Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir; Nekrasova, Anastasia

    2014-05-01

    Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers and the public, of the three components of Risk, i.e., Hazard, Exposure, and Vulnerability. Contemporary Science is responsible for not coping with the challenging changes of Exposures and their Vulnerability inflicted by growing population, its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists owe Society for this lack of knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. Any kind of risk estimate R(g) at location g results from a convolution of the natural hazard H(g) with the exposed object under consideration O(g) along with its vulnerability V(O(g)). Note that g could be a point, a line, or a cell on or under the Earth's surface, and that the distribution of hazards, as well as objects of concern and their vulnerability, could be time-dependent. There exist many different risk estimates even if the same object of risk and the same hazard are involved. This may result from different laws of convolution, as well as from different kinds of vulnerability of an object of risk under specific environments and conditions. Both conceptual issues must be resolved in multidisciplinary, problem-oriented research performed by specialists in the fields of hazard, objects of risk, and object vulnerability, i.e. specialists in earthquake engineering, social sciences and economics. To illustrate this general concept, we first construct seismic hazard assessment maps based on the Unified Scaling Law for Earthquakes (USLE). The parameters A, B, and C of USLE, i.e. log N(M,L) = A - B•(M-6) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within an area of linear size L, are used to estimate the expected maximum

  20. Simulation of Large-Scale HPC Architectures

    SciTech Connect

    Jones, Ian S; Engelmann, Christian

    2011-01-01

    The Extreme-scale Simulator (xSim) is a recently developed performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads. It allows observing parallel application performance properties in a simulated extreme-scale HPC system to further assist in HPC hardware and application software co-design on the road toward multi-petascale and exascale computing. This paper presents a newly implemented network model for the xSim performance investigation toolkit that is capable of providing simulation support for a variety of HPC network architectures with an appropriate trade-off between simulation scalability and accuracy. The approach taken focuses on a scalable distributed solution with latency and bandwidth restrictions for the simulated network. Different network architectures, such as star, ring, mesh, torus, twisted torus and tree, as well as hierarchical combinations, such as those used to simulate network-on-chip and network-on-node designs, are supported. Network traffic congestion modeling is omitted to gain simulation scalability by reducing simulation accuracy.

  1. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions, with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use. PMID:24479776
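
    One common reduction behind linear rankSVM can be sketched with scikit-learn: turn each preference pair (x_i preferred over x_j) into a difference vector x_i - x_j labeled +1 or -1 and fit a linear SVM. This reproduces the pairwise objective, although the specialized solvers studied in the letter scale far better when the number of pairs is large; the data here are synthetic:

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        scores = X @ rng.normal(size=10)          # latent relevance scores

        pairs, labels = [], []
        for _ in range(1000):                     # sample preference pairs
            i, j = rng.integers(0, len(X), size=2)
            if scores[i] == scores[j]:
                continue
            pairs.append(X[i] - X[j])
            labels.append(1 if scores[i] > scores[j] else -1)

        model = LinearSVC(C=1.0).fit(np.array(pairs), np.array(labels))
        w = model.coef_.ravel()                   # the learned ranking weight vector
        print("pairwise accuracy:", model.score(np.array(pairs), np.array(labels)))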

  2. Evidence for large prehistoric earthquakes in the northern New Madrid Seismic Zone, central United States

    USGS Publications Warehouse

    Li, Y.; Schweig, E.S.; Tuttle, M.P.; Ellis, M.A.

    1998-01-01

    We surveyed the area north of New Madrid, Missouri, for prehistoric liquefaction deposits and uncovered two new sites with evidence of pre-1811 earthquakes. At one site, located about 20 km northeast of New Madrid, Missouri, radiocarbon dating indicates that an upper sand blow was probably deposited after A.D. 1510 and a lower sand blow was deposited prior to A.D. 1040. A sand blow at another site about 45 km northeast of New Madrid, Missouri, is dated as likely being deposited between A.D. 55 and A.D. 1620 and represents the northernmost recognized expression of prehistoric liquefaction likely related to the New Madrid seismic zone. This study, taken together with other data, supports the occurrence of at least two earthquakes strong enough to induce liquefaction or faulting before A.D. 1811 and after A.D. 400. One earthquake probably occurred around A.D. 900 and a second earthquake occurred around A.D. 1350. The data are not yet sufficient to estimate the magnitudes of the causative earthquakes for these liquefaction deposits, although we conclude that all of the earthquakes are at least moment magnitude M ~6.8, the size of the 1895 Charleston, Missouri, earthquake. A more rigorous estimate of the number and sizes of prehistoric earthquakes in the New Madrid seismic zone awaits evaluation of additional sites.

  3. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on learnings from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large volume turbidites. The frequency distribution form of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to proximity and availability of sediment. While mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 years ago. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment

  4. Large scale properties of the Webgraph

    NASA Astrophysics Data System (ADS)

    Donato, D.; Laura, L.; Leonardi, S.; Millozzi, S.

    2004-03-01

    In this paper we present an experimental study of the properties of web graphs. We study a large crawl from 2001 of 200M pages and about 1.4 billion edges made available by the WebBase project at Stanford. We report our experimental findings on the topological properties of such graphs, such as the number of bipartite cores and the distribution of degree, PageRank values and strongly connected components.
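
    Measurements of this kind are easy to reproduce at toy scale; the sketch below builds a small power-law directed graph with networkx as a stand-in for the crawl and reads off an in-degree count and the largest strongly connected component:

        import collections
        import networkx as nx

        G = nx.DiGraph(nx.scale_free_graph(10000, seed=42))   # collapse parallel edges

        indeg = collections.Counter(d for _, d in G.in_degree())
        largest_scc = max(nx.strongly_connected_components(G), key=len)
        print("nodes with in-degree 1:", indeg[1])
        print("largest SCC size:", len(largest_scc))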

  5. Infrasonic observations of large scale HE events

    SciTech Connect

    Whitaker, R.W.; Mutschlecner, J.P.; Davidson, M.B.; Noel, S.D.

    1990-01-01

    The Los Alamos Infrasound Program has been operating since about mid-1982, making routine measurements of low-frequency atmospheric acoustic propagation. Generally, we work between 0.1 Hz and 10 Hz; however, much of our work is concerned with the narrower range of 0.5 to 5.0 Hz. Two permanent stations, St. George, UT, and Los Alamos, NM, have been operational since 1983, collecting data 24 hours a day. This discussion concentrates on measurements of large, high explosive (HE) events at ranges of 250 km to 5330 km. Because the equipment is well suited for mobile deployments, it can easily establish temporary observing sites for special events. The measurements in this report are from our permanent sites, as well as from various temporary sites. This short report will not give detailed data from all sites for all events, but rather presents a few observations that are typical of the full data set. The Defense Nuclear Agency (DNA) sponsors these large explosive tests as part of its program to study airblast effects. A wide variety of experiments are fielded near the explosive by numerous Department of Defense (DOD) services and agencies. Our measurement program is independent of that work; we use these tests as energetic known sources, which can be measured at large distances. Ammonium nitrate and fuel oil (ANFO) is the specific explosive used by DNA in these tests. 6 refs., 6 figs.
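
    The working band quoted above is a routine band-pass; a sketch of applying it to a trace (the sample rate and data are synthetic stand-ins for the station recordings):

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 50.0                                             # assumed sample rate, Hz
        trace = np.random.default_rng(3).normal(size=int(120 * fs))   # 2 min of noise

        b, a = butter(4, [0.5, 5.0], btype="band", fs=fs)     # the 0.5-5.0 Hz band
        filtered = filtfilt(b, a, trace)
        print(f"{filtered.std():.3f}")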

  6. Vulnerability of Eastern Caribbean Islands Economies to Large Earthquakes: The Trinidad and Tobago Case Study

    NASA Astrophysics Data System (ADS)

    Lynch, L.

    2015-12-01

    The economies of most of the Anglophone Eastern Caribbean islands have tripled to quadrupled in size since independence from England. There has also been commensurate growth in human and physical development, as indicated by macro-economic indices such as the Human Development Index and Fixed Capital Formation. A significant proportion of the accumulated wealth is invested in buildings and infrastructure which are highly susceptible to strong ground motion, since the region is located along an active plate boundary. In the case of Trinidad and Tobago, Fixed Capital Formation accumulated since 1980 amounts to almost US$200 billion. Recent studies have indicated that this twin-island state is at significant risk from several seismic sources, both on land and offshore. To effectively mitigate the risk it is necessary to prescribe long-term measures such as the development and implementation of building codes and standards, structural retrofitting, land use planning, preparedness planning and risk transfer mechanisms. The record has shown that Trinidad and Tobago has been slow in prescribing such measures, which has consequently compounded its vulnerability to large earthquakes. This assessment reveals that the losses from a large (magnitude 7+) event on land or an extreme (magnitude 8+) event could reach US$28 billion, and that current risk transfer measures will only cater for less than ten percent of such losses.

  7. Rare, large earthquakes at the laramide deformation front - Colorado (1882) and Wyoming (1984)

    USGS Publications Warehouse

    Spence, W.; Langer, C.J.; Choy, G.L.

    1996-01-01

    The largest historical earthquake known in Colorado occurred on 7 November 1882. Knowledge of its size, location, and specific tectonic environment is important for the design of critical structures in the rapidly growing region of the Southern Rocky Mountains. More than one century later, on 18 October 1984, an mb 5.3 earthquake occurred in the Laramie Mountains, Wyoming. By studying the 1984 earthquake, we are able to provide constraints on the location and size of the 1882 earthquake. Analysis of broadband seismic data shows the 1984 mainshock to have nucleated at a depth of 27.5 ± 1.0 km and to have ruptured ∼2.7 km updip, with a corresponding average displacement of about 48 cm and average stress drop of about 180 bars. This high stress drop may explain why the earthquake was felt over an area about 3.5 times that expected for a shallow earthquake of the same magnitude in this region. A microearthquake survey shows aftershocks to be just above the mainshock's rupture, mostly in a volume measuring 3 to 4 km across. Focal mechanisms for the mainshock and aftershocks have NE-SW-trending T axes, a feature shared by most earthquakes in western Colorado and by the induced Denver earthquakes of 1967. The only data for the 1882 earthquake were intensity reports from a heterogeneously distributed population. Interpretation of these reports also might be affected by ground-motion amplification from fluvial deposits and possible significant focal depth for the mainshock. The primary aftershock of the 1882 earthquake was felt most strongly in the northern Front Range, leading Kirkham and Rogers (1985) to locate the epicenters of the aftershock and mainshock there. The Front Range is a geomorphic extension of the Laramie Mountains. Both features are part of the eastern deformation front of the Laramide orogeny. Based on knowledge of regional tectonics and using intensity maps for the 1984 and the 1967 Denver earthquakes, we reinterpret prior intensity maps for the 1882
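
    A rough consistency check of the quoted source parameters, assuming a roughly equidimensional rupture patch and a standard rigidity (both assumptions, not values from the study):

        import math

        mu = 3.0e10                      # rigidity, Pa (assumed)
        A = 2.7e3 * 2.7e3                # rupture area, m^2 (assumed square patch)
        D = 0.48                         # average displacement, m
        M0 = mu * A * D                  # ~1.0e17 N·m
        a = math.sqrt(A / math.pi)       # equivalent circular-crack radius, m
        dsigma = (7.0 / 16.0) * M0 / a ** 3   # circular-crack stress drop
        print(f"Mw ~ {(2.0 / 3.0) * (math.log10(M0) - 9.1):.1f}")   # ~5.3, matching mb 5.3
        print(f"stress drop ~ {dsigma / 1e5:.0f} bars")             # ~130, same order as ~180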

  8. Source processes at the Chilean subduction region: a comparative analysis of recent large earthquakes seismic sequences in Chile

    NASA Astrophysics Data System (ADS)

    Cesca, Simone; Tolga Sen, Ali; Dahm, Torsten

    2016-04-01

    Large interplate megathrust events are common at the western margin of the South American plate, and have repeatedly affected the slab segment along Chile, driven by the subduction of the oceanic Nazca plate with a convergence of almost 7 cm/yr. The size and rate of seismicity, including the 1960 Mw 9.5 Chile earthquake, place Chile among the most highly seismogenic regions worldwide. At the same time, thanks to significant national and international efforts in recent years, Chile is nowadays seismologically well equipped and monitored; the dense seismological network provides a valuable dataset for analysing details of the rupture processes, not only for the main events but also for the weaker seismicity preceding, accompanying and following the largest earthquakes. The seismic sequences accompanying recent large earthquakes showed several differences. In some cases, as for the 2014 Iquique earthquake, important precursory activity took place in the months preceding the main shock, with an accelerating pattern in the last days before the main shock. In other cases, as for the recent Illapel earthquake, the main shock occurred with few precursors. The 2010 Maule earthquake showed yet another pattern, with the activation of secondary faults after the main shock. Recent studies were able to resolve significant changes in specific source parameters, such as changes in the distribution of focal mechanisms, potentially revealing a rotation of the stress tensor, or a spatial variation of rupture velocity, supporting a depth dependence of the rupture speed. An advanced inversion of seismic source parameters and their combined interpretation for multiple sequences can help to understand the diversity of rupture processes along the Chilean slab, and in general in subduction environments. Here we combine results of different recent studies to investigate similarities and anomalies of rupture parameters across different seismic sequences and foreshock-aftershock activities

  9. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    USGS Publications Warehouse

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.
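
    A rough check that a ∼50-km locked patch is consistent with an Mw∼6.8 event, using M0 = mu·L·W·D and the standard moment-magnitude relation; the width and accumulated slip are assumed round numbers, not values from the study:

        import math

        mu = 3.0e10            # rigidity, Pa
        L, W = 50e3, 10e3      # locked patch length and width, m (width assumed)
        D = 1.2                # slip in the next event, m (assumed)
        M0 = mu * L * W * D
        print(f"Mw ~ {(2.0 / 3.0) * (math.log10(M0) - 9.1):.1f}")   # ~6.8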

  10. Ground Penetrating Radar imaging of two large sand blow craters related to the 2001 Bhuj earthquake, Kachchh, Western India

    NASA Astrophysics Data System (ADS)

    Maurya, D. M.; Goyal, B.; Patidar, A. K.; Mulchandani, N.; Thakkar, M. G.; Chamyal, L. S.

    2006-10-01

    The 2001 Bhuj earthquake (Mw 7.7) formed several medium to large sand blow craters due to extensive liquefaction of the sediments comprising the Banni plain and the Great Rann of Kachchh. We investigated two large, closely spaced sand blow craters of different morphologies using Ground Penetrating Radar (GPR), with a view to understanding the subsurface deformation and identifying the vents and the source of the vented sediments. The study comprises velocity surveys and GPR surveys using 200 MHz antennae along three selected transects, supplemented by data from two excavated trenches. The GPR provided good data on stratigraphy and deformation up to a depth of 6.5 m with good resolution, and successfully imaged the subsurface characteristics of the craters based on the contrasting lithologies of the host sediments and the sediments emplaced in the craters. The GPR also detected three vertical vents of ~1 m width continuing throughout the profile, which are reflected as high-amplitude vertical events. We conclude that the large sand blows during the 2001 Bhuj earthquake were produced by liquefaction of sediments at more than 6.5 m depth, with the clay-rich sediments of the Banni plain behaving as a fine-grained cap above them. The present study provides a modern analogue for comparing the liquefaction features of past great earthquakes (for example, the 1819 earthquake) that have occurred in the Kachchh region, to understand the phenomenon of liquefaction.
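
    The depth axis of such profiles follows from a two-way travel-time conversion, depth = v·t/2, with v calibrated by the velocity surveys; the velocity below is an assumed value for moist fine-grained sediments:

        v = 0.065          # radar wave speed, m/ns (assumed)
        t_twt = 200.0      # two-way travel time, ns
        print(f"depth ~ {v * t_twt / 2.0:.1f} m")   # ~6.5 m, the imaging depth quoted above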

  11. Large scale surface heat fluxes. [through oceans

    NASA Technical Reports Server (NTRS)

    Sarachik, E. S.

    1984-01-01

    The heat flux through the ocean surface, Q, is the sum of the net radiation at the surface, the latent heat flux into the atmosphere, and the sensible heat flux into the atmosphere (all fluxes positive upwards). A review is presented of the geographical distribution of Q and its constituents, and the current accuracy of measuring Q by ground-based measurements (both directly and by 'bulk formulae') is assessed. The relation of Q to changes of oceanic heat content, heat flux, and sea surface temperature (SST) is examined, and for each of these processes the accuracy needed for Q is discussed. The needed accuracy for Q varies from process to process, varies geographically, and varies with the time and space scales considered.
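
    The 'bulk formulae' mentioned above estimate the turbulent fluxes from mean quantities, e.g. the latent heat flux LH = rho·Le·Ce·U·(qs - qa). The coefficient and state values below are typical illustrative numbers, not data from the review:

        rho = 1.2                 # air density, kg/m^3
        Le = 2.5e6                # latent heat of vaporization, J/kg
        Ce = 1.3e-3               # bulk transfer coefficient (assumed)
        U = 8.0                   # 10-m wind speed, m/s
        qs, qa = 0.020, 0.015     # sea-surface and air specific humidity, kg/kg
        print(f"latent heat flux ~ {rho * Le * Ce * U * (qs - qa):.0f} W/m^2")   # ~156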

  12. Scaling relationship between corner frequencies and seismic moments of ultra micro earthquakes estimated with coda-wave spectral ratio -the Mponeng mine in South Africa

    NASA Astrophysics Data System (ADS)

    Wada, N.; Kawakata, H.; Murakami, O.; Doi, I.; Yoshimitsu, N.; Nakatani, M.; Yabe, Y.; Naoi, M. M.; Miyakawa, K.; Miyake, H.; Ide, S.; Igarashi, T.; Morema, G.; Pinder, E.; Ogasawara, H.

    2011-12-01

    The scaling relationship between corner frequency, fc, and seismic moment, Mo, is an important clue for understanding seismic source characteristics. Aki (1967) showed that Mo is proportional to fc^-3 for large earthquakes (the cubic law). Iio (1986) claimed a breakdown of the cubic law between fc and Mo for smaller earthquakes (Mw < 2), and Gibowicz et al. (1991) also showed the breakdown for ultra-micro and small earthquakes (Mw < -2). However, it has been reported that the cubic law holds even for micro earthquakes (-1 < Mw < 4) when high-quality data observed in a deep borehole are used (Abercrombie, 1995; Ogasawara et al., 2001; Hiramatsu et al., 2002; Yamada et al., 2007). In order to clarify the scaling relationship for smaller earthquakes (Mw < -1), we analyzed ultra-micro earthquakes using very high sampling-rate records (48 kHz) from borehole seismometers installed in hard rock at the Mponeng mine in South Africa. We used four three-component accelerometers that have a flat response up to 25 kHz. They were installed 10 to 30 meters apart from each other at 3,300 meters depth. During the period from 2008/10/14 to 2008/10/30 (17 days), 8,927 events were recorded. We estimated fc and Mo for 60 events (-3 < Mw < -1) within 200 meters of the seismometers. Assuming Brune's source model, we estimated fc and Mo from spectral ratios. Common practice is to use direct waves from adjacent events. However, there were only 5 event pairs with an inter-event distance of less than 20 meters and an Mw difference of more than one. In addition, the observation array is very small (radius less than 30 m), which means that the effects of directivity and radiation pattern on direct waves are similar at all stations. Hence, we used spectral ratios of coda waves, since these effects are averaged and effectively reduced in the coda (Mayeda et al., 2007; Somei et al., 2010). Coda analysis was attempted only for the 20 relatively large events (hereafter "coda events") that have coda energy
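
    A minimal sketch of the spectral-ratio estimation step follows, assuming omega-square (Brune) spectra for both events of a pair. The frequencies, corner frequencies and noise are synthetic placeholders, not Mponeng data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ratio of two Brune (omega-square) spectra; fitting it to an observed
# spectral ratio yields the moment ratio and both corner frequencies.
def brune_ratio(f, moment_ratio, fc1, fc2):
    return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

f = np.logspace(1, 4, 200)                      # 10 Hz - 10 kHz
true = brune_ratio(f, 30.0, 300.0, 3000.0)      # hypothetical event pair
rng = np.random.default_rng(0)
obs = true * np.exp(0.05 * rng.standard_normal(f.size))  # noisy "ratio"

popt, _ = curve_fit(brune_ratio, f, obs, p0=(10.0, 100.0, 1000.0))
print("moment ratio, fc1, fc2 =", popt)
```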

  13. Change in paleo-stress state before and after large earthquake, in the Chelung-pu fault, Taiwan

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Kota, T.; Yeh, E. C.; Lin, W.

    2014-12-01

    The stress state close to a seismogenic fault is a key parameter for understanding earthquake mechanics. Changes in stress state after large earthquakes were documented recently for the 1999 Chi-Chi earthquake, Taiwan, and the 2011 Tohoku-Oki earthquake, Northeast Japan. If such temporal changes are common in the past and in the future, changes in paleostress related to large earthquakes should be recoverable from micro-faults preserved in outcrops or drilled cores. In this study, we show a change in paleostress inferred from micro-fault slip data observed around the Chelung-pu fault in the Taiwan Chelung-pu fault Drilling Project (TCDP), which is possibly associated with the stress drop of large earthquakes along the Chelung-pu fault. Combining the obtained stress orientations, stress ratios and stress polygons, we estimated the stress magnitude for each stress state and the difference in stress magnitude between the obtained stresses. For the stress inversion analysis, the multiple inverse method (MIM, Yamaji et al., 2000) was carried out. To estimate the centers of clusters automatically, K-means clustering (Otsubo et al., 2006) was conducted on the result of the MIM. As a result, four stress states were estimated, named C1, C2, C3 and C4 in ascending order of stress ratio (Φ), where the stress ratio is defined as (σ1-σ2) / (σ1-σ3). To constrain the stress magnitudes, stress polygons were employed in combination with the inverted stress states. The principal stress vectors for the four stress states (C1-C4) were projected onto the SHmax or Shmin and vertical stress directions; SHmax is larger than Shmin by definition. The stress ratio was estimated by the inversion method. Combining those conditions, a linear function in SHmax-Shmin space with respect to Sv is obtained from the inverted stress states. We obtained two groups of stress state from the slip data in the TCDP core. One stress state has WNW-ESE horizontal σ1 and larger stress magnitudes, including a reverse-fault regime. Another stress state
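
    Two of the quantities above can be illustrated compactly: the stress ratio Φ and the shear traction that a given stress tensor resolves on a micro-fault plane (the Wallace-Bott slip direction underlying fault-slip inversion). The tensor and plane below are hypothetical.

```python
import numpy as np

# Hypothetical reverse-faulting stress state in principal coordinates
sigma = np.diag([100.0, 60.0, 40.0])   # s1 >= s2 >= s3, MPa
s1, s2, s3 = np.diag(sigma)
phi = (s1 - s2) / (s1 - s3)            # stress ratio, as defined above
print(f"stress ratio PHI = {phi:.2f}")

# Resolve the traction on a fault plane with unit normal n
n = np.array([0.0, np.sin(np.deg2rad(30)), np.cos(np.deg2rad(30))])
t = sigma @ n                          # traction vector on the plane
t_n = (t @ n) * n                      # normal component
t_s = t - t_n                          # shear component -> predicted slip direction
print("shear traction magnitude (MPa):", np.linalg.norm(t_s))
```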

  14. Evaluating the role of large earthquakes on aquifer dynamics using data fusion and knowledge discovery techniques

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Cox, Simon; Williams, Charles; Holden, Caroline

    2016-04-01

    Artificial adaptive systems are evaluated for their usefulness in modeling the earthquake hydrology of the Canterbury region, NZ. For example, an unsupervised machine-learning technique, the self-organizing map, is used to fuse about 200 disparate and sparse data variables (such as well pressure response, ground acceleration, intensity, shaking, stress and strain, and aquifer and well characteristics) associated with the M7.1 Darfield earthquake in 2010 and the M6.3 Christchurch earthquake in 2011. The strength of correlations, determined using cross-component plots, varied between earthquakes, with pressure changes more strongly related to dynamic- than static-stress-related variables during the M7.1 earthquake, and vice versa during the M6.3. The method highlights the importance of the data distribution and shows that the driving mechanisms of earthquake-induced pressure change in the aquifers are not straightforward to interpret. In many cases, data mining revealed that confusion and reduced correlations are associated with multiple trends in the same plot: one for confined and one for unconfined earthquake response. The auto-contractive map and minimum spanning tree techniques are used for grouping variables of similar influence on earthquake hydrology. K-means clustering of neural information identified 5 primary regions influenced by the two earthquakes. Genetic doping applied to a genetic algorithm is used to identify optimal subsets of variables for predicting well pressures. Predictions of well-pressure changes are compared and contrasted using machine-learning network and symbolic regression models, with prediction uncertainty quantified using a leave-one-out cross-validation strategy. These preliminary results provide impetus for subsequent analysis with information from another 100 earthquakes that occurred across the South Island.
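
    A toy self-organizing map of the kind referred to above can be written in a few lines. The numpy sketch below is an illustrative implementation, not the authors' code, and the input matrix is a random stand-in for the ~200 fused variables.

```python
import numpy as np

# Minimal self-organizing map (SOM): competitive learning on a 2-D node grid.
def train_som(X, grid=(8, 8), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    n, d = X.shape
    W = rng.standard_normal((h, w, d))                 # codebook vectors
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(iters):
        x = X[rng.integers(n)]
        # best-matching unit (BMU): node whose weights are closest to x
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (h, w))
        lr = lr0 * np.exp(-t / iters)                  # decaying learning rate
        sig = sigma0 * np.exp(-t / iters)              # shrinking neighborhood
        d2 = ((coords - np.array(bmu, float)) ** 2).sum(-1)
        W += lr * np.exp(-d2 / (2 * sig ** 2))[..., None] * (x - W)
    return W

# usage: e.g. 150 wells x 200 standardized variables (synthetic stand-in)
X = np.random.default_rng(1).standard_normal((150, 200))
codebook = train_som(X)
```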

  15. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily in canonical flows - zero-pressure-gradient boundary layers and flows within pipes and channels. This work presents an investigation of the large-scale motions in a boundary layer that serves as the prototypical flow field for flows with large-scale mixing and reactions: the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.
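
    The small-/large-scale decomposition described above is typically a spectral cutoff at the integral length scale. The sketch below applies such a filter to a synthetic signal, using Taylor's frozen-turbulence hypothesis with an assumed convection velocity to convert δ into a cutoff frequency; all parameters are invented.

```python
import numpy as np

# Spectral cutoff decomposition of a streamwise velocity signal
rng = np.random.default_rng(0)
fs, n = 10_000.0, 2 ** 16           # sampling rate (Hz), samples
u = rng.standard_normal(n)           # stand-in for a hot-wire velocity record
Uc, delta = 10.0, 0.05               # assumed convection velocity (m/s), delta (m)
f_cut = Uc / delta                   # frequency of eddies of size delta (Taylor)

U = np.fft.rfft(u)
f = np.fft.rfftfreq(n, d=1.0 / fs)
u_large = np.fft.irfft(np.where(f < f_cut, U, 0), n)   # scales larger than delta
u_small = u - u_large                                   # scales smaller than delta
print("large-scale variance:", u_large.var(), "small-scale:", u_small.var())
```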

  16. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  17. Uplifted marine terraces in Davao Oriental Province, Mindanao Island, Philippines and their implications for large prehistoric offshore earthquakes along the Philippine trench

    NASA Astrophysics Data System (ADS)

    Ramos, Noelynna T.; Tsutsumi, Hiroyuki; Perez, Jeffrey S.; Bermas, Percival P.

    2012-02-01

    We conducted systematic mapping of Holocene marine terraces in eastern Mindanao Island, Philippines for the first time. Raised marine platforms along the 80-km-long coastline of eastern Davao Oriental Province are geomorphic evidence of tectonic deformation resulting from the westward subduction of the Philippine Sea plate along the Philippine trench. The Holocene coral platforms consist of up to four terrace steps, from lowest to highest: T1: 1-5 m, T2: 3-6 m, T3: 6-10 m, and T4: 8-12 m amsl. The terraces are subhorizontal, exposing cemented coral shingle and eroded coral heads, while the terrace risers are 1-3 m high. Radiocarbon ages of 8080-4140 cal yr BP reveal that the erosional surfaces were carved into the Holocene transgressive reef complex, which grew upward until ~8000 years ago. The maximum uplift rate is ~1.5 mm/yr based on the highest Holocene terrace at <11.4 m amsl. The staircase topography and meter-scale terrace risers imply that at least four large earthquakes have uplifted the coast in the past ~8000 years. The deformation pattern of the terraces further suggests that the seismic sources are probably located offshore. However, historical earthquakes as large as Mw 7.5 along the Philippine trench were not large enough to produce meter-scale coastal uplift, suggesting that much larger earthquakes occurred in the past. A long-term tectonic uplift rate of ~1.3 mm/yr was also estimated from the Late Pleistocene terraces.
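
    A worked example of the uplift-rate arithmetic, using the abstract's own numbers (highest terrace elevation and oldest radiocarbon age):

```latex
\[
  \dot{u} \approx \frac{\text{terrace elevation}}{\text{terrace age}}
          = \frac{11.4\ \mathrm{m}}{8080\ \mathrm{yr}}
          \approx 1.4\ \mathrm{mm\,yr^{-1}}
\]
```

    This is consistent in magnitude with the reported maximum rate of ~1.5 mm/yr; since the terrace elevation is an upper bound (<11.4 m), the computed rate is an upper bound as well.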

  18. Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben

    NASA Astrophysics Data System (ADS)

    Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.

    2016-05-01

    Earthquake absolute location errors which can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can affect field development and have economic consequences. The approach, which uses state-of-the-art techniques, covers both location uncertainty and location inaccuracy (or bias). It consists, first, in creating a 3D synthetic cloud of seismic events in the reservoir and calculating the seismic travel times to a monitoring network assuming certain propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by seismic onset-time picking uncertainties and inaccuracies are quantified in 3D. Effects induced by erroneous assumptions about the velocity model are also modelled. In particular, 1D velocity model uncertainties, a local 3D perturbation of the velocity and a 3D geo-structural model are considered. The approach is applied to the site of Rittershoffen (Alsace, France), one of the deep geothermal fields in the Upper Rhine Graben. This example allows setting realistic scenarios based on knowledge of the site. In that case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well log data provided a reference 1D velocity model used for the synthetic earthquake relocation. The 3D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a cumulative
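
    A minimal sketch of the synthetic-relocation exercise follows: forward-model travel times to a network, perturb the picks, relocate with a deliberately biased velocity model, and read off the location error. A homogeneous velocity and a known origin time stand in for the study's 1D/3D models; all numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
stations = rng.uniform(-2000, 2000, size=(8, 3))
stations[:, 2] = 0.0                        # surface network, z = 0 (m)
true_hypo = np.array([500.0, -300.0, 3000.0])
v_true, v_wrong = 5000.0, 5200.0            # m/s; relocation uses a biased model

def travel_times(hypo, v):
    return np.linalg.norm(stations - hypo, axis=1) / v

# synthetic picks: true travel times plus 5 ms Gaussian picking noise
# (origin time is assumed known for simplicity; real relocations solve for it)
picks = travel_times(true_hypo, v_true) + rng.normal(0, 5e-3, 8)

def residuals(hypo):
    return travel_times(hypo, v_wrong) - picks

sol = least_squares(residuals, x0=np.array([0.0, 0.0, 2000.0]))
# the biased velocity model yields a systematic error (inaccuracy) on top of
# the pick-noise scatter (uncertainty)
print("location error (m):", sol.x - true_hypo)
```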

  19. Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben

    NASA Astrophysics Data System (ADS)

    Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.

    2016-08-01

    Earthquake absolute location errors which can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can affect field development and have economic consequences. The approach, which uses state-of-the-art techniques, covers both location uncertainty and location inaccuracy (or bias). It consists, first, in creating a 3-D synthetic cloud of seismic events in the reservoir and calculating the seismic traveltimes to a monitoring network assuming certain propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by seismic onset-time picking uncertainties and inaccuracies are quantified in 3-D. Effects induced by erroneous assumptions about the velocity model are also modelled. In particular, 1-D velocity model uncertainties, a local 3-D perturbation of the velocity and a 3-D geostructural model are considered. The approach is applied to the site of Rittershoffen (Alsace, France), one of the deep geothermal fields in the Upper Rhine Graben. This example allows setting realistic scenarios based on knowledge of the site. In that case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well log data provided a reference 1-D velocity model used for the synthetic earthquake relocation. The 3-D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a

  20. Large-scale GW software development

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab

    Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter Equation (GW-BSE) beyond-DFT methods have proved successful in describing excited states in many materials. However, heavy computational loads and large memory requirements have hindered their routine use by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software ``OpenAtom'' which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.
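
    The plasmon-mode eigendecomposition idea can be illustrated on a stand-in matrix: diagonalize a Hermitian "dielectric" matrix and retain only the modes deviating most from unity, yielding a low-rank representation. This is a schematic numpy sketch, not OpenAtom code, and the matrix is a random Hermitian stand-in rather than a real GW quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 40                            # matrix size, number of modes retained
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eps = A @ A.conj().T / n + np.eye(n)      # Hermitian positive-definite stand-in

w, V = np.linalg.eigh(eps)                # full eigendecomposition
idx = np.argsort(np.abs(w - 1.0))[::-1][:k]   # modes deviating most from unity
# low-rank reconstruction: identity plus the k dominant "plasmon" modes
eps_lowrank = np.eye(n) + (V[:, idx] * (w[idx] - 1.0)) @ V[:, idx].conj().T
print("relative error:",
      np.linalg.norm(eps - eps_lowrank) / np.linalg.norm(eps))
```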

  1. Stochastic pattern transitions in large scale swarms

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research

  2. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g., a grout-like material) and disposed of on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (⁹⁹TcO₄⁻) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for ⁹⁹Tc. The slurries were used in melter tests at the Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO₄⁻) to Tc(IV) by reaction with the ferrous

  3. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    NASA Astrophysics Data System (ADS)

    Passarelli, L.; Rivalta, E.; Shuler, A.

    2014-01-01

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process.
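
    For the Gutenberg-Richter analogy above, the standard tool is the maximum-likelihood b-value estimator (Aki, 1965). The sketch below applies it to synthetic magnitudes; applying it to dike intrusions presumes a magnitude-like size measure, which is an assumption here, not the authors' exact procedure.

```python
import numpy as np

# Maximum-likelihood b-value above a completeness threshold mc (Aki, 1965).
# For magnitudes binned at width dm, the usual Utsu correction replaces
# mc by mc - dm/2; continuous magnitudes are used below for simplicity.
def b_value_mle(mags, mc):
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(0)
# synthetic catalogue with true b = 1 (exponential excess above mc = 4.0)
mags = rng.exponential(scale=1 / np.log(10), size=2000) + 4.0
print(f"b = {b_value_mle(mags, mc=4.0):.2f}")   # ~1.0
```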

  4. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    PubMed Central

    L., Passarelli; E., Rivalta; A., Shuler

    2014-01-01

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process. PMID:24469260

  5. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes.

    PubMed

    Passarelli, L; Rivalta, E; Shuler, A

    2014-01-01

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process. PMID:24469260

  6. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Cowlard, Adam J.; Rouvreau, Sebastien; Toth, Balazs; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  7. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Rouvreau, Sebastien; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  8. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  9. Large-Scale Structures of Planetary Systems

    NASA Astrophysics Data System (ADS)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  10. Large-Scale Pattern Discovery in Music

    NASA Astrophysics Data System (ADS)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.

  11. A recovery of scattering environment in the crust after a large earthquake

    NASA Astrophysics Data System (ADS)

    Sugaya, K.; Hiramatsu, Y.; Furumoto, M.; Katao, H.

    2006-12-01

    A large earthquake generates defects such as small faults and cracks and changes the scattering environment in and around its rupture zone through static or dynamic stress changes. The defects are expected to heal with time, and the time constant of this healing is a key parameter for the recurrence of a large earthquake. Coda waves consist mainly of scattered S-waves. The attenuation property of coda waves, coda Q^-1 or Qc^-1, reflects the scattering environment in the crust and is considered to be a good indicator of the stress condition in the crust (Aki, 1985; Hiramatsu et al., 2000). In the Tamba region, northeast of the rupture zone of the 1995 Hyogo-ken Nanbu earthquake (MJMA 7.3), Hiramatsu et al. (2000) reported a coseismic increase in Qc^-1 at frequencies of 1.5-4 Hz and a decrease in b-value due to the static stress change caused by the event. In this study, we investigate the temporal variation in Qc^-1 and seismicity from 1997 to 2000 in the Tamba region, following the period analyzed by Hiramatsu et al. (2000), to check the recovery of Qc^-1 at the lower frequencies and of the seismicity. We estimate Qc^-1 for 10 frequency bands in the range of 1.5-24 Hz based on the single isotropic scattering model (Sato, 1977), analyzing the waveform data of 2812 shallow microearthquakes (M1.5-3) in the region. In order to examine the duration of high Qc^-1, we divide the period after the event (1995-2000), including the data reported by Hiramatsu et al. (2000), into two periods using various time windows. Student's t test confirms a significant decrease in the mean values of Qc^-1 at frequencies of 1.5-4 Hz at 2-4 years after the event. This indicates that the values of Qc^-1 at the lower frequencies returned to pre-event levels within 2-4 years. The mean values of Qc^-1 at 3 and 4 Hz, which showed the largest significant variation (Hiramatsu et al., 2000), returned to pre-event values, in particular, within 2 years. There is no tectonic event that causes a stress change at the
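
    Under the single isotropic scattering model, Qc^-1 is read off from the decay of the band-passed coda envelope: A(t) ~ t^-1 exp(-π f t / Qc), so ln(A t) is linear in lapse time t with slope -π f / Qc. A minimal fitting sketch on a synthetic envelope follows.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 3.0                                   # centre frequency, Hz
t = np.linspace(10.0, 40.0, 300)          # lapse-time window, s
Qc_true = 200.0
A = (1.0 / t) * np.exp(-np.pi * f * t / Qc_true)   # single-scattering envelope
A *= np.exp(0.05 * rng.standard_normal(t.size))    # multiplicative noise

# linear fit: ln(A * t) = intercept - (pi * f / Qc) * t
slope, _ = np.polyfit(t, np.log(A * t), 1)
Qc = -np.pi * f / slope
print(f"Qc = {Qc:.0f}, Qc^-1 = {1 / Qc:.2e}")
```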

  12. On the problem of earthquake correlation in space and time over large distances

    NASA Astrophysics Data System (ADS)

    Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.

    2012-04-01

    A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency for these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996], "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological natural hazards, which: a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, based on an explicit segmentation of the data by both their temporal and spatial stamps and following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, has managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event; in other words, a major seismic event should be treated differently from pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms
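
    One simple way to make time an explicit clustering dimension, as argued above, is density-based clustering on scaled (x, y, t) coordinates. The sketch below uses DBSCAN as an illustrative stand-in for the authors' algorithms; the space-time scaling factor and the synthetic catalogue are purely hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# two synthetic space-time clusters plus scattered background seismicity
c1 = np.column_stack([rng.normal(0, 2, 80), rng.normal(0, 2, 80),
                      rng.normal(50, 5, 80)])
c2 = np.column_stack([rng.normal(20, 2, 80), rng.normal(10, 2, 80),
                      rng.normal(200, 5, 80)])
bg = np.column_stack([rng.uniform(-10, 30, 60), rng.uniform(-10, 20, 60),
                      rng.uniform(0, 365, 60)])
events = np.vstack([c1, c2, bg])          # columns: x (km), y (km), t (days)

km_per_day = 0.5                          # hypothetical space-time scaling
X = events * np.array([1.0, 1.0, km_per_day])
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(X)
print("clusters found:", len(set(labels) - {-1}))   # expect 2
```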

  13. Project Medishare's Historic Haitian Earthquake Response.

    PubMed

    Greig, Elizabeth; Cornely, Cheryl Clark; Green, Barth A

    2015-06-01

    This article describes the immediate large-scale medical and surgical response of Project Medishare to the 2010 Haitian earthquake. It summarizes the rapid evolution of critical care and trauma capacity in a developing nation after the earthquake and discusses the transition from acute trauma treatment to interdisciplinary health care sector building. PMID:26080116

  14. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  15. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    EPA Science Inventory

    The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. "Large-scale" for this workshop means projects where, by human effort, approx...

  16. Using Large-Scale Assessment Scores to Determine Student Grades

    ERIC Educational Resources Information Center

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  17. Demonstration of Mobile Auto-GPS for Large Scale Human Mobility Analysis

    NASA Astrophysics Data System (ADS)

    Horanont, Teerayut; Witayangkurn, Apichon; Shibasaki, Ryosuke

    2013-04-01

    The greater affordability of digital devices and the advancement of positioning and tracking capabilities have ushered in today's age of geospatial Big Data. The emergence of massive mobile location data and the rapid increase in computational capabilities open up new opportunities for modeling large-scale urban dynamics. In this research, we demonstrate a new type of mobile location data called "Auto-GPS" and its potential use cases for urban applications. More than one million Auto-GPS mobile phone users in Japan were observed nationwide, in a completely anonymous form, for an entire year from August 2010 to July 2011 for this analysis. A spate of natural disasters and other emergencies during the past few years has prompted new interest in how mobile location data can help enhance our security, especially in urban areas, which are highly vulnerable to these impacts. New insights gleaned from mining the Auto-GPS data suggest a number of promising directions for modeling human movement during a large-scale crisis. We ask how people react in critical situations and how their movement changes during severe disasters. Our results demonstrate the case of a major earthquake and explain how people who live in the Tokyo metropolitan area and its vicinity behaved and returned home after the Great East Japan Earthquake on March 11, 2011.

  18. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper describes a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user-controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline-based) method. The data-generation toolbox is implemented in the open-source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the testing and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
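
    The actual toolbox is an R package; the following Python sketch merely illustrates the generative idea (per-sensor sensitivity, a simple power-law nonlinearity standing in for the spline model, drift and noise). All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 100, 500
conc = rng.uniform(0.1, 10.0, n_samples)       # analyte concentration series
sens = rng.lognormal(0.0, 0.3, n_sensors)      # per-sensor sensitivity (diversity)
power = rng.uniform(0.7, 1.0, n_sensors)       # nonlinearity exponent per sensor
drift = rng.normal(0, 1e-3, n_sensors)         # per-sensor linear drift rate

t = np.arange(n_samples)[:, None]
R = sens * conc[:, None] ** power              # (samples, sensors) clean response
R *= 1.0 + drift * t                           # multiplicative sensor drift
R += rng.normal(0, 0.05, R.shape)              # additive measurement noise
print(R.shape)                                 # synthetic array data matrix
```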

  19. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  20. Large-scale structural monitoring systems

    NASA Astrophysics Data System (ADS)

    Solomon, Ian; Cunnane, James; Stevenson, Paul

    2000-06-01

    Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault-tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems are used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.

  1. Software for large scale tracking studies

    SciTech Connect

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references.

  2. What caused a large number of fatalities in the Tohoku earthquake?

    NASA Astrophysics Data System (ADS)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the interviewees' evacuation behavior and what they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: the expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first warned tsunami heights were too small compared with the actual ones. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  3. Links between small-scale dynamics and large-scale averages and its implication to large-scale hydrology

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2012-04-01

    Changes to the hydrological cycle under a changing climate challenge our understanding of the interaction between hydrology and climate at various spatial and temporal scales. The traditional understanding of climate-hydrology interaction was developed under a stationary climate and may not adequately describe the interaction in a transient state when the climate is changing; for instance, opposite long-term temporal trends of precipitation and discharge have been observed in parts of the world as a result of significant warming and the nonlinear nature of the coupled climate and hydrology system. The patterns of internal climate variability, ranging from monthly to multi-centennial time scales, largely determine the past and present climate. The response of these patterns of variability to human-induced climate change will determine much of the regional nature of climate change in the future. Therefore, understanding the basic patterns of variability is of vital importance for climate and hydrological modelers. This work showed that at the scale of large river basins or sub-continents, the temporal variation of climatic variables, ranging from daily to inter-annual, could be well represented by multiple sets, each consisting of a limited number of points (when observations are used) or pixels (when gridded datasets are used) covering a small portion of the total domain area. Combined with hydrological response units, which divide the heterogeneity of the land surface into a limited number of categories according to similarity in hydrological behavior, one can describe the climate-hydrology interaction and its changes over a large domain with multiple small subsets of the domain area. These points or pixels represent different patterns of the climate-hydrology interaction and contribute uniquely to the averaged dynamics of the entire domain. Statistical methods were developed to identify the minimum number of points or

  4. Coseismic water-level changes in a well induced by teleseismic waves from three large earthquakes

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Fu, Li-Yun; Huang, Fuqiong; Chen, Xuezhong

    2015-05-01

    Three large earthquakes (the 2007 Mw 8.4 Sumatra, the 2008 Mw 7.9 Wenchuan, and the 2011 Mw 9.0 Tohoku) induced coseismic water-level increases at far-field distances (epicentral distances > 1000 km) in the Fuxin well, located in Fuxin City, northeastern China (a well where both water level and volumetric strain are observed). A comprehensive analysis of the mechanism of far-field coseismic water-level changes is performed using the in-situ permeability, Skempton's coefficient B, and broadband seismograms from a nearby station. We observe undrained compaction with a decreasing permeability induced by the shaking of teleseismic waves in the far field. Shaking by teleseismic waves can induce compaction or dilatation in the aquifer of the Fuxin well and can enhance permeability, thereby building a new pore-pressure equilibrium between the Fuxin well and the nearby Sihe reservoir (150 m away from the well). The resulting interstitial fluid flow across the region increases coseismic water levels in the aquifer of the Fuxin well.

  5. Apparent break in earthquake scaling due to path and site effects on deep borehole recordings

    USGS Publications Warehouse

    Ide, S.; Beroza, G.C.; Prejean, S.G.; Ellsworth, W.L.

    2003-01-01

    We reexamine the scaling of stress drop and apparent stress (rigidity times the ratio of seismically radiated energy to seismic moment) with earthquake size for a set of microearthquakes recorded in a deep borehole in Long Valley, California. In the first set of calculations, we assume a constant Q and solve for the corner frequency and seismic moment. In the second set of calculations, we model the spectral ratio of nearby events to determine the same quantities. We find that the spectral ratio technique, which can account for path and site effects or nonconstant Q, yields higher stress drops, particularly for the smaller events in the data set. The measurements determined from spectral ratios indicate no departure from constant stress drop scaling down to the smallest events in our data set (Mw 0.8). Our results indicate that propagation effects can contaminate measurements of source parameters even in the relatively clean recording environment of a deep borehole, just as they do at the Earth's surface. The scaling of source properties of microearthquakes determined from deep borehole recordings may need to be reevaluated.

  6. The Mini-IPIP Scale: psychometric features and relations with PTSD symptoms of Chinese earthquake survivors.

    PubMed

    Li, Zhongquan; Sang, Zhiqin; Wang, Li; Shi, Zhanbiao

    2012-10-01

    The present purpose was to validate the Mini-IPIP scale, a short measure of the five-factor model personality traits, with a sample of Chinese earthquake survivors. A total of 1,563 participants, ages 16 to 85 years, completed the Mini-IPIP scale and a measure of posttraumatic stress disorder (PTSD) symptoms. Confirmatory factor analysis supported the five-factor structure of the Mini-IPIP, with adequate values of various fit indices. The scale also showed adequate internal consistency: Cronbach's alphas ranged from .79 to .84, and McDonald's omegas ranged from .73 to .82 for scores on each subscale. Moreover, the five personality traits measured by the Mini-IPIP and those assessed by other Big Five measures had comparable patterns of relations with PTSD symptoms. The findings indicate that the Mini-IPIP is an adequate short form of the Big Five factors of personality and is applicable to natural disaster survivors. PMID:23234106
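
    Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A sketch on synthetic item responses follows.

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, k_items) for one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(0, 1, (200, 1))                # one underlying trait
items = latent + rng.normal(0, 1, (200, 4))        # 4 items loading on it
print(f"alpha = {cronbach_alpha(items):.2f}")
```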

  7. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean

  8. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiation, and in particular to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulation (DNS), because it provides precise control over the initial and boundary conditions that lead to the turbulent structures studied, and it provides complete 3-dimensional time-dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and to determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiation to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean velocities

  9. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Few scientific publications exist concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution probability is also derived. The equation is validated by applying it to another area: there, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability could explain > 65% of the existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
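
    The workflow sketched in this abstract (fit a distribution-probability equation from mapped landslides and geofactors by logistic regression, then validate with the area under the receiver operating characteristic curve) can be illustrated as follows. All predictors, coefficients, and data below are synthetic stand-ins, not values from the study:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Hypothetical per-cell geofactors: slope [deg], relief [m], fault distance [km]
        X = rng.uniform([10, 200, 0], [45, 2000, 30], size=(500, 3))
        # Synthetic presence/absence labels standing in for mapped landslides
        logit = 0.08 * X[:, 0] + 0.002 * X[:, 1] - 0.1 * X[:, 2] - 4.0
        y = rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        p = model.predict_proba(X)[:, 1]        # landslide distribution probability
        print("AUC:", round(roc_auc_score(y, p), 3))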

  10. Dynamic scaling and large scale effects in turbulence in compressible stratified fluid

    NASA Astrophysics Data System (ADS)

    Pharasi, Hirdesh K.; Bhattacharjee, Jayanta K.

    2016-01-01

    We consider the propagation of sound in a turbulent fluid which is confined between two horizontal parallel plates, maintained at different temperatures. In the homogeneous fluid, Staroselsky et al. had predicted a divergent sound speed at large length scales. Here we find a divergent sound speed and a vanishing expansion coefficient at large length scales. Dispersion relation and the question of scale invariance at large distance scales lead to these results.

  11. Scale-Dependent Friction and Damage Interface law: implications for effective earthquake rupture dynamics and radiation

    NASA Astrophysics Data System (ADS)

    Festa, Gaetano; Vilotte, Jean-Pierre; Raous, Michel; Henninger, Carole

    2010-05-01

    Propagation and radiation of an earthquake rupture is commonly considered as a friction-dominated process on fault surfaces. Friction laws, such as the slip-weakening and rate-and-state laws, are widely used in modeling the earthquake rupture process. These laws prescribe the traction evolution versus slip, slip rate and potentially other internal variables. They introduce a finite cohesive length scale over which the fracture energy is released. However, faults are finite-width interfaces with complex internal structures, characterized by highly damaged zones embedding a very thin principal slip interface where most of the dynamic slip localizes. Even though the rupture process is generally investigated at wavelengths larger than the fault zone thickness, which should justify a formulation based upon surface energy, a consistent homogenization, a very challenging problem, is still missing. Such a homogenization is, however, required to derive the consistent form of an effective interface law, as well as the appropriate physical variables and length scales, to correctly describe the coarse-grained dissipation resulting from surface and volumetric contributions at the scale of the fault zone. In this study, we investigate a scale-dependent law, introduced by Raous et al. (1999) in the context of adhesive material interfaces, that takes into account the transition between a damage-dominated and a friction-dominated state. Such a phase-field formalism describes this transition through an order parameter. We first compare this law to the standard slip-weakening friction law in terms of rupture nucleation. The problem is analyzed through the representation of the solution of the quasi-static elastic problem on a Chebyshev polynomial basis, generalizing the Uenishi-Rice solution. The nucleation solutions, at the onset of instability, are then introduced as initial conditions for the study of the dynamic rupture propagation, in the case of in-plane rupture
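
    For contrast with the scale-dependent law investigated here, the standard linear slip-weakening law mentioned above can be written down directly: traction decays linearly from a static to a dynamic level over a critical slip distance Dc, which sets the cohesive length scale and the fracture energy. A minimal sketch with illustrative parameter values:

        import numpy as np

        def slip_weakening_traction(slip, tau_s=10e6, tau_d=6e6, d_c=0.4):
            """Linear slip-weakening: traction [Pa] drops from the static level
            tau_s to the dynamic level tau_d over the critical slip d_c [m]."""
            slip = np.asarray(slip, dtype=float)
            return np.where(slip < d_c, tau_s - (tau_s - tau_d) * slip / d_c, tau_d)

        # Fracture energy released over the cohesive zone: G_c = (tau_s - tau_d) * d_c / 2
        g_c = 0.5 * (10e6 - 6e6) * 0.4
        print(slip_weakening_traction([0.0, 0.2, 1.0]), g_c)   # tractions [Pa], G_c [J/m^2]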

  12. Source Parameters Inversion for Recent Large Undersea Earthquakes from GRACE Data

    NASA Astrophysics Data System (ADS)

    Dai, Chunli

    The north component of gravity and gravity gradient changes from the Gravity Recovery And Climate Experiment (GRACE) are used to study the coseismic gravity change for five earthquakes over the last decade: the 2004 Sumatra-Andaman earthquake, the 2007 Bengkulu earthquake, the 2010 Maule, Chile earthquake, the 2011 Tohoku earthquake, and the 2012 Indian Ocean earthquakes. We demonstrate the advantage of these north components to reduce north-south stripes and preserve higher spatial resolution signal in GRACE Level 2 (L2) monthly Stokes Coefficients data products. By using the high spherical harmonic degree (up to degree 96) data products and the innovative GRACE data processing approach developed in this study, the retrieved gravity change is up to -34 ± 1.4 μGal for the 2004 Sumatra and 2005 Nias earthquakes, which is by far the highest coseismic signal retrieved among published studies. Our study reveals the detectability of earthquakes as small as Mw 8.5 (i.e., the 2007 Bengkulu earthquake) from GRACE data. The localized spectral analysis is applied as an efficient method to determine the practical spherical harmonic truncation degree leading to acceptable signal-to-noise ratio, and to evaluate the noise level for each component of gravity and gravity gradient change of the seismic deformations. By establishing the linear algorithm of gravity and gravity gradient change with respect to the double-couple moment tensor, the point source parameters are estimated through the least squares adjustment combined with the simulated annealing algorithm. The GRACE-inverted source parameters generally agree well with the slip models estimated using other data sets, including seismic, GPS, or combined data. For the 2004 Sumatra-Andaman and 2005 Nias earthquakes, GRACE data produce a shallower centroid depth (9.1 km) compared to the depth (28.3 km) from GPS data, which may be explained by the closer-to-trench centroid location and by the aseismic slip over the shallow
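
    The estimation strategy described above (a least squares misfit explored with simulated annealing) can be caricatured in a few lines. The forward model below is a made-up kernel standing in for the gravity-change Green's functions, and the parameter set is reduced to a centroid depth and a relative moment, so this is a sketch of the optimization pattern only:

        import numpy as np
        from scipy.optimize import dual_annealing

        def forward(depth_km, m0, x_km):
            # Hypothetical gravity-change kernel (not the study's Green's functions)
            return m0 * depth_km / (x_km ** 2 + depth_km ** 2) ** 1.5

        x = np.linspace(50.0, 500.0, 40)    # observation distances [km]
        obs = forward(9.1, 1.0, x) + np.random.default_rng(1).normal(0.0, 5e-6, x.size)

        def misfit(params):
            depth_km, m0 = params
            return np.sum((forward(depth_km, m0, x) - obs) ** 2)

        result = dual_annealing(misfit, bounds=[(1.0, 50.0), (0.1, 10.0)], seed=1)
        print(result.x)                     # recovered (depth [km], relative moment)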

  13. Exploring Earthquake Databases for the Creation of Magnitude-Homogeneous Catalogues: Tools for Application on a Regional and Global Scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-06-01

    The creation of a magnitude-homogenised catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenising multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins, and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilise this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonise magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonised into moment-magnitude to form a catalogue of more than 562,840 events. This extended catalogue, whilst not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
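
    At the core of such harmonization is an empirical conversion between each catalogue's native scale and moment magnitude. A minimal sketch using orthogonal-distance regression, which honours uncertainty in both magnitude estimates; the synthetic "mb" values and the fitted coefficients are invented and do not reproduce any model from the paper:

        import numpy as np
        from scipy import odr

        rng = np.random.default_rng(2)
        mw_true = rng.uniform(4.5, 7.5, 300)
        mb = 0.85 * mw_true + 0.9 + rng.normal(0.0, 0.2, 300)  # synthetic native scale
        mw = mw_true + rng.normal(0.0, 0.1, 300)               # "target" Mw estimates

        # Fit Mw = a * mb + b with errors on both axes (general orthogonal regression)
        linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
        fit = odr.ODR(odr.RealData(mb, mw, sx=0.2, sy=0.1), linear, beta0=[1.0, 0.0]).run()
        print("Mw = %.2f * mb + %.2f" % tuple(fit.beta))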

  14. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.

  15. Analog earthquakes

    SciTech Connect

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  16. Source parameters of large historical (1918-1962) earthquakes, South Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Doser, Diane I.; Webb, Terry H.; Maunder, Diane E.

    1999-12-01

    We present the results of body waveform modelling studies for 17 earthquakes of Mw ≥ 5.7 occurring in the South Island, New Zealand region between 1918 and 1962, including the 1929 Ms = 7.8 Buller earthquake, the largest earthquake to have occurred in the South Island this century. These studies confirm the concept of slip partitioning in the northern South Island between strike-slip faulting in southwestern Marlborough and reverse and strike-slip faulting in the Buller region, but indicate that the zone of reverse faulting is quite localized. In the central South Island, all historical earthquakes appear to be associated with strike-slip faulting, although recent (post-1991) reverse faulting events suggest that slip partitioning also occurs within this region. The difference between historical and recent seismicity in the central South Island may also reflect stress readjustment occurring in response to the 1717 AD rupture along the Alpine fault. Within the Fiordland region (southwestern South Island) none of the historical earthquakes appears to have occurred along the Australian/Pacific plate interface; rather, they are associated with complex deformation of the subducting plate as well as with deformation of the upper (Pacific) plate. Two earthquakes in the Puysegur Bank region south of the South Island suggest that strike-slip deformation east of the Puysegur Trench plays a major role in the tectonics of the region.

  17. Complex Ruptures of Large Earthquakes Imaged by ALOS L-band SAR with Other Geodetic and Seismic Data

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Sladen, A.; Wei, S.; Simons, M.; Avouac, J.; Burgmann, R.

    2011-12-01

    Large earthquakes can have devastating effects if they are located in areas of dense population or cause tsunamis. Understanding which faults have ruptured and how much the faults slipped in major quakes is important for assessing likely damage and estimating the change in the risk of future events on nearby faults. We have studied the fault ruptures of several recent large continental earthquakes, including the 2008 Mw 7.9 in China (Wenchuan earthquake), 2010 Mw 7.0 in Haiti and 2010 Mw 7.2 in Baja California (El Mayor-Cucapah earthquake), using ALOS (Advanced Land Observation Satellite) PALSAR (phased-array L-band synthetic aperture radar) data from the USGRC Data Pool combined with SAR data from other satellites, GPS data and teleseismic waveforms. Joint inversion of geodetic and seismic data resolves both the spatial and temporal distribution of slip on the faults, providing an estimate of the slip evolution during the earthquake. For each of these major earthquakes, we extracted key information with interferometric (InSAR) and pixel tracking analysis of the ALOS L-band SAR data, and we determined that the fault ruptures were more complex than initially assumed. The 12 May 2008 Wenchuan earthquake ruptured several faults, including two sub-parallel faults, with a total length of about 300 km. Surface ruptures were mapped with pixel tracking from the ALOS fine-beam image pairs, later confirmed by field geologists. Interferograms from ALOS fine-beam and wide-beam images provided both ascending and descending coverage of the full deformation field, with additional coverage by Envisat interferograms that were limited by poor coherence in the vegetated mountains. For the 12 January 2010 Haiti earthquake, the L-band interferograms were the main constraints on the location of the main fault that ruptured at depth, a north-dipping oblique-slip fault called the Leogane Fault and not the expected previously mapped Enriquillo Fault. Interferograms from shorter wavelength

  18. Preparation phase and consequences of a large earthquake: insights from foreshocks and aftershocks of the 2014 Mw 8.1 Iquique earthquake, Chile

    NASA Astrophysics Data System (ADS)

    Cesca, Simone; Grigoli, Francesco; Heimann, Sebastian; Dahm, Torsten

    2015-04-01

    The April 1, 2014, Mw 8.1 Iquique earthquake in northern Chile was preceded by an anomalous, extensive preparation phase. Precursor seismicity at the ruptured slab segment was observed sporadically several months before the main shock, with a significant increase in seismicity rates and observed magnitudes in the last three weeks before the main shock. The large dataset of regional recordings allowed us to investigate the role of this precursor activity, comparing foreshock and aftershock seismicity to test models of rupture preparation and of strain and stress rotation during an earthquake. We used full-waveform techniques to locate events, map the seismicity rate, derive source parameters, and assess spatiotemporal stress changes. Results indicate that the spatial distribution of foreshocks delineated the shallower part of the rupture areas of the main shock and its largest aftershock, and matches well the spatial extent of the aftershocks. During the foreshock sequence, seismicity was mainly localized in two clusters, separated by a region of high locking. The ruptures of the main shock and the largest aftershock nucleated within these clusters and propagated into the locked region; the aftershocks are again localized in correspondence with the original spatial clusters, and the central region is locked again. More than 300 moment tensor inversions were performed, down to Mw 4.0, most of them corresponding to almost pure double-couple thrust mechanisms with a geometry consistent with the slab orientation. No significant differences are observed among thrust mechanisms in different areas, nor between thrust foreshocks and aftershocks. However, a new family of normal-fault mechanisms appears after the main shock, likely affecting the shallow wedge structure as a consequence of the increased extensional stress in this region. We infer a stress rotation after the main shock, as proposed for recent large thrust earthquakes, which suggests that the April

  19. Geomorphological observations of active faults in the epicentral region of the Huaxian large earthquake in 1556 in Shaanxi Province, China

    NASA Astrophysics Data System (ADS)

    Hou, Jian-Jun; Han, Mu-Kang; Chai, Bao-Long; Han, Heng-Yue

    1998-05-01

    The Huaxian magnitude 8 great earthquake of January 23, 1556, is the largest one recorded in the Weihe basin, Shaanxi Province, China, and caused the injury or death of about 830,000 people. The epicenter is located in the southeastern part of the Weihe basin, around Huaxian City. Earthquakes are closely related to active faults, which are well developed in the epicentral area of the Huaxian earthquake; we therefore discuss the activity of the major faults in the epicentral area on the basis of geomorphological observations. There are three major fault sets in the study area, striking approximately east-west, northeast and northwest. These are inhomogeneous in spatial distribution and in the rates and manners of faulting, as shown by geomorphological features such as faulted fluvial terraces and alluvial fans. The ages of the second and first terraces are around 20,000 and 5,000 years B.P., based on thermoluminescence dating, carbon-14 dating and archeology. The terraces were faulted by the North Huashan fault (F1), the main boundary fault of the Weihe basin, and by the Piedmont fault (F2) after the second and first terraces formed. The distribution of the displacement shows that the intersections of the North Huashan fault with the northwest-striking Chishui fault (F4) and with the western margin fault (F5) of the Tongguan loess tableland have the largest offsets in the area. Perhaps the Huaxian great earthquake of 1556 A.D. had a close relation to the North Huashan fault. Analysis of the floodplain structure shows that the east-west-striking Weihe fault (F3) is also active. Attention should therefore be paid to the activity of these faults as a precaution against another possible large earthquake in this region.

  20. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  1. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  2. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among the recurrence intervals of large earthquakes estimated before and after the 2008 event from slip rates and paleoseismologic results. Postseismic trenching showed that the central Longmen Shan fault zone has probably experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and that a recurrence interval of 3900 ± 400 yrs is necessary for the accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate for large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
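
    As a rough plausibility check (not the paper's own computation, which uses a characteristic moment from the geodetic seismogenic model), dividing the Hanks-Kanamori moment of an Mw 7.9 event by the quoted moment rate gives a recurrence time of the same order:

        M_0 = 10^{1.5 M_w + 9.1} \;\Rightarrow\; M_0(7.9) \approx 8.9 \times 10^{20}\ \mathrm{N\,m},
        \qquad
        T \approx \frac{M_0(7.9)}{\dot{M}_0}
          = \frac{8.9 \times 10^{20}\ \mathrm{N\,m}}{2.7 \times 10^{17}\ \mathrm{N\,m/yr}}
          \approx 3.3 \times 10^{3}\ \mathrm{yr},

    consistent in order of magnitude with the 3900 ± 400 yr quoted above.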

  3. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among the recurrence intervals of large earthquakes estimated before and after the 2008 event from slip rates and paleoseismologic results. Postseismic trenching showed that the central Longmen Shan fault zone has probably experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and that a recurrence interval of 3900 ± 400 yrs is necessary for the accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate for large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  4. Reassessing the 2006 Guerrero slow-slip event, Mexico: Implications for large earthquakes in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Bekaert, D. P. S.; Hooper, A.; Wright, T. J.

    2015-02-01

    In Guerrero, Mexico, slow-slip events have been observed in a seismic gap, where no earthquakes have occurred since 1911. A rupture of the entire gap today could result in a Mw 8.2-8.4 earthquake. However, it remains unclear how slow-slip events change the stress field in the Guerrero seismic region and what their implications are for devastating earthquakes. Most earlier studies have relied on a sparse network of Global Navigation Satellite Systems measurements. Here we show that interferometric synthetic aperture radar can be used to improve the spatial resolution. We find that slip due to the 2006 slow-slip event enters the seismogenic zone and the Guerrero Gap, with ~5 cm slip reaching depths as shallow as 12 km. We show that slow slip is correlated with a highly coupled region and estimate that slow-slip events have decreased the total accumulated moment since the end of the 2001/2002 slow-slip event (4.7 years) by ~50%. Nevertheless, even accounting for slow slip, the moment deficit in the Guerrero Gap increases each year by Mw ~6.8. The Guerrero Gap therefore still has the potential for a large earthquake, with a slip deficit equivalent to Mw ~8.15 accumulated over the last century. Correlation between the slow-slip region and nonvolcanic tremor, and between slow slip and an ultraslow velocity layer, supports the hypothesis of a common source potentially related to high pore pressures.
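
    The quoted numbers are mutually consistent, as a quick check with the Hanks-Kanamori relation shows (moments add, magnitudes do not): an annual deficit of Mw ~6.8 accumulated over a century sums to roughly the stated Mw ~8.15:

        M_0(6.8) = 10^{1.5(6.8) + 9.1} \approx 2.0 \times 10^{19}\ \mathrm{N\,m/yr},
        \qquad
        100\ \mathrm{yr} \times 2.0 \times 10^{19}\ \mathrm{N\,m/yr} \approx 2.0 \times 10^{21}\ \mathrm{N\,m}
        \;\Rightarrow\;
        M_w = \tfrac{2}{3}\bigl(\log_{10}\!\left(2.0 \times 10^{21}\right) - 9.1\bigr) \approx 8.1.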

  5. Evolution of moderate seismicity in the San Francisco Bay region, 1850 to 1993: Seismicity changes related to the occurrence of large and great earthquakes

    NASA Astrophysics Data System (ADS)

    Jaumé, Steven C.; Sykes, Lynn R.

    1996-01-01

    The rate of seismic activity of moderate-size (M > 5.5) earthquakes in the San Francisco Bay (SFB) region has varied considerably during the past 150 years. As measured by the rate of seismic moment release, seismic activity in the SFB region is observed to accelerate prior to M > 7.0 earthquakes in 1868, 1906, and 1989, and then to decelerate following them. We examine these seismicity changes in the context of the evolution of the stress field in the SFB region as a result of strain accumulation and release using a model of dislocations in an elastic halfspace. We use a Coulomb failure function (CFF) to take into account changes in both shear and normal stresses on potential failure planes of varying strike and dip in the SFB region. We find that the occurrence of a large or great earthquake creates a "stress shadow": a region where the stress driving earthquake deformation is decreased. Interseismic strain accumulation acts to reverse this process, gradually bringing faults in the SFB region out of the stress shadow of a previous large or great earthquake and back into a state where earthquake failure is possible. As the stress shadow generated by a large or great earthquake disappears, it migrates inward toward the fault associated with that large or great event. The observed changes in the rate of occurrence of moderate earthquakes in the SFB region are broadly consistent with this model. In detail, the decrease in seismicity throughout most of the SFB region and a localized increase in the Monterey Bay region following the great 1906 earthquake is consistent with our predicted stress changes. The timing and location of moderate-size earthquakes when the rate of seismicity increases again in the 1950s is consistent with areas in which the 1906 stress shadow had been eliminated by strain accumulation in the SFB region. Those earthquakes that are most inconsistent with our stress evolution model, including the 1911 earthquake southeast of San Jose, are found to
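
    The Coulomb failure function logic used here reduces to a simple stress metric on each receiver fault. The sketch below uses an assumed effective friction coefficient and invented stress changes to show how a "stress shadow" corresponds to a negative CFF change:

        def coulomb_failure_change(d_tau, d_sigma_n, mu_eff=0.4):
            """Change in the Coulomb failure function [Pa] on a receiver fault:
            d_tau    : shear stress change, positive in the slip direction
            d_sigma_n: normal stress change, positive for unclamping
            mu_eff   : effective friction coefficient (absorbs pore-pressure effects)."""
            return d_tau + mu_eff * d_sigma_n

        # A fault in the shadow of a large event: shear reduced and fault clamped,
        # so dCFF < 0 and failure is delayed until strain accumulation reverses it.
        print(coulomb_failure_change(d_tau=-0.5e6, d_sigma_n=-0.2e6))  # -580000.0 Pa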

  6. Unusual Animal Behavior Preceding the 2011 Earthquake off the Pacific Coast of Tohoku, Japan: A Way to Predict the Approach of Large Earthquakes

    PubMed Central

    Yamauchi, Hiroyuki; Uchiyama, Hidehiko; Ohtani, Nobuyo; Ohta, Mitsuaki

    2014-01-01

    Simple Summary: Large earthquakes (EQs) cause severe damage to property and people. They occur abruptly, and it is difficult to predict their time, location, and magnitude. However, there are reports of abnormal changes occurring in various natural systems prior to EQs, and unusual animal behaviors (UABs) are important among these phenomena. UABs could be useful for predicting EQs, although their reliability remains uncertain. We report on changes in particular animal species preceding a large EQ to improve research on predicting EQs. Abstract: Unusual animal behaviors (UABs) have been observed before large earthquakes (EQs); however, their mechanisms are unclear. While information on UABs has been gathered after many EQs, few studies have focused on the proportion of animals showing UABs or on specific behaviors prior to EQs. On 11 March 2011, an EQ (Mw 9.0) occurred in Japan, which left about twenty thousand people dead or missing. We surveyed UABs of pets preceding this EQ using a questionnaire. Additionally, we explored whether dairy cow milk yields varied before this EQ in particular locations. In the results, 236 of 1,259 dog owners and 115 of 703 cat owners observed UABs in their pets, with restless behavior being the most prominent change in both species. Most UABs occurred within one day of the EQ. The UABs showed a precursory relationship with epicentral distance. Interestingly, cow milk yields in a milking facility within 340 km of the epicenter decreased significantly about one week before the EQ, whereas cows in facilities farther away showed no significant decreases. Since both the pets’ behavior and the dairy cows’ milk yields were affected prior to the EQ, with careful observation they could contribute to EQ predictions. PMID:26480033

  7. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233: Nonlinear generation of shear flows and large-scale magnetic fields by small-scale turbulence in the ionosphere, by G. Aburjania.

  8. When and where the aftershock activity was depressed: Contrasting decay patterns of the proximate large earthquakes in southern California

    USGS Publications Warehouse

    Ogata, Y.; Jones, L.M.; Toda, S.

    2003-01-01

    Seismic quiescence has attracted attention as a possible precursor to a large earthquake. However, sensitive detection of quiescence requires accurate modeling of normal aftershock activity. We apply the epidemic-type aftershock sequence (ETAS) model, a natural extension of the modified Omori formula for aftershock decay that allows further clusters (secondary aftershocks) within an aftershock sequence. The Hector Mine aftershock activity has been normal relative to the decay predicted by the ETAS model during the 14 months of available data. In contrast, although the aftershock sequence of the 1992 Landers earthquake (M = 7.3), including the 1992 Big Bear earthquake (M = 6.4) and its aftershocks, fits the ETAS very well up until about 6 months after the main shock, the activity then showed a clear lowering relative to the modeled rate (relative quiescence) that lasted nearly 7 years, leading up to the Hector Mine earthquake (M = 7.1) in 1999. Specifically, the relative quiescence occurred only in the shallow aftershock activity, down to depths of 5-6 km. The sequence of deeper events showed clear, normal aftershock activity well fitted by the ETAS throughout the whole period. We consider several physical explanations for these results. Among them, we strongly suspect aseismic slip within the Hector Mine rupture source, which could inhibit the crustal relaxation process within "shadow zones" of the Coulomb failure stress change. Furthermore, the aftershock activity of the 1992 Joshua Tree earthquake (M = 6.1) lowered sharply on the same day as the main shock, which can be explained by a similar scenario.
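
    A minimal sketch of the modified Omori law and the ETAS conditional intensity that generalizes it; every parameter value below is invented for illustration and none is a fitted value from the study:

        import numpy as np

        def omori_rate(t, k=100.0, c=0.05, p=1.1):
            """Modified Omori law: aftershock rate n(t) = K / (t + c)**p."""
            return k / (t + c) ** p

        def etas_rate(t, history, mu=0.02, k=0.01, c=0.05, p=1.1, alpha=1.5, m_ref=3.0):
            """ETAS intensity at time t: a background rate mu plus an Omori-type
            contribution from every earlier event (t_i, m_i), scaled by magnitude
            so each aftershock can trigger secondary aftershocks of its own."""
            lam = mu
            for t_i, m_i in history:
                if t_i < t:
                    lam += k * np.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
            return lam

        history = [(0.0, 7.3), (0.1, 6.4), (2.0, 4.5)]  # illustrative (day, magnitude)
        print(omori_rate(10.0), etas_rate(10.0, history))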

  9. Large-scale convective instability in an electroconducting medium with small-scale helicity

    SciTech Connect

    Kopp, M. I.; Tur, A. V.; Yanovsky, V. V.

    2015-04-15

    A large-scale instability occurring in a stratified conducting medium with small-scale helicity of the velocity field and magnetic fields is detected using an asymptotic many-scale method. Such a helicity is sustained by small external sources for small Reynolds numbers. Two regimes of instability with zero and nonzero frequencies are detected. The criteria for the occurrence of large-scale instability in such a medium are formulated.

  10. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field Brms, decreases with increasing magnetic Reynolds number, ReM. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the ReM-dependent quenching of B̄/Brms. These results are obtained for 1024³ DNS with magnetic Prandtl numbers of PrM = 0.1 and 10. For PrM = 0.1, B̄/Brms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For PrM = 10, B̄/Brms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  11. A COMPARATIVE STUDY ON CHARACTERISTICS OF EARTHQUAKES IN 2011 TOHOKU-PACIFIC OCEAN EARTHQUAKE AND 2003 SANRIKU MINAMI EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Nakaaki, Shusuke; Sakai, Kimitoshi; Murono, Yoshitaka

    In the 2011 Tohoku-Pacific Ocean Earthquake (TPO-EQ), damage to railway structures was limited even though the earthquake was far larger than past large earthquakes. In this paper, the relationships between the properties of the seismic waves and the damage to viaducts are investigated. As a comparative example, the 2003 Sanriku-minami Earthquake (SM-EQ) was chosen, in which the damage to viaducts was almost the same as in the TPO-EQ. Analytical results showed that the differences in structural response were not significant even though the scales of the two earthquakes were very different. In addition, it was found that, in the areas where severe damage was observed in the TPO-EQ, structures showed 1.1 to 1.5 times larger responses in the TPO-EQ than in the SM-EQ.

  12. Interpretation of large-scale deviations from the Hubble flow

    <