Science.gov

Sample records for large scale earthquakes

  1. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well-determined source parameters shows that these earthquakes obey a scaling law similar to that of large interplate earthquakes, in which M₀ ∝ L², or ū = αL, where L is rupture length and ū is slip. In contrast to interplate earthquakes, for which α ≈ 1 × 10⁻⁵, for the intraplate events α ≈ 6 × 10⁻⁵, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. It implies that intraplate faults have a higher frictional strength than plate boundaries and, hence, that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.
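
    The quoted coefficients make the stress-drop contrast easy to verify, since for this scaling the stress drop goes as the rigidity times the slip-to-length ratio α. A minimal sketch, assuming an illustrative crustal shear modulus (shape factors of order one omitted):

    ```python
    # Stress-drop contrast implied by the slip scaling u = alpha * L.
    # mu is an illustrative crustal shear modulus, not from the abstract.
    mu = 3.0e10           # shear modulus, Pa (~30 GPa)
    alpha_inter = 1e-5    # interplate slip/length ratio (abstract)
    alpha_intra = 6e-5    # intraplate slip/length ratio (abstract)

    # For a crack-like rupture, stress drop ~ mu * (u / L) = mu * alpha.
    drop_inter = mu * alpha_inter
    drop_intra = mu * alpha_intra
    print(f"interplate: ~{drop_inter / 1e6:.1f} MPa")
    print(f"intraplate: ~{drop_intra / 1e6:.1f} MPa")
    print(f"ratio: {drop_intra / drop_inter:.0f}x")   # 6x, as stated above
    ```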

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large-scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by the USGS, together with the rupture model by Song et al., is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space, and the 3-D post-processing was done in parallel.
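
    A quick back-of-the-envelope check of the grid figures quoted above; the memory line assumes four single-precision fields per point, which is an illustrative assumption rather than a WPP implementation detail:

    ```python
    # Grid-point count for a 550 km x 250 km x 40 km domain at 125 m spacing.
    h = 125.0                      # grid spacing, m
    nx = int(550e3 / h)            # 4400 points along strike
    ny = int(250e3 / h)            # 2000 points across strike
    nz = int(40e3 / h)             # 320 points in depth
    n = nx * ny * nz
    print(f"{n / 1e9:.1f} billion grid points")   # ~2.8e9; the quoted ~2.2e9
                                                  # suggests a slightly smaller
                                                  # effective domain

    # Illustrative memory estimate: four float32 fields per grid point.
    print(f"~{n * 4 * 4 / 2**30:.0f} GiB for four float32 fields")
    ```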

  3. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    nucleation in these experiments is consistent with the observations and theory of Dieterich and Kilgore (1996). Precursory strains can typically be detected after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least-squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
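
    The inverse relation between time to failure and slip rate admits a very compact prediction scheme; a minimal sketch, assuming t = A/V and synthetic event data in place of the experimental records:

    ```python
    import numpy as np

    # Near failure, time-to-failure t ~ A / V, so log t = log A - log V.
    # Calibrate log A on past events, then predict t from the current rate.
    rng = np.random.default_rng(0)

    t_true = np.logspace(-2, 1, 12)      # times to failure, arbitrary units
    A = 0.05                             # event-specific constant (synthetic)
    # Observed slip rates, with scatter matching the quoted 0.14 in log t.
    V_obs = A / t_true * 10 ** rng.normal(0.0, 0.14, t_true.size)

    # Least-squares estimate of log A with the slope fixed at -1.
    logA = np.mean(np.log10(t_true) + np.log10(V_obs))

    V_now = 0.5                          # currently measured slip rate
    t_pred = 10 ** (logA - np.log10(V_now))
    print(f"predicted time to failure: {t_pred:.3f} (arbitrary units)")
    ```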

  4. Earthquake triggering and large-scale geologic storage of carbon dioxide.

    PubMed

    Zoback, Mark D; Gorelick, Steven M

    2012-06-26

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185-5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions.

  5. Earthquake triggering and large-scale geologic storage of carbon dioxide

    PubMed Central

    Zoback, Mark D.; Gorelick, Steven M.

    2012-01-01

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185–5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions. PMID:22711814

  6. Large scale dynamic rupture scenario of the 2004 Sumatra-Andaman megathrust earthquake

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Madden, Elizabeth H.; Wollherr, Stephanie; Gabriel, Alice A.

    2016-04-01

    The Great Sumatra-Andaman earthquake of 26 December 2004 is one of the strongest and most devastating earthquakes in recent history. Most of the damage and the ~230,000 fatalities were caused by the tsunami generated by the Mw 9.1-9.3 event. Various finite-source models of the earthquake have been proposed, but poor near-field observational coverage has led to distinct differences in source characterization. Even the fault dip angle and depth extent are subject to debate. We present a physically realistic dynamic rupture scenario of the earthquake using state-of-the-art numerical methods and seismotectonic data. Due to the lack of near-field observations, our setup is constrained by the overall characteristics of the rupture, including the magnitude, propagation speed, and extent along strike. In addition, we incorporate the detailed geometry of the subducting fault using Slab1.0 to the south and aftershock locations to the north, combined with high-resolution topography and bathymetry data. The possibility of inhomogeneous background stress, resulting from the curved shape of the slab along strike and the large fault dimensions, is discussed. The possible activation of thrust faults splaying off the megathrust in the vicinity of the hypocenter is also investigated. Dynamic simulation of this 1300 to 1500 km rupture is a computational and geophysical challenge. In addition to capturing the large-scale rupture, the simulation must resolve the process zone at the rupture tip, whose characteristic length is comparable to that of smaller earthquakes and which shrinks with propagation distance. Thus, the fault must be finely discretised. Moreover, previously published inversions agree on a rupture duration of ~8 to 10 minutes, suggesting an overall slow rupture speed. Hence, both long temporal scales and large spatial dimensions must be captured. We use SeisSol, a software package based on an ADER-DG scheme solving the spontaneous dynamic earthquake rupture problem with high

  7. DYNAMIC BEHAVIOR OF CONCRETE GRAVITY DAM ON JOINTED ROCK FOUNDATION DURING LARGE-SCALE EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Kimata, Hiroyuki; Fujita, Yutaka; Horii, Hideyuki; Yazdani, Mahmoud

    A dynamic cracking analysis of a concrete gravity dam during a large-scale earthquake has been carried out, considering the progressive failure of the jointed rock foundation. First, to take the progressive failure of the rock foundation into account, a constitutive law for jointed rock is assumed and its validity is evaluated by simulation analysis based on a past experimental model. Then, dynamic cracking analysis of a 100-m-high dam model is performed, using the previously proposed approach with tangent stiffness-proportional damping to express the crack propagation behavior, together with the constitutive law of jointed rock. The crack propagation behavior of the dam body and the progressive failure of the jointed rock foundation are investigated.

  8. Large-scale Slow Slip Event Preceding the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Koketsu, K.; Yokota, Y.

    2013-12-01

    We carried out inversions of annual GEONET data (F3 displacements) observed by the Geospatial Information Authority of Japan from the opening of GEONET in 1996 to the 2011 Tohoku earthquake. We then obtained annual backslip (slip deficit) rate distributions, finding that the backslip had weakened and migrated at some time in 2002 or 2003 (Koketsu, Yokota, N. Kato, and T. Kato, 2012 AGU fall meeting). In this study, we go back to the original GEONET data and examine whether the weakening and migration of backslip included in them were stationary or not. If they are confirmed to be stationary, we can relate them to a large-scale slow slip event. Since similar phenomena occurred from 2001 to 2004 in the Tokai district of Japan, we analyze the original GEONET data using a method similar to the one Ozawa et al. (2002) applied to the Tokai phenomena. However, as the seismicity in the Tohoku and Kanto districts is higher than in the Tokai district, we have corrected the original data in advance by removing not only annual sinusoidal variations but also the effects of M 6 to 8 earthquakes. We first choose four GEONET stations where large weakening is observed, and derive average trends from the corrected data from 1996 to 2001 after confirming stationary displacement rates during that period. When we subtract the average trends from the corrected data as shown in Fig. 1, we find flat lines up to some time in 2002 or 2003 and then eastward displacements in the direction opposite to the backslip. Therefore, we perform regression analyses to obtain an inflection point between the flat lines and the eastward displacements. The result indicates the inflection point to be located in May 2002. If the rates of the eastward displacements after May 2002 look stationary, a slow slip event can be considered to occur as in the Tokai district. Their plots actually show stationary rates except for an increase at the time of the 2005 Miyagi-oki earthquake. We next perform the same analyses of the

  9. Parallel octree-based multiresolution mesh method for large-scale earthquake ground motion simulation

    NASA Astrophysics Data System (ADS)

    Kim, Eui Joong

    Large-scale ground motion simulation requires supercomputing systems in order to obtain reliable and useful results within reasonable elapsed time. In this study, we develop a framework for terascale ground motion simulations in highly heterogeneous basins. As part of the development, we present a parallel octree-based multiresolution finite element methodology for the elastodynamic wave propagation problem. The octree-based multiresolution finite element method reduces memory use significantly and improves overall computational performance. The framework comprises three parts: (1) an octree-based mesh generator, Euclid, developed by Tu and O'Hallaron; (2) a parallel mesh partitioner, ParMETIS, developed by Karypis et al.; and (3) a parallel octree-based multiresolution finite element solver, QUAKE, developed in this study. Realistic earthquake parameters, soil material properties, and sedimentary basin dimensions produce extremely large meshes. The out-of-core version of the octree-based mesh generator, Euclid, overcomes the resulting severe memory limitations. By using a parallel, distributed-memory graph partitioning algorithm, ParMETIS partitions large meshes, overcoming the memory and cost problem. Despite the capability of the octree-based multiresolution mesh method (OBM3), large problem sizes necessitate parallelism to handle the large memory and work requirements. The parallel OBM3 elastic wave propagation code, QUAKE, has been developed to address these issues. The numerical methodology and the framework have been used to simulate the seismic response of both idealized systems and of the Greater Los Angeles basin to simple pulses and to a mainshock of the 1994 Northridge Earthquake, for frequencies of up to 1 Hz and a domain size of 80 km x 80 km x 30 km. In the idealized models, QUAKE shows good agreement with the analytical Green's function solutions. In the realistic models for the Northridge earthquake mainshock, QUAKE qualitatively agrees, with at most
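
    The adaptivity that makes octree meshes economical is a points-per-wavelength refinement rule; a toy sketch of the idea (the class and criterion are illustrative, not Euclid's implementation):

    ```python
    # Toy octree: subdivide a cell until its edge length resolves the local
    # minimum wavelength with a fixed number of points per wavelength.
    class OctreeCell:
        def __init__(self, center, size):
            self.center, self.size, self.children = center, size, []

        def refine(self, vs_at, f_max, ppw=10, min_size=1000.0):
            wavelength = vs_at(self.center) / f_max
            if self.size > wavelength / ppw and self.size > min_size:
                for dx in (-0.25, 0.25):
                    for dy in (-0.25, 0.25):
                        for dz in (-0.25, 0.25):
                            c = tuple(p + d * self.size for p, d in
                                      zip(self.center, (dx, dy, dz)))
                            child = OctreeCell(c, self.size / 2)
                            child.refine(vs_at, f_max, ppw, min_size)
                            self.children.append(child)

    def count_leaves(cell):
        return 1 if not cell.children else sum(count_leaves(c) for c in cell.children)

    def vs_model(p):
        # Toy basin: shear velocity (m/s) increasing with depth z (m).
        return 500.0 + 2.0 * p[2]

    root = OctreeCell((5e3, 5e3, 5e3), 10e3)   # 10 km cube
    root.refine(vs_model, f_max=1.0)           # resolve up to 1 Hz
    print(count_leaves(root), "leaf cells")    # finer near the slow shallow part
    ```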

  10. Earthquake Scaling, Simulation and Forecasting

    NASA Astrophysics Data System (ADS)

    Sachs, Michael Karl

    Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand on February 22, and the magnitude 9.0 Tohoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear: very small differences in the systems now produce very large differences in the future, making forecasting difficult. In spite of this, there are patterns in earthquake data. These patterns are often in the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed. In many cases these scaling relations show consistent behavior over a wide range of scales. This consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of the earthquake catalogs, which, especially in the case of large events, are fairly small and limited to a few hundred years of events. In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems. The focus of this area is to understand how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California, designed to extend the observed catalog of earthquakes in California. This simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional

  11. Large-scale mapping of landslides in the epicentral area of the Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; and (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and future development.

  12. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from systematic analysis of past earthquake impacts and associated response levels, are quite effective in communicating predicted impact and the response needed after an event, through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries where local building practices typically lend themselves to high collapse and casualty rates, and these impacts lead to prioritization of international response. In contrast, financial and overall societal impacts often set the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and the resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
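
    The alert thresholds quoted above amount to a small lookup table; a minimal sketch (the function is illustrative and not part of the PAGER system):

    ```python
    # Map estimated fatalities or dollar losses to EIS alert colors, using
    # the thresholds from the abstract (illustrative code, not PAGER's).
    def eis_alert(fatalities: float = 0, losses_usd: float = 0) -> str:
        levels = [("red", 1_000, 1e9), ("orange", 100, 1e8), ("yellow", 1, 1e6)]
        for color, min_fatalities, min_losses in levels:
            if fatalities >= min_fatalities or losses_usd >= min_losses:
                return color
        return "green"

    print(eis_alert(fatalities=250))    # orange
    print(eis_alert(losses_usd=5e6))    # yellow
    print(eis_alert())                  # green
    ```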

  13. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  14. Simulating Large-Scale Earthquake Dynamic Rupture Scenarios On Natural Fault Zones Using the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2014-05-01

    In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large-scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well-documented and extensively studied event, we intend to understand the ground motion, including the relevant high-frequency content, generated by complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.

  15. Scaling behavior of the earthquake intertime distribution: influence of large shocks and time scales in the Omori law.

    PubMed

    Lippiello, Eugenio; Corral, Alvaro; Bottiglieri, Milena; Godano, Cataldo; de Arcangelis, Lucilla

    2012-12-01

    We present a study of the earthquake intertime distribution D(Δt) for a California catalog in temporal periods of short duration T. We compare experimental results with theoretical predictions and analytical approximate solutions. For the majority of intervals, rescaling intertimes by the average rate leads to collapse of the distributions D(Δt) on a universal curve, whose functional form is well fitted by a Gamma distribution. The remaining intervals, exhibiting a more complex D(Δt), are all characterized by the presence of large shocks. These results can be understood in terms of the relevance of the ratio between the characteristic time c in the Omori law and T: Intervals with Gamma-like behavior are indeed characterized by a vanishing c/T. The above features are also investigated by means of numerical simulations of the Epidemic Type Aftershock Sequence (ETAS) model. This study shows that collapse of D(Δt) is also observed in numerical catalogs; however, the fit with a Gamma distribution is possible only assuming that c depends on the main-shock magnitude m. This result confirms that the dependence of c on m, previously observed for m>6 main shocks, extends also to small m>2.
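
    The rescaling-and-collapse procedure described above is short to reproduce; a minimal sketch on synthetic intertimes (the California catalog itself is not reproduced here):

    ```python
    import numpy as np
    from scipy import stats

    # Rescale intertimes by the average rate, then fit a Gamma distribution.
    rng = np.random.default_rng(1)
    dt = rng.gamma(shape=0.7, scale=3.0, size=5000)   # synthetic intertimes

    theta = dt / dt.mean()                 # rescaled intertimes, unit mean
    shape, loc, scale = stats.gamma.fit(theta, floc=0)
    print(f"Gamma fit: shape ~ {shape:.2f}, scale ~ {scale:.2f}")
    # A good collapse gives shape * scale ~ 1 (unit mean after rescaling).
    ```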

  16. Identification of elastic basin properties by large-scale inverse earthquake wave propagation

    NASA Astrophysics Data System (ADS)

    Epanomeritakis, Ioannis K.

    The importance of the study of earthquake response, from a social and economic standpoint, is a major motivation for the current study. The severe uncertainties involved in the analysis of elastic wave propagation in the interior of the earth increase the difficulty of estimating earthquake impact in seismically active areas. The need to recover information about the geological and mechanical properties of underlying soils motivates the attempt to apply inverse analysis to earthquake wave propagation problems. Inversion for elastic properties of soils is formulated as a constrained optimization problem. A series of trial mechanical soil models is tested against a limited-size set of dynamic response measurements, given partial knowledge of the target model and complete information on source characteristics, both temporal and geometric. This inverse analysis gives rise to a powerful method for recovery of a material model that produces the given response. The goal of the current study is the development of a robust and efficient computational inversion methodology for material model identification. Solution methods for gradient-based local optimization combine with robustification and globalization techniques to build an effective inversion framework. A Newton-based approach deals with the complications of the highly nonlinear systems generated in the inversion solution process. Moreover, a key addition to the inversion methodology is the application of regularization techniques for obtaining admissible soil models. Most importantly, the development and use of a multiscale strategy offers globalizing and robustifying advantages to the inversion process. In this study, a collection of results of inversion for different three-dimensional Lamé moduli models is presented. The results demonstrate the effectiveness of the proposed inversion methodology and provide evidence for its capabilities. They also show the path for further study of elastic property

  17. From M8 to CyberShake: Using Large-Scale Numerical Simulations to Forecast Earthquake Ground Motions (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Cui, Y.; Olsen, K. B.; Graves, R. W.; Maechling, P. J.; Day, S. M.; Callaghan, S.; Milner, K.; SCEC/CME Collaboration

    2010-12-01

    Large earthquakes cannot be reliably and skillfully predicted in terms of their location, time, and magnitude. However, numerical simulations of seismic radiation from complex fault ruptures and wave propagation through 3D crustal structures have now advanced to the point where they can usefully predict the strong ground motions from anticipated earthquake sources. We describe a set of four computational pathways employed by the Southern California Earthquake Center (SCEC) to execute and validate these simulations. The methods are illustrated using the largest earthquakes anticipated on the southern San Andreas fault system. A dramatic example is the recent M8 dynamic-rupture simulation by Y. Cui, K. Olsen et al. (2010) of a magnitude-8 “wall-to-wall” earthquake on the southern San Andreas fault, calculated to seismic frequencies of 2 Hz on a computational grid of 436 billion elements. M8 is the most ambitious earthquake simulation completed to date; the run took 24 hours on 223K cores of the NCCS Jaguar supercomputer, sustaining 220 teraflops. High-performance simulation capabilities have been implemented by SCEC in the CyberShake hazard model for the Los Angeles region. CyberShake computes over 400,000 earthquake simulations, managed through a scientific workflow system, to represent the probabilistic seismic hazard at a particular site up to seismic frequencies of 0.3 Hz. CyberShake shows substantial differences from conventional probabilistic seismic hazard analysis based on empirical ground-motion prediction. At the probability levels appropriate for long-term forecasting, these differences are most significant (and worrisome) in sedimentary basins, where the population is densest and the regional seismic risk is concentrated. The higher basin amplification obtained by CyberShake is due to the strong coupling between rupture directivity and basin-mode excitation. The simulations show that this coupling is enhanced by the tectonic branching structure of the San

  18. Anthropogenic Triggering of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults a few tens of kilometers away. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years.
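
    The claim that overpressures below 0.1 MPa can trigger slip follows from a Coulomb criterion with pore pressure; a numeric sketch under assumed stress values (all numbers illustrative, within the ranges quoted above):

    ```python
    # Coulomb failure stress with pore pressure p: CFF = tau - mu_f * (sigma_n - p).
    # A fault already close to failure needs only a small overpressure to slip.
    tau = 4.0e6        # shear stress on the fault, Pa (within the 1-10 MPa range)
    sigma_n = 6.75e6   # effective normal stress before injection, Pa (assumed)
    mu_f = 0.6         # static friction coefficient typical of faults

    cff_before = tau - mu_f * sigma_n
    cff_after = tau - mu_f * (sigma_n - 0.1e6)   # 0.1 MPa fluid overpressure
    print(f"before injection: CFF = {cff_before / 1e6:+.3f} MPa (stable)")
    print(f"after injection:  CFF = {cff_after / 1e6:+.3f} MPa (slip possible)")
    ```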

  19. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-26

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults a few tens of kilometers away. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years.

  20. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults a few tens of kilometers away. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay of up to several years. PMID:25156190

  1. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
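
    The first test above (largest observed magnitude growing with the log of the event count) is a direct property of Gutenberg-Richter sampling; a minimal simulation sketch, assuming b = 1 and no upper magnitude bound:

    ```python
    import numpy as np

    # Under an unbounded Gutenberg-Richter law, the expected maximum of N
    # sampled magnitudes grows as m_min + log10(N) / b.
    rng = np.random.default_rng(2)
    b, m_min = 1.0, 0.0

    for n in (100, 1_000, 10_000):
        m = m_min - np.log10(rng.random(n)) / b   # inverse-transform sampling
        print(f"N={n:>6}: observed max = {m.max():.2f}, "
              f"expected ~ {m_min + np.log10(n) / b:.2f}")
    ```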

  2. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical of continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  3. Large Rock Slope Failures Induced by Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Aydan, Ö.

    2016-06-01

    Recent earthquakes have caused many large-scale rock slope failures. The scale and impact of rock slope failures are very large, and the form of failure differs depending upon the geological structure of the slope. First, the author briefly describes some model experiments to investigate the effects of shaking or faulting due to earthquakes on rock slopes. Then, fundamental characteristics of earthquake-induced rock slope failures are described and evaluated according to some empirical and theoretical models. Furthermore, observations of slope failures in relation to earthquake magnitude and epicenter or hypocenter distance were compared with several empirical relations available in the literature. Some major rock slope failures induced by earthquakes are selected, and their post-failure motions are simulated and compared with observations. In addition, the effects of tsunamis on rock slopes, in view of observations from reconnaissance of the recent mega-earthquakes, are explained and discussed.

  4. Patterns of seismic activity preceding large earthquakes

    NASA Technical Reports Server (NTRS)

    Shaw, Bruce E.; Carlson, J. M.; Langer, J. S.

    1992-01-01

    A mechanical model of seismic faults is employed to investigate the seismic activity that occurs prior to major events. The block-and-spring model dynamically generates a statistical distribution of smaller slipping events that precede large events, and the results satisfy the Gutenberg-Richter law. The scaling behavior during a loading cycle suggests small but systematic variations in space and time, with maximum activity acceleration near the future epicenter. Activity patterns inferred from data on seismicity in California demonstrate a regional aspect: increased activity in certain areas is found to precede major earthquakes. One example is the Loma Prieta earthquake of 1989, which occurred near a fault section associated with increased activity levels.

  5. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  6. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
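
    The deterministic cap of McGarr cited above is straightforward to evaluate; a sketch converting an injected volume into a maximum moment magnitude (the shear modulus and volume are illustrative):

    ```python
    import math

    # McGarr (JGR 2014) bound for reservoir-confined, pressure-driven events:
    # maximum seismic moment M0 <= G * dV.
    G = 3.0e10    # shear modulus, Pa (~30 GPa, illustrative)
    dV = 1.0e6    # total injected volume, m^3 (illustrative)

    M0_max = G * dV                              # N*m
    Mw_max = (math.log10(M0_max) - 9.05) / 1.5   # Hanks-Kanamori conversion
    print(f"M0_max = {M0_max:.1e} N*m -> Mw_max ~ {Mw_max:.1f}")   # ~5.0
    ```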

  7. Unsupervised polarimetric synthetic aperture radar classification of large-scale landslides caused by Wenchuan earthquake in hue-saturation-intensity color space

    NASA Astrophysics Data System (ADS)

    Li, Ning; Wang, Robert; Deng, Yunkai; Liu, Yabo; Li, Bochen; Wang, Chunle; Balz, Timo

    2014-01-01

    A simple and effective approach for unsupervised classification of large-scale landslides caused by the Wenchuan earthquake is developed. The data sets used were obtained by a high-resolution fully polarimetric airborne synthetic aperture radar system working at X-band. In the proposed approach, Pauli decomposition false-color RGB imagery is first transformed to the hue-saturation-intensity (HSI) color space. Then, a good combination of k-means clustering and HSI imagery in different channels is used stage-by-stage for automatic landslides extraction. Two typical case studies are presented to evaluate the feasibility of the proposed scheme. Our approach is an important contribution to the rapid assessment of landslide hazards.
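
    The RGB-to-HSI transform at the heart of the scheme is compact; a minimal sketch pairing it with k-means (the paper's staged channel selection is reduced to a single clustering pass here, and the input is random stand-in data):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def rgb_to_hsi(rgb):
        # Standard RGB -> hue/saturation/intensity conversion.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
        h = np.arccos(np.clip(num / den, -1.0, 1.0))
        h = np.where(b > g, 2.0 * np.pi - h, h)
        return np.stack([h, s, i], axis=-1)

    rgb = np.random.rand(64, 64, 3)          # stand-in for Pauli RGB imagery
    hsi = rgb_to_hsi(rgb).reshape(-1, 3)
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(hsi)
    print(np.bincount(labels))               # pixels per unsupervised class
    ```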

  8. Model for repetitive cycles of large earthquakes

    SciTech Connect

    Newman, W.I.; Knopoff, L.

    1983-04-01

    The theory of the fusion of small cracks into large ones reproduces certain features also observed in the clustering of earthquake sequences. By modifying our earlier model to take into account the stress release associated with the occurrence of large earthquakes, we obtain repetitive periodic cycles of large earthquakes. A preliminary conclusion is that a combination of the stress release or elastic rebound mechanism plus time delays in the fusion process are sufficient to destabilize the crack populations and, ultimately, give rise to repetitive episodes of seismicity.

  9. Scaling in geology: landforms and earthquakes.

    PubMed

    Turcotte, D L

    1995-07-18

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior.
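
    The slider-block behavior described above is commonly illustrated with a threshold-failure cellular automaton; a minimal Olami-Feder-Christensen-style sketch (parameters chosen for speed, not fidelity):

    ```python
    import numpy as np

    # Threshold-failure automaton: uniform loading, nearest-neighbor stress
    # transfer. Event sizes come out roughly power-law (Gutenberg-Richter-like).
    rng = np.random.default_rng(3)
    N, alpha, thresh = 32, 0.2, 1.0
    stress = rng.random((N, N))
    sizes = []

    for _ in range(5000):
        stress += thresh - stress.max()          # load until one site fails
        failing = stress >= thresh
        size = 0
        while failing.any():
            size += int(failing.sum())
            for i, j in np.argwhere(failing):
                s = stress[i, j]
                stress[i, j] = 0.0
                for ni, nj in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                    if 0 <= ni < N and 0 <= nj < N:
                        stress[ni, nj] += alpha * s
            failing = stress >= thresh
        sizes.append(size)

    print("event-size percentiles (50/90/99/max):",
          np.percentile(sizes, [50, 90, 99, 100]))
    ```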

  10. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface, and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench, and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as a function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10¹⁸ Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10¹⁸ Pa s.
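
    A rough way to see why such viscosities produce multi-year transients is the Maxwell relaxation time τ = η/G; a sketch using the viscosities quoted above and an assumed mantle shear modulus:

    ```python
    # Maxwell relaxation time tau = eta / G for the viscosities quoted above.
    G = 7.0e10            # mantle shear modulus, Pa (~70 GPa, assumed)
    year = 365.25 * 24 * 3600.0

    for name, eta in (("asthenosphere", 4.75e18), ("channel", 1.0e18)):
        print(f"{name}: tau ~ {eta / G / year:.1f} yr")
    # ~2 yr and ~0.5 yr: consistent with deformation still evolving over the
    # 5 yr postseismic window discussed above.
    ```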

  11. Source characteristics of large strike-slip earthquakes

    NASA Astrophysics Data System (ADS)

    Song, Seok-Goo

    We investigate complex earthquake source processes using both spontaneous dynamic rupture modeling and kinematic finite-source inversion. Dynamic rupture modeling is an efficient tool with which we can examine how stress conditions and frictional behavior on a fault plane play a role in determining kinematic motions on the fault and the resulting ground motions at the Earth's surface. It enables us to develop a physical understanding of the earthquake rupture process in terms of Newtonian mechanics. We construct a set of spontaneous dynamic rupture models for several recent earthquakes in Japan and California in order to have a physical understanding of the earthquake source processes for several specific events. Our dynamic models are used to investigate the scaling properties of dynamic source parameters, i.e., fracture energy and stress drop. Many interesting features of the earthquake source process can also be inferred from the kinematic source inversion of observed seismic or geodetic data. We carry out a comprehensive source study of the 1906 San Francisco earthquake by re-analyzing both geodetic and seismic data in order to reconcile two existing, and mutually inconsistent, source models and obtain a unified one. Our study has important implications for seismic hazard in California, and perhaps more generally for large strike-slip earthquakes. Lastly it is important to utilize our knowledge of the earthquake source to improve our understanding of near-field ground motion characteristics because source complexities are quite uncertain and can be the dominant factor in determining the characteristics of near-field ground motion. We develop a pseudo-dynamic source modeling method with which we can generate physically self-consistent finite source models of large strike-slip earthquakes without high-cost, fully dynamic rupture simulation. The new pseudo-dynamic modeling method enables us to effectively characterize the earthquake source complexities for

  12. Small and large earthquakes: evidence for a different rupture beginning

    NASA Astrophysics Data System (ADS)

    Colombelli, Simona; Zollo, Aldo; Festa, Gaetano; Picozzi, Matteo

    2014-05-01

    For real-time magnitude estimation, two early warning (EW) parameters are usually measured within 3 seconds of the P-wave signal. These are the initial peak displacement (Pd) and the average period (τc). The scaling laws between EW parameters and magnitude are robust and effective up to magnitude 6.5-7, but a well-known saturation problem for both parameters is evident for larger earthquakes. The saturation is likely due to source finiteness, so that only a few seconds of the P-wave cannot capture the entire rupture process of a large event. Here we propose an evolutionary approach for the magnitude estimate, based on the progressive expansion of the P-wave time window, until the expected arrival of the S-waves. The methodology has already been applied to records of the 2011, Mw 9.0 Tohoku-Oki earthquake and showed that a minimum time window of 25-30 seconds is indeed needed to get a stable magnitude estimate for a magnitude M ≥ 8.5 earthquake. Here we extend the analysis to a larger data set of Japanese earthquakes with magnitudes between 4 and 9, using a high number of records per earthquake and spanning wide distance and azimuth ranges. We analyze the relationship between the time evolution of the EW parameters and the earthquake magnitude itself, with the purpose of understanding the evolution of these parameters during the rupture process and investigating a possible different scaling for small and large events. We show that the initial increase of P-wave motion is more rapid for small earthquakes than for larger ones, thus implying a longer and wider nucleation phase for large events. Our results indicate that earthquakes breaking in a region with a large critical slip displacement value have a larger probability of growing into a large-size rupture than those originating in a region with a smaller critical displacement value.
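
    For reference, both EW parameters come directly from the early P-wave displacement record; a minimal sketch on a synthetic waveform (the standard definition of τc from integrals of displacement and velocity is assumed):

    ```python
    import numpy as np

    # Early-warning parameters from a P-wave displacement window u(t):
    #   Pd    = peak |u| in the window
    #   tau_c = 2*pi*sqrt( integral(u^2 dt) / integral(u_dot^2 dt) )
    fs = 100.0                                    # sampling rate, Hz
    t = np.arange(0.0, 3.0, 1.0 / fs)             # 3 s P-wave window
    u = 1e-4 * t * np.sin(2 * np.pi * 1.5 * t)    # synthetic displacement, m

    u_dot = np.gradient(u, 1.0 / fs)              # velocity by differentiation
    Pd = np.max(np.abs(u))
    # The dt factors cancel in the ratio of the two integrals.
    tau_c = 2 * np.pi * np.sqrt(np.sum(u**2) / np.sum(u_dot**2))
    print(f"Pd = {Pd:.2e} m, tau_c = {tau_c:.2f} s")
    ```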

  13. Sea-level changes before large earthquakes

    USGS Publications Warehouse

    Wyss, M.

    1978-01-01

    Changes in sea level have long been used as a measure of local uplift and subsidence associated with large earthquakes. For instance, in 1835, the British naturalist Charles Darwin observed that sea level dropped by 2.7 meters during the large earthquake in Concepción, Chile. From this piece of evidence and the terraces along the beach that he saw, Darwin concluded that the Andes had grown to their present height through earthquakes. Much more recently, George Plafker and James C. Savage of the U.S. Geological Survey have shown, from barnacle lines, that the great 1960 Chile and the 1964 Alaska earthquakes caused several meters of vertical displacement of the shoreline.

  14. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan (the western, eastern, and northeastern Taiwan regions) using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, a high probability value is usually obtained for clustered events, such as events with foreshocks and events that occur within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, whereas the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  15. Afterslip and Viscoelastic Relaxation Model Inferred from the Large-Scale Postseismic Deformation Following the 2010 Mw 8.8 Maule Earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Vigny, C.; Klein, E.; Fleitout, L.; Garaud, J. D.

    2015-12-01

    Postseismic deformation following the large subduction earthquake of Maule (Chile, Mw 8.8, February 27th 2010) has been closely monitored with GPS from 70 km up to 2000 km away from the trench. It exhibits a behavior generally similar to that already observed after the Aceh and Tohoku-Oki earthquakes. Vertical uplift is observed on the volcanic arc, and a moderate large-scale subsidence is associated with sizeable horizontal deformation in the far field (500-2000 km from the trench). In addition, near-field data (70-200 km from the trench) feature a rather complex deformation pattern. A 3D FE code (Zebulon Zset) is used to relate this deformation to slip on the plate interface and relaxation in the mantle. The mesh features a spherical shell portion from the core-mantle boundary to the Earth's surface, extending over more than 60 degrees in latitude and longitude. The overriding and subducting plates are elastic, and the asthenosphere is viscoelastic. A viscoelastic low-viscosity channel (LVC) is also introduced along the plate interface. Both the asthenosphere and the channel feature Burgers rheologies, and we invert for their mechanical properties and geometrical characteristics simultaneously with the afterslip distribution. The horizontal deformation pattern requires relaxation both (i) in the asthenosphere extending down to 270 km, with a 'long-term' viscosity of the order of 4.8 × 10¹⁸ Pa s, and (ii) in the channel, which has to extend from depths of 50 to 150 km with viscosities slightly below 10¹⁸ Pa s, in order to fit the vertical velocity pattern well (intense and quick uplift over the Cordillera). Aseismic slip on the plate interface, at shallow depth, is necessary to explain all the characteristics of the near-field displacements. We then detect two main patches of high slip, one updip of the coseismic slip distribution in the northernmost part of the rupture zone, and the other one downdip, at the latitude of Constitución (35°S). We finally study the temporal

  16. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn; Borchardt, Glenn; Hirschfeld, Sue E.; Lienkaemper, James J.; McClellan, Patrick H.; Williams, Patrick L.; Wong, Ivan G.

    1992-01-01

    The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868 but was not well measured. Witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). The creep rate seems nearly constant over many decades, with short-term variations. The creep rate mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event, as indicated by offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes, which originate below the surficial creeping zone, may now be ≥1.1 m along the southern (1868) segment and ≥1.4 m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large magnitude earthquakes (magnitude 6.8 on the northern (1836?) segment, 6.7 on the southern (1868) segment, and 7.0 for both). Thus, despite surficial creep, the fault may be
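
    The slip-deficit arithmetic behind the closing estimate can be made explicit; the sketch below is illustrative only, using the creep-rate range and the 9 mm/yr deep rate quoted above, accumulated from 1868 to the 1992 publication date.

        deep_rate = 9.0                    # assumed deep slip rate, mm/yr
        elapsed = 1992 - 1868              # years since the 1868 rupture
        for creep in (3.5, 6.5):           # observed surface creep range
            deficit = (deep_rate - creep) * elapsed / 1000.0
            print(f"creep {creep} mm/yr -> ~{deficit:.1f} m of stored slip")
        # the slowest-creeping sections store the most, approaching the
        # ~0.8 m maximum quoted in the abstract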

  17. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562
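
    As a concrete illustration of the self-organized critical models mentioned above, here is a minimal Bak-Tang-Wiesenfeld sandpile automaton (a generic stand-in for the slider-block models, not code from the paper); slowly driven cells topple to their neighbors, and the resulting avalanche sizes are heavy-tailed.

        import numpy as np

        N = 50
        rng = np.random.default_rng(4)
        grid = np.zeros((N, N), dtype=int)
        sizes = []
        for _ in range(20000):
            i, j = rng.integers(0, N, size=2)
            grid[i, j] += 1                         # slow external driving
            size = 0
            while True:
                unstable = np.argwhere(grid >= 4)   # sites above threshold
                if len(unstable) == 0:
                    break
                for a, b in unstable:
                    grid[a, b] -= 4                 # topple: shed 4 grains
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < N and 0 <= nb < N:
                            grid[na, nb] += 1       # edge grains fall off
                    size += 1
            if size:
                sizes.append(size)
        print(np.percentile(sizes, [50, 90, 99]))   # heavy-tailed avalanches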

  18. Scaling of seismic memory with earthquake size

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene

    2012-07-01

    It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
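
    For readers unfamiliar with DFA, a generic first-order implementation is sketched below (the standard algorithm, not the authors' code): the series is integrated, divided into windows, locally detrended, and the RMS fluctuation F(n) is fit against window size n on log-log axes to give the exponent α.

        import numpy as np

        def dfa_exponent(x, scales):
            y = np.cumsum(x - np.mean(x))            # integrated profile
            F = []
            for n in scales:
                n_seg = len(y) // n
                segs = y[:n_seg * n].reshape(n_seg, n)
                t = np.arange(n)
                res = [seg - np.polyval(np.polyfit(t, seg, 1), t)
                       for seg in segs]              # remove local trends
                F.append(np.sqrt(np.mean(np.square(res))))
            # alpha is the slope of log F(n) versus log n
            return np.polyfit(np.log(scales), np.log(F), 1)[0]

        rng = np.random.default_rng(1)
        print(dfa_exponent(rng.normal(size=4096), [16, 32, 64, 128, 256]))
        # ~0.5 for uncorrelated noise; >0.5 indicates persistent correlations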

  19. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2016-05-01

    The main objective of this paper was to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the framework of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for a realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes the background and construction principles of the scale and presents case studies from different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In such cases, and in unpopulated areas, ESI offers a unique way to assess a reliable earthquake intensity. Finally, yet importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  20. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near-mean-field systems having long-range interactions, an example of which is earthquakes with their elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
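
    A minimal sketch of the count-based idea (with hypothetical parameters, not those fit in the paper): the number n of small events since the last large one is treated as a natural-time clock, and a Weibull law converts it into a conditional probability of a large event within the next dn small events.

        import math

        def conditional_weibull(n, dn, eta, k):
            # P(large event before count n + dn | none by count n),
            # with Weibull survival S(n) = exp(-(n/eta)**k)
            return 1.0 - math.exp((n / eta) ** k - ((n + dn) / eta) ** k)

        # with GR b = 1, ~1000 M>3 events are expected per M>6 event, so
        # eta is of that order; k > 1 gives quasi-periodic recurrence
        print(conditional_weibull(n=800, dn=200, eta=1000.0, k=1.4))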

  1. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating the predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. In this analysis, we make an attempt
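
    The dual alerting logic reduces to a pair of threshold lookups; the sketch below encodes the fatality and loss thresholds quoted above (a schematic reading of the scale, not the operational PAGER code).

        def eis_alert(fatalities=None, losses_usd=None):
            # Map an estimated impact onto the green/yellow/orange/red levels.
            def level(x, thresholds):   # thresholds = (yellow, orange, red)
                for color, t in zip(("red", "orange", "yellow"),
                                    reversed(thresholds)):
                    if x >= t:
                        return color
                return "green"
            if fatalities is not None:
                return level(fatalities, (1, 100, 1000))
            return level(losses_usd, (1e6, 1e8, 1e9))

        print(eis_alert(fatalities=250))    # -> orange
        print(eis_alert(losses_usd=5e8))    # -> orange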

  2. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  3. Mw Dependence of Ionospheric Electron Enhancement Immediately Before Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2015-12-01

    Ionospheric electrons were reported to have increased ~40 minutes before the 2011 Tohoku-oki (Mw 9.0) earthquake, Japan, by observing total electron content (TEC) with GNSS receivers [e.g., Heki and Enomoto, 2013]. They further demonstrated that similar TEC enhancements preceded all the recent earthquakes with Mw of 8.5 or more. The reality of these signals has been repeatedly questioned, due mainly to the ambiguity in the derivation of the reference TEC curves from which anomalies are defined [e.g., Masci et al., 2015]. Here we propose a numerical approach, based on Akaike's Information Criterion, to detect positive breaks (sudden increases of the TEC rate) in the vertical TEC time series without using reference curves. We demonstrate that such breaks are detected 20-80 minutes before the ten recent large earthquakes with Mw 7.8-9.2. The sizes of the breaks were found to depend on the background absolute VTEC and Mw, i.e., Break (TECU/h) = 4.74 Mw + 0.13 VTEC - 39.86, with a standard deviation of ~1.2 TECU/h. We can invert this equation to Mw = (Break - 0.13 VTEC + 39.86)/4.74, which can tell us the Mw of impending earthquakes with an uncertainty of ~0.25. The precursor times were longer for larger earthquakes, ranging from ~80 minutes for the largest (2004 Sumatra-Andaman) to ~21 minutes for the smallest (2015 Nepal). The precursors of intraplate earthquakes (e.g., 2012 Indian Ocean) started significantly earlier than those of interplate ones. We performed the same analyses during periods without earthquakes and found that positive breaks comparable to that before the 2011 Tohoku-oki earthquake occur once in 20 hours. They originate from small-amplitude Large-Scale Travelling Ionospheric Disturbances (LSTIDs), which are excited in the auroral oval and move southward with the velocity of internal gravity waves. This probability is small enough to rule out that these breaks are fortuitous, but large enough to make it a challenge to apply preseismic TEC enhancements to short-term earthquake prediction.
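
    The inverted regression can be applied directly; the snippet below simply evaluates the relation quoted in the abstract for illustrative input values.

        def mw_from_break(break_rate_tecu_per_h, vtec):
            # Mw = (Break - 0.13*VTEC + 39.86) / 4.74, sigma(Mw) ~ 0.25
            return (break_rate_tecu_per_h - 0.13 * vtec + 39.86) / 4.74

        print(mw_from_break(6.0, 30.0))  # hypothetical break and VTEC -> ~8.9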

  4. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  5. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to rates measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, measured a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical of continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  6. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  7. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Razafindrakoto, Hoby N. T.; Mai, P. Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran K. S.

    2015-07-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
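
    A minimal sketch of the MDS comparison under stated assumptions: gridded slip models are compared with one plausible form of a normalized squared distance (the paper's exact metric definitions may differ), and the resulting dissimilarity matrix is embedded in two dimensions.

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(2)
        models = [rng.random((20, 40)) for _ in range(6)]  # toy slip models

        def norm_sq(a, b):
            # normalized squared difference between two slip distributions
            return np.sum((a - b) ** 2) / np.sqrt(np.sum(a**2) * np.sum(b**2))

        D = np.array([[norm_sq(a, b) for b in models] for a in models])
        xy = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
        print(xy)   # nearby points correspond to similar rupture models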

  8. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  9. Web-Based Interrogation of Large-Scale Geophysical Data Sets and Clustering Analysis of Many Earthquake Events From Desktop and Handheld Computers

    NASA Astrophysics Data System (ADS)

    Garbow, Z. A.; Erlebacher, G.; Yuen, D. A.; Sevre, E. O.; Nagle, A. R.; Kaneko, J. Y.

    2002-12-01

    The size of datasets in the geosciences is growing at a tremendous pace due to inexpensive memory, increasingly large storage space, fast processors and constantly improving data-collection instruments. However, the available bandwidth increases at a much slower rate and consequently cannot keep up with the size of the datasets themselves. Coupled with our need to explore large datasets from a simplified point of view, the current approach of transferring full datasets from one machine to another in order to analyze them is fast becoming impractical and obsolete. We have previously developed a web-based interactive data interrogation system that allows users to remotely analyze geophysical data over the Internet using a client-server paradigm (Garbow et al., Electronic Geosciences, Vol. 6, 2001). To further our idea of interactive data extraction, we have used this interrogative system to explore both high-resolution mantle convection data and earthquake clusters involving up to tens of thousands of earthquakes. In addition, we have ported this system to work from handheld devices via wireless connections. Our system uses a combination of Java, Python, and C for running remotely from a desktop computer, laptop, or even a handheld device, while incorporating the power and memory capacity of a large workstation server. Because of the limitations of the current generation of handheld devices in terms of processing power, screen size, memory and storage, they have not yet become practical vehicles for useful scientific work. Our aim is to successfully overcome the limitations of handheld devices to allow them in the near future to be used as portable scientific research laboratories, particularly with the new, more powerful processors (e.g., Transmeta Crusoe) just over the horizon.

  10. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
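
    Schematically, a calibrated Ml scale of this kind is a distance correction plus a station term; the sketch below uses the classic Hutton-Boore (1987) coefficients as placeholders, not the values inverted for Turkey.

        import math

        def local_magnitude(amp_mm, r_km, station_corr=0.0):
            # Ml = log10(A) + a*log10(r/100) + b*(r - 100) + 3.0 + S
            a, b = 1.110, 0.00189   # placeholder attenuation coefficients
            return (math.log10(amp_mm) + a * math.log10(r_km / 100.0)
                    + b * (r_km - 100.0) + 3.0 + station_corr)

        # vertical-channel amplitude scaled by the reported horizontal-to-
        # vertical factor of 1.8 before taking the logarithm
        print(local_magnitude(amp_mm=1.2 * 1.8, r_km=150.0, station_corr=-0.1))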

  11. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault length does not saturate with earthquake magnitude, while fault width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this, for a given earthquake magnitude, subduction reverse dip-slip events occupy a relatively larger rupture area compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip

  12. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  13. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
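
    The COPD construction is essentially a stack of per-measurement PDFs; a minimal sketch with hypothetical offsets and Gaussian PDFs (the study's lidar-derived PDFs are irregularly shaped) is given below.

        import numpy as np

        offsets = [3.1, 3.4, 3.2, 7.0, 7.3, 3.3]   # hypothetical offsets, m
        sigmas = [0.3, 0.4, 0.3, 0.5, 0.6, 0.3]    # their uncertainties, m
        x = np.linspace(0.0, 10.0, 1000)
        copd = sum(np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                   for m, s in zip(offsets, sigmas))
        print(x[np.argmax(copd)])   # dominant peak near the ~3.3 m value,
                                    # a candidate single-event displacement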

  14. Linking Oceanic Tsunamis and Geodetic Gravity Changes of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Fu, Yuning; Song, Y. Tony; Gross, Richard S.

    2017-03-01

    Large earthquakes at subduction zones usually generate tsunamis and coseismic gravity changes. These two independent oceanic and geodetic signatures of earthquakes can be observed individually by modern geophysical observational networks. The Gravity Recovery and Climate Experiment (GRACE) twin satellites can detect gravity changes induced by large earthquakes, while altimetry satellites and Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys can observe the resultant tsunamis. In this study, we introduce a method to connect the oceanic tsunami measurements with the geodetic gravity observations, and apply it to the 2004 Sumatra Mw 9.2 earthquake, the 2010 Maule Mw 8.8 earthquake and the 2011 Tohoku Mw 9.0 earthquake. Our results indicate consistent agreement between these two independent measurements. Since seafloor displacement and its formation mechanism remain the largest puzzle in assessing tsunami hazards, our study demonstrates a new approach to utilizing these two kinds of measurements for a better understanding of large earthquakes and tsunamis.

  15. Large Earthquake Potential in the Southeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Mora-Paez, H.; Bilham, R. G.; Lafemina, P.; Mattioli, G. S.; Molnar, P. H.; Audemard, F. A.; Perez, O. J.

    2015-12-01

    The axis of rotation describing relative motion of the Caribbean plate with respect to South America lies in Canada near Hudson Bay, such that the Caribbean plate moves nearly due east relative to South America [DeMets et al., 2010]. The plate motion is absorbed largely by pure strike-slip motion along the El Pilar Fault in northeastern Venezuela, but in northwestern Venezuela and northeastern Colombia, the relative motion is distributed over a wide zone that extends from offshore to the northeasterly trending Mérida Andes, with the resolved component of convergence between the Caribbean and South American plates estimated at ~10 mm/yr. Recent densification of GPS networks through COLOVEN and COCONet, including access to private GPS data maintained by Colombia and Venezuela, allowed the development of a new GPS velocity field. The velocity field, processed with JPL's GOA 6.2, JPL non-fiducial final orbit and clock products and VMF tropospheric products, includes over 120 continuous and campaign stations. This new velocity field, along with enhanced seismic reflection profiles and earthquake location analysis, strongly suggests the existence of an active oblique subduction zone. We have also been able to use broadband data from Venezuela to search for slow-slip events as an indicator of an active subduction zone. There are caveats to this hypothesis, however, including the absence of volcanism that is typically concurrent with active subduction zones and a weak historical record of great earthquakes. A single tsunami deposit dated at 1500 years before present has been identified on the southeast Yucatan peninsula. Our simulations indicate its probable origin is within our study area. We present a new GPS-derived velocity field, which has been used to improve a regional block model [based on Mora and LaFemina, 2009-2012], and discuss the earthquake and tsunami hazards implied by this model. Based on the new geodetic constraints and our updated block model, if part of the

  16. Time-Dependent Earthquake Forecasts on a Global Scale

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Graves, W. R.

    2014-12-01

    We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged ("Poisson") forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. An example is the Working Group on California Earthquake Probabilities, which has been using fault-based models to compute conditional probabilities in California since 1988. Another example is the Epidemic-Type Aftershock Sequence (ETAS) forecast, which is based on the Gutenberg-Richter (GR) magnitude-frequency law, the Omori aftershock law, and Poisson statistics. The method discussed in this talk is based on the observation that GR statistics characterize seismicity for all space and time. Small-magnitude event counts (quake counts) are used as "markers" for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes, one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In previous research, we used conditional Weibull statistics to convert event counts into a temporal probability for a given fixed region. In the present paper, we move beyond a fixed region and develop a method to compute these Natural Time Weibull (NTW) forecasts on a global scale, using an internally consistent method, in regions of arbitrary shape and size. We develop and implement these methods on a modern web-service computing platform, which can be found at www.openhazards.com and www.quakesim.org. We also discuss constraints on the User Interface (UI) that follow from practical considerations of site usability.
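
    The quake-count logic stated above is a one-line consequence of the GR law; under the b = 1 assumption (counts below are illustrative), the expected number of small events per large event follows directly.

        b = 1.0
        n_per_large = 10 ** (b * (6.0 - 3.0))   # ~1000 M>=3 events per M>=6
        n_observed = 700                         # hypothetical running count
        print(n_observed / n_per_large)          # fraction of the natural-
                                                 # time cycle already elapsed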

  17. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

    The mega-thrust, Mw 9.0, 2011 Tohoku earthquake has re-opened the discussion among the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on the real-time measurement of ground motion parameters in a few-second window after the P-wave arrival. Currently, we are using the initial peak displacement (Pd) and the predominant period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well-known problem with the real-time estimation of the magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high-quality KiK-net and K-NET networks provided a large quantity of strong motion records of the mainshock, with a wide azimuthal coverage both along the Japan coast and inland. More than 300 3-component accelerograms were available, with epicentral distances ranging from about 100 km up to more than 500 km. This earthquake thus presents an optimal case study for testing the physical bases of early warning and for investigating the feasibility of a real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the mainshock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude
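
    For concreteness, the two parameters can be computed from a displacement trace in an expanding post-P window as sketched below (a textbook formulation of Pd and τc, with a synthetic trace standing in for real records).

        import numpy as np

        def pd_and_tauc(u, dt, window_s):
            # u: displacement samples starting at the P arrival
            n = int(window_s / dt)
            w = u[:n] - u[0]
            v = np.gradient(w, dt)                 # velocity
            pd = np.max(np.abs(w))                 # initial peak displacement
            r = np.sum(v ** 2) / max(np.sum(w ** 2), 1e-20)
            tauc = 2.0 * np.pi / np.sqrt(r)        # predominant period
            return pd, tauc

        t = np.arange(0.0, 30.0, 0.01)
        u = 0.02 * np.sin(2.0 * np.pi * t / 4.0)   # toy 4-s-period signal
        for win in (3.0, 10.0, 30.0):              # expanding windows
            print(win, pd_and_tauc(u, 0.01, win))  # tauc recovers ~4 s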

  18. Absence of remotely triggered large earthquakes beyond the mainshock region

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2011-01-01

    Large earthquakes are known to trigger earthquakes elsewhere. Damaging large aftershocks occur close to the mainshock and microearthquakes are triggered by passing seismic waves at significant distances from the mainshock. It is unclear, however, whether bigger, more damaging earthquakes are routinely triggered at distances far from the mainshock, heightening the global seismic hazard after every large earthquake. Here we assemble a catalogue of all possible earthquakes greater than M 5 that might have been triggered by every M 7 or larger mainshock during the past 30 years. We compare the timing of earthquakes greater than M 5 with the temporal and spatial passage of surface waves generated by large earthquakes using a complete worldwide catalogue. Whereas small earthquakes are triggered immediately during the passage of surface waves at all spatial ranges, we find no significant temporal association between surface-wave arrivals and larger earthquakes. We observe a significant increase in the rate of seismic activity at distances confined to within two to three rupture lengths of the mainshock. Thus, we conclude that the regional hazard of larger earthquakes is increased after a mainshock, but the global hazard is not.

  19. Examining Earthquake Scaling Via Event Ratio Levels

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Yoo, S.; Mayeda, K. M.; Gok, R.

    2013-12-01

    A challenge with using corner frequency to interpret stress parameter scaling is that stress drop and apparent stress are related to the cube of the corner frequency. In practice this leads to high levels of uncertainty in measured stress, since the uncertainty in measuring the corner frequency is cubed in determining the uncertainty in the stress parameters. We develop a new approach using the low- and high-frequency levels of spectral ratios between two closely located events recorded at the same stations. This approach has a number of advantages over more traditional corner frequency fitting, either in spectral ratios or individual spectra. First, if the bandwidth of the spectral ratio is sufficient, the levels can be measured at many individual frequency points and averaged, reducing the measurement error. Second, the apparent stress (and stress drop) is related to the high-frequency level to the 3/2 power, so the measurement uncertainty is not as amplified as when using the corner frequency. Finally, if the bandwidth is sufficiently broad to determine both the low- and high-frequency levels of the spectral ratio, the apparent stress (or stress drop) ratio can be determined without the need for any other measurements (e.g., moment, fault area), which of course have their own measurement uncertainties. We will show a number of examples taken from a wide variety of crustal earthquake sequences. [Figure: sigmoid formed by the spectral ratio of two hypothetical events (Mw 6.0 and Mw 4.0) for two cases of stress scaling; in the self-similar case both events have an apparent stress of 3 MPa, while in the non-self-similar case the large event has 3 MPa and the smaller 1 MPa. The ratio reaches different constant levels: the low-frequency level (LVL) is the ratio of the moments, and the high-frequency level (HFL) depends on the stress parameters.] In this paper we derive the
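
    The behavior in the figure can be reproduced with Brune omega-square spectra; the sketch below (hypothetical source parameters, not the paper's models) shows the ratio flattening to a low-frequency level equal to the moment ratio and a high-frequency level that carries the corner-frequency, and hence stress, information.

        import numpy as np

        def brune(f, m0, fc):
            # omega-square source spectrum with moment m0 and corner fc
            return m0 / (1.0 + (f / fc) ** 2)

        f = np.logspace(-2, 2, 500)   # Hz
        ratio = brune(f, m0=1e18, fc=0.2) / brune(f, m0=1e15, fc=2.0)
        print(ratio[0])    # LVL ~ 1e3, the moment ratio
        print(ratio[-1])   # HFL ~ 1e3 * (0.2/2.0)**2 = 10 (self-similar case)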

  20. Time-predictable recurrence model for large earthquakes

    SciTech Connect

    Shimazaki, K.; Nakata, T.

    1980-04-01

    We present historical and geomorphological evidence of a regularity in earthquake recurrence at three different sites of plate convergence around the Japan arcs. The regularity shows that the larger an earthquake is, the longer is the following quiet period. In other words, the time interval between two successive large earthquakes is approximately proportional to the amount of coseismic displacement of the preceding earthquake, not of the following earthquake. The regularity enables us, in principle, to predict the approximate occurrence time of earthquakes. The data set includes 1) a historical document describing repeated measurements of water depth at Murotsu near the focal region of Nankaido earthquakes, 2) precise levelling and 14C dating of Holocene uplifted terraces on the southern Boso Peninsula facing the Sagami trough, and 3) similar geomorphological data on exposed Holocene coral reefs on Kikai Island along the Ryukyu arc.
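
    In symbols, the time-predictable model described above can be written as follows (a standard formulation of the idea, with v_pl the long-term loading rate; this notation is not from the paper):

        % time-predictable recurrence: the quiet interval after event i is
        % set by that event's coseismic slip u_i and the loading rate v_pl,
        % so slip predicts the timing of the next event, not its size
        T_{i \to i+1} \approx \frac{u_i}{v_{\mathrm{pl}}}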

  1. Detection of hydrothermal precursors to large northern California earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-04

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes.

  2. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  3. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  4. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  5. Ultralow-Frequency Magnetic Fields Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Fraser-Smith, Antony C.

    2008-06-01

    The Great Alaska Earthquake (M 9.2) of 27 March 1964 was the largest earthquake ever to strike the United States in modern times and one of the largest ever recorded anywhere. Later that year, Moore [1964], in a surprisingly rarely cited paper, reported the occurrence of strong ultralow-frequency (ULF; ≤10 Hz) magnetic field disturbances at Kodiak, Alaska, in the 1.2 hours before the earthquake. That report has since been followed by others [Fraser-Smith et al., 1990; Kopytenko et al., 1993; Hayakawa et al., 1996; see also Molchanov et al., 1992] similarly describing the occurrence of large-amplitude ULF magnetic field fluctuations before other large earthquakes ("large" describes earthquakes with magnitudes M ~ 7 or greater). These reports, involving four separate large earthquakes, were made by four different groups, and the results were published in well-known, refereed scientific journals, so there is no doubt that there is evidence for the existence of comparatively large ULF magnetic field fluctuations preceding large earthquakes.

  6. A Large Scale Automatic Earthquake Location Catalog in the San Jacinto Fault Zone Area Using An Improved Shear-Wave Detection Algorithm

    NASA Astrophysics Data System (ADS)

    White, M. C. A.; Ross, Z.; Vernon, F.; Ben-Zion, Y.

    2015-12-01

    UC San Diego's ANZA network began archiving event-triggered data in 1982. As a result of improved recording technology, continuous waveform data archives are available starting in 1998. This continuous dataset, from 1998 to present, represents a wealth of potential insight into spatio-temporal seismicity patterns, earthquake physics and the mechanics of the San Jacinto Fault Zone. However, the volume of data renders manual analysis costly. In order to investigate the characteristics of the data in space and time, an automatic earthquake location catalog is needed. To this end, we apply standard earthquake signal processing techniques to the continuous data to detect first-arriving P-waves, in combination with a recently developed S-wave detection algorithm. The resulting dataset of arrival time observations is processed using a grid association algorithm to produce initial absolute locations, which are refined using a location inversion method that accounts for 3-D velocity heterogeneities. Precise relative locations are then derived from the refined absolute locations using the HypoDD double-difference algorithm. Moment magnitudes for the events are estimated from multi-taper spectral analysis. A >650% increase in the S:P pick ratio is achieved using the updated S-wave detection algorithm when compared to the currently available catalog for the ANZA network. The increased number of S-wave observations leads to improved earthquake location accuracy and reliability (i.e., fewer false event detections). Various aspects of spatio-temporal seismicity patterns and size distributions are investigated. Updated results will be presented at the meeting.
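
    The phrase "standard earthquake signal processing techniques" typically refers to an STA/LTA energy-ratio trigger; a textbook version is sketched below (a generic detector, not the network's production code).

        import numpy as np

        def sta_lta(x, dt, sta_s=1.0, lta_s=30.0):
            # ratio of short-term to long-term average signal energy,
            # aligned so both windows end at the same sample
            e = x.astype(float) ** 2
            n_sta, n_lta = int(sta_s / dt), int(lta_s / dt)
            c = np.cumsum(np.concatenate(([0.0], e)))
            sta = (c[n_sta:] - c[:-n_sta]) / n_sta
            lta = (c[n_lta:] - c[:-n_lta]) / n_lta
            return sta[n_lta - n_sta:] / np.maximum(lta, 1e-20)

        rng = np.random.default_rng(3)
        x = 0.1 * rng.normal(size=6000)
        x[3000:3200] += rng.normal(size=200)   # synthetic arrival
        r = sta_lta(x, dt=0.01)
        print(np.argmax(r > 4.0))              # first triggered sample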

  7. Scaling of intraplate earthquake recurrence interval with fault length and implications for seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Marrett, Randall

    1994-12-01

    Consensus indicates that faults follow power-law scaling, although significant uncertainty remains about the values of important parameters. Combining these scaling relationships with power-law scaling relationships for earthquakes suggests that intraplate earthquake recurrence interval scales with fault length. Regional scaling data may be locally calibrated to yield a site-specific seismic hazard assessment tool. Scaling data from small faults (those that do not span the seismogenic layer) suggest that recurrence interval varies as a negative power of fault length. Due to uncertainties regarding the recently recognized changes in scaling for large earthquakes, it is unclear whether recurrence interval varies as a negative or positive power of fault length for large faults (those that span the seismogenic layer). This question is of critical importance for seismic hazard assessment.

  8. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  9. Detection capability of global earthquakes influenced by large intermediate-depth and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2011-12-01

    This study examined the detection capability of the global CMT catalogue immediately after a large intermediate-depth (70 < depth ≤ 300 km) or deep (300 km < depth) earthquake. Iwata [2008, GJI] revealed that the detection capability is remarkably lower than the ordinary level for several hours after the occurrence of a large shallow (depth ≤ 70 km) earthquake. Since the global CMT catalogue plays an important role in studies on global earthquake forecasting and seismicity patterns [e.g., Kagan and Jackson, 2010, Pageoph], the characteristics of the catalogue should be investigated carefully. We stacked global shallow earthquake sequences, taken from the global CMT catalogue from 1977 to 2010, after each large intermediate-depth or deep earthquake. Then, we utilized a statistical model representing the observed magnitude-frequency distribution of earthquakes [e.g., Ringdal, 1975, BSSA; Ogata and Katsura, 1993, GJI]. The applied model is the product of the Gutenberg-Richter law and a detection rate function q(M). Following previous studies, the cumulative distribution of the normal distribution was used as q(M). This model enables us to estimate μ, the magnitude at which the detection rate of earthquakes is 50 per cent. Finally, a Bayesian approach with a piecewise linear approximation [Iwata, 2008, GJI] was applied to the stacked data to estimate the temporal change of μ. Consequently, we found a significantly lowered detection capability after an intermediate-depth or deep earthquake with magnitude 6.5 or larger. The lowered detection capability lasts for several hours to half a day. During this period of low detection capability, a few per cent of M ≥ 6.0 earthquakes and a few tens of per cent of M ≥ 5.5 earthquakes are missing from the global CMT catalogue, while the magnitude completeness threshold of the catalogue was estimated to be around 5.5 [e.g., Kagan, 2003, PEPI].
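
    The statistical model named above combines two ingredients; a minimal sketch of the observed (unnormalized) magnitude-frequency density is given below, with μ the 50 per cent detection magnitude.

        import numpy as np
        from scipy.stats import norm

        def observed_density(m, b, mu, sigma):
            # Gutenberg-Richter density times a normal-CDF detection rate
            # q(M); the normalization constant is omitted for clarity
            gr = np.log(10.0) * b * 10.0 ** (-b * m)
            return gr * norm.cdf(m, loc=mu, scale=sigma)

        m = np.linspace(4.0, 7.0, 7)
        print(observed_density(m, b=1.0, mu=5.5, sigma=0.2))
        # the density rolls off below mu and decays as 10**(-b*m) above it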

  10. Tremor and the Depth Extent of Slip in Large Earthquakes

    NASA Astrophysics Data System (ADS)

    BEroza, G. C.; Brown, J. R.; Ide, S.

    2013-05-01

    We survey the evidence for the distribution of tremor and mainshock slip. In Southwest Japan, where tremor is well located, it outlines the down-dip edge of slip in the 1944 and 1946 Nankai earthquakes. In Alaska and the Aleutians, tremor locations and slip distributions are subject to greater uncertainty, but within that uncertainty they are consistent with the notion that tremor outlines the down-dip limit of mainshock slip. In Mexico, tremor locations and the extent of rupture in large (M > 7) earthquakes are also uncertain, but show a similar relationship. Taken together, these observations suggest that tremor may provide important information on the depth extent of rupture in regions where there have been no recent large earthquakes to test that hypothesis. If applied to the Cascadia subduction zone, this suggests slip will extend farther inland than previously assumed. If applied to the San Andreas Fault, it suggests slip will extend deeper than has previously been assumed.

  11. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e., vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  12. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also significantly increase the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied to detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
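
    As one concrete illustration of why the hypothesis count matters, the sketch below performs the simplest alternative to multi-hypothesis tracking: a single global-nearest-neighbor assignment of detections to tracks via the Hungarian algorithm, with a gating distance. The coordinates and gate value are illustrative, not from the report.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def assign_detections(tracks, detections, gate=5.0):
            """Global-nearest-neighbor data association with gating.

            tracks, detections: (N, 2) and (M, 2) arrays of predicted and
            observed positions. Returns (track_index, detection_index) pairs.
            """
            cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :],
                                  axis=2)
            rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
            # Reject pairs farther than the gate (likely new or missed targets).
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

        tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])
        detections = np.array([[0.5, -0.2], [9.4, 10.3], [40.0, 40.0]])
        print(assign_detections(tracks, detections))   # third pair gated out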

  13. Deeper penetration of large earthquakes on seismically quiescent faults

    NASA Astrophysics Data System (ADS)

    Jiang, Junle; Lapusta, Nadia

    2016-06-01

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  14. Deeper penetration of large earthquakes on seismically quiescent faults.

    PubMed

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  15. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of five Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore, the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
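
    A minimal version of one such test is sketched below: simulate catalogs of independent, constant-rate events and ask how often the busiest sliding window matches or exceeds the observed clustering. The event count, catalog span, window width, and observed maximum are illustrative placeholders, and a real test must first remove aftershock sequences.

        import numpy as np

        def max_window_count(times, width):
            """Largest number of events in any sliding window of given width."""
            t = np.sort(times)
            return int(np.max(np.searchsorted(t, t + width) - np.arange(t.size)))

        rng = np.random.default_rng(2)
        n_events, span, window = 90, 111.0, 7.0  # illustrative: count, yr, yr
        observed = 12                             # illustrative observed max

        sims = np.array([max_window_count(rng.uniform(0, span, n_events), window)
                         for _ in range(10000)])
        p_value = np.mean(sims >= observed)
        print(f"P(max count >= {observed} under Poisson null) = {p_value:.3f}")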

  16. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a routine basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  17. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < Mw < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of the scale invariance of stress drop with source dimension and analyse their interpretation in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
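
    For a circular crack, the static stress drop is commonly estimated as Δσ = 7M0/(16R³) (Eshelby); self-similarity then requires R ∝ M0^(1/3) so that Δσ stays constant across sizes. A short numerical illustration, with an assumed 3 MPa stress drop and the standard Mw-to-M0 conversion:

        import numpy as np

        def stress_drop_circular(m0, radius):
            """Static stress drop (Pa) of a circular crack: 7*M0 / (16*R^3)."""
            return 7.0 * m0 / (16.0 * radius**3)

        # Self-similar family: constant 3 MPa stress drop implies R ~ M0**(1/3).
        for mw in (2.0, 5.0, 8.0):
            m0 = 10 ** (1.5 * mw + 9.1)                    # seismic moment, N*m
            r = (7.0 * m0 / (16.0 * 3e6)) ** (1.0 / 3.0)   # radius for 3 MPa drop
            print(f"Mw {mw}: M0 = {m0:.2e} N*m, R = {r/1e3:.2f} km, "
                  f"check = {stress_drop_circular(m0, r)/1e6:.1f} MPa")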

  18. Large earthquakes create vertical permeability by breaching aquitards

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Yuen; Liao, Xin; Wang, Lee-Ping; Wang, Chung-Ho; Manga, Michael

    2016-08-01

    Hydrologic responses to earthquakes and their mechanisms have been widely studied. Some responses have been attributed to increases in vertical permeability. However, basic questions remain: How do increases in vertical permeability occur? How frequently do they occur? Is there a quantitative measure for detecting the occurrence of aquitard breaching? We try to answer these questions by examining data from a dense network of ~50 monitoring stations of clustered wells in a sedimentary basin near the epicenter of the 1999 M7.6 Chi-Chi earthquake in western Taiwan. While most stations show evidence that confined aquifers remained confined after the earthquake, about 10% of the stations show evidence of coseismic breaching of aquitards, creating vertical permeability as high as that of aquifers. The water levels in wells without evidence of coseismic breaching of aquitards show tidal responses similar to those of a confined aquifer before and after the earthquake. Those wells with evidence of coseismic breaching of aquitards, on the other hand, show distinctly different postseismic tidal responses. Furthermore, the postseismic tidal responses of different aquifers became strikingly similar, suggesting that the aquifers became hydraulically connected and the connection was maintained for many months thereafter. Breaching of aquitards by large earthquakes has significant implications for a number of societal issues such as the safety of water resources, the security of underground waste repositories, and the production of oil and gas. The method demonstrated here may be used for detecting the occurrence of aquitard breaching by large earthquakes in other seismically active areas.
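
    The tidal-response comparison hinges on extracting the amplitude and phase of a tidal constituent (e.g., the semidiurnal M2 wave, period 12.4206 h) from well water levels before and after the earthquake. A minimal least-squares sketch on synthetic hourly data; all numerical values are illustrative assumptions.

        import numpy as np

        def tidal_fit(t_hours, level, period=12.4206):
            """Least-squares amplitude and phase of one tidal constituent."""
            w = 2.0 * np.pi / period
            design = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours),
                                      np.ones_like(t_hours)])
            (a, b, _), *_ = np.linalg.lstsq(design, level, rcond=None)
            return np.hypot(a, b), np.arctan2(b, a)   # amplitude, phase (rad)

        # Synthetic month of hourly water levels with noise.
        t = np.arange(0.0, 24.0 * 30.0)
        rng = np.random.default_rng(3)
        level = (2.0 + 0.05 * np.cos(2 * np.pi / 12.4206 * t - 0.8)
                 + 0.01 * rng.standard_normal(t.size))
        amp, phase = tidal_fit(t, level)
        print(f"amplitude = {amp:.3f} m, phase = {phase:.2f} rad")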

  19. Large earthquake processes in the northern Vanuatu subduction zone

    NASA Astrophysics Data System (ADS)

    Cleveland, K. Michael; Ammon, Charles J.; Lay, Thorne

    2014-12-01

    The northern Vanuatu (formerly New Hebrides) subduction zone (11°S to 14°S) has experienced large shallow thrust earthquakes with Mw > 7 in 1966 (MS 7.9, 7.3), 1980 (Mw 7.5, 7.7), 1997 (Mw 7.7), 2009 (Mw 7.7, 7.8, 7.4), and 2013 (Mw 8.0). We analyze seismic data from the latter four earthquake sequences to quantify the rupture processes of these large earthquakes. The 7 October 2009 earthquakes occurred in close spatial proximity over about 1 h in the same region as the July 1980 doublet. Both sequences activated widespread seismicity along the northern Vanuatu subduction zone. The focal mechanisms indicate interplate thrusting, but there are differences in waveforms that establish that the events are not exact repeats. With an epicenter near the 1980 and 2009 events, the 1997 earthquake appears to have been a shallow intraslab rupture below the megathrust, with strong southward directivity favoring a steeply dipping plane. Some triggered interplate thrusting events occurred as part of this sequence. The 1966 doublet ruptured north of the 1980 and 2009 events and also produced widespread aftershock activity. The 2013 earthquake rupture propagated southward from the northern corner of the trench with shallow slip that generated a substantial tsunami. The repeated occurrence of large earthquake doublets along the northern Vanuatu subduction zone is remarkable considering the doublets likely involved overlapping, yet different combinations of asperities. The frequent occurrence of large doublet events and rapid aftershock expansion in this region indicate the presence of small, irregularly spaced asperities along the plate interface.

  20. 1/f and the Earthquake Problem: Scaling constraints to facilitate operational earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Yoder, M. R.; Rundle, J. B.; Glasscoe, M. T.

    2013-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or '1/f', nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this '1/f problem,' it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture-length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area), in combination with a metric quantifying rate trends in local seismicity, to the local earthquake magnitude potential - the magnitudes of earthquakes the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning-parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents.
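
    A toy version of relating a region's event rate to its magnitude potential: estimate the G-R b-value with Aki's maximum-likelihood formula, then take the magnitude at which the expected G-R count over the catalog falls to one event. The synthetic catalog and completeness magnitude below are illustrative assumptions, not the paper's procedure in full.

        import numpy as np

        def b_value_mle(mags, m_complete):
            """Aki (1965) maximum-likelihood b-value above completeness."""
            m = mags[mags >= m_complete]
            return np.log10(np.e) / (m.mean() - m_complete), m.size

        def magnitude_potential(n_events, b, m_complete):
            """Magnitude at which the expected G-R count drops to one event."""
            # N(>=m) = n_events * 10**(-b * (m - m_complete)) = 1, solved for m.
            return m_complete + np.log10(n_events) / b

        rng = np.random.default_rng(4)
        mags = 2.5 + rng.exponential(1.0 / np.log(10), size=5000)  # b = 1 catalog
        b, n = b_value_mle(mags, m_complete=2.5)
        print(f"b = {b:.2f}, magnitude potential = "
              f"{magnitude_potential(n, b, 2.5):.1f}")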

  1. Crack fusion dynamics: A model for large earthquakes

    SciTech Connect

    Newman, W.I.; Knopoff, L.

    1982-07-01

    The physical processes of the fusion of small cracks into larger ones are nonlinear in character. A study of the nonlinear properties of fusion may lead to an understanding of the instabilities that give rise to clustering of large earthquakes. We have investigated the properties of simple versions of fusion processes to see if instabilities culminating in repetitive massive earthquakes are possible. We have taken into account such diverse phenomena as the production of aftershocks, the rapid extension of large cracks to overwhelm and absorb smaller cracks, the influence of anelastic creep-induced time delays, healing, the genesis of "juvenile" cracks due to plate motions, and others. A preliminary conclusion is that the time delays introduced by anelastic creep may be responsible for producing catastrophic instabilities characteristic of large earthquakes as well as aftershock sequences. However, it seems that nonlocal influences, i.e., the spatial diffusion of cracks, may play a dominant role in producing episodes of seismicity and clustering.

  2. Power-law time distribution of large earthquakes.

    PubMed

    Mega, Mirko S; Allegrini, Paolo; Grigolini, Paolo; Latora, Vito; Palatella, Luigi; Rapisarda, Andrea; Vinciguerra, Sergio

    2003-05-09

    We study the statistical properties of time distribution of seismicity in California by means of a new method of analysis, the diffusion entropy. We find that the distribution of time intervals between a large earthquake (the main shock of a given seismic sequence) and the next one does not obey Poisson statistics, as assumed by the current models. We prove that this distribution is an inverse power law with an exponent μ = 2.06 ± 0.01. We propose the long-range model, reproducing the main properties of the diffusion entropy and describing the seismic triggering mechanisms induced by large earthquakes.
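
    The exponent of such an inverse power-law waiting-time distribution can be checked with the standard continuous maximum-likelihood (Hill-type) estimator, μ̂ = 1 + n / Σ ln(τ_i/τ_min). The synthetic Pareto sample below is illustrative, not the California catalog.

        import numpy as np

        def powerlaw_exponent_mle(waiting_times, tau_min):
            """MLE of mu for p(tau) ~ tau**(-mu), tau >= tau_min (continuous)."""
            tau = waiting_times[waiting_times >= tau_min]
            mu_hat = 1.0 + tau.size / np.sum(np.log(tau / tau_min))
            err = (mu_hat - 1.0) / np.sqrt(tau.size)   # standard error
            return mu_hat, err

        # Synthetic waiting times with true mu = 2.06 above tau_min = 1 day.
        rng = np.random.default_rng(5)
        tau = rng.pareto(2.06 - 1.0, size=20000) + 1.0  # density ~ tau**(-2.06)
        mu, err = powerlaw_exponent_mle(tau, tau_min=1.0)
        print(f"mu = {mu:.3f} +/- {err:.3f}")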

  3. An earthquake strength scale for the media and the public

    USGS Publications Warehouse

    Johnston, A.C.

    1990-01-01

    A local engineer, E.P. Hailey, pointed this problem out to me shortly after the Loma Prieta earthquake. He felt that three problems limit the usefulness of magnitude in describing an earthquake to the public: (1) most people don't understand that it is not a linear scale; (2) of those who do realize the scale is not linear, very few understand the difference of a factor of ten in ground motion and 32 in energy release between points on the scale; and (3) even those who understand the first two points have trouble putting a given magnitude value into terms they can relate to. In summary, Mr. Hailey wondered why seismologists can't come up with an earthquake scale that doesn't confuse everyone and that conveys a sense of true relative size. Here, then, is my attempt to construct such a scale.
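
    The non-linearity at issue is simple to state: each unit of magnitude multiplies ground-motion amplitude by 10 and radiated energy by about 32 (10^1.5). A few lines make the point concrete; the magnitude pairs are illustrative.

        # Relative ground motion and energy between two magnitudes.
        def relative_size(m1, m2):
            amplitude_ratio = 10 ** (m2 - m1)          # factor of 10 per unit
            energy_ratio = 10 ** (1.5 * (m2 - m1))     # factor of ~32 per unit
            return amplitude_ratio, energy_ratio

        for m1, m2 in [(5.0, 6.0), (5.0, 7.0), (6.9, 7.1)]:
            amp, en = relative_size(m1, m2)
            print(f"M{m2} vs M{m1}: {amp:.1f}x ground motion, {en:.1f}x energy")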

  4. Large Earthquakes in Developing Countries: Estimating and Reducing their Consequences

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2003-12-01

    Recent efforts to reduce the risk of earthquakes in developing countries have been diverse, earnest, and inadequate. The earthquake risk in developing countries is large and growing rapidly. It is largely ignored. Unless something is done - quickly - to reduce it, both developing and developed countries will suffer human and economic losses far greater than have been experienced in the past. GeoHazards International (GHI) is a nonprofit organization that has attempted to reduce the death and suffering caused by earthquakes in the world's most vulnerable communities, through preparedness, mitigation and prevention. Its approach has included raising awareness, strengthening local institutions and launching mitigation activities, particularly for schools. GHI and its partners around the world have achieved some success: thousands of school children are safer, hundreds of cities are aware of their risk, tens of cities have been assessed and advised, and some local organizations have been strengthened. But there is disturbing evidence that what is being done is insufficient. The problem outpaces the cure. A new program is now being considered that would attempt to improve earthquake-resistant construction of schools, internationally, by publicizing well-managed programs around the world that design, construct and maintain earthquake-resistant schools. While focused on schools, this program might have broader applications in the future.

  5. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
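
    The exterior penalty idea referenced above converts a constrained problem into a sequence of unconstrained ones by adding r · Σ max(0, g_i(x))² to the objective and increasing r. A minimal sketch follows; the test problem and penalty schedule are illustrative, not the BIGDOT implementation.

        import numpy as np
        from scipy.optimize import minimize

        def exterior_penalty(f, g, x0, r0=1.0, growth=10.0, n_outer=6):
            """Minimize f(x) s.t. g(x) <= 0 via an increasing quadratic penalty."""
            x, r = np.asarray(x0, float), r0
            for _ in range(n_outer):
                def penalized(x):
                    viol = np.maximum(0.0, g(x))     # only violated constraints
                    return f(x) + r * np.sum(viol ** 2)
                x = minimize(penalized, x, method="BFGS").x
                r *= growth                          # tighten the penalty
            return x

        # Test: minimize (x-2)^2 + (y-2)^2 subject to x + y <= 2.
        f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
        g = lambda x: np.array([x[0] + x[1] - 2.0])
        print(exterior_penalty(f, g, x0=[0.0, 0.0]))   # -> approx [1, 1]

    With each outer iteration the minimizer of the penalized problem approaches the constrained optimum from the infeasible side, which is why very large penalty values, and hence careful scaling, are eventually needed.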

  6. Large historical earthquakes and tsunamis in a very active tectonic rift: the Gulf of Corinth, Greece

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Ioanna; Papadopoulos, Gerassimos

    2014-05-01

    The Gulf of Corinth is an active tectonic rift controlled by E-W trending normal faults, with an uplifted footwall in the south and a subsiding hangingwall with antithetic faulting in the north. Regional geodetic extension rates up to about 1.5 cm/yr have been measured, among the highest for tectonic rifts on Earth, while seismic slip rates up to about 1 cm/yr have been estimated. Large earthquakes with magnitudes, M, up to about 7 have been historically documented and instrumentally recorded. In this paper we compile historical documentation of earthquake and tsunami events occurring in the Corinth Gulf from antiquity up to the present. The completeness of the reported events improves with time, particularly after the 15th century. The majority of tsunamis were caused by earthquake activity, although aseismic landsliding is a relatively frequent agent of tsunami generation in the Corinth Gulf. We focus on better understanding the process of tsunami generation by earthquakes. To this aim, we consider the elliptical rupture zones of all the strong (M ≥ 6.0) historical and instrumental earthquakes known in the Corinth Gulf, taking into account rupture zones determined by previous authors. However, magnitudes, M, of historical earthquakes were recalculated from a set of empirical relationships between M and seismic intensity established for earthquakes occurring in Greece during the instrumental era of seismicity. For this application, the macroseismic field of each earthquake was identified and seismic intensities were assigned. Another set of empirical relationships, M/L and M/W, for instrumentally recorded earthquakes in the Mediterranean region was applied to calculate rupture zone dimensions, where L = rupture zone length and W = rupture zone width. The rupture zone positions were decided on the basis of the localities of the highest seismic intensities and co-seismic ground failures, if any, while the orientation of the maximum
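
    Empirical relationships of the kind applied here map magnitude to rupture length and width via log-linear regressions of the form log10 L = a + b·M. The sketch below uses coefficients of the form published by Wells and Coppersmith (1994) for subsurface rupture length and rupture area; treat the exact values as illustrative rather than the Mediterranean-specific ones used in this study.

        import numpy as np

        def rupture_dimensions(mw):
            """Illustrative Wells & Coppersmith (1994)-style scaling."""
            length_km = 10 ** (-2.44 + 0.59 * mw)   # subsurface rupture length
            area_km2 = 10 ** (-3.49 + 0.91 * mw)    # rupture area
            width_km = area_km2 / length_km         # implied down-dip width
            return length_km, width_km

        for mw in (6.0, 6.5, 7.0):
            L, W = rupture_dimensions(mw)
            print(f"Mw {mw}: L = {L:.0f} km, W = {W:.0f} km")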

  7. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q⁻¹ and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q⁻¹ and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q⁻¹ and conductivity at the lower crust support the hypothesis that coda Q⁻¹ may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity.

  8. Earthquake Hazard and Risk Assessment Based on Unified Scaling Law for Earthquakes: State of Gujarat, India

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Nekrasova, Anastasia; Kossobokov, Vladimir

    2017-03-01

    The Gujarat state of India is one of the most seismically active intercontinental regions of the world. Historically, it has experienced many damaging earthquakes, including the devastating 1819 Rann of Kachchh and 2001 Bhuj earthquakes. The effect of the latter is grossly underestimated by the Global Seismic Hazard Assessment Program (GSHAP). To assess a more adequate earthquake hazard for the state of Gujarat, we apply the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter recurrence relation by taking into account the naturally fractal distribution of earthquake loci. USLE has evident implications, since any estimate of seismic hazard depends on the size of the territory considered and, therefore, may differ dramatically from the actual one when scaled down to the proportion of the area of interest (e.g. of a city) from the enveloping area of investigation. We cross-compare the seismic hazard maps compiled for the same standard regular grid 0.2° × 0.2° (1) in terms of design ground acceleration based on the neo-deterministic approach, (2) in terms of probabilistic exceedance of peak ground acceleration by GSHAP, and (3) the one resulting from the USLE application. Finally, we present maps of seismic risk for the state of Gujarat, integrating the obtained seismic hazard, population density based on India's Census 2011 data, and a few model assumptions of vulnerability.
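
    The USLE generalization can be written as an expected annual event count, log10 N(M, L) = A + B·(5 − M) + C·log10 L, where L is the linear size of the territory considered and C reflects the fractal distribution of epicenters. The coefficients below are illustrative placeholders, not fitted Gujarat values; the sketch only shows how an estimate changes when scaled from a region down to a city.

        import numpy as np

        def usle_rate(mag, size_km, A=-3.0, B=1.0, C=1.2):
            """Expected annual count of events >= mag in a territory of linear
            size size_km per the USLE form; A, B, C are illustrative."""
            return 10 ** (A + B * (5.0 - mag) + C * np.log10(size_km))

        for size in (300.0, 30.0):      # region scale vs city scale
            rate = usle_rate(6.0, size)
            print(f"L = {size:>5.0f} km: N(M>=6) = {rate:.4f}/yr "
                  f"(once every {1/rate:.0f} yr)")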

  9. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: State of Gujarat, India

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Kossobokov, Vladimir; Parvez, Imtiyaz

    2016-04-01

    The Gujarat state of India is one of the most seismically active intercontinental regions of the world. Historically, it has experienced many damaging earthquakes, including the devastating 1819 Rann of Kutch and 2001 Bhuj earthquakes. The effect of the latter is grossly underestimated by the Global Seismic Hazard Assessment Program (GSHAP). To assess a more adequate earthquake hazard for the state of Gujarat, we apply the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter recurrence relation by taking into account the naturally fractal distribution of earthquake loci. USLE has evident implications, since any estimate of seismic hazard depends on the size of the territory considered and, therefore, may differ dramatically from the actual one when scaled down to the proportion of the area of interest (e.g. of a city) from the enveloping area of investigation. We cross-compare the seismic hazard maps compiled for the same standard regular grid 0.2° × 0.2° (i) in terms of design ground acceleration (DGA) based on the neo-deterministic approach, (ii) in terms of probabilistic exceedance of peak ground acceleration (PGA) by GSHAP, and (iii) the one resulting from the USLE application. Finally, we present maps of seismic risk for the state of Gujarat, integrating the obtained seismic hazard, population density based on 2011 census data, and a few model assumptions of vulnerability.

  10. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ≲ 3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance ≲2 km) tend to have very similar focal mechanisms, indicating that the same structures produce similar earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  11. Fast rupture propagation for large strike-slip earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Mori, Jim; Koketsu, Kazuki

    2016-04-01

    Studying rupture speeds of shallow earthquakes is of broad interest because it has a large effect on the strong near-field shaking that causes damage during earthquakes, and it is an important parameter that reflects stress levels and energy on a slipping fault. However, resolving rupture speed is difficult in standard waveform inversion methods due to limited near-field observations and the tradeoff between rupture speed and fault size for teleseismic observations. Here we applied back-projection methods to estimate the rupture speeds of 15 Mw ≥ 7.8 dip-slip and 8 Mw ≥ 7.5 strike-slip earthquakes for which direct P waves are well recorded in Japan on Hi-net, or in North America on USArray. We found that all strike-slip events had very fast average rupture speeds of 3.0-5.0 km/s, which are near or greater than the local shear wave velocity (supershear). These values are faster than for thrust and normal faulting earthquakes that generally rupture with speeds of 1.0-3.0 km/s.

  12. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than on trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and can then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  13. Calibration of magnitude scales for earthquakes of the Mediterranean

    NASA Astrophysics Data System (ADS)

    Giardini, Domenico; di Donato, Maria; Boschi, Enzo

    In order to provide the tools for uniform size determination for Mediterranean earthquakes over the last 50-year period of instrumental seismology, we have regressed the magnitude determinations for 220 earthquakes of the European-Mediterranean region over the 1977-1991 period, reported by three international centres, 11 national and regional networks and 101 individual stations and observatories, using seismic moments from the Harvard CMTs. We calibrate M(M0) regression curves for the magnitude scales commonly used for Mediterranean earthquakes (ML, MWA, mb, MS, MLH, MLV, MD, M); we also calibrate static corrections or specific regressions for individual observatories and we verify the reliability of the reports of different organizations and observatories. Our analysis shows that the teleseismic magnitudes (mb, MS) computed by international centers (ISC, NEIC) provide good measures of earthquake size, with low standard deviations (0.17-0.23), allowing one to regress stable regional calibrations with respect to the seismic moment and to correct systematic biases such as the hypocentral depth for MS and the radiation pattern for mb; while mb is commonly reputed to be an inadequate measure of earthquake size, we find that the ISC mb is still today the most precise measure to use to regress MW and M0 for earthquakes of the European-Mediterranean region; few individual observatories report teleseismic magnitudes requiring specific dynamic calibrations (BJI, MOS). Regional surface-wave magnitudes (MLV, MLH) reported in Eastern Europe generally provide reliable measures of earthquake size, with standard deviations often in the 0.25-0.35 range; the introduction of a small (±0.1-0.2) static station correction is sometimes required. While the Richter magnitude ML is the measure of earthquake size most commonly reported in the press whenever an earthquake strikes, we find that ML has not been computed in the European-Mediterranean in the last 15 years; the reported local
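
    Calibrations of this kind regress reported magnitudes against the seismic moment, from which the moment magnitude is defined as Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m (Hanks and Kanamori). The sketch below fits such an M(M0) line to synthetic station magnitudes; the slope, intercept, and scatter level are illustrative assumptions, not the paper's fitted values.

        import numpy as np

        def moment_magnitude(m0_nm):
            """Hanks & Kanamori moment magnitude, M0 in N*m."""
            return (2.0 / 3.0) * (np.log10(m0_nm) - 9.1)

        # Synthetic station magnitudes: linear in log10(M0) plus 0.2 scatter.
        rng = np.random.default_rng(6)
        log_m0 = rng.uniform(17.0, 21.0, size=220)        # Mw ~ 5.3 to 7.9
        m_station = (0.62 * log_m0 - 5.8
                     + 0.2 * rng.standard_normal(log_m0.size))

        slope, intercept = np.polyfit(log_m0, m_station, deg=1)
        resid = m_station - (slope * log_m0 + intercept)
        print(f"M = {slope:.2f} log10(M0) {intercept:+.2f}, "
              f"std dev = {resid.std():.2f}")
        print("Mw for M0 = 1e19 N*m:", round(moment_magnitude(1e19), 2))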

  14. Galaxy clustering on large scales.

    PubMed Central

    Efstathiou, G

    1993-01-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H0 = 100 h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  15. Exploring the uncertainty range of coseismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2017-01-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average coseismic stress drop (Δτ_E) of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same data set exhibits no sensitivity to the upper bound of Δτ_E because there is limited resolution to the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained by them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that (1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated and (2) the upper bound of the average fracture energy EG cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.

  16. Premonitory patterns of seismicity months before a large earthquake: five case histories in Southern California.

    PubMed

    Keilis-Borok, V I; Shebalin, P N; Zaliapin, I V

    2002-12-24

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with "intermediate" lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1-7 months five large (M ≥ 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns.

  17. Evaluation of factors controlling large earthquake-induced landslides by the Wenchuan earthquake

    NASA Astrophysics Data System (ADS)

    Chen, X. L.; Ran, H. L.; Yang, W. T.

    2012-12-01

    During the 12 May 2008 Wenchuan earthquake in China, more than 15,000 landslides were triggered. Among these, 112 large landslides with a plan area greater than 50,000 m² were generated. These large landslides were distributed in a narrow belt closely along the surface rupture zone and were mainly located on the hanging-wall side; more than 85% of them lie within 10 km of the rupture. Statistical analysis shows that more than 50% of the large landslides occurred in hard and moderately hard rock, such as migmatized metamorphic rock and carbonate rock, which crop out in the southern part of the damaged area, where elevations are higher and landforms steeper than in the northeastern part. All large landslides occurred in the region with seismic intensity ≥ X, except a few landslides in the Qingchuan region with seismic intensity IX. Spatially, the large landslides cluster into four segments, namely the Yingxiu, Gaochuan, Beichuan and Qingchuan segments, from southwest to northeast along the surface rupture. This is in good accordance with coseismic displacements. With the change of fault type from reverse-dominated slip to dextral slip from southwest to northeast, the largest distance between the triggered large landslides and the rupture decreases from 15 km to 5 km. The critical acceleration ac for four typical large landslides in these four segments was estimated with the Newmark model in this paper. Our results demonstrate that, given the same strength values and slope angles, the characteristics of the slope mass are important for slope stability, and deeper landslides are less stable than shallower landslides. Comprehensive analysis reveals that the large catastrophic landslides could be specifically tied to a particular geological setting where fault type and geometry change abruptly. This feature may dominate the occurrence of large
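
    In the Newmark rigid-block model used here, a slope with static factor of safety FS on a surface inclined at angle α has critical acceleration a_c = (FS - 1) g sin α, and permanent displacement accumulates while ground acceleration exceeds a_c. A minimal sketch with an assumed factor of safety, slope angle, and synthetic acceleration pulse (all illustrative):

        import numpy as np

        def critical_acceleration(fs, alpha_deg, g=9.81):
            """Newmark critical acceleration a_c = (FS - 1) g sin(alpha)."""
            return (fs - 1.0) * g * np.sin(np.radians(alpha_deg))

        def newmark_displacement(accel, dt, a_c):
            """Rigid-block sliding displacement by double integration."""
            vel, disp = 0.0, 0.0
            for a in accel:
                if vel > 0.0 or a > a_c:          # block is sliding
                    vel = max(vel + (a - a_c) * dt, 0.0)
                    disp += vel * dt
            return disp

        a_c = critical_acceleration(fs=1.2, alpha_deg=30.0)   # ~0.98 m/s^2
        t = np.arange(0.0, 10.0, 0.01)
        accel = 3.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.3 * t)
        print(f"a_c = {a_c:.2f} m/s^2, displacement = "
              f"{newmark_displacement(accel, 0.01, a_c):.2f} m")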

  18. Mechanical model of precursory source processes for some large earthquakes

    SciTech Connect

    Dmowska, R.; Li, V.C.

    1982-04-01

    A mechanical model is presented of precursory source processes for some large earthquakes along plate boundaries. It is assumed that the pre-seismic period consists of the upward progression of a zone of slip from lower portions of the lithosphere towards the Earth's surface. The slip front is blocked by local asperities of different size and strength; these asperities may be zones of real alteration of inherent strength, or instead may be zones which are currently stronger due to a local slowdown of a basically rate-dependent frictional response. Such blocking by a single, large asperity, or array of asperities, produces quiescence over a segment of plate boundary, until gradual increase of the stress concentration forces the slip zone through the blocked region at one end of the gap, thus nucleating a seismic rupture that propagates upwards and towards the other end. This model is proposed to explain certain distinctive seismicity patterns that have been observed before large earthquakes, notably quiescence over the gap zone followed by clustering at its end prior to the main event. A discussion of mechanical factors influencing the process is presented and some introductory modelling, performed with the use of a generalized Elsasser model for lithospheric plates and the "line spring" model for part-through flaws (slip zones) at plate boundaries, is outlined briefly.

  19. Failure of self-similarity for large (Mw > 8¼) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Compares teleseismic P-wave records for earthquakes in the magnitude range from 6.0-9.5 with synthetics for a self-similar, ω² source model and concludes that the energy radiated by very large earthquakes (Mw > 8¼) is not self-similar to that radiated from smaller earthquakes (Mw < 8¼). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of ω^-1.5. This spectral decay rate is consistent with a previously noted tendency of the ω² model to overestimate Ms for large earthquakes. -Authors

  20. Possibility of short-term probabilistic forecasts for large earthquakes making good use of the limitations of existing catalogs

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Iwayama, Koji; Aihara, Kazuyuki

    2016-10-01

    Earthquakes are quite hard to predict. One possible reason is that the existing catalogs of past earthquakes are limited at most to the order of 100 years, while the characteristic time scale of large earthquakes is sometimes greater than that span. Here, we instead use these limitations positively and characterize some large earthquake events as abnormal events that are not included in the catalogs. When we constructed probabilistic forecasts for large earthquakes in Japan based on similarity and difference to past patterns (which we call known and unknown abnormalities, respectively), our forecast achieved probabilistic gains of 5.7 and 2.4 against a time-independent model for main shocks with magnitudes of 7 or above. Moreover, the two abnormal conditions covered 70% of the days whose maximum magnitude was 7 or above.

  1. Spatial correlation of large historical earthquakes and moderate shocks >10 km deep in eastern North America

    SciTech Connect

    Acharya, H.

    1980-12-01

    A good spatial correlation is noted between historical earthquakes with epicentral intensity ≥ VIII (MM) and recent moderate-size earthquakes with focal depth > 10 km, suggesting that large historical earthquakes in eastern North America may be associated with deep-seated faults.

  2. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
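
    Empirical models of this family typically express the fatality rate at a given shaking intensity as a two-parameter lognormal function, ν(S) = Φ(ln(S/θ)/β), and sum ν(S) times the exposed population over intensity bins, with θ and β fitted per country or region. The sketch below uses that general form; the θ, β, intensity bins, and exposure numbers are illustrative placeholders, not PAGER's fitted parameters.

        import numpy as np
        from scipy.stats import norm

        def fatality_rate(mmi, theta=14.0, beta=0.2):
            """Lognormal fatality rate at shaking intensity mmi (illustrative)."""
            return norm.cdf(np.log(np.asarray(mmi) / theta) / beta)

        def estimated_fatalities(mmi_bins, population):
            """Sum of rate x exposed population over intensity bins."""
            return float(np.sum(fatality_rate(mmi_bins) * population))

        mmi = np.array([6.0, 7.0, 8.0, 9.0])        # intensity bins
        exposed = np.array([2e6, 5e5, 1e5, 2e4])     # people per bin
        print(f"estimated fatalities: {estimated_fatalities(mmi, exposed):,.0f}")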

  3. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones host temporally and spatially varying seismogenic activity including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measure rupture durations and scale them by the cube root of seismic moment to calculate a normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results of this study show that large earthquakes just updip of SSE and NVT have slower rupture characteristics than other events along the subduction zone that are not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will aid in understanding the seismogenic behavior that occurs along subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.

  4. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  5. Earthquakes

    MedlinePlus

    Earthquakes are sudden rolling or shaking events caused ... at any time of the year. Before An Earthquake Look around places where you spend time. Identify ...

  6. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters at the scale of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  7. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and considering different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  8. Source scaling relationships of small earthquakes estimated from the inversion method using stopping phases

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Takeo, M.; Ito, H.; Ellsworth, W.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.

    2002-12-01

    attenuation in the crust. This is consistent with the conclusion of Stork et al. (2002) inferred from spectral analysis using the 800 m deep borehole data. The average values of rupture velocity do not depend on earthquake size and are similar to those reported for moderate and large earthquakes. We then calculate the seismic energy following Sato and Hirasawa (1973). The apparent stress is almost constant with magnitude over the analyzed events, ranging from 0.05 to 1 MPa. Since most apparent stresses for large earthquakes are in the range of 0.1 to 10 MPa, there may be small differences in apparent stress between large and small earthquakes. However, it is likely that earthquakes are self-similar over a wide range of earthquake sizes and that the dynamics of small and large earthquakes are similar from a macroscopic viewpoint.
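
    Apparent stress is defined as σ_a = μ E_R / M0, the rigidity times the ratio of radiated energy to seismic moment. A few illustrative lines; the rigidity and the energy-to-moment ratio are assumed values chosen to show scale independence.

        def apparent_stress(radiated_energy, seismic_moment, rigidity=3.0e10):
            """Apparent stress sigma_a = mu * E_R / M0, in Pa (mu in Pa)."""
            return rigidity * radiated_energy / seismic_moment

        # Small vs large event with the same assumed E_R/M0 ratio of 3e-5:
        for label, m0 in [("M ~ 2", 2.0e12), ("M ~ 7", 4.5e19)]:
            e_r = 3.0e-5 * m0
            print(f"{label}: sigma_a = {apparent_stress(e_r, m0)/1e6:.1f} MPa")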

  9. Earthquake Source Scaling and Wave Propagation in Eastern North America: The Au Sable Forks, NY, Earthquake

    NASA Astrophysics Data System (ADS)

    Viegas, G.; Abercrombie, R.; Baise, L.; Kim, W.

    2005-12-01

    The 2002 M5 Au Sable Forks, NY, earthquake and its aftershocks are the best-recorded sequence in the northeastern USA. We use the local and regional recordings to investigate the characteristics of intraplate seismicity, focusing on source scaling relationships and regional wave propagation. A portable local network of 11 stations recorded 74 aftershocks of M < 3.2. We relocate the mainshock and early aftershocks using a master-event technique. We then relocate the aftershocks recorded by the local network with the double-difference method, using differential travel times measured from waveform cross-correlation. Both the master-event and double-difference location methods produce consistent results suggesting complex conjugate faulting during the sequence. We identify a number of highly clustered groups of earthquakes suitable for empirical Green's function (EGF) analysis. We use the EGF method to calculate the stress drop and radiated energy of the larger aftershocks to determine how they compare to moderate-magnitude earthquakes, and also whether they differ significantly from interplate earthquakes. We consider the 9 largest aftershocks (M3.7 to M2) recorded on the regional network as potential EGFs for the mainshock, but their focal mechanisms and locations are sufficiently different that we cannot resolve the mainshock source time function well. They do enable us to place constraints on the shape and duration of the source pulse to use in modeling the regional waveforms. We investigate the crustal structure in New York (Grenville) and New England (Appalachian) through forward modeling of the Au Sable Forks regional broadband records. We compute synthetic records of wave propagation in a layered medium, using published crustal models of the two regions as a starting point. We identify differences between the recorded data and the synthetics for the Grenville and Appalachian regions and improve the crustal models to better fit the recorded

  10. Earthquake Monitoring at Different Scales with Seiscomp3

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Engels, F.

    2013-12-01

    In the last few years, the French National Network of Seismic Survey (BCSF-RENASS) has had to modernize its old and aging earthquake monitoring system, which came from an in-house development. After intensive tests of several real-time frameworks such as EarthWorm and Seiscomp3, we adopted Seiscomp3 in 2012. Our current system runs two pipelines in parallel: the first is tuned at a global scale to monitor world seismicity (for event magnitudes > 5.5) and the second is tuned at a national scale for monitoring metropolitan France. The seismological stations used for the "world" pipeline come mainly from the Global Seismographic Network (GSN), whereas for the "national" pipeline the stations come from the RENASS short-period network and from the RESIF broadband network. More recently, we have started to tune Seiscomp3 at a smaller scale to monitor in real time a geothermal project (an R&D program in deep geothermal energy) in the northeastern part of France. Besides the real-time monitoring capabilities of Seiscomp3, we have also used a very handy feature to play back a 4-month dataset at a local scale for the Rambervillers earthquake (22/02/2003, Ml = 5.4), yielding roughly 2000 aftershock detections and locations.

  11. Access Time of Emergency Vehicles Under the Condition of Street Blockages after a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Hirokawa, N.; Osaragi, T.

    2016-09-01

    Previous studies of accessibility have mostly addressed daily life; improving the accessibility of emergency vehicles after a large earthquake is an equally important issue. In this paper, we analyze the accessibility of firefighters using a microscopic simulation model of the period immediately after a large earthquake. More specifically, we construct a simulation model that describes property damage, such as collapsed buildings, street blockages, outbreaks of fire, and fire spreading, and the movement of firefighters from fire stations to the locations of fires in a large-scale earthquake. Using this model, we analyze the influence of street blockages on the access time of firefighters. When streets are blocked according to the property-damage simulation, the average access time exceeds 10 minutes in the outskirts of the 23 wards of Tokyo, and some firefighters take more than 20 minutes to arrive. Additionally, we focus on alternative routes and propose that volunteers collect information on street blockages to improve the accessibility of firefighters. Finally, we demonstrate that the access time of firefighters can be reduced to the same level as the case with no blocked streets if 0.3% of residents collect information within 10 minutes.
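
    The core of such an analysis is a shortest-path computation on a street graph with blocked edges removed. A minimal sketch; the toy graph, travel times, and blocked edge are illustrative, not the Tokyo network.

        import networkx as nx

        def access_time(graph, station, fire, blocked_edges=()):
            """Shortest travel time from station to fire, skipping blocked streets."""
            g = graph.copy()
            g.remove_edges_from(blocked_edges)
            try:
                return nx.shortest_path_length(g, station, fire, weight="minutes")
            except nx.NetworkXNoPath:
                return float("inf")            # fire is unreachable

        g = nx.Graph()
        g.add_weighted_edges_from(
            [("station", "a", 3), ("a", "fire", 4),   # direct route: 7 min
             ("station", "b", 5), ("b", "fire", 6)],  # detour route: 11 min
            weight="minutes")

        print(access_time(g, "station", "fire"))                         # 7
        print(access_time(g, "station", "fire",
                          blocked_edges=[("a", "fire")]))                # 11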

  12. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models.

    PubMed

    Landes, François P; Lippiello, E

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.
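
    For orientation, the original homogeneous, quasi-statically driven Olami-Feder-Christensen automaton that the authors modify can be sketched in a few lines; the lattice size, dissipation parameter alpha, and number of driving steps are arbitrary illustrative choices, and the paper's disorder, viscoelastic, and finite-size ingredients are deliberately left out:

    ```python
    import numpy as np

    def ofc_avalanche(stress, alpha, threshold=1.0):
        """Relax every site at or above threshold, passing a fraction
        alpha of its stress to each of its 4 neighbours (open boundaries
        dissipate). Returns the avalanche size (number of topplings)."""
        size = 0
        unstable = np.argwhere(stress >= threshold)
        while len(unstable):
            for i, j in unstable:
                s = stress[i, j]
                stress[i, j] = 0.0
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                        stress[ni, nj] += alpha * s
            unstable = np.argwhere(stress >= threshold)
        return size

    rng = np.random.default_rng(0)
    stress = rng.uniform(0.0, 1.0, (32, 32))
    sizes = []
    for _ in range(2000):
        stress += 1.0 - stress.max()        # drive the most loaded site to failure
        sizes.append(ofc_avalanche(stress, alpha=0.2))  # alpha < 0.25: dissipative
    ```

    A log-log histogram of `sizes` then exhibits the power-law avalanche statistics whose exponents and crossovers are at issue in the paper.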

  13. Local near instantaneously dynamically triggered aftershocks of large earthquakes.

    PubMed

    Fan, Wenyuan; Shearer, Peter M

    2016-09-09

    Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by earthquakes with magnitudes ≥ 7, within a few fault lengths (approximately 300 kilometers) and during times when high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks.

  14. Unified scaling law for earthquakes in Crimea and Northern Caucasus

    NASA Astrophysics Data System (ADS)

    Nekrasova, A. K.; Kossobokov, V. G.

    2016-10-01

    This study continues detailed investigations on the construction of regional charts of the parameters of the generalized Gutenberg-Richter law, which takes into account the properties of spatiotemporal seismic energy scaling. We analyzed the parameters of the law in the vicinity of the intersections of morphostructural lineaments in Crimea and the Greater Caucasus. It was shown that ignoring the fractal character of the spatial distribution of earthquakes in the southern part of the Russian Federation can lead to significant underestimation of the seismic hazard in the largest cities of the region.
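
    The generalized law in question has the form log10 N(M, L) = A + B·(5 − M) + C·log10 L, where N is the expected annual number of earthquakes of magnitude M or larger in an area of linear dimension L, and C plays the role of a fractal dimension of epicenters. A minimal sketch of its evaluation; the coefficient values below are purely illustrative, not the paper's regional estimates:

    ```python
    import math

    def usle_annual_rate(M, L_km, A, B, C):
        """Unified scaling law for earthquakes:
        log10 N(M, L) = A + B*(5 - M) + C*log10(L)."""
        return 10.0 ** (A + B * (5.0 - M) + C * math.log10(L_km))

    A, B, C = -1.0, 0.9, 1.2   # illustrative coefficients only
    for L in (10.0, 50.0, 200.0):
        # With C < 2, shrinking the cell changes the rate by less than an
        # area-proportional (uniform-seismicity) model would assume, which
        # is why ignoring fractality can bias hazard estimates in cities.
        print(L, usle_annual_rate(6.0, L, A, B, C))
    ```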

  15. EVIDENCE FOR THREE MODERATE TO LARGE PREHISTORIC HOLOCENE EARTHQUAKES NEAR CHARLESTON, S. C.

    USGS Publications Warehouse

    Weems, Robert E.; Obermeier, Stephen F.; Pavich, Milan J.; Gohn, Gregory S.; Rubin, Meyer; Phipps, Richard L.; Jacobson, Robert B.

    1986-01-01

    Earthquake-induced liquefaction features (sand blows), found near Hollywood, S.C., have yielded abundant clasts of humate-impregnated sand and sparse pieces of wood. Radiocarbon ages for the humate and wood provide sufficient control on the timing of the earthquakes that produced the sand blows to indicate that at least three prehistoric liquefaction-producing earthquakes (mb approximately 5.5 or larger) have occurred within the last 7,200 years. The youngest documented prehistoric earthquake occurred around A.D. 800. A few fractures filled with virtually unweathered sand, but no large sand blows, can be assigned confidently to the historic 1886 Charleston earthquake.

  16. The precursory fault width formation of large earthquakes

    NASA Astrophysics Data System (ADS)

    Takeda, Fumihide; Takeo, Makoto

    2010-03-01

    We collect earthquake (EQ) events for regions of about a 5-degree mesh from a focus catalog of Japan, with a regionally dependent magnitude window of M >= 3-3.5. The time history of the events draws a zigzagged trajectory in a five-dimensional space of EQ epicenter, focal depth (DEP), inter-event interval (INT), and magnitude (MAG). Its components are the time series of the EQ source parameters, for which time is the chronological event index. Each series has long-term memory and evidence of deterministic chaos. We thus use physical wavelets (P-Ws) to find the process producing large EQs. The P-Ws convert the moving average of each series and its first- and second-order differences at any interval into displacement, velocity, and acceleration (A) in a selected frequency band, respectively. The process starts with two distinct triple phase couplings of A on the source parameters DEP, INT, and MAG, precursory to every large EQ (M > about 6) throughout Japan. Each coupling then creates a linear DEP variation (W) in its series, which becomes comparable to the fault width of large EQs. This suggests that the variation exerts a corresponding shear stress on a local plane in the Earth's crust to form the fault plane of width W, rupturing as a large EQ.
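
    The basic operation described here, smoothing an event-index series and differencing it to obtain a "velocity" and an "acceleration", can be illustrated simply. This is only a crude boxcar stand-in for the authors' physical wavelets, with the window length and interval as free assumptions:

    ```python
    import numpy as np

    def moving_average(x, n):
        """Boxcar smoothing over n events (a crude band selection)."""
        return np.convolve(x, np.ones(n) / n, mode="valid")

    def d_v_a(series, n, dt=1.0):
        """Smoothed series treated as 'displacement' in event-index time;
        first and second differences over the interval dt play the roles
        of 'velocity' and 'acceleration' (A) in the text above."""
        d = moving_average(series, n)
        v = np.diff(d) / dt
        a = np.diff(v) / dt
        return d, v, a
    ```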

  17. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  18. A scaling relationship between AE and natural earthquakes

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, N.; Kawakata, H.; Takahashi, N.

    2013-12-01

    We estimated the seismic moments and the corner frequencies by grid search. The magnitudes of the AE events were estimated to be between -8 and -7. As a result, the relationship between the seismic moment and the corner frequency of AE also satisfied the same scaling relationship as shown for natural earthquakes. This indicates that AE events in rock samples can be regarded as micro-sized earthquakes. This finding suggests the possibility of understanding the development processes of natural earthquakes from laboratory experiments.
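
    The scaling being tested is the constant-stress-drop relation M0 ∝ fc^-3. A minimal sketch of the circular-crack arithmetic that links moment, corner frequency, and stress drop (Brune source radius; the numerical inputs are illustrative, not the study's measurements):

    ```python
    from math import pi, log10

    def source_radius(beta, fc):
        """Brune source radius: r = 2.34 * beta / (2 * pi * fc)."""
        return 2.34 * beta / (2.0 * pi * fc)

    def stress_drop(M0, beta, fc):
        """Circular-crack static stress drop: 7 * M0 / (16 * r**3), in Pa."""
        return 7.0 * M0 / (16.0 * source_radius(beta, fc) ** 3)

    def moment_magnitude(M0):
        """Hanks & Kanamori: Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
        return (2.0 / 3.0) * (log10(M0) - 9.1)

    # An AE-sized event (illustrative): M0 ~ 0.04 N*m, fc ~ 1 MHz
    print(moment_magnitude(0.04))                # about -7
    print(stress_drop(0.04, 3000.0, 1e6) / 1e6)  # ~13 MPa, earthquake-like
    ```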

  19. Large-Scale Sequence Comparison.

    PubMed

    Lal, Devi; Verma, Mansi

    2017-01-01

    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. There are various methods available for comparing sequences, alignment being the first and foremost, for sequences with a small number of base pairs as well as for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences. The best known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison including LAGAN, MUMmer, BLASTZ, and AVID.
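
    As a concrete illustration of the global-alignment idea underlying several of the tools named above, here is a minimal Needleman-Wunsch scoring sketch; the scoring parameters are arbitrary choices, not those of any particular tool:

    ```python
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        """Global alignment score by dynamic programming."""
        n, m = len(a), len(b)
        F = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            F[i][0] = i * gap
        for j in range(1, m + 1):
            F[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                F[i][j] = max(F[i - 1][j - 1] + s,  # (mis)match
                              F[i - 1][j] + gap,    # gap in b
                              F[i][j - 1] + gap)    # gap in a
        return F[n][m]

    print(needleman_wunsch("GATTACA", "GCATGCU"))
    ```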

  20. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a picture archiving and communications system (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interfaces, user approval, networking, workstation deployment, and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training, and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas, such as the operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desk-top (personal) computer and thus reduce the number of dedicated PACS review workstations. This session

  1. Oceanic transform fault earthquake nucleation process and source scaling relations - A numerical modeling study with rate-state friction (Invited)

    NASA Astrophysics Data System (ADS)

    Liu, Y.; McGuire, J. J.; Behn, M. D.

    2013-12-01

    We use a three-dimensional strike-slip fault model in the framework of rate and state-dependent friction to investigate earthquake behavior and scaling relations on oceanic transform faults (OTFs). Gabbro friction data under hydrothermal conditions are mapped onto OTFs using temperatures from (1) a half-space cooling model, and (2) a thermal model that incorporates a visco-plastic rheology, non-Newtonian viscous flow and the effects of shear heating and hydrothermal circulation. Without introducing small-scale frictional heterogeneities on the fault, our model predicts that an OTF segment can transition between seismic and aseismic slip over many earthquake cycles, consistent with the multimode hypothesis for OTF ruptures. The average seismic coupling coefficient χ is strongly dependent on the ratio of seismogenic zone width W to earthquake nucleation size h*; χ increases by four orders of magnitude as W/h* increases from ~ 1 to 2. Specifically, the average χ = 0.15 +/- 0.05 derived from global OTF earthquake catalogs can be reached at W/h* ≈ 1.2-1.7. The modeled largest earthquake rupture area is less than the total seismogenic area and we predict a deficiency of large earthquakes on long transforms, which is also consistent with observations. Earthquake magnitude and distribution on the Gofar (East Pacific Rise) and Romanche (equatorial Mid-Atlantic) transforms are better predicted using the visco-plastic model than the half-space cooling model. We will also investigate how fault gouge porosity variation during an OTF earthquake nucleation phase may affect the seismic wave velocity structure, for which a velocity drop of up to 3% was observed prior to the 2008 Mw 6.0 Gofar earthquake.
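
    The ratio W/h* that controls coupling in this model can be computed from laboratory friction parameters. A minimal sketch using the Rubin and Ampuero (2005) nucleation-size estimate (other definitions of h* exist, and every numerical value below is an illustrative placeholder, not the paper's input):

    ```python
    from math import pi

    def nucleation_size(mu, a, b, d_c, sigma_eff):
        """Rubin & Ampuero (2005) rate-state nucleation size:
        h* = (2/pi) * mu * b * d_c / (sigma_eff * (b - a)**2)."""
        return (2.0 / pi) * mu * b * d_c / (sigma_eff * (b - a) ** 2)

    mu = 30e9            # shear modulus, Pa
    a, b = 0.010, 0.014  # rate-state friction parameters
    d_c = 0.01           # critical slip distance, m
    sigma_eff = 50e6     # effective normal stress, Pa
    h_star = nucleation_size(mu, a, b, d_c, sigma_eff)   # ~3.3 km here
    W = 6e3              # seismogenic width, m
    print(h_star, W / h_star)   # W/h* ~ 1.8, inside the sensitive 1-2 range
    ```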

  2. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  3. Determining the Uncertainty Range of Coseismic Stress Drop of Large Earthquakes Using Finite Fault Inversion

    NASA Astrophysics Data System (ADS)

    Adams, M.; Ji, C.; Twardzik, C.; Archuleta, R. J.

    2015-12-01

    A key component in understanding the physics of earthquakes is the resolution of the state of stress on the fault before, during and after the earthquake. A large earthquake's average stress drop is the first-order parameter for this task but is still poorly constrained, especially for intermediate and deep events. Classically, the average stress drop is estimated using the corner frequency of observed seismic data. However, a simple slip distribution is implicitly assumed; this assumed distribution is often not appropriate for large earthquakes. The average stress drop can be calculated using the inverted finite fault slip model. However, conventional finite fault inversion methods do not directly invert for on-fault stress change; thus it is unclear whether models with significantly different stress drops can match the observations equally well. We developed a new nonlinear inversion to address this concern. The algorithm searches for the solution matching the observed seismic and geodetic data under the condition that the average stress drop is close to a pre-assigned value. We perform inversions with different pre-assigned stress drops to obtain the relationship between the average stress drop of the inverted slip model and the minimum waveform misfit. As an example, we use P and SH displacement waveforms recorded at teleseismic distances from the 2014 Mw 7.9 Rat Islands intermediate-depth earthquake to determine its average stress drop. Earth responses up to 2 Hz are calculated using an FK algorithm and the PREM velocity structure. Our preliminary analysis illustrates that with this new approach, we are able to define the lower bound of the average stress drop but fail to constrain its upper bound. The waveform misfit associated with the inverted model increases quickly as the pre-assigned stress drop decreases from 3 MPa to 0.5 MPa. But the misfit varies negligibly when the pre-assigned stress drop increases from 4.0 MPa to 50 MPa. We notice that the fine-scale

  4. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large-scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss the potential applications of the technology; give an overview of the as-built actuator design; describe problems that were uncovered during development testing; review test data and evaluate weaknesses of the design; and discuss areas of improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring loads of 440 to 1500 newtons.

  5. Earthquakes

    MedlinePlus

  6. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and more references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over the long and short term. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  7. Very short-term earthquake precursors from GPS signal interference: Case studies on moderate and large earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, Yu-Lien; Cheng, Kai-Chien; Wang, Wei-Hau; Yu, Shui-Beih

    2016-04-01

    We set up a GPS network with 17 Continuous GPS (CGPS) stations in southwestern Taiwan to monitor real-time crustal deformation. We found that systematic perturbations in GPS signals occurred just a few minutes prior to the occurrence of several moderate and large earthquakes, including the recent 2013 Nantou (ML = 6.5) and Rueisuei (ML = 6.4) earthquakes in Taiwan. The anomalous pseudorange readings were several millimeters higher or lower than those in the background time period. These systematic anomalies were found as a result of interference of GPS L-band signals by electromagnetic emissions (EMs) prior to the mainshocks. The EMs may occur in the form of harmonic or ultra-wide-band radiation and can be generated during the formation of Mode I cracks at the final stage of earthquake nucleation. We estimated the directivity of the likely EM sources by calculating the inner product of the position vector from a GPS station to a given satellite and the vector of anomalous ground motions recorded by the GPS. The results showed that the predominant inner product generally occurred when the satellite was in the direction either toward or away from the epicenter with respect to the GPS network. Our findings suggest that the GPS network may serve as a powerful tool to detect very short-term earthquake precursors and presumably to locate a large earthquake before it occurs.
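
    The directivity measure described here is a projection of the anomaly vector onto the station-to-satellite line of sight. A minimal sketch (the coordinate frame, units, and values are invented for illustration):

    ```python
    import numpy as np

    def los_projection(station, satellite, anomaly):
        """Inner product of the unit station-to-satellite line-of-sight
        vector with the anomalous ground-motion vector at the station."""
        los = np.asarray(satellite, float) - np.asarray(station, float)
        los /= np.linalg.norm(los)
        return float(np.dot(los, anomaly))

    station = (0.0, 0.0, 0.0)              # local ENU origin, m
    satellite = (1.2e7, 5.0e6, 2.0e7)      # ENU position, m
    anomaly = np.array([3.0, -1.0, 0.5])   # anomalous motion, mm
    print(los_projection(station, satellite, anomaly))
    ```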

  8. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be
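
    The "well-constrained equations" referred to are the modified Omori law combined with the Gutenberg-Richter magnitude distribution, as formulated by Reasenberg and Jones (1989). A minimal sketch of the resulting probability calculation; the default parameters are the commonly quoted generic California values and should be treated as illustrative here:

    ```python
    from math import exp, log

    def aftershock_probability(M, Mm, t1, t2,
                               a=-1.67, b=0.91, c=0.05, p=1.08):
        """Probability of at least one aftershock of magnitude >= M in
        the window [t1, t2] days after a mainshock of magnitude Mm,
        from the Reasenberg-Jones rate 10**(a + b*(Mm - M)) * (t+c)**-p
        under a Poisson assumption."""
        amp = 10.0 ** (a + b * (Mm - M))
        if abs(p - 1.0) < 1e-9:
            n = amp * (log(t2 + c) - log(t1 + c))
        else:
            n = amp * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
        return 1.0 - exp(-n)

    # Chance of an M>=6 aftershock in the week after an M7 mainshock (~0.6):
    print(aftershock_probability(6.0, 7.0, 0.0, 7.0))
    ```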

  9. Variability in Ground Motions in the San Francisco Bay Urban Area from Large Earthquakes on the San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Aagaard, B. T.

    2006-12-01

    I use 3-D numerical simulations of kinematic earthquake ruptures to characterize the expected long period (T > 2.0 s) strong ground motions in the San Francisco Bay urban area from large earthquakes on the San Andreas fault. The earthquakes include the 1906 M7.9 San Francisco earthquake and hypothetical variations of the 1906 event with different hypocenters, slip distributions, and rupture speeds. The simulations use finite-elements to discretize a 250 km by 110 km by 45 km volume centered around the San Francisco Bay metropolitan area. Using the USGS 3-D geologic model and corresponding velocity model, the simulations incorporate the 3-D geologic structure, including the nonplanar geometry of the faults, the variation in material properties associated with different rock types and depth, and topography and bathymetry. The simulations suggest that much of the currently urbanized area around San Francisco Bay could be subjected to significantly stronger ground motions in the next large earthquake on the northern San Andreas fault compared with the motions in the simulation of the 1906 event. A hypocenter north of the 1906 hypocenter, which was directly off the coast of San Francisco, increases the rupture directivity for the city of San Francisco and cities around the southern half of the bay, raising the MMI one unit over much of the urban area. Alternatively, if instead of having less than average slip along the San Francisco peninsula as in the 1906 earthquake, this portion of the rupture has greater than average slip, the peninsula is subjected to significantly stronger shaking. In addition to these large-scale effects, some smaller scale effects, such as locally intense shaking in the Cupertino and Santa Rosa areas due to sedimentary basins, are present in all of the scenarios. These results corroborate previous studies that show that variations in rupture directivity and slip have a strong influence on the distribution of ground shaking in areas with complex

  10. Calibration of the landsliding numerical model SLIPOS and prediction of the seismically induced erosion for several large earthquakes scenarios

    NASA Astrophysics Data System (ADS)

    Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark

    2016-04-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. While the scaling between earthquake magnitude and the volume of eroded sediment is well known, the geomorphic consequences, such as divide migration or valley infilling, remain poorly understood, and predicting the locations of landslide sources and deposits is a challenging issue. To make progress on this topic, algorithms that correctly resolve the interaction between landsliding and ground shaking are needed. Peak ground acceleration (PGA) has been shown to control landslide density at first order, but it can trigger landslides through two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of these two effects on slope stability is not well understood. We use SLIPOS, an algorithm of bedrock landsliding based on a simple stability analysis applied at the local scale. The model is capable of reproducing the area/volume scaling and area distribution of natural landslides. We aim to include the effects of earthquakes in SLIPOS by simulating the PGA effect via a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine Fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand fault case that simulating ground acceleration by cohesion decrease leads to a realistic scaling between sediment volume and earthquake magnitude.
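
    The cohesion-decrease mechanism can be illustrated with the classical infinite-slope stability criterion. A minimal sketch (dry case; all parameter values are illustrative, and SLIPOS itself operates on a full landscape rather than a single slope element):

    ```python
    from math import sin, cos, tan, radians

    def factor_of_safety(cohesion, slope_deg, depth,
                         unit_weight=26e3, friction_deg=30.0):
        """Infinite-slope factor of safety (dry):
        FS = (c + gamma*z*cos(b)**2*tan(phi)) / (gamma*z*sin(b)*cos(b))."""
        b, phi = radians(slope_deg), radians(friction_deg)
        normal = unit_weight * depth * cos(b) ** 2
        driving = unit_weight * depth * sin(b) * cos(b)
        return (cohesion + normal * tan(phi)) / driving

    c0 = 20e3  # Pa, static cohesion (illustrative)
    for drop in (0.0, 0.3, 0.6):   # transient seismic cohesion decrease
        print(drop, factor_of_safety(c0 * (1 - drop), slope_deg=35, depth=2.0))
        # FS falls from ~1.6 toward 1 as cohesion is reduced: failure onset
    ```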

  11. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  12. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief of the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and is periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  13. Potential for a large earthquake near Los Angeles inferred from the 2014 La Habra earthquake.

    PubMed

    Donnellan, Andrea; Grant Ludwig, Lisa; Parker, Jay W; Rundle, John B; Wang, Jun; Pierce, Marlon; Blewitt, Geoffrey; Hensley, Scott

    2015-09-01

    Tectonic motion across the Los Angeles region is distributed across an intricate network of strike-slip and thrust faults that will be released in destructive earthquakes similar to or larger than the 1933 M6.4 Long Beach and 1994 M6.7 Northridge events. Here we show that Los Angeles regional thrust, strike-slip, and oblique faults are connected and move concurrently with measurable surface deformation, even in moderate magnitude earthquakes, as part of a fault system that accommodates north-south shortening and westerly tectonic escape of northern Los Angeles. The 28 March 2014 M5.1 La Habra earthquake occurred on a northeast striking, northwest dipping left-lateral oblique thrust fault northeast of Los Angeles. We present crustal deformation observation spanning the earthquake showing that concurrent deformation occurred on several structures in the shallow crust. The seismic moment of the earthquake is 82% of the total geodetic moment released. Slip within the unconsolidated upper sedimentary layer may reflect shallow release of accumulated strain on still-locked deeper structures. A future M6.1-6.3 earthquake would account for the accumulated strain. Such an event could occur on any one or several of these faults, which may not have been identified by geologic surface mapping.

  15. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    PubMed Central

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual (i.e., the Kobe economy without the earthquake), we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, which is located immediately to the east of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998

  16. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, Tom

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7 and 11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
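
    The rate decay in question is the modified Omori law. A minimal sketch showing how a ∼7-11 year triggered-earthquake duration arises as the time at which the decaying rate rejoins a background level (parameter values illustrative, not the paper's fits):

    ```python
    import numpy as np

    def omori_rate(t, K, c, p):
        """Modified Omori law: triggered-event rate at time t (years)."""
        return K * (t + c) ** (-p)

    K, c, p = 10.0, 0.1, 1.0
    background = 1.0                     # events/yr
    t = np.linspace(0.1, 15.0, 300)
    above = omori_rate(t, K, c, p) > background
    print(t[above][-1])   # ~9.9 yr: last time triggering exceeds background
    ```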

  18. Scaling earthquake ground motions in western Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    Akinci, Aybige; D'Amico, Sebastiano; Malagnini, Luca; Mercuri, Alessia

    In this study, we provide a complete description of the ground-motion characteristics of the western Anatolia region of Turkey. The attenuation of ground motions with distance and the variability in excitation with magnitude are parameterized using three-component 0.25-10.0 Hz earthquake ground motions at distances of 15-250 km. The data set comprises more than 11,600 three-component seismograms from 902 regional earthquakes of local magnitude (ML) 2.5-5.8, recorded during the Western Anatolia Seismic Recording Experiment (WASRE) between November 2002 and October 2003. We used regression analysis to relate the logarithm of measured ground motion to the excitation, site, and propagation effects. Instead of trying to reproduce the details of the high-frequency ground motion in the time domain, we use a source model and a regional scaling law to predict the spectral shape and amplitudes of ground motion at various source-receiver distances. We fit a regression to the peak values of narrow bandpass filtered ground velocity time histories, and root mean square and RMS-average Fourier spectral amplitudes for a range of frequencies, to define regional attenuation functions characterized by piece-wise linear geometric spreading (in log-log space) and a frequency-dependent crustal Q(f). An excitation function is also determined, which contains the competing effects of an effective stress parameter Δσ and a high-frequency attenuation term exp(-πκf). The anelastic attenuation coefficient for the entire region is given by Q(f) = 180f0.55. The duration of motion for each record is defined as the value that yields the observed relationship between time-domain and spectral-domain amplitudes, according to random process theory. Anatolian excitation spectra are calibrated against our empirical results using a Brune model with a stress drop of 10 MPa for the largest event in our data set (Mw 5.8) and a near-surface attenuation parameter of κ = 0.045 s. These quantities
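
    With the quantities reported here, the predicted Fourier spectral shape at a given distance follows from the product of source, spreading, anelastic, and near-surface terms. A minimal sketch using the reported Q(f) = 180 f^0.55 and κ = 0.045 s, but simplifying the piece-wise linear geometric spreading to 1/r and omitting absolute scaling constants:

    ```python
    import numpy as np

    def q_of_f(f):
        """Regional anelastic attenuation reported above."""
        return 180.0 * f ** 0.55

    def fourier_accel_shape(f, r_km, fc, beta_km_s=3.5, kappa=0.045):
        """Relative Fourier acceleration spectrum: Brune omega-squared
        source x 1/r spreading x anelastic Q(f) x near-surface kappa."""
        source = (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
        spreading = 1.0 / r_km
        anelastic = np.exp(-np.pi * f * r_km / (q_of_f(f) * beta_km_s))
        site = np.exp(-np.pi * kappa * f)
        return source * spreading * anelastic * site

    f = np.logspace(np.log10(0.25), 1.0, 50)   # the 0.25-10 Hz band used above
    amp = fourier_accel_shape(f, r_km=50.0, fc=0.5)
    print(f[int(np.argmax(amp))])              # frequency of peak amplitude
    ```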

  19. The characteristic of the building damage from historical large earthquakes in Kyoto

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2016-04-01

    The Kyoto city, which is located in the northern part of the Kyoto basin in Japan, has a long history of more than 1,200 years since the city was initially constructed. The city has been a populated area with many buildings and the center of politics, economy and culture in Japan for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. The Kyoto city has experienced six damaging large earthquakes during the historical period: in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the years elapsed since construction or last repair, in order to estimate seismic intensity distributions from historical earthquakes in Kyoto more accurately and reliably. The obtained seismic intensity map would be helpful for reducing and mitigating disasters from future large earthquakes.

  20. Some Considerations on a Large Landslide at the Left Bank of the Aratozawa Dam Caused by the 2008 Iwate-Miyagi Intraplate Earthquake

    NASA Astrophysics Data System (ADS)

    Aydan, Ömer

    2016-06-01

    The scale and impact of rock slope failures are very large and the form of failure differs depending upon the geological structures of slopes. The 2008 Iwate-Miyagi intraplate earthquake induced many large-scale slope failures, despite the magnitude of the earthquake being of intermediate scale. Among large-scale slope failures, the landslide at the left bank of the Aratozawa Dam site is of great interest to specialists of rock mechanics and rock engineering. Although the slope failure was of planar type, the direction of sliding was luckily towards the sub-valley, so that the landslide did not cause great tsunami-like motion of reservoir fluid. In this study, the author attempts to describe the characteristics of the landslide, strong motion and permanent ground displacement induced by the 2008 Iwate-Miyagi intraplate earthquake, which had great effects on the triggering and evolution of the landslide.

  1. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  2. Gravity Wave Disturbances in the F-Region Ionosphere Above Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bruff, Margie

    The direction of propagation, duration and wavelength of gravity waves in the ionosphere above large earthquakes were studied using data from the Super Dual Auroral Radar Network. Ground scatter data were plotted versus range and time to identify gravity waves as alternating focused and de-focused regions of radar power in wave-like patterns. The wave patterns before and after earthquakes were analyzed to determine the directions of propagation and wavelengths. Conditions were considered 48 hours before and after each identified disturbance to exclude waves from geomagnetic activity. Gravity waves were found travelling away from the epicenter before all six earthquakes for which data were available and after four of the six earthquakes. Gravity waves travelled in at least two directions away from the epicenter in all cases, and even stronger patterns were found for two earthquakes. Waves appeared, on average, 4 days before, persisting 2-3 hours, and 1-2 days after earthquakes, persisting 4-6 hours. Most wavelengths were between 200-300 km. We show a possible correlation between magnitude and depth of earthquakes and gravity wave patterns, but study of more earthquakes is required. This study provides a better understanding of the causes of ionospheric gravity wave disturbances and has potential applications for predicting earthquakes.

  3. The AD 365 earthquake: high resolution tsunami inundation for Crete and full scale simulation exercise

    NASA Astrophysics Data System (ADS)

    Kalligeris, N.; Flouri, E.; Okal, E.; Synolakis, C.

    2012-04-01

    In the eastern Mediterranean, historical and archaeological records document major earthquake and tsunami events in the past 2000 years (Ambraseys and Synolakis, 2010). The 1200-km-long Hellenic Arc has allegedly caused the strongest reported earthquakes and tsunamis in the region. Among them, the AD 365 and AD 1303 tsunamis have been extensively documented. They are likely due to ruptures of the central and eastern segments of the Hellenic Arc, respectively. Both events had widespread impact due to ground shaking, and they triggered tsunami waves that reportedly affected the entire eastern Mediterranean. The seismic mechanism of the AD 365 earthquake, located in western Crete, has recently been assigned a magnitude ranging from 8.3 to 8.5 by Shaw et al. (2008), using historical, sedimentological, geomorphic and archaeological evidence. Shaw et al. (2008) have inferred that such large earthquakes occur in the Arc every 600 to 800 years, the last known being the AD 1303 event. We report on a full-scale simulation exercise that took place in Crete on 24-25 October 2011, based on a scenario sufficiently large to overwhelm the emergency response capability of Greece, necessitating the invocation of the Monitoring and Information Centre (MIC) of the EU and triggering help from other nations. A repeat of the AD 365 earthquake would likely overwhelm the civil defense capacities of Greece. Immediately following rupture initiation, it would cause substantial damage even to well-designed reinforced concrete structures in Crete. Minutes after initiation, the tsunami generated by the rapid displacement of the ocean floor would strike nearby coastal areas, inundating great distances in areas of low topography. The objective of the exercise was to help managers plan search and rescue operations, and to identify measures useful for inclusion in the coastal resiliency index of Ewing and Synolakis (2011). For the scenario design, the tsunami hazard for the AD 365 event was assessed for

  4. Coseismic Slip Distributions of Great or Large Earthquakes in the Northern Japan to Kurile Subduction Zone

    NASA Astrophysics Data System (ADS)

    Harada, T.; Satake, K.; Ishibashi, K.

    2011-12-01

    Slip distributions of great and large earthquakes since 1963 along the northern Japan and Kuril trenches are examined to study the recurrence of interplate, intraslab and outer-rise earthquakes. The main findings are that the large earthquakes in 1991 and 1995 re-ruptured the source of the great 1963 Urup earthquake, and that the 2006, 2007 and 2009 Simushir earthquakes were all of different types. We also identify three seismic gaps. The northern Japan to southern Kurile trenches have been regarded as a typical subduction zone with spatially and temporally regular recurrence of great (M>8) interplate earthquakes. The source regions were grouped into six segments by Utsu (1972; 1984). The Headquarters for Earthquake Research Promotion of the Japanese government (2004) divided the southern Kurile subduction zone into four regions and evaluated future probabilities of great interplate earthquakes. Besides great interplate events, however, many large (M>7) interplate, intraslab, outer-rise and tsunami earthquakes have also occurred in this region. Harada, Ishibashi, and Satake (2010, 2011) depicted the space-time pattern of M>7 earthquakes along the northern Japan to Kuril trench, based on the relocated mainshock-aftershock distributions of all types of earthquakes that have occurred since 1913. The space-time pattern is more complex than had been conventionally assumed. Each region has been ruptured by an M8-class interplate earthquake or by multiple M7-class events. In this study, in order to examine in more detail the spatial patterns, or rupture areas, of M>7 earthquakes since 1963 (WWSSN waveform data have been available since that year), we estimated coseismic slip distributions using the teleseismic body-wave inversion method of Kikuchi and Kanamori (2003). The WWSSN waveform data were used for earthquakes before 1990, and digital teleseismic waveform data compiled by IRIS were used for events after 1990. Main-shock hypocenters that had been relocated by our previous study were used as

  5. Interseismic Coupling Models and their interactions with the Sources of Large and Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Chlieh, M.; Perfettini, H.; Avouac, J. P.

    2009-04-01

    Recent observations of heterogeneous strain build-up reported from subduction zones and of the seismic sources of large and great interplate earthquakes indicate that seismic asperities are probably persistent features of the megathrust. The Peru megathrust recurrently produces large seismic events like the 2001 Mw 8.4 Arequipa earthquake or the 2007 Mw 8.0 Pisco earthquake. The Peruvian subduction zone provides an exceptional opportunity to understand the possible relationship between interseismic coupling, large megathrust ruptures, and the frictional properties of the megathrust. An emerging concept is a megathrust with strong locked fault patches surrounded by aseismic slip. The 2001 Mw 8.4 Arequipa earthquake ruptured only the northern portion of the patch that had ruptured during the great 1868 Mw ~8.8 earthquake and that had remained locked in the interseismic period. The 2007 Mw 8.0 Pisco earthquake ruptured the southern portion of the 1746 Mw ~8.5 event. The moment released in 2007 amounts to only a small fraction of the moment deficit that had accumulated since the 1746 great earthquake. The potential for future large megathrust events in central and southern Peru therefore remains large. These recent earthquakes indicate that the same portion of a megathrust can rupture in different ways depending on whether asperities break as isolated events or jointly to produce a larger rupture. The spatial distribution of the frictional properties of the megathrust could be the cause of a more complex earthquake sequence from one seismic cycle to another. The subduction of geomorphologic structures like the Nazca Ridge could be the cause of lower coupling there.
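
    The moment-deficit argument admits a simple back-of-the-envelope check. A minimal sketch, assuming a rectangular locked patch with uniform coupling; every number below is an illustrative placeholder, not a result of the authors' coupling inversion:

    ```python
    from math import log10

    def moment_deficit(mu, area, coupling, rate, years):
        """Accumulated seismic moment deficit: M0 = mu*A*chi*v*t (N*m)."""
        return mu * area * coupling * rate * years

    def moment_magnitude(M0):
        return (2.0 / 3.0) * (log10(M0) - 9.1)

    mu = 30e9               # shear modulus, Pa
    area = 400e3 * 150e3    # 400 km x 150 km megathrust patch, m^2
    chi = 0.6               # average interseismic coupling
    v = 0.06                # plate convergence, m/yr
    deficit = moment_deficit(mu, area, chi, v, years=261)   # 1746-2007
    print(moment_magnitude(deficit))   # roughly a Mw 8.7-8.8 equivalent
    ```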

  6. Preliminary investigation of some large landslides triggered by the 2008 Wenchuan earthquake, Sichuan Province, China

    USGS Publications Warehouse

    Wang, F.; Cheng, Q.; Highland, L.; Miyajima, M.; Wang, Hongfang; Yan, C.

    2009-01-01

    The Ms 8.0 Wenchuan earthquake or "Great Sichuan Earthquake" occurred at 14:28 local time on 12 May 2008 in Sichuan Province, China. Damage caused by earthquake-induced landslides was an important part of the total earthquake damage. This report presents preliminary observations on the Hongyan Resort slide located southwest of the main epicenter; shallow mountain-surface failures in Xuankou village of Yingxiu Town; the Jiufengchun slide near Longmenshan Town; the Hongsong Hydro-power Station slide near Hongbai Town; the Xiaojiaqiao slide in Chaping Town; two landslides in Beichuan County town, which destroyed a large part of the town; and the Donghekou and Shibangou slides in Qingchuan County, which formed the second-largest landslide lake of this earthquake. The influences of seismic, topographic, geologic, and hydro-geologic conditions are discussed. © 2009 Springer-Verlag.

  7. Three-dimensional distribution of ionospheric anomalies prior to three large earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    He, Liming; Heki, Kosuke

    2016-07-01

    Using regional Global Positioning System (GPS) networks, we studied the three-dimensional spatial structure of ionospheric total electron content (TEC) anomalies preceding three recent large earthquakes in Chile, South America, i.e., the 2010 Maule (Mw 8.8), the 2014 Iquique (Mw 8.2), and the 2015 Illapel (Mw 8.3) earthquakes. Both positive and negative TEC anomalies, with areal extent dependent on the earthquake magnitudes, appeared simultaneously 20-40 min before the earthquakes. For the two midlatitude earthquakes (2010 Maule and 2015 Illapel), positive anomalies occurred to the north of the epicenters at altitudes of 150-250 km. The negative anomalies occurred farther to the north at higher altitudes of 200-500 km. This geometry aligns the epicenter and the positive and negative anomalies parallel to the local geomagnetic field, which is a typical structure of ionospheric anomalies occurring in response to positive surface electric charges.

  8. Investigation of the Seismic Nucleation Phase of Large Earthquakes Using Broadband Teleseismic Data

    NASA Astrophysics Data System (ADS)

    Burkhart, Eryn Therese

    The dynamic motion of an earthquake begins abruptly, but is often initiated by a short interval of weak motion called the seismic nucleation phase (SNP). Ellsworth and Beroza [1995, 1996] concluded that the SNP was detectable in near-source records of 48 earthquakes with moment magnitude (Mw) ranging from 1.1 to 8.1. They found that the SNP accounted for approximately 0.5% of the total moment and 1/6 of the duration of the earthquake. Ji et al. [2010] investigated the SNP of 19 earthquakes with Mw greater than 8.0 using teleseismic broadband data. That study concluded that roughly half of the earthquakes had detectable SNPs, inconsistent with the findings of Ellsworth and Beroza [1995]. Here 69 earthquakes of Mw 7.5-8.0 from 1994 to 2011 are further examined. The SNP is clearly detectable using teleseismic data in 32 events, 35 events show no nucleation phase, and 2 events had insufficient data to perform stacking, consistent with the previous analysis. Our study also reveals that the percentage of SNP events is correlated with focal mechanism and hypocenter depth. Strike-slip earthquakes are more likely to exhibit a clear SNP than normal or thrust earthquakes. Eleven of 14 strike-slip earthquakes (78.6%) have detectable SNPs. In contrast, only 16 of 40 (40%) thrust earthquakes have detectable SNPs. This percentage also becomes smaller for deep events (33% for events with hypocenter depth > 250 km). To understand why certain thrust earthquakes have a visible SNP, we examined the sediment thickness, age, and angle of the subducting plate for all thrust earthquakes in the study. We found that thrust events with thick sediment cover (>600 m) on the subducting plate tend to have clear SNPs. If the SNP can be better understood in the future, it may help seismologists better understand the rupture dynamics of large earthquakes. Potential applications of this work could attempt to predict the magnitude of an earthquake seconds before it begins by measuring the SNP, vastly

  9. Exploring the uncertainty range of co-seismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2016-10-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average co-seismic stress drop, Δτ_E, of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same dataset exhibits no sensitivity to the upper bound of Δτ_E because there is limited resolution of the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that 1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated, and 2) the upper bound of the average fracture energy E_G cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.
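
    The energy-based average stress drop is, under one common definition (the slip-weighted average of the local stress change over the fault, e.g., as in Noda et al.'s formulation), directly computable from a finite fault model. A sketch under that assumption, with a fabricated slip model:

```python
import numpy as np

def energy_based_stress_drop(slip, stress_change, cell_area):
    """Slip-weighted average stress drop over a gridded fault model:
    dtau_E = sum(dtau * u * A) / sum(u * A)."""
    w = slip * cell_area
    return np.sum(stress_change * w) / np.sum(w)

# Fabricated 20 x 40 slip model: a smooth asperity with 8 m of peak slip
nx, nz = 40, 20
x, z = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, nz))
slip = 8.0 * np.exp(-4 * (x**2 + z**2))                  # [m]
stress_change = 6e6 * np.exp(-3 * (x**2 + z**2)) - 1e6   # [Pa], drop in asperity
print(f"dtau_E ~ {energy_based_stress_drop(slip, stress_change, 4e6)/1e6:.2f} MPa")
```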

  10. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  11. Dynamic Triggering of Earthquakes in the Salton Sea Region of Southern California from Large Regional and Teleseismic Earthquakes

    NASA Astrophysics Data System (ADS)

    Doran, A.; Meng, X.; Peng, Z.; Wu, C.; Kilb, D. L.

    2010-12-01

    We perform a systematic survey of dynamically triggered earthquakes in the Salton Sea region of southern California using borehole seismic data recordings (2007 to present). We define triggered events as bursts of high-frequency seismic energy during the large-amplitude seismic waves of distant earthquakes. Our mainshock database includes 26 teleseismic events (epicentral distances > 1000 km; Mw ≥ 7.5) and 8 regional events (epicentral distances 100-1000 km; Mw ≥ 5.5). Of these, 1 teleseismic and 7 regional events produce triggered seismic activity within our study region. The triggering mainshocks are not limited to specific azimuths. For example, triggering is observed following the 2008 Mw 6.0 Nevada earthquake to the north and the 2010 Mw 7.2 Northern Baja California earthquake to the south. The peak ground velocities in our study region generated by the triggering mainshocks exceed 0.03 cm/s, which corresponds to a dynamic stress of ~2 kPa. This apparent triggering threshold is consistent with thresholds found in the Long Valley Caldera (Brodsky and Prejean, 2005), the Parkfield section of the San Andreas Fault (Peng et al., 2009), and near the San Jacinto Fault (Kane et al., 2007). The triggered events occur almost instantaneously with the arrival of the large-amplitude seismic waves and appear to be modulated by the passing surface waves, similar to recent observations of triggered deep “non-volcanic” tremor along major plate boundary faults in California, Cascadia, Japan, and Taiwan (Peng and Gomberg, 2010). However, unlike these deep tremor events, the triggered signals we find in this study have very short P- to S-arrival times, suggesting that they likely originate from brittle failure in the shallow crust. Confirming this, spectra of the triggered signals mimic spectra of typical shallow events in the region. Extending our observation time window to ~1 month following the mainshock event we find that for the 2010 Mw 7.2 Northern Baja California mainshock
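
    The quoted ~2 kPa follows from the standard plane-wave approximation relating peak ground velocity to peak dynamic stress, sigma ~ G*v/c, where G is the shear modulus and c the phase velocity of the passing wave. A quick check with generic crustal values (the parameter choices are assumptions):

```python
def dynamic_stress(pgv_m_s, shear_modulus_pa=3.0e10, phase_velocity_m_s=3.5e3):
    """Peak dynamic shear stress carried by a passing plane wave:
    sigma ~ G * v / c, as commonly used in remote-triggering studies."""
    return shear_modulus_pa * pgv_m_s / phase_velocity_m_s

# 0.03 cm/s = 3e-4 m/s, the apparent triggering threshold quoted above
print(f"{dynamic_stress(3e-4) / 1e3:.1f} kPa")   # -> ~2.6 kPa, i.e. ~2 kPa
```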

  12. Large-scale Digitoxin Intoxication

    PubMed Central

    Lely, A. H.; Van Enter, C. H. J.

    1970-01-01

    Because of an error in the manufacture of digoxin tablets a large number of patients took tablets that contained 0·20 mg. of digitoxin and 0·05 mg. of digoxin instead of the prescribed 0·25 mg. of digoxin. The symptoms are described of 179 patients who took these tablets and suffered from digitalis intoxication. Of these patients, 125 had taken the faultily composed tablets for more than three weeks. In 48 patients 105 separate disturbances in rhythm or in atrioventricular conduction were observed on the electrocardiogram. Extreme fatigue and serious eye conditions were observed in 95% of the patients. Twelve patients had a transient psychosis. Extensive ophthalmological observations indicated that the visual complaints were most probably caused by a transient retrobulbar neuritis. PMID:5273245

  13. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
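
    A simple first-order version of such statistical tests is the coefficient of variation (COV) of inter-event times: a Poisson process gives COV near 1, quasi-periodic recurrence gives COV well below 1, and clustering gives COV above 1. A sketch with a Monte Carlo significance check (the interval values below are fabricated, not the Wrightwood record):

```python
import numpy as np

def cov(intervals):
    """Coefficient of variation of inter-event times (std/mean).
    ~1: Poisson; <1: quasi-periodic; >1: clustered."""
    intervals = np.asarray(intervals, dtype=float)
    return intervals.std(ddof=1) / intervals.mean()

def poisson_p_value(intervals, n_sim=100_000, seed=0):
    """Fraction of same-size exponential (Poisson) samples with a COV at
    least as small as observed; small values favor quasi-periodicity."""
    rng = np.random.default_rng(seed)
    obs = cov(intervals)
    sims = rng.exponential(1.0, size=(n_sim, len(intervals)))
    sim_cov = sims.std(axis=1, ddof=1) / sims.mean(axis=1)
    return np.mean(sim_cov <= obs)

# Fabricated record: ~100 yr mean recurrence with modest scatter
intervals = [80, 95, 120, 105, 90, 130, 85, 110, 100, 95]
print(f"COV = {cov(intervals):.2f}, p(Poisson) = {poisson_p_value(intervals):.4f}")
```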

  14. Benefits of Earthquake Early Warning to Large Municipalities (Invited)

    NASA Astrophysics Data System (ADS)

    Featherstone, J.

    2013-12-01

    The City of Los Angeles has been involved in the testing of the Cal Tech Shake Alert earthquake early warning (EQEW) system since February 2012. This system accesses a network of seismic monitors installed throughout California. The system analyzes and processes seismic information, and transmits a warning (audible and visual) when an earthquake occurs. In late 2011, the City of Los Angeles Emergency Management Department (EMD) was approached by Cal Tech regarding EQEW, and immediately recognized the value of the system. Simultaneously, EMD was in the process of finalizing a report by a multi-discipline team that visited Japan in December 2011, which spoke to the effectiveness of EQEW for the March 11, 2011 earthquake that struck that country. Information collected by the team confirmed that the EQEW systems proved to be very effective in alerting the population of the impending earthquake. The EQEW in Japan is also tied to mechanical safeguards, such as the stopping of high-speed trains. For a city the size and complexity of Los Angeles, the implementation of a reliable EQEW system will save lives, reduce loss, ensure effective and rapid emergency response, and greatly enhance the ability of the region to recover from a damaging earthquake. The current Shake Alert system is being tested at several governmental organizations and private businesses in the region. EMD, in cooperation with Cal Tech, identified several locations internal to the City where the system would have an immediate benefit. These include the staff offices within EMD, the Los Angeles Police Department's Real Time Analysis and Critical Response Division (24 hour crime center), and the Los Angeles Fire Department's Metropolitan Fire Communications (911 Dispatch). All three of these agencies routinely manage the collaboration and coordination of citywide emergency information and response during times of crisis. Having these three key public safety offices connected and included in the

  15. Large Scale Metal Additive Techniques Review

    SciTech Connect

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W; Love, Lonnie J

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer deposition. This paper is a review of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper surveys the current state of the art of large-scale metal additive technology, with an emphasis on expanding the geometric limits.

  16. Large Chilean earthquakes and tsunamis of 1730 and 1751: new analysis of historical data

    NASA Astrophysics Data System (ADS)

    Udias, Agustin; Buforn, Elisa; Madariaga, Raul

    2013-04-01

    A large collection of contemporary documents from the Archivo de Indias (Seville, Spain) concerning the large Chilean earthquakes and tsunamis of 1730 and 1751 has been studied for the first time. The documents include official and private letters to the King of Spain, and proceedings, memorials and reports of the colonial administration. They provide detailed information about the characteristics of, and the damage produced by, these two mega earthquakes. The 1730 event, the larger of the two, with an estimated magnitude close to Mw = 9, affected a region more than 900 km long, from Copiapó in the north to Concepción in the south, causing serious damage in the capital Santiago. It was followed by a large tsunami which especially affected the two coastal cities of Valparaiso and Concepción. Twenty-one years later, in 1751, another earthquake caused damage in the region from Santiago to Valdivia. The tsunami again destroyed the city of Concepción and made necessary its relocation from the town of Penco to its present site on the BioBio river. We suggest that this event was very similar in size and extent to the Maule earthquake of 27 February 2010. It is estimated that the two earthquakes together broke the entire plate boundary in central Chile, along almost 900 km, from 30°S to 38°S. A possible repeat of the 1730 earthquake in the future presents a major risk for Central Chile.

  17. What is a large-scale dynamo?

    NASA Astrophysics Data System (ADS)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  18. Repeating and not so Repeating Large Earthquakes in the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Singh, S.; Iglesias, A.; Perez-Campos, X.

    2013-12-01

    The rupture areas and recurrence intervals of large earthquakes in the Mexican subduction zone are relatively small, and almost the entire length of the zone has experienced a large (Mw ≥ 7.0) earthquake in the last 100 years (Singh et al., 1981). Several segments have experienced multiple large earthquakes in this time period. However, as the rupture areas of events prior to 1973 are only approximately known, the recurrence periods are uncertain. Large earthquakes occurred in the Ometepec, Guerrero, segment in 1937, 1950, 1982 and 2012 (Singh et al., 1981). In 1982, two earthquakes (Ms 6.9 and Ms 7.0) occurred about 4 hours apart, one apparently downdip from the other (Astiz & Kanamori, 1984; Beroza et al., 1984). The 2012 earthquake, on the other hand, had a magnitude of Mw 7.5 (globalcmt.org), breaking approximately the same area as the 1982 doublet but with a total scalar moment about three times larger than the 1982 doublet combined. It therefore seems that 'repeat earthquakes' in the Ometepec segment are not necessarily very similar to one another. The Central Oaxaca segment broke in large earthquakes in 1928 (Mw 7.7) and 1978 (Mw 7.7). Seismograms for the two events, recorded at the Wiechert seismograph in Uppsala, show remarkable similarity, suggesting that in this area large earthquakes can repeat. The extent to which the near-trench part of the fault plane participates in the ruptures is not well understood. In the Ometepec segment, the updip portion of the plate interface broke during the 25 Feb 1996 earthquake (Mw 7.1), which was a slow earthquake and produced anomalously low PGAs (Iglesias et al., 2003). Historical records indicate that a great tsunamigenic earthquake, M~8.6, occurred in the Oaxaca region in 1787, breaking the Central Oaxaca segment together with several adjacent segments (Suarez & Albini 2009). Whether the updip portion of the fault broke in this event remains speculative, although plausible based on the large tsunami. Evidence from the

  19. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  20. Volcanic activity before and after large tectonic earthquakes: Observations and statistical significance

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Walter, Thomas R.

    2009-06-01

    The study of volcanic triggering and interaction with the tectonic surroundings has received special attention in recent years, using both direct field observations and historical descriptions of eruptions and earthquake activity. Repeated reports of clustered eruptions and earthquakes may imply that interaction is important in some subregions. However, the subregions likely to suffer such clusters have not been systematically identified, and the processes responsible for the observed interaction remain unclear. We first review previous works about the clustered occurrence of eruptions and earthquakes, and describe selected events. We further elaborate available databases and confirm a statistically significant relationship between volcanic eruptions and earthquakes on the global scale. Moreover, our study implies that closed volcanic systems in particular tend to be activated in association with a tectonic earthquake trigger. We then perform a statistical study at the subregional level, showing that certain subregions are especially predisposed to concurrent eruption-earthquake sequences, whereas such clustering is statistically less significant in other subregions. Based on this study, we argue that individual and selected observations may bias the perceptible weight of coupling. The activity at volcanoes located in the predisposed subregions (e.g., Japan, Indonesia, Melanesia), however, often unexpectedly changes in association with either an imminent or a past earthquake.

  1. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  2. The typical seismic behavior in the vicinity of a large earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, M. V.; Tikhonov, I. N.

    2016-10-01

    The Global Centroid Moment Tensor catalog (GCMT) was used to construct the spatio-temporal generalized vicinity of a large earthquake (GVLE) and to investigate the behavior of seismicity in the GVLE. The vicinity is made of earthquakes falling into the zone of influence of a large number (100, 300, or 1000) of the largest earthquakes. The GVLE construction aims at enlarging the available statistics, diminishing a strong random component, and revealing typical features of pre- and post-shock seismic activity in more detail. As a result of the GVLE construction, the character of fore- and aftershock cascades was examined in more detail than was possible without the use of the GVLE approach. In addition, several anomalies in the behavior of a variety of earthquake parameters were identified. The amplitudes of all these anomalies increase with the approaching time of the generalized large earthquake (GLE) as the logarithm of the time interval from the GLE occurrence. Most of the discussed anomalies agree with features well expected in the evolution of instability. In addition to these common-type precursors, one earthquake-specific precursor was found. The decrease in mean earthquake depth, presumably occurring in a smaller GVLE, probably provides evidence of a deep fluid being involved in the process. The typical features in the evolution of shear instability as revealed in the GVLE agree with results obtained in laboratory studies of acoustic emission (AE). The majority of the anomalies in earthquake parameters appear to have a secondary character, largely connected with an increase in mean magnitude and a decreasing fraction of moderate-size events (Mw 5.0-6.0) in the immediate GLE vicinity. This deficit of moderate-size events could hardly be caused entirely by their incomplete reporting and presumably reflects some features of the evolution of seismic instability.

  3. Dynamic Response and Ground-Motion Effects of Building Clusters During Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Isbiliroglu, Y. D.; Taborda, R.; Bielak, J.

    2012-12-01

    The objective of this study is to analyze the response of building clusters during earthquakes, the effect that they have on the ground motion, and how individual buildings interact with the surrounding soil and with each other. We conduct a series of large-scale, physics-based simulations that synthesize the earthquake source and the response of entire building inventories. The configuration of the clusters, defined by the total number of buildings, their number of stories, dynamic properties, and spatial distribution and separation, is varied for each simulation. In order to perform these simulations efficiently while recurrently modifying these characteristics, without redoing the entire "source to building structure" simulation every time, we use the Domain Reduction Method (DRM). The DRM is a modular two-step finite-element methodology for modeling wave propagation problems in regions with localized features. It allows one to store and reuse the background motion excitation of subdomains without loss of information. Buildings are included in the second step of the DRM. Each building is represented by a block model composed of additional finite elements in full contact with the ground. These models are adjusted to emulate the general geometric and dynamic properties of real buildings. We conduct our study in the greater Los Angeles basin, using the main shock of the 1994 Northridge earthquake for frequencies up to 5 Hz. In the first step of the DRM we use a domain of 82 km x 82 km x 41 km. Then, for the second step, we use a smaller sub-domain of 5.12 km x 5.12 km x 1.28 km, with the buildings. The results suggest that site-city interaction effects are more prominent for building clusters in soft-soil areas. These effects consist of changes in the amplitude of the ground motion and in the dynamic response of the buildings. The simulations are done using Hercules, the parallel octree-based finite-element earthquake simulator developed by the Quake Group at Carnegie

  4. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone, and less organized at downdip levels. LFE magnitude-frequency relations are better described by power law than by exponential distributions, although they exhibit very high b values of ~5 or greater. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (Mw < 1.5, Mw ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
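
    b values like those quoted here are conventionally estimated with the Aki-Utsu maximum-likelihood formula b = log10(e) / (mean(M) - (Mc - dM/2)), where Mc is the completeness magnitude and dM the magnitude binning. A sketch on a synthetic steep-distribution catalog (the catalog itself is fabricated):

```python
import numpy as np

def b_value_ml(mags, m_c, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= m_c, with the
    standard half-bin correction for magnitudes binned at spacing dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(len(m))   # estimate and Aki (1965) standard error

# Synthetic LFE-like catalog with a very steep distribution (true b = 5)
rng = np.random.default_rng(1)
mags = np.round(1.45 + rng.exponential(np.log10(np.e) / 5.0, size=5000), 1)
b, err = b_value_ml(mags, m_c=1.5)
print(f"b = {b:.2f} +/- {err:.2f}")
```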

  5. Volcanic activity before and after large tectonic earthquakes: Observations and statistical significance

    NASA Astrophysics Data System (ADS)

    Eggert, S.; Walter, T. R.

    2009-04-01

    The study of volcanic triggering and coupling to the tectonic surroundings has received special attention in recent years, using both direct field observations and historical descriptions of eruptions and earthquake activity. Repeated reports of volcano-earthquake interactions in, e.g., Europe and Japan, may imply that clustered occurrence is important in some regions. However, the regions likely to suffer clustered eruption-earthquake activity have not been systematically identified, and the processes responsible for the observed interaction are debated. We first review previous works about the correlation of volcanic eruptions and earthquakes, and describe selected local clustered events. Following an overview of previous statistical studies, we further elaborate the databases of correlated eruptions and earthquakes from a global perspective. Since we can confirm a relationship between volcanic eruptions and earthquakes on the global scale, we then perform a statistical study on the regional level, showing that time and distance between events follow a linear relationship. In the time before an earthquake, a period of volcanic silence often occurs, whereas in the time after, an increase in volcanic activity is evident. Our statistical tests imply that certain regions are especially predisposed to concurrent eruption-earthquake pairs, e.g., Japan, whereas such pairing is statistically less significant in other regions, such as Europe. Based on this study, we argue that individual and selected observations may bias the perceptible weight of coupling. Volcanoes located in the predisposed regions (e.g., Japan, Indonesia, Melanesia), however, indeed often have unexpectedly changed in association with either an imminent or a past earthquake.

  6. Seismicity trends and potential for large earthquakes in the Alaska-Aleutian region

    USGS Publications Warehouse

    Bufe, C.G.; Nishenko, S.P.; Varnes, D.J.

    1994-01-01

    The high likelihood of a gap-filling thrust earthquake in the Alaska subduction zone within this decade is indicated by two independent methods: analysis of historic earthquake recurrence data and time-to-failure analysis applied to recent decades of instrumental data. Recent (May 1993) earthquake activity in the Shumagin Islands gap is consistent with previous projections of increases in seismic release, indicating that this segment, along with the Alaska Peninsula segment, is approaching failure. Based on this pattern of accelerating seismic release, we project the occurrence of one or more M ≥ 7.3 earthquakes in the Shumagin-Alaska Peninsula region during 1994-1996. Different segments of the Alaska-Aleutian seismic zone behave differently in the decade or two preceding great earthquakes, some showing acceleration of seismic release (type "A" zones), while others show deceleration (type "D" zones). The largest Alaska-Aleutian earthquakes, in 1957, 1964, and 1965, originated in zones that exhibit type D behavior. Type A zones currently showing accelerating release are the Shumagin, Alaska Peninsula, Delarof, and Kommandorski segments. Time-to-failure analysis suggests that the large earthquakes could occur in these latter zones within the next few years. © 1994 Birkhäuser Verlag.
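
    Time-to-failure analysis of this kind commonly fits cumulative Benioff strain with an accelerating power law, Omega(t) = A + B*(tf - t)^m with B < 0, and reads the projected failure time tf from the fit. A least-squares sketch on synthetic data (all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def benioff_model(t, A, B, tf, m):
    """Accelerating-release law Omega(t) = A + B*(tf - t)^m (B < 0, 0 < m < 1)."""
    return A + B * np.maximum(tf - t, 1e-6) ** m

# Synthetic cumulative Benioff strain accelerating toward tf = 1996.0
rng = np.random.default_rng(2)
t = np.linspace(1970.0, 1994.0, 120)
obs = benioff_model(t, A=10.0, B=-1.5, tf=1996.0, m=0.3)
obs += 0.05 * rng.standard_normal(t.size)

p0 = (obs[-1], -1.0, t[-1] + 5.0, 0.3)   # crude initial guesses
bounds = ([0.0, -np.inf, t[-1] + 0.01, 0.05], [np.inf, 0.0, t[-1] + 50.0, 1.0])
popt, _ = curve_fit(benioff_model, t, obs, p0=p0, bounds=bounds)
print(f"projected failure time tf ~ {popt[2]:.1f}")
```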

  7. Observational constraints on earthquake source scaling: Understanding the limits in resolution

    USGS Publications Warehouse

    Hough, S.E.

    1996-01-01

    I examine the resolution of the type of stress drop estimates that have been used to place observational constraints on the scaling of earthquake source processes. I first show that apparent stress and Brune stress drop are equivalent to within a constant given any source spectral decay between ω^-1.5 and ω^-3 (i.e., any plausible value), and so consistent scaling is expected for the two estimates. I then discuss the resolution and scaling of Brune stress drop estimates, in the context of empirical Green's function results from recent earthquake sequences, including the 1992 Joshua Tree, California, mainshock and its aftershocks. I show that no definitive scaling of stress drop with moment is revealed over the moment range 10^19-10^25; within this sequence, however, there is a tendency for moderate-sized (M 4-5) events to be characterized by high stress drops. However, well-resolved results for recent M > 6 events are inconsistent with any extrapolated stress increase with moment for the aftershocks. Focusing on corner frequency estimates for smaller (M < 3.5) events, I show that resolution is extremely limited even after empirical Green's function deconvolutions. A fundamental limitation to resolution is the paucity of good signal-to-noise at frequencies above 60 Hz, a limitation that will affect nearly all surficial recordings of ground motion in California and many other regions. Thus, while the best available observational results support a constant stress drop for moderate- to large-sized events, very little robust observational evidence exists to constrain the quantities that bear most critically on our understanding of source processes: stress drop values and stress drop scaling for small events.
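
    Brune stress drops are computed from a corner frequency fc and seismic moment M0 via the source radius r = 2.34*beta/(2*pi*fc) and dsigma = 7*M0/(16*r^3). A minimal implementation (the example numbers are generic, not from this study):

```python
import numpy as np

def brune_stress_drop(m0_nm, fc_hz, beta_m_s=3500.0):
    """Brune-model stress drop [Pa] from seismic moment [N m] and corner
    frequency [Hz]: r = 2.34*beta/(2*pi*fc), dsigma = 7*M0/(16*r^3)."""
    r = 2.34 * beta_m_s / (2.0 * np.pi * fc_hz)
    return 7.0 * m0_nm / (16.0 * r**3)

def moment_from_mw(mw):
    """Seismic moment [N m] from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

# Generic example: an M 4.5 event with a 1 Hz corner frequency
m0 = moment_from_mw(4.5)
print(f"M0 = {m0:.2e} N m, dsigma = {brune_stress_drop(m0, 1.0)/1e6:.1f} MPa")
```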

  8. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-08-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by realizing that not all earthquakes in the region need belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well determined confidence bounds. This paper presents improvements on the above mentioned sequence identification and forecasting method: the influence of earthquake size on the spectral analysis, and its importance in semi-periodic events identification are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of non-randomness probability is used; a better estimation of appropriate upper limit uncertainties to use in forecasts is introduced; and the use of Bayesian analysis to evaluate the posterior forecast performance is applied. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0, having high non-randomness probability was identified. We compare the results of this analysis with those of the unlabeled point process analysis.

  9. Appearance ratio of earthquake surface rupture - About scaling law for Japanese Intraplate Earthquakes -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Irikura, K.

    2013-12-01

    The appearance ratio of surface rupture has been studied using historical earthquakes (e.g., Takemura, 1998), and Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated appearance ratio follows a sigmoid curve and rises sharply between Mj (Japan Meteorological Agency magnitude) = 6.5 and Mj = 7.2. However, historical records of earthquakes between Mj = 6.5 and 7.2 are very few, so some researchers consider that the appearance ratio might jump discontinuously between Mj = 6.5 and 7.2. In this study, we used historical intraplate earthquakes that occurred in and around Japan from the 1891 Nobi earthquake to 2013. In particular, many earthquakes of around Mj 6.5 to 7.2 have occurred since the 1995 Hyogoken-Nanbu earthquake. The results of this study indicate that the appearance ratio increases between Mj = 6.5 and 7.2 not discontinuously but along a logistic curve. Youngs et al. (2003), Petersen et al. (2011), and Moss and Ross (2011) discuss the appearance ratio of surface rupture using historical earthquakes worldwide. Their discussions are based on Mw, so a magnitude conversion is needed for comparison with our Mj-based results. Takemura (1990) proposed the conversion equation Mw = 0.78Mj + 1.08; more recently, the Central Disaster Prevention Council in Japan (2005) derived the conversion Mw = 0.879Mj + 0.536 from a regression line obtained by principal component analysis. Converted to Mw, the appearance ratio found in this study increases sharply between Mw = 6.3 and 7.0.
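
    The two magnitude conversions quoted above are simple linear relations, and the appearance ratio itself can be represented by a logistic curve. In the sketch below the conversion equations are taken from the abstract, while the logistic parameters are illustrative placements of the Mw 6.3-7.0 transition, not fitted values:

```python
import numpy as np

def mw_takemura_1990(mj):
    """Takemura (1990): Mw = 0.78*Mj + 1.08."""
    return 0.78 * mj + 1.08

def mw_cdpc_2005(mj):
    """Central Disaster Prevention Council (2005): Mw = 0.879*Mj + 0.536."""
    return 0.879 * mj + 0.536

def appearance_ratio(mw, m50=6.65, k=8.0):
    """Illustrative logistic model of surface-rupture probability, placed so
    the curve rises sharply between Mw ~ 6.3 and 7.0 (m50 and k are assumed)."""
    return 1.0 / (1.0 + np.exp(-k * (mw - m50)))

for mj in (6.5, 6.8, 7.2):
    mw = mw_cdpc_2005(mj)
    print(f"Mj {mj} -> Mw {mw:.2f}, P(surface rupture) ~ {appearance_ratio(mw):.2f}")
```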

  10. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  11. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both in the Re ≪ 1 regime, where previously derived asymptotic results are verified, and at Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.

  12. 3-D Numerical Modeling of Rupture Sequences of Large Shallow Subduction Earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Rice, J. R.

    2003-12-01

    We study the rupture behavior of large earthquakes on a 3-D shallow subduction fault governed by a rate and state friction law, and loaded by imposed slip at rate Vpl far downdip along the thrust interface. Friction properties are temperature, and hence depth, dependent, so that sliding is stable (a - b > 0) at depths below about 30 km. To perturb the system into a nonuniform slip mode, if such a solution exists, we introduce small along-strike variations in either the constitutive parameters a and (a - b), or the effective normal stress, or the initial conditions. Our results do show complex, nonuniform slip behavior over thousands of simulation years. Large events of multiple magnitudes occur at various along-strike locations, with different recurrence intervals, like those of natural interplate earthquakes. In the model, a large event usually nucleates in a less well locked gap region (slipping at the order of 0.1 to 1 times the plate convergence rate Vpl) between more firmly locked regions (slipping at 10^-4 to 10^-2 Vpl) which coincide with the rupture zones of previous large events. It then propagates in both the dip and strike directions. Along-strike propagation slows down as the rupture front encounters neighboring locked zones, whose sizes and locking extents affect further propagation. Different propagation speeds at the two fronts result in an asymmetric coseismic slip distribution, as is consistent with the slip inversion results of some large subduction earthquakes [e.g., Chlieh et al., 2003]. Current grid resolution is dictated by limitations of available computers and algorithms, and forces us to use constitutive length scales that are much larger than realistic lab values; that causes nucleation sizes to be in the several-kilometer (rather than several-meter) range. Thus the present conclusions are tentative. But with current resolution, we observe that the heterogeneous slip at seismogenic depths (i.e., where a - b < 0) is sometimes
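
    The rate- and state-dependent friction law governing such models can be explored in miniature with a quasi-dynamic spring-slider: tau = sigma*(f0 + a*ln(V/V0) + b*ln(V0*theta/dc)) with the aging law dtheta/dt = 1 - V*theta/dc, a spring loaded at Vpl, and radiation damping. A sketch with generic velocity-weakening parameters (all values are illustrative, not the simulation's):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lab-like parameters (not the study's): velocity weakening, a < b
sigma, a, b = 50e6, 0.010, 0.015   # effective normal stress [Pa], rate/state params
dc, v0 = 1e-5, 1e-6                # state-evolution distance [m], reference velocity [m/s]
vpl = 1e-9                         # loading rate [m/s], ~3 cm/yr
eta = 30e9 / (2.0 * 3000.0)        # radiation damping G/(2*cs) [Pa s/m]
k = 0.5 * sigma * (b - a) / dc     # spring softer than critical -> stick-slip cycles

def rhs(t, y):
    """y = [ln(V/v0), theta]. Differentiate tau = sigma*(f0 + a*ln(V/v0)
    + b*ln(v0*theta/dc)) + eta*V and equate to spring loading k*(vpl - V);
    the constant f0 drops out of the rate equations."""
    v = v0 * np.exp(y[0])
    theta_dot = 1.0 - v * y[1] / dc                                  # aging law
    v_dot = (k * (vpl - v) - sigma * b * theta_dot / y[1]) / (sigma * a / v + eta)
    return [v_dot / v, theta_dot]

y0 = [np.log(1.1 * vpl / v0), dc / vpl]   # near steady state, slightly perturbed
sol = solve_ivp(rhs, (0.0, 1e5), y0, method="LSODA", rtol=1e-8, atol=1e-10)
v = v0 * np.exp(sol.y[0])
n_events = int(np.sum((v[1:] > 1e-3) & (v[:-1] <= 1e-3)))   # entries into fast slip
print(f"peak slip rate {v.max():.2e} m/s; {n_events} stick-slip event(s) in {sol.t[-1]:.0f} s")
```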

  13. Triggering of tsunamigenic aftershocks from large strike-slip earthquakes: Analysis of the November 2000 New Ireland earthquake sequence

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2005-10-01

    The November 2000 New Ireland earthquake sequence started with an Mw = 8.0 left-lateral main shock on 16 November and was followed by a series of aftershocks with primarily thrust mechanisms. The earthquake sequence was associated with a locally damaging tsunami on the islands of New Ireland and nearby New Britain, Bougainville, and Buka. Results from numerical tsunami-propagation models of the main shock and two of the largest thrust aftershocks (Mw > 7.0) indicate that the largest tsunami was caused by an aftershock located near the southeastern termination of the main shock, off the southern tip of New Ireland (Aftershock 1). Numerical modeling and tide gauge records at regional and far-field distances indicate that the main shock also generated tsunami waves. Large horizontal displacements associated with the main shock in regions of steep bathymetry accentuated tsunami generation for this event. Most of the damage on Bougainville and Buka Islands was caused by focusing and amplification of tsunami energy from a ridge wave between the source region and these islands. Modeling of changes in the Coulomb failure stress field caused by the main shock indicates that Aftershock 1 was likely triggered by static stress changes, provided the fault was on or synthetic to the New Britain interplate thrust as specified by the Harvard CMT mechanism. For other possible focal mechanisms of Aftershock 1 and the regional occurrence of thrust aftershocks in general, evidence for static stress change triggering is not as clear. Other triggering mechanisms such as changes in dynamic stress may also have been important. The 2000 New Ireland earthquake sequence provides evidence that tsunamis caused by thrust aftershocks can be triggered by large strike-slip earthquakes. Similar tectonic regimes that include offshore accommodation structures near large strike-slip faults are found in southern California, the Sea of Marmara, Turkey, along the Queen Charlotte fault in British Columbia
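
    Static-stress triggering assessments of this kind evaluate the change in Coulomb failure stress resolved on the receiver fault, dCFS = dtau + mu' * dsigma_n, with dtau the shear stress change in the slip direction, dsigma_n the normal stress change (positive = unclamping), and mu' an effective friction coefficient. A toy calculation under that standard definition (the input stress changes are illustrative, not the study's modeled values):

```python
def coulomb_stress_change(d_shear_pa, d_normal_pa, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n (d_sigma_n > 0 means unclamping).
    dCFS > 0 moves the fault toward failure."""
    return d_shear_pa + mu_eff * d_normal_pa

# Illustrative inputs: 0.15 MPa of shear loading, 0.05 MPa of clamping
dcfs = coulomb_stress_change(0.15e6, -0.05e6)
print(f"dCFS = {dcfs/1e3:.0f} kPa -> {'promotes' if dcfs > 0 else 'inhibits'} failure")
```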

  14. Analysis of Luminescence Away from the Epicenter During a Large Earthquake: The Pisco, Peru Mw8 Earthquake

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Lira, J. A.

    2011-12-01

    The Mw 8.0 earthquake in Pisco, Peru of August 15, 2007, caused extensive damage, with a toll of 513 people dead, 2,291 wounded, 76,000 houses and buildings seriously damaged, and 431,000 people affected overall. Co-seismic luminescence was reported by thousands of people along the central coast of Peru and especially in Lima, 150 km from the epicenter, this being the first large nighttime earthquake in about 100 years in a highly populated area. Pictures and videos of the lights are available; however, those obtained so far had little information on the timing and direction of the reported lights. Two important videos are analyzed. The first, from a fixed security camera, is used to correlate the timing of the recorded lights with ground acceleration registered by a three-axis accelerometer 500 m away, and very good agreement has been observed. This evidence contains important color, shape and timing information which is shown to be highly correlated in time with the arrival of the seismic waves. Furthermore, the origin of the lights is the top of a hilly island about 6 km off the coast of Lima where, according to a written chronicle, lights were seen exactly 21 days before the mega earthquake of October 28, 1746. That event was the largest ever to happen in Peru and produced a tsunami that washed over the port of Callao and reached up to 5 km inland. The second video, from another security camera in a different location, has been further analyzed in order to determine more exactly the direction of the lights, and this new evidence will be presented. The fact that a notoriously large and well documented co-seismic luminous phenomenon was video recorded more than 150 km from the epicenter during a very large earthquake is emphasized, together with documented historical evidence of pre-seismic luminous activity on the same island during a mega earthquake of enormous proportions in Lima. Both previously mentioned videos

  15. Constraining depth range of S wave velocity decrease after large earthquakes near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Wu, Chunquan; Delorey, Andrew; Brenguier, Florent; Hadziioannou, Celine; Daub, Eric G.; Johnson, Paul

    2016-06-01

    We use noise correlation and surface wave inversion to measure the S wave velocity changes at different depths near Parkfield, California, after the 2003 San Simeon and 2004 Parkfield earthquakes. We process continuous seismic recordings from 13 stations to obtain the noise cross-correlation functions and measure the Rayleigh wave phase velocity changes over six frequency bands. We then invert the Rayleigh wave phase velocity changes using a series of sensitivity kernels to obtain the S wave velocity changes at different depths. Our results indicate that the S wave velocity decreases caused by the San Simeon earthquake are relatively small (~0.02%) and reach depths of at least 2.3 km. The S wave velocity decreases caused by the Parkfield earthquake are larger (~0.2%) and reach depths of at least 1.2 km. Our observations can be best explained by material damage and healing resulting mainly from the dynamic stress perturbations of the two large earthquakes.
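
    Velocity changes of this size are typically measured with the stretching technique: the time axis of a reference noise cross-correlation function is stretched by trial factors, and the factor maximizing the correlation with the current correlation function gives dv/v. A numpy sketch on synthetic coda waveforms (the sign convention and search grid are assumptions of this illustration):

```python
import numpy as np

def stretching_dvv(ref, cur, t, trials=np.linspace(-0.01, 0.01, 2001)):
    """Grid-search stretching: find dv/v such that cur(t) ~ ref(t*(1 + dv/v)).
    Returns the dv/v maximizing the correlation coefficient over the coda."""
    best = (-np.inf, 0.0)
    for dvv in trials:
        stretched = np.interp(t * (1.0 + dvv), t, ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best[0]:
            best = (cc, dvv)
    return best[1], best[0]

# Synthetic coda: decaying oscillation; impose a known dv/v of -0.2%
t = np.linspace(0.0, 50.0, 5000)                 # lapse time [s]
ref = np.exp(-t / 20.0) * np.sin(2.0 * np.pi * 1.0 * t)
true_dvv = -0.002                                # velocity decrease of 0.2%
cur = np.interp(t * (1.0 + true_dvv), t, ref)
dvv, cc = stretching_dvv(ref, cur, t)
print(f"estimated dv/v = {dvv*100:.2f}% (cc = {cc:.3f})")
```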

  16. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models.

  17. Irregular Recurrence of Large Earthquakes along the San Andreas Fault: Evidence from Trees

    NASA Astrophysics Data System (ADS)

    Jacoby, Gordon C.; Sheppard, Paul R.; Sieh, Kerry E.

    1988-07-01

    Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the ``San Juan Capistrano'' earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics.

  18. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models. PMID:23152938

  19. Constraining the rupture processes of the intermediate and large earthquakes using geophysical data

    NASA Astrophysics Data System (ADS)

    Ji, C.

    2009-12-01

    Detailed mapping of the spatiotemporal slip distributions of large earthquakes is one of the principal goals of seismology. Since the finite-fault inversion method was first introduced in studies of the 1979 Imperial Valley, California, earthquake, it has become the sharpest tool in earthquake seismology. Various developments in source representations, inverse methods, and objective functions have been made ever since to improve its resolution and robustness. Geophysical datasets other than seismic data, such as GPS, interferometric, and geological surface offsets, have also been included to extend the data coverage spatially. With recent developments in global broadband seismic instrumentation, it has become possible to routinely study earthquake slip histories in real time and proceed to predict the local damage. In this poster, I summarize our recent developments in the procedure of real-time finite fault inversions, in terms of data coverage, fault geometry, earth structures, and error analysis.

  20. Nonlinear ionospheric responses to large-amplitude infrasonic-acoustic waves generated by undersea earthquakes

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-02-01

    Numerical models of ionospheric coupling with the neutral atmosphere are used to investigate perturbations of plasma density, vertically integrated total electron content (TEC), neutral velocity, and neutral temperature associated with large-amplitude acoustic waves generated by the initial ocean surface displacements from strong undersea earthquakes. A simplified source model for the 2011 Tohoku earthquake is constructed from estimates of initial ocean surface responses to approximate the vertical motions over realistic spatial and temporal scales. Resulting TEC perturbations from modeling case studies appear consistent with observational data, reproducing pronounced TEC depletions which are shown to be a consequence of the impacts of nonlinear, dissipating acoustic waves. Thermospheric acoustic compressional velocities are ~±250-300 m/s, superposed with downward flows of similar amplitudes, and temperature perturbations are ~300 K, while the dominant wave periodicity in the thermosphere is ~3-4 min. Results capture acoustic wave processes including reflection, onset of resonance, and nonlinear steepening and dissipation, ultimately leading to the formation of ionospheric TEC depletions ("holes") that are consistent with reported observations. Three additional simulations illustrate the dependence of atmospheric acoustic waves and subsequent ionospheric responses on the surface displacement amplitude, which is varied from the Tohoku case study by factors of 1/100, 1/10, and 2. Collectively, results suggest that TEC depletions may only accompany very large amplitude thermospheric acoustic waves necessary to induce a nonlinear response, here with saturated compressional velocities of ~200-250 m/s generated by sea surface displacements exceeding ~1 m occurring over a 3 min time period.

  1. Large-scale dynamics of magnetic helicity

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; Dallas, Vassilios

    2016-11-01

    In this paper we investigate the dynamics of magnetic helicity in magnetohydrodynamic (MHD) turbulent flows, focusing on scales larger than the forcing scale. Our results show a nonlocal inverse cascade of magnetic helicity, which occurs directly from the forcing scale into the largest scales of the magnetic field. We also observe that no magnetic helicity and no energy is transferred to an intermediate range of scales sufficiently smaller than the container size and larger than the forcing scale. Thus, the statistical properties of this range of scales, which increases with scale separation, are shown to be described to a large extent by the zero-flux solutions of the absolute statistical equilibrium theory exhibited by the truncated ideal MHD equations.

  2. Seismic sequences, swarms, and large earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Amato, Alessandro; Piana Agostinetti, Nicola; Selvaggi, Giulio; Mele, Franco

    2016-04-01

    In recent years, particularly after the L'Aquila 2009 earthquake and the 2012 Emilia sequence, the issue of earthquake predictability has been at the center of discussion in Italy, not only within the scientific community but also in the courtrooms and in the media. Among the noxious effects of the L'Aquila trial was an increase in scaremongering and false alerts during earthquake sequences and swarms, culminating in a groundless one-night evacuation in northern Tuscany in 2013. We have analyzed the Italian seismicity of the last decades in order to determine the rate of seismic sequences and investigate some of their characteristics, including frequencies, min/max durations, maximum magnitudes, main shock timing, etc. Selecting only sequences with an equivalent magnitude of 3.5 or above, we find an average of 30 sequences/year. Although there is extreme variability in the examined parameters, we could set some boundaries, useful for obtaining quantitative estimates of the ongoing activity. In addition, the historical catalogue is rich in complex sequences in which one main shock is followed, seconds, days or months later, by another event of similar or higher magnitude. We also analysed the Italian CPTI11 catalogue (Rovida et al., 2011) between 1950 and 2006 to highlight the foreshock-mainshock event couples that were suggested in previous studies to exist (e.g., six couples, Marzocchi and Zhuang, 2011). Moreover, to investigate the probability of having random foreshock-mainshock couples over the investigated period, we produced 1000 synthetic catalogues, randomly distributing in time the events that occurred in that period. Preliminary results indicate that: (1) all but one of the so-called foreshock-mainshock pairs found in Marzocchi and Zhuang (2011) fall inside previously well-known and studied seismic sequences (Belice, Friuli and Umbria-Marche), meaning that the suggested foreshocks are also aftershocks; and (2) due to the high rate of the Italian
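
    The synthetic-catalogue test described above can be sketched as follows: keep the observed magnitudes, redistribute origin times uniformly over the study period, and ask how often chance alone yields as many foreshock-mainshock couples as observed. In this illustration the 30-day pairing window, the magnitude threshold, and the demo catalogue are all assumptions:

```python
import numpy as np

def count_couples(times, mags, m_main=5.5, window_days=30.0):
    """Count mainshocks (M >= m_main) preceded within window_days by a
    smaller event, a crude stand-in for foreshock-mainshock couples."""
    order = np.argsort(times)
    t, m = np.asarray(times)[order], np.asarray(mags)[order]
    n = 0
    for i in np.where(m >= m_main)[0]:
        prior = (t[i] - t[:i] <= window_days) & (m[:i] < m[i])
        n += bool(prior.any())
    return n

def shuffle_p_value(times, mags, t0, t1, n_sim=10_000, seed=0, **kw):
    """Fraction of time-shuffled catalogues with at least as many couples."""
    rng = np.random.default_rng(seed)
    obs = count_couples(times, mags, **kw)
    hits = sum(count_couples(rng.uniform(t0, t1, len(times)), mags, **kw) >= obs
               for _ in range(n_sim))
    return obs, hits / n_sim

# Fabricated demo catalogue over a 20,000-day period
rng = np.random.default_rng(3)
times = rng.uniform(0.0, 20000.0, 400)
mags = 4.0 + rng.exponential(0.4, 400)
obs, p = shuffle_p_value(times, mags, 0.0, 20000.0, n_sim=2000)
print(f"observed couples: {obs}, shuffled-catalogue p-value: {p:.3f}")
```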

  3. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 CAL yr. BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely pre-historic, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (earthquake not associated with a known fault) for normal faults in the Intermountain West.

  4. Radar Interferometric Applications for a Better Understanding of the Distribution, Controlling Factors, and Precursors of Large Earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Emil, M.; Sultan, M.; Fawzy, D. E.; Ahmed, M. E.; Chouinard, K.

    2012-12-01

    We are analyzing ERS-1, ERS-2, and ENVISAT data to measure the spatial and temporal variations in three tectonically active areas near Izmit, Duzce and Van provinces in Turkey. We are using ERS-1 and ERS-2 data sets, which provide a longer time period of coverage (1992 to 2001). In addition, we will extend this forward to the present with ENVISAT radar data. The proposed activities can potentially provide predictive tools that can identify precursors to earthquakes and hence develop procedures to identify areas at risk. We are using radar interferometric techniques capable of detecting deformation on the order of millimeters over relatively large areas. We are applying the persistent scatterer and the small baseline subset (SBAS) techniques. A five-fold exercise is being conducted: (1) extraction of land deformation rates and patterns from radar interferometry, (2) comparison and calibration of extracted rates against those from existing geodetic ground stations, (3) identification of the natural factors (e.g., displacement along one or more faults) that are largely responsible for the observed deformation patterns, (4) utilizing the extracted deformation rates and/or patterns to identify areas prone to earthquake development in the near future, and (5) utilizing the extracted deformation rates or patterns to identify the areal extent of the domains affected by the earthquakes and the magnitude of the deformation following the earthquakes. The conditions in Turkey are typical of many of the world's areas that are witnessing continent-to-continent collisions. To date, applications similar to those advocated here for the assessment of ongoing land deformation in such areas and for identifying and characterizing land deformation as potential precursors to earthquakes have not been fully explored. Thus, the broader impact of this work lies in a successful demonstration of the advocated procedures in the study area which will invite similar

  5. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this makes possible the concept of 'retrospective seismology', whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. With the benefit of hindsight on earthquake location and magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km [1]. Here we present results from earthquake seismology, comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that the spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to approximately those receivers that provide a stationary-phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer

  6. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V.; Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  7. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.
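
    Stuart's patch model is beyond a few lines, but the instability condition it relies on can be caricatured with a single strain-softening spring-slider. The sketch below illustrates that general mechanism only, not the paper's formulation, and every parameter value in it is invented:

```python
import numpy as np

# Caricature of the instability: an elastic system with stiffness k loads a
# patch whose strength falls linearly with slip (strain softening).
# Quasi-static equilibrium  k*(delta_load - u) = tau_peak - w*u  gives
# u = (k*delta_load - tau_peak) / (k - w) once failure begins, so no
# equilibrium exists when the softening slope w exceeds k: slip runs away,
# the earthquake analog.
k = 1.0e6         # loading stiffness, Pa/m (assumed)
tau_peak = 5.0e6  # peak patch strength, Pa (assumed)

for w in (0.5e6, 2.0e6):  # softening slope |d tau / d u|, Pa/m (assumed)
    if w < k:
        print(f"w = {w:.1e} Pa/m < k: stable sliding (slip grows smoothly)")
    else:
        print(f"w = {w:.1e} Pa/m >= k: unstable rupture (earthquake analog)")
```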

  8. Introduction and Overview: Counseling Psychologists' Roles, Training, and Research Contributions to Large-Scale Disasters

    ERIC Educational Resources Information Center

    Jacobs, Sue C.; Leach, Mark M.; Gerstein, Lawrence H.

    2011-01-01

    Counseling psychologists have responded to many disasters, including the Haiti earthquake, the 2001 terrorist attacks in the United States, and Hurricane Katrina. However, as a profession, their responses have been localized and nonsystematic. In this first of four articles in this contribution, "Counseling Psychology and Large-Scale Disasters,…

  9. Typical Scenario of Preparation, Implementation, and Aftershock Sequence of a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, Mikhail

    2016-04-01

    We have attempted to construct and examine a typical scenario of large earthquake occurrence. The Harvard GCMT seismic moment catalog was used to construct the large earthquake generalized space-time vicinity (LEGV) and to investigate the seismicity behavior within it. The LEGV was composed of earthquakes falling into the zone of influence of any of a considerable number (100, 300, or 1,000) of the largest earthquakes. The LEGV construction aims to enlarge the available statistics, diminish the strong random component, and thus reveal the typical features of pre- and post-shock seismic activity in more detail. As a result, the character of fore- and aftershock cascades could be examined in more detail than is possible without the LEGV approach. We also show that the mean earthquake magnitude tends to increase, while the b-values, mean mb/mw ratios, apparent stress values, and mean depth tend to decrease. The amplitudes of all these anomalies grow with the approach to the moment of the generalized large earthquake (GLE) as a logarithm of the time interval from GLE occurrence. Most of the discussed anomalies agree well with a common scenario of the development of instability. Besides such precursors of a general character, one earthquake-specific precursor was found: the decrease of mean earthquake depth during large earthquake preparation probably testifies to the involvement of deep fluids in the process. The typical features of the development of shear instability revealed in the LEGV agree well with results obtained in laboratory acoustic emission (AE) studies. The majority of the revealed anomalies appear to be of a secondary character, connected mainly with an increase in mean earthquake magnitude in the LEGV. The mean magnitude increase was shown to be connected mainly with a decrease in the portion of moderate-size events (Mw 5.0 - 5.5) in the closer GLE vicinity. We believe that this deficit of moderate-size events can hardly be

  10. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  11. W phase source inversion using high-rate regional GPS data for large earthquakes

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F.; Melgar, D.; Benavente, R.; Geng, J.; Barrientos, S.; Campos, J.

    2016-04-01

    W phase moment tensor inversion has proven to be a reliable method for rapid characterization of large earthquakes. For global purposes it is used at the United States Geological Survey, Pacific Tsunami Warning Center, and Institut de Physique du Globe de Strasbourg. These implementations provide moment tensors within 30-60 min after the origin time of moderate and large worldwide earthquakes. Currently, the method relies on broadband seismometers, which clip in the near field. To ameliorate this, we extend the algorithm to regional records from high-rate GPS data and retrospectively apply it to six large earthquakes that occurred in the past 5 years in areas with relatively dense station coverage. These events show that the solutions could potentially be available 4-5 min from origin time. Continuously improving GPS station availability and real-time positioning solutions will provide significant enhancements to the algorithm.

  12. The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine whether this law is valid for 0 < Ms ≤ 5.5 using earthquakes occurring in different regions. A comparison of the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to the conclusion that the law is still valid for earthquakes with 0 < Ms ≤ 5.5.
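
    The Gutenberg-Richter (1956) relation referred to here is log10(Es) = 11.8 + 1.5 Ms, with Es in ergs. A minimal helper makes the steepness of the law concrete (one magnitude unit is roughly a 32-fold jump in radiated energy):

```python
import numpy as np

def radiated_energy_ergs(Ms):
    """Gutenberg-Richter (1956) relation: log10(Es) = 11.8 + 1.5 * Ms (ergs)."""
    return 10.0 ** (11.8 + 1.5 * np.asarray(Ms))

# One unit of Ms corresponds to a factor of 10**1.5 ~ 31.6 in radiated energy.
for Ms in (3.0, 4.0, 5.5):
    Es = radiated_energy_ergs(Ms)
    print(f"Ms {Ms}: Es = {Es:.2e} erg ({Es * 1e-7:.2e} J)")
```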

  13. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  14. Large Subduction Earthquakes along the fossil MOHO in Alpine Corsica: what was the role of fluids?

    NASA Astrophysics Data System (ADS)

    Andersen, Torgeir B.; Deseta, Natalie; Silkoset, Petter; Austrheim, Håkon; Ashwal, Lewis D.

    2014-05-01

    Intermediate depth subduction earthquakes abruptly release vast amounts of energy to the crust and mantle lithosphere. The products of such drastic deformation events can only rarely be observed in the field because they are mostly permanently lost to the subduction. We present new observations of deformation products formed by large fossil subduction earthquakes in Alpine Corsica. These were formed by a few very large and numerous small intermediate-depth earthquakes along the exhumed palaeo-Moho in the Alpine Liguro-Piemontese basin, which together with the 'schistes-lustrés complex' experienced blueschist- to lawsonite-eclogite facies metamorphism during the Alpine subduction. The abrupt release of energy resulted in localized shear heating that completely melted both gabbro and peridotite along the Moho. The large volumes of melt generated by at most a few very large earthquakes along the Moho can be studied in the fault- and injection-vein breccia complex that is preserved in a segment along the Moho fault. The energy required for wholesale melting of a large volume of peridotite per m² of fault plane, combined with estimates of stress drops, shows that a few large earthquakes took place along the Moho of the subducting plate. Since these fault rocks represent intra-plate seismicity, we suggest they formed along the lower seismogenic zone, by analogy with present-day subduction. As demonstrated in previous work (detailed petrography and EBSD) by our research team, there is no evidence for prograde dehydration reactions leading up to the co-seismic slip events. Instead we show that local crystal-plastic deformation in olivine and shear heating were more significant for the run-away co-seismic failure than solid-state dehydration-reaction weakening. We therefore disregard dehydration embrittlement as a weakening mechanism for these events, and suggest that shear heating may be the most important weakening mechanism for intermediate depth earthquakes.

  15. Earthquakes

    EPA Pesticide Factsheets

    Information on this page will help you understand environmental dangers related to earthquakes and what you can do to prepare and recover. It will also help you recognize possible environmental hazards and learn what you can do to protect yourself and your family.

  16. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  17. Recurrent large earthquakes in a fault region: What can be inferred from small and intermediate events?

    NASA Astrophysics Data System (ADS)

    Zoeller, G.; Hainzl, S.; Holschneider, M.

    2008-12-01

    We present a renewal model for the recurrence of large earthquakes in a fault zone consisting of a major fault and surrounding smaller faults with Gutenberg-Richter type seismicity, represented by seismic moment release drawn from a truncated power-law distribution. The recurrence times of characteristic earthquakes for the major fault are explored. The fault is continuously loaded (plate motion) and undergoes positive and negative fluctuations due to adjacent smaller faults, with a large number Neq of such changes between two major earthquakes. Since the distribution has a finite variance, in the limit Neq → ∞ the central limit theorem implies that the recurrence times follow a Brownian passage-time (BPT) distribution. This makes it possible to calculate individual recurrence time distributions for specific fault zones without tuning free parameters: the mean recurrence time can be estimated from geological or paleoseismic data, and the standard deviation is determined from the frequency-size distribution, namely the Richter b value, of an earthquake catalog. The approach is demonstrated for the Parkfield segment of the San Andreas fault in California as well as for a long simulation of a numerical fault model. Assuming power-law distributed earthquake magnitudes up to the size of the recurrent Parkfield event (M=6), we find a coefficient of variation that is higher than the value obtained by a direct fit of the BPT distribution to seven large earthquakes. Finally we show that uncertainties in the earthquake magnitudes, e.g. from magnitude grouping, can cause a significant bias in the results. A method to correct for the bias as well as a Bayesian technique to account for evolving data are provided.
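
    The BPT distribution is the inverse Gaussian, so the recipe in this abstract can be sketched with scipy. The numbers below (a 25-yr mean recurrence and an aperiodicity of 0.5) are placeholders; the paper derives the aperiodicity from the catalog b-value rather than assuming it:

```python
from scipy.stats import invgauss

def bpt(mean, alpha):
    """Brownian passage-time (inverse Gaussian) recurrence distribution with
    mean recurrence `mean` and aperiodicity (coefficient of variation) `alpha`.
    scipy's invgauss(mu, scale) has mean mu*scale and variance mu**3*scale**2,
    so mu = alpha**2 and scale = mean/alpha**2 give (mean, alpha*mean)."""
    return invgauss(mu=alpha**2, scale=mean / alpha**2)

# Illustrative numbers only (Parkfield-like mean recurrence, assumed alpha).
dist = bpt(mean=25.0, alpha=0.5)
# Conditional probability of an event in the next 5 years given 20 quiet years:
t, dt = 20.0, 5.0
p = (dist.cdf(t + dt) - dist.cdf(t)) / dist.sf(t)
print(f"P(event in next {dt:g} yr | {t:g} yr elapsed) = {p:.2f}")
```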

  18. Recurrent large earthquakes in a fault region: What can be inferred from small and intermediate events?

    NASA Astrophysics Data System (ADS)

    Zöller, G.; Hainzl, S.; Holschneider, M.

    2009-04-01

    We present a renewal model for the recurrence of large earthquakes in a fault zone consisting of a major fault and surrounding smaller faults with Gutenberg-Richter type seismicity, represented by seismic moment release drawn from a truncated power-law distribution. The recurrence times of characteristic earthquakes for the major fault are explored. The fault is continuously loaded (plate motion) and undergoes positive and negative fluctuations due to adjacent smaller faults, with a large number Neq of such changes between two major earthquakes. Since the distribution has a finite variance, in the limit Neq → ∞ the central limit theorem implies that the recurrence times follow a Brownian passage-time (BPT) distribution. This makes it possible to calculate individual recurrence time distributions for specific fault zones without tuning free parameters: the mean recurrence time can be estimated from geological or paleoseismic data, and the standard deviation is determined from the frequency-size distribution, namely the Richter b value, of an earthquake catalog. The approach is demonstrated for the Parkfield segment of the San Andreas fault in California as well as for a long simulation of a numerical fault model. Assuming power-law distributed earthquake magnitudes up to the size of the recurrent Parkfield event (M = 6), we find a coefficient of variation that is higher than the value obtained by a direct fit of the BPT distribution to seven large earthquakes. Finally we show that uncertainties in the earthquake magnitudes, e.g. from magnitude grouping, can cause a significant bias in the results. A method to correct for the bias as well as a Bayesian technique to account for evolving data are provided.

  19. Unexpected geological impacts associated with large earthquakes and tsunamis in northern Honshu, Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Goff, J. R.

    2013-12-01

    Palaeoseismic research in areas adjacent to subduction zones has traditionally been concerned with identifying geological or geomorphological features associated with the immediate effects of past earthquakes, such as tsunamis, uplift or subsidence, with the aim of estimating earthquake magnitude and/or frequency. However, there are also other features in the landscape that can offer insights into the past earthquake and tsunami history of a region. The study of coastal dune systems as palaeoseismic indicators is still in its infancy, but can provide useful evidence of past large earthquakes and, by association, the tsunamis they generated. On a catchment-wide basis, past research has linked a sequence of environmental changes such as forest disturbance, landslides, river aggradation and rapid coastal dune building as geomorphological after-effects (in addition to tsunamis) of a large earthquake. In this model, large pulses of sediment created by co-seismic landsliding in the upper catchment are moved rapidly to the coast, where they leave a clear signature in the landscape. Coarser sediments form aggradation surfaces and finer sediments form a new coastal dune or beach ridge. Coastal dune ridge systems are not exclusively associated with seismically active areas, but where they do occur in such places their potential use as palaeoseismic indicators is often ignored. Data are presented first for the beach ridges of the Sendai Plain, where investigations have been carried out following the 2011 Tohoku-oki earthquake and tsunami. A wider regional picture of palaeoseismicity, palaeotsunamis and beach ridge formation is then discussed. Existing data indicate a strong correlation between past earthquakes and the timing of beach ridge formation over the past 5000 years; however, it seems likely that a far more detailed record is still preserved in Japan's beach ridges, and suggestions are offered on directions for future research in this area.

  20. Observations of large earthquakes in the Mexican subduction zone over 110 years

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, Vala; Krishna Singh, Shri; Martínez-Peláez, Liliana; Garza-Girón, Ricardo; Lund, Björn; Ji, Chen

    2016-04-01

    Fault slip during an earthquake is observed to be highly heterogeneous, with areas of large slip interspersed with areas of smaller or even no slip. The cause of the heterogeneity is debated. One hypothesis is that the frictional properties on the fault are heterogeneous. The parts of the rupture surface that have large slip during earthquakes are coupled more strongly, whereas the areas in between and around them creep continuously or episodically. The continuously or episodically creeping areas can partly release strain energy through aseismic slip during the interseismic period, resulting in relatively lower prestress than on the coupled areas. This would lead to subsequent earthquakes having large slip in the same place, or persistent asperities. A second hypothesis is that in the absence of creeping sections, the prestress is governed mainly by the accumulated stress change associated with previous earthquakes. Assuming homogeneous frictional properties on the fault, a larger prestress results in larger slip, i.e. the next earthquake may have large slip where there was little or no slip in the previous earthquake, which translates to non-persistent asperities. Studies of earthquake cycles are hampered by the short time period for which the high-quality broadband seismological and accelerographic records needed for detailed studies of slip distributions are available. The earthquake cycle in the Mexican subduction zone is relatively short, with about 30 years between large events in many places. We are therefore entering a period for which we have good records for two subsequent events occurring in the same segment of the subduction zone. In this study we compare seismograms recorded either on the Wiechert seismograph or on a modern broadband seismometer in Uppsala, Sweden, for subsequent earthquakes in the Mexican subduction zone rupturing the same patch. The Wiechert seismograph is unique in the sense that it recorded continuously for more than 80 years

  1. Large-scale cortical networks and cognition.

    PubMed

    Bressler, S L

    1995-03-01

    The well-known parcellation of the mammalian cerebral cortex into a large number of functionally distinct cytoarchitectonic areas presents a problem for understanding the complex cortical integrative functions that underlie cognition. How do cortical areas having unique individual functional properties cooperate to accomplish these complex operations? Do neurons distributed throughout the cerebral cortex act together in large-scale functional assemblages? This review examines the substantial body of evidence supporting the view that complex integrative functions are carried out by large-scale networks of cortical areas. Pathway tracing studies in non-human primates have revealed widely distributed networks of interconnected cortical areas, providing an anatomical substrate for large-scale parallel processing of information in the cerebral cortex. Functional coactivation of multiple cortical areas has been demonstrated by neurophysiological studies in non-human primates and several different cognitive functions have been shown to depend on multiple distributed areas by human neuropsychological studies. Electrophysiological studies on interareal synchronization have provided evidence that active neurons in different cortical areas may become not only coactive, but also functionally interdependent. The computational advantages of synchronization between cortical areas in large-scale networks have been elucidated by studies using artificial neural network models. Recent observations of time-varying multi-areal cortical synchronization suggest that the functional topology of a large-scale cortical network is dynamically reorganized during visuomotor behavior.

  2. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
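
    As a loose illustration of the principle at work (not the paper's device layout), the far-field array factor along one axis of a uniform array shows how a linear phase ramp alone steers the beam; the pitch, wavelength, and steering angle below are assumed values:

```python
import numpy as np

# Far-field array factor along one axis of an N-element uniform array: with
# element powers balanced, a linear phase ramp steers the beam. Numbers
# (half-wavelength pitch, 1550 nm, 5-degree steer) are illustrative only.
N, lam = 8, 1.55e-6
pitch = lam / 2                        # half-wavelength spacing avoids grating lobes
k = 2 * np.pi / lam
n = np.arange(N) * pitch               # element positions

theta_s = np.deg2rad(5.0)
phase = -k * np.sin(theta_s) * n       # per-element phase for the desired steer

theta = np.deg2rad(np.linspace(-30, 30, 1201))
af = np.abs(np.exp(1j * (k * np.sin(theta)[:, None] * n + phase)).sum(axis=1))
print(f"beam peaks at {np.degrees(theta[af.argmax()]):.2f} deg")  # ~5 deg
```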

  3. Reactivity of seismicity rate to static Coulomb stress changes of two consecutive large earthquakes in the central Philippines

    NASA Astrophysics Data System (ADS)

    Dianala, J. D. B.; Aurelio, M.; Rimando, J. M.; Taguibao, K.

    2015-12-01

    In a region with little understanding in terms of active faults and seismicity, two large-magnitude reverse-fault-related earthquakes occurred within 100 km of each other in separate islands of the central Philippines: the Mw=6.7 February 2012 Negros earthquake and the Mw=7.2 October 2013 Bohol earthquake. Based on source faults defined using onshore, offshore seismic reflection, and seismicity data, stress transfer models for both earthquakes were calculated using the software Coulomb. Coulomb stress triggering between the two main shocks is unlikely, as the stress change caused by the Negros earthquake on the Bohol fault was -0.03 bar. Correlating the stress changes on optimally oriented reverse faults with seismicity rate changes shows that areas with decreased static stress and seismicity rate after the first earthquake became areas of increased static stress and increased seismicity rate after the second earthquake. These areas with now increased stress, especially those where seismicity reacted to the static stress changes caused by the two earthquakes, indicate the presence of active structures on the island of Cebu. Comparing the history of instrumentally recorded seismicity with the recent large earthquakes of Negros and Bohol, these structures in Cebu have the potential to generate large earthquakes. Given that the Philippines' second-largest metropolitan area (Metro Cebu) is in close proximity, detailed analysis of the earthquake potential and seismic hazards in these areas should be undertaken.
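
    The static Coulomb stress change used in this kind of analysis is commonly written dCFS = d_tau + mu' * d_sigma_n. A minimal sketch with an assumed effective friction coefficient and made-up inputs (the -0.03 bar value above comes from the study's own fault models):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault:
    dCFS = d_shear + mu_eff * d_normal, with shear stress resolved in the
    slip direction and normal stress positive in extension (unclamping).
    mu_eff = 0.4 is a commonly assumed effective friction coefficient."""
    return d_shear + mu_eff * d_normal

# Made-up inputs in bars; a value of -0.03 bar, as quoted above, sits well
# below the ~0.1 bar change often taken as significant for static triggering.
print(f"dCFS = {coulomb_stress_change(d_shear=-0.05, d_normal=0.05):.3f} bar")
```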

  4. Large fault slip peaking at trench in the 2011 Tohoku-oki earthquake.

    PubMed

    Sun, Tianhaozhe; Wang, Kelin; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2017-01-11

    During the 2011 magnitude 9 Tohoku-oki earthquake, very large slip occurred on the shallowest part of the subduction megathrust. Quantitative information on the shallow slip is of critical importance to distinguishing between different rupture mechanics and understanding the generation of the ensuing devastating tsunami. However, the magnitude and distribution of the shallow slip are essentially unknown due primarily to the lack of near-trench constraints, as demonstrated by a compilation of 45 rupture models derived from a large range of data sets. To quantify the shallow slip, here we model high-resolution bathymetry differences before and after the earthquake across the trench axis. The slip is determined to be about 62 m over the most near-trench 40 km of the fault with a gentle increase towards the trench. This slip distribution indicates that dramatic net weakening or strengthening of the shallow fault did not occur during the Tohoku-oki earthquake.
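
    The profile-differencing idea can be sketched in one dimension: find the along-profile shift that best aligns pre- and post-event bathymetry. The real analysis works on 2D grids with vertical motions and noise handled carefully; the geometry below is entirely made up:

```python
import numpy as np

# Toy 1D version of the bathymetry-differencing idea: estimate horizontal
# displacement of the upper plate as the along-profile shift that best
# aligns pre- and post-earthquake seafloor depth profiles.
rng = np.random.default_rng(5)
x = np.arange(0.0, 40_000.0, 25.0)  # m along a trench-normal profile
seafloor = (-7000.0 + 1500.0 * np.exp(-((x - 15_000.0) / 4000.0) ** 2)
            + rng.normal(0.0, 1.0, x.size))   # fictitious topography + noise

true_slip = 50.0                                   # m of trenchward motion
post = np.interp(x, x + true_slip, seafloor)       # the shifted profile

shifts = np.arange(0.0, 100.0, 1.0)
misfit = [np.mean((np.interp(x, x + s, seafloor) - post) ** 2) for s in shifts]
print(f"best-fit horizontal shift: {shifts[np.argmin(misfit)]:.0f} m")
```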

  5. Large fault slip peaking at trench in the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Tianhaozhe; Wang, Kelin; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2017-01-01

    During the 2011 magnitude 9 Tohoku-oki earthquake, very large slip occurred on the shallowest part of the subduction megathrust. Quantitative information on the shallow slip is of critical importance to distinguishing between different rupture mechanics and understanding the generation of the ensuing devastating tsunami. However, the magnitude and distribution of the shallow slip are essentially unknown due primarily to the lack of near-trench constraints, as demonstrated by a compilation of 45 rupture models derived from a large range of data sets. To quantify the shallow slip, here we model high-resolution bathymetry differences before and after the earthquake across the trench axis. The slip is determined to be about 62 m over the most near-trench 40 km of the fault with a gentle increase towards the trench. This slip distribution indicates that dramatic net weakening or strengthening of the shallow fault did not occur during the Tohoku-oki earthquake.

  6. Precursory measure of interoccurrence time associated with large earthquakes in the Burridge-Knopoff model

    SciTech Connect

    Hasumi, Tomohiro

    2008-11-13

    We studied the statistical properties of interoccurrence times, i.e., time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the critical phase of a fault system in nature changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
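
    One common way to write a Zipf-Mandelbrot-type survivor function is S(t) = (b/(t+b))^c; the paper may normalize differently. A sketch of fitting it to synthetic interoccurrence times and reading the misfit as the proposed precursory measure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed Zipf-Mandelbrot-type survivor function, normalized so S(0) = 1.
def zm_survivor(t, b, c):
    return (b / (t + b)) ** c

# Synthetic interoccurrence times standing in for BK-model output
# (a Lomax/Pareto-II sample, whose survivor function has exactly this form).
rng = np.random.default_rng(0)
times = np.sort(rng.pareto(2.0, 2000) * 5.0)
surv = 1.0 - np.arange(len(times)) / len(times)   # empirical survivor function

(b, c), _ = curve_fit(zm_survivor, times, surv, p0=(1.0, 1.0))
rms = np.sqrt(np.mean((zm_survivor(times, b, c) - surv) ** 2))
print(f"b = {b:.2f}, c = {c:.2f}, rms misfit = {rms:.4f}")  # misfit as the measure
```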

  7. Large fault slip peaking at trench in the 2011 Tohoku-oki earthquake

    PubMed Central

    Sun, Tianhaozhe; Wang, Kelin; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2017-01-01

    During the 2011 magnitude 9 Tohoku-oki earthquake, very large slip occurred on the shallowest part of the subduction megathrust. Quantitative information on the shallow slip is of critical importance to distinguishing between different rupture mechanics and understanding the generation of the ensuing devastating tsunami. However, the magnitude and distribution of the shallow slip are essentially unknown due primarily to the lack of near-trench constraints, as demonstrated by a compilation of 45 rupture models derived from a large range of data sets. To quantify the shallow slip, here we model high-resolution bathymetry differences before and after the earthquake across the trench axis. The slip is determined to be about 62 m over the most near-trench 40 km of the fault with a gentle increase towards the trench. This slip distribution indicates that dramatic net weakening or strengthening of the shallow fault did not occur during the Tohoku-oki earthquake. PMID:28074829

  8. Does hydrologic circulation mask frictional heat on faults after large earthquakes?

    NASA Astrophysics Data System (ADS)

    Fulton, Patrick M.; Harris, Robert N.; Saffer, Demian M.; Brodsky, Emily E.

    2010-09-01

    Knowledge of frictional resistance along faults is important for understanding the mechanics of earthquakes and faulting. The clearest in situ measure of fault friction potentially comes from temperature measurements in boreholes crossing fault zones within a few years of rupture. However, large temperature signals from frictional heating on faults have not been observed. Unambiguously interpreting the coseismic frictional resistance from small thermal perturbations observed in borehole temperature profiles requires assessing the impact of other potentially confounding thermal processes. We address several issues associated with quantifying the temperature signal of frictional heating including transient fluid flow associated with the earthquake, thermal disturbance caused by borehole drilling, and heterogeneous thermal physical rock properties. Transient fluid flow is investigated using a two-dimensional coupled fluid flow and heat transport model to evaluate the temperature field following an earthquake. Simulations for a range of realistic permeability, frictional heating, and pore pressure scenarios show that high permeabilities (>10-14 m2) are necessary for significant advection within the several years after an earthquake and suggest that transient fluid flow is unlikely to mask frictional heat anomalies. We illustrate how disturbances from circulating fluids during drilling diffuse quickly leaving a robust signature of frictional heating. Finally, we discuss the utility of repeated borehole temperature profiles for discriminating between different interpretations of thermal perturbations. Our results suggest that temperature anomalies from even low friction should be detectable at depths >1 km 1 to 2 years after a large earthquake and that interpretations of low friction from existing data are likely robust.
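
    The scale of the frictional-heat signal being discussed can be checked with the conduction-only solution for an instantaneous planar heat source, dT(x,t) = (tau*slip/rho_c) * exp(-x^2/(4*kappa*t)) / sqrt(4*pi*kappa*t). All parameter values below are generic crustal assumptions, not the paper's inputs:

```python
import numpy as np

def frictional_heat_anomaly(x, t, tau=10e6, slip=5.0, kappa=1e-6, rho_c=2.7e6):
    """Conduction-only temperature anomaly (K) a distance x (m) from a fault,
    t seconds after rupture, treating frictional heat q = tau*slip (J/m^2)
    as an instantaneous planar source. tau in Pa, slip in m, kappa in m^2/s,
    volumetric heat capacity rho_c in J/(m^3 K); all values are assumptions."""
    q = tau * slip
    return (q / rho_c) / np.sqrt(4.0 * np.pi * kappa * t) * np.exp(
        -x**2 / (4.0 * kappa * t))

two_years = 2 * 3.15e7  # seconds
for x in (0.0, 10.0, 30.0):  # metres from the fault plane
    print(f"x = {x:4.0f} m: dT = {frictional_heat_anomaly(x, two_years):.3f} K")
```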

  9. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals. First, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters. And second, to begin try to understand galaxy clustering at smaller. nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark. continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales. and to apply the technology to surveys as the data become available. We believe that our methods are best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  10. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    USGS Publications Warehouse

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes that have occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly under-estimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes. Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
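
    The "kinked" magnitude-frequency point can be made concrete: matching the two b slopes quoted above at M 5.0 (with an assumed a-value) shows how badly a straight-line extrapolation from small magnitudes underestimates large-earthquake rates:

```python
import numpy as np

def cumulative_rate(M, a=3.0, b_small=1.0, b_large=0.6, M_kink=5.0):
    """Annual rate of events >= M for a kinked Gutenberg-Richter distribution.
    The b slopes follow the abstract (~1.0 below M 5.0, ~0.6 above); the
    a-value is an illustrative assumption. Rates are matched at the kink."""
    a_large = a - (b_small - b_large) * M_kink  # keeps log-rate continuous
    M = np.asarray(M, dtype=float)
    return np.where(M < M_kink, 10**(a - b_small * M), 10**(a_large - b_large * M))

M = 7.0
kinked = cumulative_rate(M)
extrapolated = 10 ** (3.0 - 1.0 * M)  # naive straight-line extrapolation
print(f"M>={M}: kinked {kinked:.2e}/yr vs extrapolated {extrapolated:.2e}/yr "
      f"({kinked / extrapolated:.1f}x higher)")
```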

  11. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  12. Understanding Local-Scale Fault Interaction Through Seismological Observation and Numerical Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Kroll, Kayla Ann

    A number of outstanding questions in earthquake physics revolve around understanding the relationships among local-scale stress changes, fault interactions (i.e., how stresses are transferred), and earthquake response to stress changes. Here, I employ seismological observations and numerical simulation tools to investigate how stress changes from a mainshock, or from fluid injection, can either aid or hinder further earthquake activity. Chapter 2.2 couples Coulomb stress change models with rate- and state-dependent friction to model the time-dependent evolution of complex aftershock activity following the 2010 El Mayor-Cucapah earthquake. Part III focuses on numerical simulations of earthquake sequences with the multi-cycle earthquake simulator RSQSim. I use RSQSim in two applications: 1) multi-cycle simulation of processes that control earthquake rupture along parallel, but discontinuous, offset faults (Chapter 3), and 2) investigation of relationships between injection of fluids into the subsurface and the characteristics of the resulting induced seismicity (Chapter 4). Results presented in Chapter 2.2 demonstrate that increases and decreases in seismicity rate are correlated with regions of positive and negative Coulomb stress change, respectively. We show that the stress shadow effect can be delayed in time when two faulting populations are active within the same region. In Chapter 3, we show that the pre-rupture stress distribution on faults governs the location of rupture re-nucleation on the receiver fault strand. Additionally, through analysis of long-term multi-cycle simulations, we find that ruptures can jump larger offsets more frequently when source and receiver fault ruptures are delayed in time. Results presented in Chapter 4 demonstrate that induced earthquake sequences are sensitive to the constitutive parameters, a and b, of the rate-state formulation. Finally, we find the rate of induced earthquakes decreases for increasing values of
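
    The Coulomb/rate-state coupling mentioned for Chapter 2.2 is usually built on the Dieterich (1994) seismicity-rate response to a stress step; a minimal sketch, with assumed values of a*sigma and the aftershock duration:

```python
import numpy as np

def dieterich_rate(t, dcfs, a_sigma=0.1, t_a=10.0):
    """Dieterich (1994) seismicity-rate response to a Coulomb stress step
    `dcfs` (same units as a_sigma, e.g. bars), normalized by the background
    rate: R/r = 1 / (1 + (exp(-dcfs/a_sigma) - 1) * exp(-t/t_a)).
    a_sigma and the aftershock duration t_a (years) are assumed values."""
    return 1.0 / (1.0 + (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a))

t = np.array([0.01, 0.1, 1.0, 10.0])  # years after the stress step
print(dieterich_rate(t, dcfs=+0.3))   # stress increase: transient rate gain
print(dieterich_rate(t, dcfs=-0.3))   # stress shadow: suppression, slow recovery
```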

  13. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
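
    A crude sketch of the Tdp idea: band-pass two records, take absolute displacements, and flag the time at which one departs from the other's "similar growth" by more than a log-amplitude tolerance. The filter band, tolerance, and synthetic traces below are placeholders, not the paper's processing:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def departure_time(trace, reference, dt, tol=0.3):
    # First time the log10 amplitude ratio of two (already filtered) records
    # exceeds `tol`; returns None if they never depart.
    ratio = np.log10((np.abs(trace) + 1e-12) / (np.abs(reference) + 1e-12))
    idx = np.argmax(ratio > tol)
    return idx * dt if ratio[idx] > tol else None

fs, dt = 100.0, 0.01
t = np.arange(dt, 8.0, dt)
small = t**2                             # stand-in for a small event's P growth
large = t**2 * (1.0 + 9.0 * (t > 2.0))   # departs from similar growth after 2 s
sos = butter(4, (0.5, 10.0), btype="bandpass", fs=fs, output="sos")
print(departure_time(sosfilt(sos, large), sosfilt(sos, small), dt))  # ~2 s
```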

  14. Seismic Safety Margins Research Program. Regional relationships among earthquake magnitude scales

    SciTech Connect

    Chung, D. H.; Bernreuter, D. L.

    1980-05-01

    The seismic body-wave magnitude mb of an earthquake is strongly affected by regional variations in the Q structure, composition, and physical state within the earth. Therefore, because of differences in attenuation of P-waves between the western and eastern United States, a problem arises when comparing mb's for the two regions. A regional mb magnitude bias exists which, depending on where the earthquake occurs and where the P-waves are recorded, can lead to magnitude errors as large as one-third unit. There is also a significant difference between mb and ML values for earthquakes in the western United States. An empirical link between the mb of an eastern US earthquake and the ML of an equivalent western earthquake is given by ML = 0.57 + 0.92 (mb)East. This result is important when comparing ground motion between the two regions and for choosing a set of real western US earthquake records to represent eastern earthquakes. 48 refs., 5 figs., 2 tabs.
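
    The report's empirical link is simple enough to encode directly; the helper below is just that one formula:

```python
def ml_equivalent_west(mb_east):
    """Empirical link from the report: ML(West) = 0.57 + 0.92 * mb(East)."""
    return 0.57 + 0.92 * mb_east

# Example conversions for a few eastern-US body-wave magnitudes.
for mb in (4.5, 5.5, 6.5):
    print(f"eastern mb {mb} ~ western ML {ml_equivalent_west(mb):.2f}")
```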

  15. Geological observations on large earthquakes along the Himalayan frontal fault near Kathmandu, Nepal

    NASA Astrophysics Data System (ADS)

    Wesnousky, Steven G.; Kumahara, Yasuhiro; Chamlagain, Deepak; Pierce, Ian K.; Karki, Alina; Gautam, Dipendra

    2017-01-01

    The 2015 Gorkha earthquake produced displacement on the lower half of a shallow decollement that extends 100 km south, and upward from beneath the High Himalaya and Kathmandu to where it breaks the surface to form the trace of the Himalayan Frontal Thrust (HFT), leaving unruptured the shallowest ∼50 km of the decollement. To address the potential of future earthquakes along this section of the HFT, we examine structural, stratigraphic, and radiocarbon relationships in exposures created by excavation of trenches across the HFT where it has produced scarps in young alluvium at the mouths of major rivers at Tribeni and Bagmati. The Bagmati site is located south of Kathmandu and directly up dip from the Gorkha rupture, whereas the Tribeni site is located ∼200 km to the west and outside the up dip projection of the Gorkha earthquake rupture plane. The most recent rupture at Tribeni occurred 1221-1262 AD to produce a scarp of ∼7 m vertical separation. Vertical separation across the scarp at Bagmati registers ∼10 m, possibly greater, and formed between 1031-1321 AD. The temporal constraints and large displacements allow the interpretation that the two sites separated by ∼200 km each ruptured simultaneously, possibly during 1255 AD, the year of a historically reported earthquake that produced damage in Kathmandu. In light of geodetic data that show ∼20 mm/yr of crustal shortening is occurring across the Himalayan front, the sum of observations is interpreted to suggest that the HFT extending from Tribeni to Bagmati may rupture simultaneously, that the next great earthquake near Kathmandu may rupture an area significantly greater than the section of HFT up dip from the Gorkha earthquake, and that it is prudent to consider that the HFT near Kathmandu is well along in a strain accumulation cycle prior to a great thrust earthquake, most likely much greater than the one that occurred in 2015.

  16. Complex Nucleation Process of Large North Chile Earthquakes, Implications for Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Meneses, G.; Sobiesiak, M.; Madariaga, R. I.

    2014-12-01

    We studied the nucleation process of Northern Chile events, including the large earthquakes of Tocopilla 2007 Mw 7.8 and Iquique 2014 Mw 8.1, as well as the background seismicity recorded from 2011 to 2013 by the ILN temporary network and the IPOC and CSN permanent networks. We built our catalogue of 393 events starting from the CSN catalogue, which has a completeness magnitude of Mw > 3.0 in Northern Chile. We re-located and computed moment magnitude for each event. We also computed Early Warning (EW) parameters - Pd, Pv, τc and IV2 - for each event, including 13 earthquakes of Mw > 6.0 that occurred between 2007 and 2012. We also included part of the seismicity from the March-April 2014 period. We find that Pd, Pv and IV2 are good estimators of magnitude for interplate thrust and intraplate intermediate-depth events with Mw between 4.0 and 6.0. However, the larger magnitude events show a saturation of the EW parameters. The Tocopilla 2007 and Iquique 2014 earthquake sequences were studied in detail. Almost all events with Mw > 6.0 present precursory signals, with the largest amplitudes occurring several seconds after the first P wave arrival. The recent Mw 8.1 Iquique 2014 earthquake was preceded by low amplitude P waves for 20 s before the main asperity was broken. The magnitude estimation can improve if we consider longer P wave windows in the estimation of EW parameters. There was, however, a practical limit during the Iquique earthquake, because the first S waves arrived before the P waves from the main rupture. The 4 s P-wave Pd parameter estimated Mw 7.1 for the Mw 8.1 Iquique 2014 earthquake and Mw 7.5 for the Mw 7.8 Tocopilla 2007 earthquake.
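
    Pd and τc have standard definitions in the early-warning literature (e.g., Kanamori, 2005): Pd is the peak absolute P-wave displacement in the first few seconds, and τc = 2π·sqrt(∫u²dt / ∫u̇²dt) over the same window. A sketch on a synthetic onset, using the 4-s window quoted above:

```python
import numpy as np

def pd_and_tau_c(u, v, dt, window_s=4.0):
    """Pd and tau_c from the first `window_s` seconds after the P arrival,
    following the standard definitions (e.g., Kanamori, 2005). `u` is the
    displacement trace (m) and `v` the velocity trace (m/s)."""
    n = int(window_s / dt)
    u, v = u[:n], v[:n]
    Pd = np.max(np.abs(u))                                    # peak displacement
    tau_c = 2 * np.pi * np.sqrt(np.sum(u**2) / np.sum(v**2))  # dominant-period proxy
    return Pd, tau_c

# Synthetic stand-in trace (not IPOC/CSN data): a growing 1 Hz P onset.
dt = 0.01
t = np.arange(0.0, 4.0, dt)
u = 1e-4 * t * np.sin(2 * np.pi * 1.0 * t)  # displacement, m
v = np.gradient(u, dt)                      # velocity, m/s
Pd, tau_c = pd_and_tau_c(u, v, dt)
print(f"Pd = {Pd:.2e} m, tau_c = {tau_c:.2f} s")  # tau_c near the 1-s period
```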

  17. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep postseismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps, because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study, because a similarly shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important for obtaining accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.

  18. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations from historical large subduction events worldwide translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki (Japan) earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground
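
    EXSIM itself is published finite-fault code, so the sketch below only illustrates the underlying stochastic method (a Boore-style point source): shape windowed Gaussian noise to an omega-squared spectrum with path and site attenuation. All parameter values are generic assumptions, and amplitudes are left in relative units:

```python
import numpy as np

# Minimal point-source stochastic simulation in the spirit of Boore (2003),
# the method family EXSIM extends to finite faults. Moment, stress drop, Q,
# kappa, and distance below are assumed, generic values.
rng = np.random.default_rng(1)
dt, n = 0.02, 8192
f = np.fft.rfftfreq(n, dt)                      # Hz

Mw, beta_kms, stress_bars = 8.0, 3.5, 50.0
M0_dyncm = 10 ** (1.5 * Mw + 16.1)
fc = 4.9e6 * beta_kms * (stress_bars / M0_dyncm) ** (1.0 / 3.0)  # Brune corner

R, Q, kappa = 60.0, 300.0, 0.04                 # km, quality factor, site kappa (s)
source = (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)   # omega-squared acceleration
path = np.exp(-np.pi * f * R / (Q * beta_kms)) / R      # anelastic loss + spreading
site = np.exp(-np.pi * kappa * f)                       # near-surface diminution
target = source * path * site

# Shortcut: impose the target amplitude spectrum on windowed-noise phases.
noise = rng.standard_normal(n) * np.hanning(n)
spec = np.fft.rfft(noise)
spec *= target / (np.abs(spec) + 1e-20)
acc = np.fft.irfft(spec, n)
print(f"fc = {fc:.3f} Hz; peak amplitude (relative units) = {np.abs(acc).max():.3g}")
```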

  19. Systematic detection of seismic activity before recent large earthquakes in China

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Wang, B.; Ruan, X.; Meng, X.; Hongwei, T.; Long, F.; Su, J.

    2014-12-01

    Sometimes large shallow earthquakes are preceded by increased local seismic activity, known as "foreshocks". However, the exact relationship between foreshocks and mainshock nucleation is still in debate. Several studies have found accelerating or migrating foreshock activity right before recent large earthquakes along major plate boundary faults, indicating that foreshocks are likely driven by slow-slip events. However, it is still not clear whether similar features could be observed for earthquakes that occur away from plate-boundary regions. Here we conduct a systematic detection of possible foreshock activity around the times of 6 recent large earthquakes in China. The candidate events include the 2008 Ms7.3 Yutian, the 2008 Ms8.0 Wenchuan, the 2010 Ms7.0 Yushu, the 2013 Ms7.0 Lushan, the 2014 Ms7.3 Yutian, and the 2014 Ms6.5 Zhaotong earthquakes. Among them, the 2010 Yushu and 2014 Yutian mainshocks had clear evidence of M4-5 immediate foreshocks listed in regional earthquake catalogs, while the rest did not. In each case, we use waveforms of local earthquakes listed in the catalog as templates and scan through continuous waveforms recorded by both permanent and temporary seismic stations around the epicentral region of each mainshock. Our waveform matching method can detect at least a few times more events than listed in the catalog. Our initial results show a wide range of behaviors. For the 2010 Yushu and 2014 Yutian cases, the M4-5 foreshocks were followed by many smaller-size events that could be considered as their aftershocks. For the Wenchuan case, we did not observe any obvious foreshock in the immediate vicinity of the epicenter. However, we found one swarm sequence that shows systematic migration a few months before the Wenchuan mainshock. Our next step is to relocate these newly detected events to search for spatio-temporal evolutions before each mainshock, as well as performing Epidemic Type Aftershock Sequence (ETAS) modeling to examine
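
    The waveform-matching step described here is, in essence, a matched filter: slide a template event along the continuous record and declare detections where the normalized cross-correlation exceeds some multiple of its median absolute deviation. A self-contained sketch on synthetic data (the threshold choice varies by study):

```python
import numpy as np

def matched_filter(data, template, threshold_mads=8.0):
    """Sliding normalized cross-correlation of a template against continuous
    data; detections are samples where the CC exceeds `threshold_mads` times
    the median absolute deviation of the CC trace (a common, but not
    universal, threshold convention in matched-filter detection)."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - m + 1)
    for i in range(len(cc)):
        w = data[i:i + m]
        s = w.std()
        cc[i] = np.dot(t, w - w.mean()) / (m * s) if s > 0 else 0.0
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(cc > threshold_mads * mad), cc

rng = np.random.default_rng(3)
tpl = np.sin(2 * np.pi * 5.0 * np.arange(0, 1, 0.01)) * np.hanning(100)
data = rng.standard_normal(5000) * 0.3
data[1200:1300] += tpl          # two repeats of the template buried in noise
data[3700:3800] += 0.8 * tpl
picks, _ = matched_filter(data, tpl)
print(picks[:5])                # sample indices near 1200 and 3700
```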

  20. Ground motions characterized by a multi-scale heterogeneous earthquake model

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ide, Satoshi

    2014-12-01

    We have carried out numerical simulations of seismic ground motion radiating from a mega-earthquake whose rupture process is governed by a multi-scale heterogeneous distribution of fracture energy. The observed complexity of the Mw 9.0 2011 Tohoku-Oki earthquake can be explained by such heterogeneities with fractal patches (size and number), even without introducing any heterogeneity in the stress state. In our model, scale dependency in fracture energy (i.e., the slip-weakening distance Dc) on patch size is essential. Our results indicate that wave radiation is generally governed by the largest patch at each moment and that the contribution from small patches is minor. We then conducted parametric studies on the frictional parameters of peak (τp) and residual (τr) friction to produce the case where the effect of the small patches is evident during the progress of the main rupture. We found that heterogeneity in τr has a greater influence on the ground motions than does heterogeneity in τp. As such, local heterogeneity in the static stress drop (Δτ) influences the rupture process more than that in the stress excess (Δτ_excess). The effect of small patches is particularly evident when these are almost geometrically isolated and not simultaneously involved in the rupture of larger patches. In other cases, the wave radiation from small patches is probably hidden by the major contributions from large patches. Small patches may play a role in strong motion generation areas with low τr (high Δτ), particularly during slow average rupture propagation. This effect can be identified from the differences in the spatial distributions of peak ground velocities for different frequency ranges.
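
    A toy sketch of the model's central ingredient: a fractal patch population in which the number of patches scales inversely with the square of their size, and the slip-weakening distance Dc grows in proportion to patch size. The size range, exponent and proportionality constant below are placeholders, not values from the study:

        import numpy as np

        rng = np.random.default_rng(2)

        # sample patch radii from N(>r) ~ r^-2 on [r_min, r_max] by inverse-CDF
        r_min, r_max, n_patch = 1e3, 1e5, 200          # metres (placeholders)
        u = rng.random(n_patch)
        radii = r_min / np.sqrt(1.0 - u * (1.0 - (r_min / r_max) ** 2))

        # scale-dependent friction: Dc (hence fracture energy) grows with
        # patch size, so radiation is dominated by the largest active patch
        dc = 1.0e-6 * radii                            # slip-weakening distance (m)
        print(radii.max(), dc.max())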

  1. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations on the heights and extension of past tsunamis and damage in coastal zones. We performed the modelling of wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide-gauge records. We observe that the characteristics of seismic sources control the size and directivity of tsunami wave propagation on both northern and southern coasts of the western Mediterranean.

  2. Unusually large earthquakes inferred from tsunami deposits along the Kuril trench

    USGS Publications Warehouse

    Nanayama, F.; Satake, K.; Furukawa, R.; Shimokawa, K.; Atwater, B.F.; Shigeno, K.; Yamaki, S.

    2003-01-01

    The Pacific plate converges with northeastern Eurasia at a rate of 8-9 m per century along the Kamchatka, Kuril and Japan trenches. Along the southern Kuril trench, which faces the Japanese island of Hokkaido, this fast subduction has recurrently generated earthquakes with magnitudes of up to ~8 over the past two centuries. These historical events, on rupture segments 100-200 km long, have been considered characteristic of Hokkaido's plate-boundary earthquakes. But here we use deposits of prehistoric tsunamis to infer the infrequent occurrence of larger earthquakes generated from longer ruptures. Many of these tsunami deposits form sheets of sand that extend kilometres inland from the deposits of historical tsunamis. Stratigraphic series of extensive sand sheets, intercalated with dated volcanic-ash layers, show that such unusually large tsunamis occurred about every 500 years on average over the past 2,000-7,000 years, most recently ~350 years ago. Numerical simulations of these tsunamis are best explained by earthquakes that individually rupture multiple segments along the southern Kuril trench. We infer that such multi-segment earthquakes persistently recur among a larger number of single-segment events.

  3. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  4. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency was managed during a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  5. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  6. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  7. Relationships Within the Precursors Before Large Earthquakes: Theory, Observations and Data Chosen

    NASA Astrophysics Data System (ADS)

    Mingyu, J.

    2015-12-01

    The Non-Critical Precursory Accelerating Seismicity Theory has been partly tested (using the RTL method) over the same spatiotemporal window of both accelerating seismicity and quiescence, which may relate two of the principal patterns that can precede large earthquakes. The Accelerating Moment/Strain Release (AMR/ASR) model can be seen as the loading process between the reduced seismicity that follows the last large earthquakes around the epicenter and the next rupture; the AMR model therefore depends on the occurrence of the quiescence. Here, we develop an approach based on the concept of stress accumulation to unify and categorize all claimed seismic precursors in the same physical framework, including quiescence, the AMR/ASR model and short-term activation. It shows that different precursory paths are possible before large earthquakes and can be described by a logic tree with combined criteria at a given stress state and of precursory silent slip on the fault and within the fault system. Theoretical results are then compared to time series observed in the Italian earthquake catalog from 1980 to 2012. In an initial result, for the 2009 Mw = 6.3 L'Aquila earthquake, Italy, the observed precursory path is a coupling of quiescence and accelerating seismic release, followed by activation. Moreover, the comparison between ETAS and the Stress Accumulation Model shows that precursors are statistically significant when microseismicity is considered, which holds important information on the stress loading state of the crust surrounding active faults. Based on the corrected Akaike Information Criterion (AICc), we found that quiescence and ASR signals are significant when events below a magnitude of 2.2 are included and that short-term activation is significant when events below 3.3 are included. These results provide guidelines for future research on earthquake regional risk assessment and predictability.
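
    The accelerating moment/strain release analysis referenced above conventionally fits a cumulative Benioff-strain time series with a power law of time to failure. A minimal sketch of such a fit on synthetic data, assuming the standard functional form A - B(tf - t)^m with 0 < m < 1; all numbers are placeholders:

        import numpy as np
        from scipy.optimize import curve_fit

        def amr(t, A, B, tf, m):
            """Cumulative Benioff strain: A - B * (tf - t)**m."""
            return A - B * (tf - t) ** m

        t = np.linspace(0.0, 9.5, 60)                  # years (synthetic)
        rng = np.random.default_rng(3)
        obs = amr(t, 10.0, 3.0, 10.0, 0.3) + 0.05 * rng.standard_normal(t.size)

        # keep tf > max(t) during the fit so (tf - t)**m stays real
        popt, _ = curve_fit(amr, t, obs, p0=(10.0, 3.0, 10.5, 0.3),
                            bounds=([0.0, 0.0, t.max() + 1e-3, 0.01],
                                    [100.0, 100.0, 20.0, 1.0]))
        print("estimated failure time tf =", popt[2])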

  8. Evidence for earthquake triggering of large landslides in coastal Oregon, USA

    USGS Publications Warehouse

    Schulz, W.H.; Galloway, S.L.; Higgins, J.D.

    2012-01-01

    Landslides are ubiquitous along the Oregon coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement rates, and total movement suggest that many are at least several hundreds of years old. The offshore Cascadia subduction zone produces great earthquakes every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or earthquake-induced ground motion using field observations and modeling of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement rates suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia earthquake was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support earthquake ground motion as their triggering mechanism. Limit-equilibrium slope-stability modeling suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. Modeling suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone earthquake would trigger formation of the rockslides. Displacement modeling following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff
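
    The displacement modeling "following the Newmark approach" mentioned above treats the landslide as a rigid block that slides whenever ground acceleration exceeds a critical (yield) acceleration. A one-way (downslope-only) sketch, with an invented input record and an assumed critical acceleration:

        import numpy as np

        def newmark_displacement(acc, dt, a_crit):
            """Integrate permanent downslope displacement of a rigid block:
            sliding starts when ground acceleration exceeds a_crit and
            continues until the relative velocity returns to zero."""
            v = d = 0.0
            for a in acc:
                if a > a_crit or v > 0.0:
                    v = max(v + (a - a_crit) * dt, 0.0)   # no upslope sliding
                    d += v * dt
            return d

        # toy usage: a strong-motion burst in an otherwise weak record (m/s^2)
        rng = np.random.default_rng(4)
        acc = 0.5 * rng.standard_normal(4000)
        acc[1000:1020] += 3.0
        print(newmark_displacement(acc, 0.01, a_crit=1.0), "m")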

  9. Magnitudes and Moment-Duration Scaling of Low-Frequency Earthquakes Beneath Southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A.; Rubin, A. M.; Savard, G.; Chuang, L. Y.

    2015-12-01

    We employ 130 low-frequency-earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 300,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P- and S-waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatio-temporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatio-temporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 hours of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs, and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b-values ≥ 6. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges MW < 1.5 and MW ≥ 2.0. LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in dimension and that moment variation is dominated by slip. This behaviour implies
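
    Since the abstract compares power-law (Gutenberg-Richter) and exponential magnitude-frequency fits and quotes b-values of 6 or more, the standard Aki maximum-likelihood b-value estimator is the natural reference point. A minimal sketch on a synthetic catalog; the completeness magnitude and (zero) bin width are assumptions:

        import numpy as np

        def b_value(mags, m_c, dm=0.0):
            """Aki (1965) maximum-likelihood b-value above completeness m_c;
            set dm to the catalog bin width for Utsu's half-bin correction.
            Returns (b, 1-sigma error)."""
            m = np.asarray(mags, dtype=float)
            m = m[m >= m_c]
            b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
            return b, b / np.sqrt(m.size)

        # synthetic continuous-magnitude catalog with a true b of ~6
        rng = np.random.default_rng(5)
        mags = rng.exponential(np.log10(np.e) / 6.0, 5000) + 1.0
        print(b_value(mags, m_c=1.0))   # approximately (6.0, 0.08)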

  10. “PLAFKER RULE OF THUMB” RELOADED: EXPERIMENTAL INSIGHTS INTO THE SCALING AND VARIABILITY OF LOCAL TSUNAMIS TRIGGERED BY GREAT SUBDUCTION MEGATHRUST EARTHQUAKES

    NASA Astrophysics Data System (ADS)

    Rosenau, M.; Nerlich, R.; Brune, S.; Oncken, O.

    2009-12-01

    along accretionary margins. Three out of the top-five tsunami hotspots we identify had giant earthquakes in the last decades (Chile 1960, Alaska 1964, Sumatra-Andaman 2004) and one (Sumatra-Mentawai) started in 2005 releasing strain in a possibly moderate mode of sequential large earthquakes. This leaves Cascadia as the major active tsunami hotspot in the focus of tsunami hazard assessment. [Figure: visualization of preliminary versions of the experimentally derived scaling laws for peak nearshore tsunami height (PNTH) as functions of forearc slope and peak earthquake slip (left panel) and moment magnitude (right panel); wave breaking is not yet considered, which renders the extreme peaks of > 20 m unrealistic.]

  11. Long-period ocean-bottom motions in the source areas of large subduction earthquakes

    PubMed Central

    Nakamura, Takeshi; Takenaka, Hiroshi; Okamoto, Taro; Ohori, Michihiro; Tsuboi, Seiji

    2015-01-01

    Long-period ground motions in plain and basin areas on land can cause large-scale, severe damage to structures and buildings and have been widely investigated for disaster prevention and mitigation. However, such motions in ocean-bottom areas are poorly studied because of their relative insignificance in uninhabited areas and the lack of ocean-bottom strong-motion data. Here, we report on evidence for the development of long-period (10–20 s) motions using deep ocean-bottom data. The waveforms and spectrograms demonstrate prolonged and amplified motions that are inconsistent with attenuation patterns of ground motions on land. Simulated waveforms reproducing observed ocean-bottom data demonstrate substantial contributions of thick low-velocity sediment layers to development of these motions. This development, which could affect magnitude estimates and finite fault slip modelling because of its critical period ranges on their estimations, may be common in the source areas of subduction earthquakes where thick, low-velocity sediment layers are present. PMID:26617193

  14. Anomalous pre-seismic transmission of VHF-band radio waves resulting from large earthquakes, and its statistical relationship to magnitude of impending earthquakes

    NASA Astrophysics Data System (ADS)

    Moriya, T.; Mogi, T.; Takada, M.

    2010-02-01

    To confirm the relationship between anomalous transmission of VHF-band radio waves and impending earthquakes, we designed a new data-collection system and have documented anomalous VHF-band radio-wave propagation beyond the line of sight prior to earthquakes since 2002 December in Hokkaido, northern Japan. Anomalous VHF-band radio waves were recorded before two large earthquakes, the Tokachi-oki earthquake (Mj = 8.0, Mj: magnitude defined by the Japan Meteorological Agency) on 2003 September 26 and the southern Rumoi sub-prefecture earthquake (Mj = 6.1) on 2004 December 14. Radio waves transmitted from a given FM radio station are considered to be scattered, such that they could be received by an observation station beyond the line of sight. A linear relationship was established between the logarithm of the total duration of anomalous transmissions (Te) and the magnitude (M) or maximum seismic intensity (I) of the impending earthquake, for M4-M5 class earthquakes that occurred at depths of 48-54 km beneath the Hidaka Mountains in Hokkaido in 2004 June and 2005 August. Similar linear relationships are also valid for earthquakes that occurred at different depths: the relationship shifts to longer Te for shallower earthquakes and to shorter Te for deeper ones. Numerous parameters seem to affect Te, including hypocenter depth and the surface conditions of the epicentral area (i.e., sea or land). This relationship is important because it means that pre-seismic anomalous transmission of VHF-band waves may be useful in predicting the size of an impending earthquake.
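
    The reported relationship is a log-linear regression between the total anomaly duration Te and magnitude, which can then be inverted to predict the size of an impending event from an observed Te. A sketch with invented (M, Te) pairs purely for illustration; the actual coefficients depend on hypocenter depth and epicentral surface conditions, as the abstract notes:

        import numpy as np

        # hypothetical (M, Te) pairs; Te in minutes (illustrative only)
        M = np.array([4.1, 4.4, 4.8, 5.0, 5.3])
        Te = np.array([35.0, 60.0, 130.0, 180.0, 400.0])

        slope, intercept = np.polyfit(M, np.log10(Te), 1)   # log10(Te) = a*M + c

        # invert the fit: observed anomaly duration -> predicted magnitude
        def predict_magnitude(te_minutes):
            return (np.log10(te_minutes) - intercept) / slope

        print(round(predict_magnitude(250.0), 1))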

  15. Neutrino footprint in large scale structure

    NASA Astrophysics Data System (ADS)

    Garay, Carlos Peña; Verde, Licia; Jimenez, Raul

    2017-03-01

    Recent constraints on the sum of neutrino masses inferred from cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. However, a detected lack of small-scale power in cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always leave a footprint on large, linear scales; its exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino-oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or by modifications of the cosmological model. Therefore the measurement of such a feature, a relative change of up to 1% in the power spectrum for extreme ratios of the mass eigenstates, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.

  16. Discrete Scaling in Earthquake Precursory Phenomena: Evidence in the Kobe Earthquake, Japan

    NASA Astrophysics Data System (ADS)

    Johansen, Anders; Sornette, Didier; Wakita, Hiroshi; Tsunogai, Urumu; Newman, William I.; Saleur, Hubert

    1996-10-01

    We analyze the ion concentration of groundwater issuing from deep wells located near the epicenter of the recent earthquake of magnitude 6.9 near Kobe, Japan, on January 17, 1995. These concentrations are well fitted by log-periodic modulations around a leading power law. The exponent (real and imaginary parts) is very close to those already found for the fits of precursory seismic activity for Loma Prieta and the Aleutian Islands. This brings further support for the general hypothesis that complex critical exponents are a general phenomenon in irreversible self-organizing systems, and particularly in rupture and earthquake phenomena.
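
    For reference, the log-periodic power-law form conventionally fitted in this literature (symbols as commonly defined there, not quoted from the abstract) is

        c(t) \;=\; A + B\,(t_c - t)^{m}\left[1 + C\cos\!\big(\omega \ln(t_c - t) + \phi\big)\right],

    where t_c is the critical (failure) time and the pair (m, \omega) corresponds to the complex critical exponent m + i\omega discussed above.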

  17. The large-scale isolated disturbances dynamics in the main peak of electronic concentration of ionosphere

    NASA Astrophysics Data System (ADS)

    Kalinin, U. K.; Romanchuk, A. A.; Sergeenko, N. P.; Shubin, V. N.

    2003-07-01

    Vertical sounding data from chains of ionosphere stations are used to obtain relative variations of electron concentration in the F2 ionosphere region. Specific isolated traveling large-scale irregularities are distinguished in the diurnal succession of the foF2 relative-variation records. The temporal shifts of the irregularities along the station chains determine their velocity (of the order of the speed of sound) and spatial scale (of the order of 3000-5000 km, with trajectory lengths up to 10000 km). The motion trajectories of large-scale isolated irregularities that preceded the earthquakes are reconstructed.

  18. Scale up of large ALON windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Balasubramanian, Sreeram; Kashalikar, Uday; Foti, Robyn; Sastri, Suri

    2013-06-01

    Aluminum Oxynitride (ALON® Optical Ceramic) combines broadband transparency with excellent mechanical properties. ALON's cubic structure means that it is transparent in its polycrystalline form, allowing it to be manufactured by conventional powder processing techniques. Surmet has established a robust manufacturing process, beginning with synthesis of ALON® powder, continuing through forming/heat treatment of blanks, and ending with optical fabrication of ALON® windows. Surmet has made significant progress in its production capability in recent years. Additional scale-up of Surmet's manufacturing capability, for larger sizes and higher quantities, is currently underway. ALON® transparent armor represents the state of the art in protection against armor-piercing threats, offering a factor of two in weight and thickness savings over conventional glass laminates. Tiled and monolithic windows have been successfully produced and tested against a range of threats. Large ALON® windows are also of interest for a range of visible to Mid-Wave Infra-Red (MWIR) sensor applications. These applications often have stressing imaging requirements, which in turn require that these large windows have optical characteristics including excellent homogeneity of index of refraction and very low stress birefringence. Surmet is currently scaling up its production facility to make and deliver ALON® monolithic windows as large as ~19x36 in. Additionally, Surmet plans to scale up to windows ~3 ft x 3 ft in size in the coming years. Recent results of this scale-up and characterization of the resulting blanks will be presented.

  19. Do Large Earthquakes Penetrate below the Seismogenic Zone? Potential Clues from Microseismicity

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Lapusta, N.

    2012-12-01

    It is typically assumed that slip in large earthquakes is confined within the seismogenic zone - often defined by the extent of the background seismicity - with regions below creeping. In terms of rate-and-state friction properties, the locked seismogenic zone and the deeper creeping fault extensions are velocity-weakening (VW) and velocity-strengthening (VS), respectively. Recently, it has been hypothesized that earthquake rupture could penetrate into the deeper creeping regions (Shaw and Wesnousky, BSSA, 2008), and yet it is difficult to detect the deep slip due to the limited depth resolution of source inversions. We hypothesize that the absence of concentrated microseismicity at the bottom of the seismogenic zone may point to the existence of deep-penetrating earthquake ruptures. The creeping-locked boundary creates strain and stress concentrations. If it is at the bottom of the VW region, which supports earthquake nucleation, microseismicity should persistently occur at the bottom of the seismogenic zone. Such behavior has been observed on the Parkfield segment of the San Andreas Fault (SAF) and the Calaveras fault. However, such microseismicity would be inhibited if dynamic earthquake rupture penetrates substantially below the VW/VS transition, which would drop stress in the ruptured VS areas, making them effectively locked. Hence the creeping-locked boundary, with its stress concentration, would be located within the VS area, where earthquake nucleation is inhibited. Indeed, microseismicity concentration at the bottom of the seismogenic zone is not observed for several faults that hosted major earthquakes, such as the Carrizo segment of the SAF (the site of the 1857 Mw 7.9 Fort Tejon earthquake) and the Palu-Lake Hazar segment of the East Anatolian Fault. We confirm this hypothesis by simulating earthquake sequences and aseismic slip in 3D fault models (Lapusta and Liu, 2009; Noda and Lapusta, 2010). The fault is governed by rate-and-state friction laws, with a VW
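
    The VW/VS terminology above refers to the standard rate-and-state friction formulation (Dieterich-Ruina, here with the aging evolution law), which in the usual notation reads

        \mu(V,\theta) \;=\; \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
        \qquad
        \dot{\theta} \;=\; 1 - \frac{V\theta}{D_c},

    so that a - b < 0 gives velocity-weakening (VW) patches capable of nucleating earthquakes, while a - b > 0 gives velocity-strengthening (VS) creep.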

  20. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  1. Correlation between Coulomb stress changes imparted by large historical strike-slip earthquakes and current seismicity in Japan

    NASA Astrophysics Data System (ADS)

    Ishibe, Takeo; Shimazaki, Kunihiko; Tsuruoka, Hiroshi; Yamanaka, Yoshiko; Satake, Kenji

    2011-03-01

    To determine whether current seismicity continues to be affected by large historical earthquakes, we investigated the correlation between current seismicity in Japan and the static stress changes in the Coulomb Failure Function (ΔCFF) due to eight large historical earthquakes (since 1923, magnitude ≥ 6.5) with a strike-slip mechanism. The ΔCFF was calculated for two types of receiver faults: the mainshock and the focal mechanisms of recent moderate earthquakes. We found that recent seismicity for the mainshock receiver faults is concentrated in the positive ΔCFF regions of four earthquakes (the 1927 Tango, 1943 Tottori, 1948 Fukui, and 2000 Tottori-Ken Seibu earthquakes), while no such correlations are recognizable for the other four earthquakes (the 1931 Nishi-Saitama, 1963 Wakasa Bay, 1969 Gifu-Ken Chubu, and 1984 Nagano-Ken Seibu earthquakes). The probability distribution of the ΔCFF calculated for the recent focal mechanisms clearly indicates that recent earthquakes concentrate in positive ΔCFF regions, suggesting that the current seismicity may be affected by a number of large historical earthquakes. The proposed correlation between the ΔCFF and recent seismicity may be affected by multiple factors controlling aftershock activity or decay time.
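
    For reference, the Coulomb failure stress change resolved on a receiver fault is conventionally written

        \Delta \mathrm{CFF} \;=\; \Delta\tau + \mu'\,\Delta\sigma_n,

    where \Delta\tau is the shear-stress change in the slip direction of the receiver fault, \Delta\sigma_n is the normal-stress change (positive for unclamping), and \mu' is an effective friction coefficient absorbing pore-pressure effects; the value of \mu' used in this study is not stated in the abstract.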

  2. High S-wave attenuation anomalies and ringlike seismogenic structures in the lithosphere beneath Altai: Possible precursors of large earthquakes

    NASA Astrophysics Data System (ADS)

    Kopnichev, Yu. F.; Sokolova, I. N.

    2016-12-01

    This paper addresses inhomogeneities in the short-period S-wave attenuation field in the lithosphere beneath Altai. A technique based on the analysis of the amplitude ratios of Sn and Pn waves is used. High S-wave attenuation areas are identified in the West Altai, which are related to the source zones of recent large earthquakes, viz., the 1990 Zaisan earthquake and the 2003 Chuya earthquake. A large ringlike seismogenic structure associated with the Chuya earthquake had been forming since 1976. It is also found that ringlike seismogenic structures are confined to high S-wave attenuation areas unrelated to large historical earthquakes. It is supposed that processes paving the way for strong earthquakes are taking place in these areas. The magnitudes of probable earthquakes are estimated using the earlier derived correlation dependences of the sizes of ringlike seismogenic structures and the threshold values of magnitudes on the energy of principal earthquakes, with prevailing focal mechanisms taken into consideration. The sources of some earthquakes are likely to occur near the planned gas pipeline route from Western Siberia to China, which should be taken into account. The relationship of the anomalies in the S-wave attenuation field and the ringlike seismogenic structures to a high content of deep-seated fluids in the lithosphere is discussed.

  3. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide.

    PubMed

    Pollitz, Fred F; Stein, Ross S; Sevilgen, Volkan; Bürgmann, Roland

    2012-10-11

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified, with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≤ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10(-7) for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.

  4. Evidence for a twelfth large earthquake on the southern Hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined, at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
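
    The quoted 30-yr probability can be roughly reproduced by conditioning a renewal model on the interval open since 1868. A sketch assuming a normal recurrence distribution and an evaluation date of 2010 (both assumptions; the study may use a different renewal distribution):

        from scipy.stats import norm

        mean_ri, sigma = 161.0, 65.0      # mean RI and 1-sigma from the study (yr)
        elapsed = 2010 - 1868             # open interval since the 1868 event (assumed date)
        F = norm(mean_ri, sigma).cdf

        p30 = (F(elapsed + 30) - F(elapsed)) / (1.0 - F(elapsed))
        print(f"30-yr conditional probability: {p30:.0%}")   # close to the quoted 29%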

  6. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  7. LARGE EARTHQUAKES AND TSUNAMIS AT THE SAMOA CORNER IN THE CONTEXT OF THE 2009 SAMOA EVENT

    NASA Astrophysics Data System (ADS)

    Okal, E.; Kirby, S. H.

    2009-12-01

    We examine the seismic properties of the 2009 Samoa earthquake in the context of its tsunami, the first in 45 years to cause significant damage on U.S. soil. The event has a normal faulting geometry near the bend ending the 3000-km-long Tonga-Kermadec subduction zone. Other large normal-faulting tsunamigenic earthquakes include the 1933 Sanriku, 1977 Sumba and 2007 Kuril events. The 2009 Samoa earthquake shares with such intraplate earthquakes a slightly above-average energy-to-moment ratio E/M0 (Θ = -4.82), but has a more complex geometry, a relatively long duration, and a large CLVD component (11%). Same-day seismicity appears detached to the SW of the fault plane, and 7 out of the 8 regional CMT solutions following the main shock are rotated at least 69 deg. away from its mechanism. This points to a mechanism of stress transfer rather than genuine aftershocks, in a pattern reminiscent of the 1933 Sanriku earthquake. Most of the seismic moment release around the Samoa corner involves normal faulting. To the South (16.5-18 deg. S; 1975, 1978, 1987, 2006), solutions consistently feature a typical intraplate lithospheric break. To the NW (15.5 deg. S), the 1981 event features a tear in the plate along Govers and Wortel's [2005] STEP model. The 2009 event is more complex, apparently involving rupture along a quasi-NS plane. An event presumably similar to 2009 took place on 26 June 1917, for which there is a report of a 12-m tsunami at Pago Pago. That event relocates 200 km to the NW, but its error ellipse includes the 2009 epicenter. The 1917 moment, tentatively 1.3 × 10^28 dyn·cm, is comparable to 2009. As suggested by Solov'ev and Go [1984], the report of a 12-m wave in Samoa during the 01 May 1917 Kermadec earthquake is most probably erroneous. We will present studies of the other large earthquakes of the past century in the area, notably the confirmed tsunamigenic events of 01 Sep. 1981 (damage on Savaii), 26 Dec 1975 (24 cm at PPG), 02 Apr 1977 (12 cm at PPG), 06 Oct 1987 and 07

  8. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the...bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  9. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  10. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  11. Large scale preparation of pure phycobiliproteins.

    PubMed

    Padgett, M P; Krogmann, D W

    1987-01-01

    This paper describes simple procedures for the purification of large amounts of phycocyanin and allophycocyanin from the cyanobacterium Microcystis aeruginosa. A homogeneous natural bloom of this organism provided hundreds of kilograms of cells. Large samples of cells were broken by freezing and thawing. Repeated extraction of the broken cells with distilled water released phycocyanin first, then allophycocyanin, and provides supporting evidence for the current models of phycobilisome structure. The very low ionic strength of the aqueous extracts allowed allophycocyanin release in a particulate form so that this protein could be easily concentrated by centrifugation. Other proteins in the extract were enriched and concentrated by large scale membrane filtration. The biliproteins were purified to homogeneity by chromatography on DEAE cellulose. Purity was established by HPLC and by N-terminal amino acid sequence analysis. The proteins were examined for stability at various pHs and exposures to visible light.

  12. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the
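
    Once the W phase windows have been filtered and screened, the CMT step reduces to a linear least-squares problem relating the observed displacements to the moment-tensor components through precomputed Green's functions. A schematic sketch with placeholder kernels, leaving out the band-pass filtering, half-duration estimation and centroid grid search described above:

        import numpy as np

        rng = np.random.default_rng(6)

        # d = G m: stacked W-phase displacement samples vs. the five
        # independent deviatoric moment-tensor components (placeholder G)
        G = rng.standard_normal((500, 5))
        m_true = np.array([1.0, -0.4, 0.2, 0.7, -0.1])
        d = G @ m_true + 0.01 * rng.standard_normal(500)

        m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
        print(np.allclose(m_hat, m_true, atol=0.05))   # True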

  13. A large silent earthquake and the future rupture of the Guerrero seismic gap

    NASA Astrophysics Data System (ADS)

    Kostoglodov, V.; Lowry, A.; Singh, S.; Larson, K.; Santiago, J.; Franco, S.; Bilham, R.

    2003-04-01

    The largest global earthquakes typically occur at subduction zones, at the seismogenic boundary between two colliding tectonic plates. These earthquakes release elastic strains accumulated over many decades of plate motion. Forecasts of these events have large errors resulting from poor knowledge of the seismic cycle. The discovery of slow slip events or "silent earthquakes" in Japan, Alaska, Cascadia and Mexico provides a new glimmer of hope. In these subduction zones, the seismogenic part of the plate interface is loaded not steadily, as hitherto believed, but incrementally, partitioning the stress buildup with the slow slip events. If slow aseismic slip is limited to the region downdip of the future rupture zone, slip events may increase the stress at the base of the seismogenic region, incrementing it closer to failure. However, if some aseismic slip occurs on the future rupture zone, the partitioning may significantly reduce the stress buildup rate (SBR) and delay a future large earthquake. Here we report characteristics of the largest slow earthquake observed to date (Mw 7.5), and its implications for future failure of the Guerrero seismic gap, Mexico. The silent earthquake began in October 2001 and lasted for 6-7 months. Slow slip produced measurable displacements over an area of 550 × 250 km². Average slip on the interface was about 10 cm and the equivalent magnitude, Mw, was 7.5. The shallow, subhorizontal configuration of the plate interface in Guerrero is a controlling factor in the physical conditions favorable for such extensive slow slip. The total coupled zone in Guerrero is 120-170 km wide, while the seismogenic, shallowest portion is only 50 km. This future rupture zone may slip contemporaneously with the deeper aseismic slip, thereby reducing the SBR. The slip partitioning between the seismogenic and transition coupled zones may diminish the SBR by up to 50%. These two factors are probably responsible for the long quiescence (at least since 1911) of the Guerrero seismic gap.
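
    As a rough consistency check on the reported size, the equivalent magnitude follows from the standard moment definition M0 = mu * A * u. With a generic crustal shear modulus, the abstract's displacement footprint and average slip give a value near the reported Mw 7.5 (the shear modulus, and the use of the full displacement area as slip area, are assumptions here):

        import numpy as np

        mu = 3.0e10                  # shear modulus, Pa (assumed generic value)
        area = 550e3 * 250e3         # displacement footprint from the abstract, m^2
        slip = 0.10                  # average interface slip, m
        M0 = mu * area * slip
        Mw = 2.0 / 3.0 * (np.log10(M0) - 9.1)
        print(f"M0 = {M0:.2e} N m, Mw = {Mw:.2f}")   # ~7.7, near the reported 7.5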

  14. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area in the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies have been available so far. Furthermore, the seismological literature (from 1979 until 2000) elaborated only a partial summary of it, based mainly on Arabic sources alone. The resulting picture of the area of major effects was very partial, making the derived seismic parameters unreliable. This earthquake is in fact one of the most highly documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and the third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, Vulgar French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study has addressed the question of whether there was just one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.

  15. A new paradigm for large earthquakes in stable continental plate interiors

    NASA Astrophysics Data System (ADS)

    Calais, E.; Camelbeeck, T.; Stein, S.; Liu, M.; Craig, T. J.

    2016-10-01

    Large earthquakes within stable continental regions (SCR) show that significant amounts of elastic strain can be released on geological structures far from plate boundary faults, where the vast majority of the Earth's seismic activity takes place. SCR earthquakes show spatial and temporal patterns that differ from those at plate boundaries and occur in regions where tectonic loading rates are negligible. However, in the absence of a more appropriate model, they are traditionally viewed as analogous to their plate boundary counterparts, occurring when the accrual of tectonic stress localized at long-lived active faults reaches failure threshold. Here we argue that SCR earthquakes are better explained by transient perturbations of local stress or fault strength that release elastic energy from a prestressed lithosphere. As a result, SCR earthquakes can occur in regions with no previous seismicity and no surface evidence for strain accumulation. They need not repeat, since the tectonic loading rate is close to zero. Therefore, concepts of recurrence time or fault slip rate do not apply. As a consequence, seismic hazard in SCRs is likely more spatially distributed than indicated by paleoearthquakes, current seismicity, or geodetic strain rates.

  16. Documenting large earthquakes similar to the 2011 Tohoku-oki earthquake from sediments deposited in the Japan Trench over the past 1500 years

    NASA Astrophysics Data System (ADS)

    Ikehara, Ken; Kanamatsu, Toshiya; Nagahashi, Yoshitaka; Strasser, Michael; Fink, Hiske; Usami, Kazuko; Irino, Tomohisa; Wefer, Gerold

    2016-07-01

    The 2011 Tohoku-oki earthquake and tsunami was the most destructive geohazard in Japanese history. However, little is known of the past recurrence of large earthquakes along the Japan Trench. Deep-sea turbidites are potential candidates for understanding the history of such earthquakes. Core samples were collected from three thick turbidite units on the Japan Trench floor near the epicenter of the 2011 event. The uppermost unit (Unit TT1) consists of amalgamated diatomaceous mud (30-60 cm thick) deposited by turbidity currents triggered by shallow subsurface instability on the lower trench slope associated with strong ground motion during the 2011 Tohoku-oki earthquake. Older thick turbidite units (Units TT2 and TT3) also consist of several amalgamated subunits that contain thick sand layers in their lower parts. Sedimentological characteristics and the tectonic and bathymetric settings of the Japan Trench floor indicate that these turbidites also originated from two older large earthquakes potentially similar to the 2011 Tohoku-oki earthquake. A thin tephra layer between Units TT2 and TT3 constrains the age of these earthquakes. Geochemical analysis of volcanic glass shards within the tephra layer indicates that it correlates with the Towada-a tephra (AD 915) from the Towada volcano in northeastern Japan. The stratigraphy of the Japan Trench turbidites resembles that of onshore tsunami deposits on the Sendai and Ishinomaki plains, indicating that the cored uppermost succession of the Japan Trench comprises a 1500-yr record that includes the sedimentary fingerprint of the historical Jogan earthquake of AD 869.

  17. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  18. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; Marriage, Tobias; McMahon, Jeff; Miller, Nathan; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2017-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a ground-based telescope array designed to measure the large-angular-scale polarization signal of the Cosmic Microwave Background (CMB). The large-angular-scale CMB polarization measurement is essential for a precise determination of the optical depth to reionization (from the E-mode polarization) and a characterization of inflation from the predicted polarization pattern imprinted on the CMB by gravitational waves in the early universe (from the B-mode polarization). CLASS will characterize the primordial tensor-to-scalar ratio, r, to 0.01 (95% CL). CLASS is uniquely designed to be sensitive to the primordial B-mode signal across the entire range of angular scales where it could possibly dominate over the lensing signal that converts E-modes to B-modes, while also making multi-frequency observations both above and below the frequency where the CMB-to-foreground signal ratio is at its maximum. The design enables CLASS to make a definitive cosmic-variance-limited measurement of the optical depth to scattering from reionization. CLASS is an array of 4 telescopes operating at approximately 40, 90, 150, and 220 GHz. CLASS is located high in the Andes mountains in the Atacama Desert of northern Chile. The location of the CLASS site at high altitude near the equator minimizes atmospheric emission while allowing for daily mapping of ~70% of the sky. A rapid front-end Variable-delay Polarization Modulator (VPM) and low-noise Transition Edge Sensor (TES) detectors allow for high-sensitivity, low-systematic-error mapping of the CMB polarization at large angular scales. The VPM, detectors and their coupling structures were all uniquely designed and built for CLASS. We present here an overview of the CLASS scientific strategy, instrument design, and current progress. Particular attention is given to the development and status of the Q-band receiver currently surveying the sky from the Atacama Desert and the development of

  19. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    NASA Astrophysics Data System (ADS)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    Rays from the hypocenter around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary of the low- and high-V zones. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated the strain and resulted in the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. References: Matsubara, M. and K. Obara (2011) The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667. Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004) Recent progress of seismic observation networks in Japan-Hi-net, F-net, K-NET and KiK-net, Research News, Earth Planets Space, 56, xv-xxviii.

  20. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  1. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; McMahon, Jeff; Miller, Nathan T.; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián.; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2016-07-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  2. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

    We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6 while for ML it is 1.0. The quantity (Mw - ML) increases linearly in the analysed magnitude range as ML decreases. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
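
    For context, the Hanks & Kanamori scaling relation invoked above is the standard conversion from seismic moment M0 (related to potency P through the rigidity μ, M0 = μP) to moment magnitude:

    ```latex
    % Hanks & Kanamori (1979) moment magnitude, with M_0 in dyn-cm
    % (equivalently M_w = \tfrac{2}{3}(\log_{10} M_0 - 9.1) for M_0 in N m)
    M_w = \tfrac{2}{3}\log_{10} M_0 - 10.7
    ```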

  3. Mitigating the effects of large subduction-zone earthquakes in Western Sumatra

    NASA Astrophysics Data System (ADS)

    Sieh, K.; Stebbins, C.; Natawidjaja, D. H.; Suwargadi, B. W.

    2004-12-01

    No giant earthquakes have struck the outer-arc islands of western Sumatra since the sequence of 1797, 1833 and 1861. Paleoseismic studies of coral microatolls reveal that failure of the subduction interface occurs in clusters of such earthquakes about every 230 years. Thus, the next such sequence may well be no more than a few decades away. In the meantime, GPS measurements and paleogeodetic observations show that the islands continue to submerge, dragged down by the downgoing oceanic slab, in preparation for the next failures of the subduction interface. Uplift of the islands and seafloor by one to two meters during large events leads to large tsunamis and substantial changes in the coastal environments of the islands, including the seaward retreat of fringing reef, beach and mangrove environments. Having spent a decade characterizing the seismic history of western coastal Sumatra, we are now beginning to work with the inhabitants of the islands and the mainland coast to mitigate the associated hazards. Thus far, we have begun to create and distribute posters and brochures aimed at educating the islanders about their natural tectonic environment and guiding them in preparing for future large earthquakes and tsunamis. We are also installing a continuous GPS network, in order to monitor ongoing strain accumulation and possible transients.

  4. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for the usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies an inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around the zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in the zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  5. Numerical modeling of the deformations associated with large subduction earthquakes through the seismic cycle

    NASA Astrophysics Data System (ADS)

    Fleitout, L.; Trubienko, O.; Garaud, J.; Vigny, C.; Cailletaud, G.; Simons, W. J.; Satirapod, C.; Shestakov, N.

    2012-12-01

    A 3D finite element code (Zebulon-Zset) is used to model deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes: Sumatra, Japan and Chile. The mesh, featuring a broad spherical shell portion with a viscoelastic asthenosphere, is refined close to the subduction zones. The model is constrained by 6 years of postseismic data in the Sumatra area and over a year of data for Japan and Chile, plus preseismic data in the three areas. The coseismic slip distribution on the subduction plane is inverted from the coseismic displacements using the finite element program and provides the initial stresses. The predicted horizontal postseismic displacements depend upon the thicknesses of the elastic plate and of the low-viscosity asthenosphere. Non-dimensionalized by the coseismic displacements, they present an almost uniform value between 500 km and 1500 km from the trench for elastic plates 80 km thick. The time evolution of the velocities is a function of the creep law (Maxwell, Burgers or power-law creep). Moreover, the forward models predict a sizable far-field subsidence, also with a spatial distribution which varies with the geometry of the asthenosphere and lithosphere. Slip on the subduction interface does not induce such a subsidence. The observed horizontal velocities, divided by the coseismic displacement, present a similar pattern as a function of time and distance from the trench for the three areas, indicative of similar lithospheric and asthenospheric thicknesses and asthenospheric viscosity. This pattern cannot be fitted with power-law creep in the asthenosphere but indicates a lithosphere 60 to 90 km thick and an asthenosphere of thickness of the order of 100 km with a Burgers rheology represented by a Kelvin-Voigt element with a viscosity of 3 × 10^18 Pa s and μ_Kelvin = μ_elastic/3. A second Kelvin-Voigt element with very limited amplitude may explain some characteristics of the short time-scale signal. The postseismic subsidence is

  6. Large-scale optimization of neuron arbors

    NASA Astrophysics Data System (ADS)

    Cherniak, Christopher; Changizi, Mark; Won Kang, Du

    1999-05-01

    At the global as well as local scales, some of the geometry of types of neuron arbors, both dendrites and axons, appears to be self-organizing: their morphogenesis behaves like flowing water, that is, fluid dynamically; waterflow in branching networks in turn acts like a tree composed of cords under tension, that is, vector mechanically. Branch diameters and angles and junction sites conform significantly to this model. The result is that such neuron tree samples globally minimize their total volume, rather than, for example, surface area or branch length. In addition, the arbors perform well at generating the cheapest topology interconnecting their terminals: their large-scale layouts are among the best of all such possible connecting patterns, coming within 5% of optimum. This model also applies comparably to arterial and river networks.

  7. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  8. Potential for Large Transpressional Earthquakes along the Santa Cruz-Catalina Ridge, California Continental Borderland

    NASA Astrophysics Data System (ADS)

    Legg, M.; Kohler, M. D.; Weeraratne, D. S.; Castillo, C. M.

    2015-12-01

    Transpressional fault systems comprise networks of high-angle strike-slip and more gently-dipping oblique-slip faults. Large oblique-slip earthquakes may involve complex ruptures of multiple faults with both strike-slip and dip-slip. Geophysical data including high-resolution multibeam bathymetry maps, multichannel seismic reflection (MCS) profiles, and relocated seismicity catalogs enable detailed mapping of the 3-D structure of seismogenic fault systems offshore in the California Continental Borderland. Seafloor morphology along the San Clemente fault system displays numerous features associated with active strike-slip faulting including scarps, linear ridges and valleys, and offset channels. Detailed maps of the seafloor faulting have been produced along more than 400 km of the fault zone. Interpretation of fault geometry has been extended to shallow crustal depths using 2-D MCS profiles and to seismogenic depths using catalogs of relocated southern California seismicity. We examine the 3-D fault character along the transpressional Santa Cruz-Catalina Ridge (SCCR) section of the fault system to investigate the potential for large earthquakes involving multi-fault ruptures. The 1981 Santa Barbara Island (M6.0) earthquake was a right-slip event on a vertical fault zone along the northeast flank of the SCCR. Aftershock hypocenters define at least three sub-parallel high-angle fault surfaces that lie beneath a hillside valley. Mainshock rupture for this moderate earthquake appears to have been bilateral, initiating at a small discontinuity in the fault geometry (~5-km pressure ridge) near Kidney Bank. The rupture terminated to the southeast at a significant releasing step-over or bend and to the northeast within a small (~10-km) restraining bend. An aftershock cluster occurred beyond the southeast asperity along the East San Clemente fault. Active transpression is manifest by reverse-slip earthquakes located in the region adjacent to the principal displacement zone.

  9. Chronology of historical tsunamis in Mexico and its relation to large earthquakes along the subduction zone

    NASA Astrophysics Data System (ADS)

    Suarez, G.; Mortera, C.

    2013-05-01

    The chronology of historical earthquakes along the subduction zone in Mexico spans a time period of approximately 400 years. Although the population density along the coast of Mexico has always been low, relative to that of central Mexico, several reports of large subduction earthquakes include references to tsunamis invading the southern coast of Mexico. Here we present a chronology of historical tsunamis affecting the Pacific coast of Mexico and compare this with the historical record of subduction events and with the existing Mexican and worldwide catalogs of tsunamis in the Pacific basin. Due to the geographical orientation of the Pacific coast of Mexico, tsunamis generated on the other subduction zones of the Pacific have not had damaging effects in the country. Among the tsunamis generated by local earthquakes, the largest one by far is the one produced by the earthquake of 28 March 1787. The reported tsunami has an inundation area that reached over 6 km inland, and the length of coast along which the tsunami was reported extends for over 450 km. In the last 100 years two large tsunamis have been reported along the Pacific coast of Mexico. On 22 June 1932 a tsunami with reported wave heights of up to 11 m hit the coast of Jalisco and Colima. The town of Cuyutlan was heavily damaged and approximately 50 people lost their lives due to the impact of the tsunami. This unusual tsunami was generated by an aftershock (M 6.9) of the large 3 June 1932 event (M 8.1); the main shock of 3 June did not produce a perceptible tsunami. It has been proposed that the 22 June event was a tsunami earthquake generated on the shallow part of the subduction zone. On 16 November 1925 an unusual tsunami was reported in the town of Zihuatanejo in the state of Guerrero, Mexico. No earthquake on the Pacific rim occurred at the same time as this tsunami, and the historical record of hurricanes and tropical storms does not list the presence of a meteorological disturbance that

  10. Voids in the Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    El-Ad, Hagai; Piran, Tsvi

    1997-12-01

    Voids are the most prominent feature of the large-scale structure of the universe. Still, their incorporation into quantitative analysis has been relatively recent, owing essentially to the lack of an objective tool to identify the voids and to quantify them. To overcome this, we present here the VOID FINDER algorithm, a novel tool for objectively quantifying voids in the galaxy distribution. The algorithm first classifies galaxies as either wall galaxies or field galaxies. Then, it identifies voids in the wall-galaxy distribution. Voids are defined as continuous volumes that do not contain any wall galaxies. The voids must be thicker than an adjustable limit, which is refined in successive iterations. In this way, we identify the same regions that would be recognized as voids by the eye. Small breaches in the walls are ignored, avoiding artificial connections between neighboring voids. We test the algorithm using Voronoi tessellations. By appropriate scaling of the parameters with the selection function, we apply it to two redshift surveys, the dense SSRS2 and the full-sky IRAS 1.2 Jy. Both surveys show similar properties: ~50% of the volume is filled by voids. The voids have a scale of at least 40 h-1 Mpc and an average underdensity of -0.9. Faint galaxies do not fill the voids, but they do populate them more than bright ones. These results suggest that both optically and IRAS-selected galaxies delineate the same large-scale structure. Comparison with the recovered mass distribution further suggests that the observed voids in the galaxy distribution correspond well to underdense regions in the mass distribution. This confirms the gravitational origin of the voids.
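
    The two-stage structure described above (classify wall versus field galaxies, then find empty volumes among the walls) can be sketched as follows. This is a minimal toy version with a fixed linking length and grid resolution; the published algorithm scales these parameters with the survey selection function, and all names here are illustrative:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def classify_wall_field(points, link_length, min_neighbors=3):
        """Label galaxies as 'wall' if they have at least `min_neighbors`
        companions within `link_length`; the rest are 'field'."""
        tree = cKDTree(points)
        # query_ball_point includes the point itself, hence the -1
        counts = np.array([len(tree.query_ball_point(p, link_length)) - 1
                           for p in points])
        return counts >= min_neighbors

    def empty_cells(points, wall_mask, box_size, n_cells=64):
        """Flag cubic grid cells containing no wall galaxies: seed regions
        from which void volumes would be grown in the full algorithm."""
        walls = points[wall_mask]
        idx = np.clip(np.floor(walls / box_size * n_cells).astype(int),
                      0, n_cells - 1)
        occupied = np.zeros((n_cells,) * 3, dtype=bool)
        occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True
        return ~occupied

    # Usage: uniform mock catalogue in a 100 h^-1 Mpc box (illustrative only)
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 100.0, size=(5000, 3))
    wall = classify_wall_field(pts, link_length=5.0)
    voids = empty_cells(pts, wall, box_size=100.0)
    print(f"wall fraction: {wall.mean():.2f}, empty-cell fraction: {voids.mean():.2f}")
    ```

    A full implementation would then merge adjacent empty cells into voids and discard any void thinner than the adjustable limit.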

  11. Financial earthquakes, aftershocks and scaling in emerging stock markets

    NASA Astrophysics Data System (ADS)

    Selçuk, Faruk

    2004-02-01

    This paper provides evidence for scaling laws in emerging stock markets. Estimated parameters using different definitions of volatility show that the empirical scaling law in every stock market is a power law. This power law holds from 2 to 240 business days (almost 1 year). The scaling parameter in these economies changes after a change in the definition of volatility. This finding indicates that the stock returns may have a multifractal nature. Another scaling property of stock returns is examined by relating the time after a main shock to the number of aftershocks per unit time. The empirical findings show that after a major fall in the stock returns, the stock market volatility above a certain threshold shows a power law decay, described by Omori's law.
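
    For reference, the modified Omori (Omori-Utsu) law invoked here describes the decay of the aftershock rate n(t) with time t after a mainshock:

    ```latex
    % Modified Omori (Omori-Utsu) law: K and c are constants, p ~ 1;
    % in this paper the "aftershocks" are threshold-exceeding volatility events
    n(t) = \frac{K}{(c + t)^{p}}
    ```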

  12. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  13. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  14. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems like the solution of partial differential equations.

  15. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  16. Large-scale Heterogeneous Network Data Analysis

    DTIC Science & Technology

    2012-07-31

    "Data for Multi-Player Influence Maximization on Social Networks." KDD 2012 (Demo). Po-Tzu Chang, Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu-Jen Cheng. "Learning-Based Time-Sensitive Re-Ranking for Web Search." SIGIR 2012 (poster). Hung-Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin. "Exploiting and Evaluating MapReduce for Large-Scale Graph Mining." ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh, Cheng-Te Li, and Shou-De Lin

  17. Primer design for large scale sequencing.

    PubMed

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-06-15

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double-strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, makes this program useful for the large scale design of primers, especially in large sequencing projects.
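
    The abstract does not reproduce PRIDE's fuzzy-logic quality score, but one ingredient such a score typically aggregates is a primer melting-temperature estimate. A minimal sketch using the classic Wallace rule of thumb (illustrative only, not PRIDE's actual criterion):

    ```python
    def wallace_tm(primer: str) -> float:
        """Rule-of-thumb melting temperature for short oligonucleotides
        (Wallace rule): Tm = 2*(A+T) + 4*(G+C) degrees Celsius."""
        p = primer.upper()
        at = p.count("A") + p.count("T")
        gc = p.count("G") + p.count("C")
        return 2.0 * at + 4.0 * gc

    print(wallace_tm("ATGCGTACGTTAGC"))  # -> 42.0
    ```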

  18. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  19. Primer design for large scale sequencing.

    PubMed Central

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-01-01

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double-strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, makes this program useful for the large scale design of primers, especially in large sequencing projects. PMID:9611248

  20. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.
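
    The statistical treatment described (correlations among elemental concentrations followed by cluster analysis) can be sketched as below. The data are synthetic stand-ins for per-tooth concentrations, and hierarchical clustering on correlation distance is one common choice rather than the authors' documented procedure:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Synthetic stand-in: rows = tooth samples, columns = elements
    # (e.g. F, Na, Ca, Zn, Sr, Pb), concentrations roughly log-normal.
    rng = np.random.default_rng(2)
    data = rng.lognormal(mean=0.0, sigma=0.5, size=(180, 6))

    corr = np.corrcoef(data, rowvar=False)   # element-element correlations
    # Cluster elements on correlation distance (1 - r); the upper triangle
    # of the correlation matrix serves as the condensed distance vector.
    dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]
    labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
    print(np.round(corr, 2), labels)
    ```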

  1. Hayward Fault: A 50-km-long Locked Patch Regulates Its Large Earthquake Cycle (Invited)

    NASA Astrophysics Data System (ADS)

    Lienkaemper, J. J.; Simpson, R. W.; Williams, P. L.; McFarland, F. S.; Caskey, S. J.

    2010-12-01

    We have documented a chronology of 11 paleoearthquakes on the southern Hayward fault (HS) preceding the Mw6.8, 1868 earthquake. These large earthquakes were both regular and frequent, as indicated by a 0.40 coefficient of variation and a mean recurrence interval (MRI) of 161 ± 65 yr (1σ of recurrence intervals). Furthermore, the Oxcal-modeled probability distribution for the average interval resembles a Gaussian rather than a more irregular Brownian passage time distribution. Our revised 3D modeling of subsurface creep, using newly updated long-term creep rates, now suggests there is only one ~50-km-long locked patch (instead of two), confined laterally between two large patches of deep creep (≥9 km), with an extent consistent with evidence for the 1868 rupture. This locked patch and the fault’s lowest rates of surface creep are approximately centered on HS’s largest bend and a large gabbro body, particularly where the gabbro forms both east and west faces of the fault. We suggest that this locked patch serves as a mechanical capacitor, limiting earthquake size and frequency. The moment accumulation over 161 yr summed on all locked elements of the model reaches Mw6.79, but if half of the moment stored in the creeping elements were to fail dynamically, Mw could reach 6.91. The paleoearthquake histories for nearby faults of the San Francisco Bay region appear to indicate less regular and frequent earthquakes, possibly because most lack the high proportion (40-60%) of aseismic release found on the Hayward fault. The northernmost Hayward fault and Rodgers Creek fault (RCF) appear to rupture only half as frequently as the HS and are separated from the HS by a creep buffer and a 5-km-wide releasing bend, respectively, both tending to limit through-going ruptures. The paleoseismic record allows multi-segment Hayward fault-RCF ruptures, but does not require them. The 1868 HS rupture preceded the 1906 multi-segmented San Andreas fault (SAF) rupture, perhaps because the HS
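
    The quoted regularity follows directly from the recurrence statistics given above: with mean recurrence interval μ = 161 yr and standard deviation σ = 65 yr,

    ```latex
    \mathrm{COV} = \frac{\sigma}{\mu} = \frac{65\ \mathrm{yr}}{161\ \mathrm{yr}} \approx 0.40
    ```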

  2. Principles for selecting earthquake motions in engineering design of large dams

    USGS Publications Warehouse

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

    This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable for other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions is dependent on which one of two types of engineering analyses is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map; whereas a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and its spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful. For example, peak motions at

  3. Potential for a large earthquake rupture of the San Ramón fault in Santiago, Chile

    NASA Astrophysics Data System (ADS)

    Vargas Easton, G.; Klinger, Y.; Rockwell, T. K.; Forman, S. L.; Rebolledo, S.; Lacassin, R.; Armijo, R.

    2013-12-01

    The San Ramón fault is an active west-vergent thrust fault system located along the eastern border of Santiago, capital of Chile, at the foot of the main Andes Cordillera. It is part of the continental-scale West Andean Thrust, on the western slope of the Andean orogen. The fault system consists of fault segments on the order of 10-15 km in length, evidenced by conspicuous fault scarps 3 to over 100 m high located systematically along the fault trace. These scarps evidence Quaternary faulting activity, which together with the geometry, structure and geochronological data supports slip-rate estimates on the order of ~0.4 mm/year. To probe the seismic potential of the west flank of the Andes in front of Santiago, we excavated and analyzed a trench across a prominent young fault scarp. Together with geochronological data from Optically Stimulated Luminescence complemented by radiocarbon ages, our paleoseismic results demonstrate recurrent late Quaternary faulting along this structure, with nearly 5 m of displacement in each event. With the last large earthquake nearly 8,000-9,000 years ago and two ruptures within the past 17,000-19,000 years, the San Ramón fault appears ripe for another large earthquake up to M7.5 in the near future, making Santiago another major world city at significant seismic risk.

  4. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake.

    PubMed

    Tsuji, Takeshi; Ishibashi, Jun'ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-02-20

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations.

  5. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake

    PubMed Central

    Tsuji, Takeshi; Ishibashi, Jun’ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-01-01

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations. PMID:28218298

  6. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Tsuji, Takeshi; Ishibashi, Jun’ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-02-01

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations.

  7. Patterns of Seismicity Characterizing the Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Yoder, M. R.; Holliday, J. R.; Schultz, K.; Wilson, J. M.; Donnellan, A.; Grant Ludwig, L.

    2015-12-01

    A number of methods to calculate probabilities of major earthquakes have recently been proposed. Most of these methods depend upon understanding patterns of small earthquakes preceding the large events. For example, the Natural Time Weibull method for earthquake forecasting (see www.openhazards.com) is based on the assumption that large earthquakes complete the Gutenberg-Richter scaling relation defined by the smallest earthquakes. Here we examine the scaling patterns of small earthquakes occurring between cycles of large earthquakes. For example, in the region of California-Nevada between longitudes 130°W and 114°W and latitudes 32° to 45° North, we find 79 earthquakes having magnitudes M ≥ 6 during the time interval 1933 - present, culminating with the most recent event, the M6.0 Napa, California earthquake of August 24, 2014. Thus we have 78 complete cycles of large earthquakes in this region. After compiling and stacking the smaller events occurring between the large events, we find a characteristic pattern of scaling for the smaller events. This pattern shows a scaling relation for the smallest earthquakes up to about M 3, with relatively fewer earthquakes at magnitudes between 4.5 and 6; b-values for the small-magnitude scaling line are 0.85 for the entire interval 1933 - present. Extrapolation of the small-magnitude scaling line indicates that the average cycle tends to be completed by a large earthquake having M~6.4. In addition, statistics indicate that departure of the successive earthquake cycles from their average pattern can be characterized by Coefficients of Variability and other measures. We discuss these ideas and apply them not only to California, but also to other seismically active areas in the world.
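
    b-values like the 0.85 quoted here are conventionally estimated with the Aki (1965) maximum-likelihood formula. A minimal sketch on a synthetic catalogue (the dm term is Utsu's correction for binned magnitudes; all other choices are illustrative):

    ```python
    import numpy as np

    def aki_b_value(mags, m_c, dm=0.0):
        """Maximum-likelihood b-value (Aki 1965) for magnitudes above the
        completeness level m_c; dm/2 is Utsu's correction for magnitudes
        binned at interval dm (use dm=0 for continuous magnitudes)."""
        m = np.asarray(mags, dtype=float)
        m = m[m >= m_c]
        return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

    # Synthetic Gutenberg-Richter catalogue with b = 1 above M 2 (illustrative)
    rng = np.random.default_rng(1)
    mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=10_000)
    print(f"b = {aki_b_value(mags, m_c=2.0):.2f}")  # ~1.0
    ```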

  8. The Validity and Reliability Work of the Scale That Determines the Level of the Trauma after the Earthquake

    ERIC Educational Resources Information Center

    Tanhan, Fuat; Kayri, Murat

    2013-01-01

    This study aimed to develop a short, comprehensible, easily applicable scale, appropriate to cultural characteristics, for evaluating mental trauma related to earthquakes. The universe of the research consisted of all individuals living under the effects of the earthquakes which occurred in Tabanli Village on 23.10.2011 and…

  9. FINITE FAULT MODELING OF FUTURE LARGE EARTHQUAKE FROM NORTH TEHRAN FAULT IN KARAJ, IRAN

    NASA Astrophysics Data System (ADS)

    Samaei, Meghdad; Miyajima, Masakatsu; Saffari, Hamid; Tsurugi, Masato

    The main purpose of this study is to predict strong ground motions from a future large earthquake for Karaj city, the capital of Alborz province of Iran. This industrialized city has a population of over one million and is located near several active faults. Finite fault modeling with a dynamic corner frequency is adopted here for simulation of a future large earthquake. The target fault is the North Tehran fault, with a length of 110 km; rupture of the western part of the fault, which is closest to Karaj, is assumed for this simulation. For seven rupture starting points, acceleration time series at the site of the Karaj Caravansary, a historical building, are predicted. Peak ground accelerations vary from 423 cm/s2 to 584 cm/s2, in the range of the 1990 Rudbar earthquake (Mw = 7.3). Results of acceleration simulations at different distances are also compared with attenuation relations for two types of soil. Our simulations show general agreement with one of the most well-known world attenuation relations and also with one of the newest attenuation relations developed for the Iranian plateau.
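
    The dynamic-corner-frequency approach mentioned here (after Motazedian and Atkinson, 2005) builds on the Brune omega-squared point-source spectrum, letting the corner frequency of each subfault decrease as the ruptured area grows. In the customary units (shear-wave velocity β in km/s, stress drop Δσ in bars, moment M0 in dyn·cm), the underlying point-source relations are:

    ```latex
    % omega-squared acceleration source spectrum with corner frequency f_0
    A(f) \propto \frac{M_0\,(2\pi f)^2}{1 + (f/f_0)^2},
    \qquad
    f_0 = 4.9 \times 10^{6}\,\beta \left(\frac{\Delta\sigma}{M_0}\right)^{1/3}
    ```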

  10. Large-scale Intelligent Transporation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and of Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  11. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  12. Large-scale Globally Propagating Coronal Waves.

    PubMed

    Warmuth, Alexander

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous spaced-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  13. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus is effectively a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^1/3, suggesting every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, scaling as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^-γ, where γ = 2 - β is slightly above one. The latter result is consistent with Hi-C data on real human interphase chromosomes, and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  14. Fault Interactions and Large Complex Earthquakes in the Los Angeles Area

    USGS Publications Warehouse

    Anderson, G.; Aagaard, B.; Hudnut, K.

    2003-01-01

    Faults in complex tectonic environments interact in various ways, including triggered rupture of one fault by another, that may increase seismic hazard in the surrounding region. We model static and dynamic fault interactions between the strike-slip and thrust fault systems in southern California. We find that rupture of the Sierra Madre-Cucamonga thrust fault system is unlikely to trigger rupture of the San Andreas or San Jacinto strike-slip faults. However, a large northern San Jacinto fault earthquake could trigger a cascading rupture of the Sierra Madre-Cucamonga system, potentially causing a moment magnitude 7.5 to 7.8 earthquake on the edge of the Los Angeles metropolitan region.

  15. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms, especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme, which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  16. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
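
    A subspace detector of the kind used here generalizes matched filtering: an orthonormal basis is built (for example by singular value decomposition) from aligned template waveforms, and a detection is declared wherever a sliding window's energy is largely captured by that basis. A minimal sketch, with all names and parameters illustrative:

    ```python
    import numpy as np

    def build_subspace(templates, rank):
        """Orthonormal basis (left singular vectors) spanning a family of
        aligned, equal-length, normalized template waveforms."""
        X = np.stack([t / np.linalg.norm(t) for t in templates], axis=1)
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, :rank]                      # shape (n_samples, rank)

    def detection_statistic(data, U):
        """Sliding fraction of window energy captured by the subspace;
        values near 1 indicate a close match to the template family."""
        n = U.shape[0]
        stats = np.empty(len(data) - n + 1)
        for i in range(len(stats)):
            w = data[i:i + n]
            energy = w @ w
            proj = U.T @ w
            stats[i] = (proj @ proj) / energy if energy > 0 else 0.0
        return stats

    # Usage: two toy templates and a trace containing one of them
    rng = np.random.default_rng(3)
    t1, t2 = rng.standard_normal(200), rng.standard_normal(200)
    U = build_subspace([t1, t2], rank=2)
    trace = np.concatenate([rng.standard_normal(500), t1, rng.standard_normal(500)])
    print(detection_statistic(trace, U).max())  # ~1 where the template occurs
    ```

    Detections are then declared where the statistic exceeds a threshold calibrated against noise, which is how a template family can push a catalogue below the standard detection magnitude.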

  17. Earthquake Interactions at Different Scales: an Example from Eastern California and Western Nevada, USA.

    NASA Astrophysics Data System (ADS)

    Verdecchia, A.; Carena, S.

    2015-12-01

    Earthquakes in diffuse plate boundaries occur in spatially and temporally complex patterns. The region east of the Sierra Nevada that encompasses the northern Eastern California Shear Zone (ECSZ), Walker Lane (WL), and the westernmost part of the Basin and Range province (B&R) is such a plate boundary. In order to better understand the relationship between moderate-to-major earthquakes in this area, we modeled the evolution of coseismic, postseismic and interseismic Coulomb stress changes (∆CFS) in this region at two different spatio-temporal scales. In the first example we examined seven historical and instrumental Mw ≥ 6 earthquakes that struck the region around Owens Valley (northern ECSZ) in the last 150 years. In the second example we expanded our study area to all of the northern ECSZ, WL and western B&R, examining seventeen paleoseismological and historical major surface-rupturing earthquakes (Mw ≥ 6.5) that occurred in the last 1400 years. We show that in both cases the majority of the studied events (100% in the first case and 80% in the second) are located in areas of combined coseismic and postseismic positive ∆CFS. This relationship is robust, as shown by control tests with random earthquake sequences. We also show that the White Mountain fault has accumulated up to 30 bars of total ∆CFS (coseismic + postseismic + interseismic) in the last 150 years, and the Hunter Mountain, Fish Lake Valley, Black Mountain, and Pyramid Lake faults have accumulated 40, 45, 54 and 37 bars respectively in the last 1400 years. Such values are comparable to the average stress drop in a major earthquake, and all these faults may therefore be close to failure.
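    For readers unfamiliar with the quantity being accumulated here, ∆CFS on a receiver fault is conventionally the shear stress change resolved in the slip direction plus an effective-friction-weighted normal stress change. A minimal sketch follows; the sign convention (unclamping positive) and the effective friction value are generic assumptions, not taken from this study.

    ```python
    def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
        """Static Coulomb failure stress change (same units as the inputs).

        d_shear : shear stress change resolved onto the receiver fault's
                  slip direction (positive promotes slip).
        d_normal: normal stress change, positive in unclamping.
        mu_eff  : effective friction coefficient (pore pressure folded in).
        """
        return d_shear + mu_eff * d_normal

    # Total loading is the sum over coseismic, postseismic and interseismic
    # contributions from each event; the numbers below are purely illustrative.
    contributions = [(12.0, -3.0), (8.5, 1.2), (15.0, -0.5)]  # (bars, bars)
    total = sum(coulomb_stress_change(ds, dn) for ds, dn in contributions)
    ```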

  18. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    We present a global survey of the low-frequency (1-21 mHz) source characteristics of large events, with particular interest in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time function of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursory event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  19. Improving Recent Large-Scale Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Cardoso, Rogerio Fernando; Ransom, S.

    2011-01-01

    Pulsars are unique in that they act as celestial laboratories for precise tests of gravity and other extreme physics (Kramer 2004). There are approximately 2000 known pulsars today, which is less than ten percent of the pulsars in the Milky Way according to theoretical models (Lorimer 2004). Out of these 2000 known pulsars, approximately ten percent are known millisecond pulsars, objects used for their period stability for detailed physics tests and searches for gravitational radiation (Lorimer 2008). As the field and instrumentation progress, pulsar astronomers attempt to overcome observational biases and detect new pulsars, consequently discovering new millisecond pulsars. We attempt to improve large-scale pulsar surveys by examining three recent pulsar surveys. The first, the Green Bank Telescope 350MHz Drift Scan, a low-frequency isotropic survey of the northern sky, has yielded a large number of candidates that were visually inspected and identified, resulting in over 34,000 candidates viewed, dozens of detections of known pulsars, and the discovery of a new low-flux pulsar, PSRJ1911+22. The second, the PALFA survey, is a high-frequency survey of the galactic plane with the Arecibo telescope. We created a processing pipeline for the PALFA survey at the National Radio Astronomy Observatory in Charlottesville, VA, in addition to making needed modifications upon advice from the PALFA consortium. The third survey examined is a new GBT 820MHz survey devoted to finding new millisecond pulsars by observing the target-rich environment of unidentified sources in the FERMI LAT catalogue. By approaching these three pulsar surveys at different stages, we seek to improve the success rates of large scale surveys, and hence the possibility for ground-breaking work in both basic physics and astrophysics.

  20. Foreshock patterns preceding large earthquakes in the subduction zone of Chile

    NASA Astrophysics Data System (ADS)

    Minadakis, George; Papadopoulos, Gerassimos A.

    2016-04-01

    Some of the largest earthquakes on the globe occur in the subduction zone of Chile. Therefore, it is of particular interest to investigate foreshock patterns preceding such earthquakes. Foreshocks in Chile were recognized as early as 1960. In fact, the giant (Mw9.5) earthquake of 22 May 1960, which was the largest ever instrumentally recorded, was preceded by 45 foreshocks in a time period of 33 h before the mainshock, while 250 aftershocks were recorded in a 33 h time period after the mainshock. Four foreshocks were bigger than magnitude 7.0, including a magnitude 7.9 on May 21 that caused severe damage in the Concepcion area. More recently, Brodsky and Lay (2014) and Bedford et al. (2015) reported on foreshock activity before the 1 April 2014 large earthquake (Mw8.2). However, 3-D foreshock patterns in space, time and size have not been studied in depth so far. Since such studies require good seismic catalogues to be available, we have investigated 3-D foreshock patterns only before the recent, very large mainshocks occurring on 27 February 2010 (Mw 8.8), 1 April 2014 (Mw8.2) and 16 September 2015 (Mw8.4). Although our analysis does not depend on an a priori definition of short-term foreshocks, our interest focuses on the short-term time frame, that is, the last 5-6 months before the mainshock. The analysis of the 2014 event showed an excellent foreshock sequence consisting of an early, weak foreshock stage lasting for about 1.8 months and a main, strong precursory foreshock stage that evolved in the last 18 days before the mainshock. During the strong foreshock period the seismicity concentrated around the mainshock epicenter in a critical area of about 65 km, mainly along the trench domain to the south of the mainshock epicenter. At the same time, the activity rate increased dramatically, the b-value dropped and the mean magnitude increased significantly, while the level of seismic energy released also increased. In view of these highly significant seismicity
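    The b-value drop cited for the strong foreshock stage is typically measured with Aki's (1965) maximum-likelihood estimator; a minimal sketch is below. The completeness magnitude and binning width are inputs the analyst must supply; the values here are placeholders, not the paper's.

    ```python
    import numpy as np

    def b_value_mle(mags, m_c=3.0, dm=0.1):
        """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c.

        The dm/2 term corrects for magnitudes reported in bins of width dm.
        """
        m = np.asarray(mags, dtype=float)
        m = m[m >= m_c]
        return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    ```

    Tracking this estimator in a sliding time window over the foreshock catalogue is one standard way to quantify the drop reported above.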

  1. Stress changes, focal mechanisms, and earthquake scaling laws for the 2000 dike at Miyakejima (Japan)

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Cesca, Simone; Aoki, Yosuke

    2015-06-01

    Faulting processes in volcanic areas result from a complex interaction of pressurized fluid-filled cracks and conduits with the host rock and the local and regional tectonic setting. Often, volcanic seismicity is difficult to decipher in terms of the physical processes involved, and there is a need for models relating the mechanics of volcanic sources to observations. Here we use focal mechanism data of the energetic swarm induced by the 2000 dike intrusion at Miyakejima (Izu Archipelago, Japan) to study the relation between the 3-D dike-induced stresses and the characteristics of the seismicity. We perform a clustering analysis on the focal mechanism (FM) solutions and relate them to the dike stress field and to the scaling relationships of the earthquakes. We find that the strike and rake angles of the FMs are strongly correlated and cluster in bands on a strike-rake plot. We suggest that this is consistent with optimally oriented faults according to the expected pattern of Coulomb stress changes. We calculate the frequency-size distribution of the clustered sets, finding that focal mechanisms with a large strike-slip component are consistent with the Gutenberg-Richter relation with a b value of about 1. Conversely, events with large normal faulting components deviate from the Gutenberg-Richter distribution with a marked roll-off on its right-hand tail, suggesting a lack of large-magnitude events (Mw > 5.5). This may result from the interplay of the limited thickness and lower rock strength of the layer of rock above the dike, where normal faulting is expected, and lower stress levels linked to the faulting style and low confining pressure.
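    A standard way to describe a right-hand roll-off of this kind is a tapered Gutenberg-Richter law with a corner moment; the form below (Kagan 2002) is given for orientation only and is not stated in the abstract.

    ```latex
    % Tapered Gutenberg-Richter survivor function in seismic moment M:
    % a pure power law of index \beta (\beta \approx 2b/3) with an exponential
    % taper at corner moment M_{cm}; M_t is the completeness threshold.
    \Phi(M) = \left(\frac{M_t}{M}\right)^{\beta}
              \exp\!\left(\frac{M_t - M}{M_{cm}}\right), \qquad M \ge M_t
    ```

    Fitting M_cm to the normal-faulting subset would formalize the observed deficit of events above Mw ~ 5.5.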

  2. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  3. Large-scale parametric survival analysis.

    PubMed

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
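    The paper's software is not reproduced here; the sketch below is a toy version of the cyclic coordinate descent idea for the simplest parametric case, an L2-regularized exponential hazard model (the model choice, penalty, and one-Newton-step-per-coordinate update are simplifications of the paper's variant).

    ```python
    import numpy as np

    def exp_survival_ccd(X, T, E, lam=1.0, n_cycles=50):
        """Fit h(t|x) = exp(x @ beta) by cyclic coordinate descent.

        Per-subject log-likelihood: E_i*(x_i @ beta) - T_i*exp(x_i @ beta);
        an L2 penalty (lam/2)*||beta||^2 is subtracted. One Newton step per
        coordinate per cycle.
        """
        n, p = X.shape
        beta = np.zeros(p)
        eta = X @ beta
        for _ in range(n_cycles):
            for j in range(p):
                mu = np.exp(eta)                        # exp(x @ beta)
                grad = X[:, j] @ (E - T * mu) - lam * beta[j]
                hess = -(X[:, j] ** 2) @ (T * mu) - lam
                step = grad / hess
                beta[j] -= step
                eta -= X[:, j] * step                   # keep eta in sync
        return beta

    # Toy usage with synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    T = rng.exponential(size=200)                       # durations
    E = (rng.random(200) < 0.8).astype(float)           # ~80% events observed
    print(exp_survival_ccd(X, T, E))
    ```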

  4. Efficient, large scale separation of coal macerals

    SciTech Connect

    Dyrkacz, G.R.; Bloomquist, C.A.A.

    1988-01-01

    The authors believe that the separation of macerals by continuous flow centrifugation offers a simple technique for the large scale separation of macerals. With relatively little cost (approximately $10K), it provides an opportunity for obtaining quite pure maceral fractions. Although they have not completely worked out all the nuances of this separation system, they believe that the problems they have indicated can be minimized to pose only minor inconvenience. It cannot be said that this system completely bypasses the disagreeable tedium or time involved in separating macerals, nor will it by itself overcome the mental inertia required to make maceral separation an accepted necessary fact in fundamental coal science. However, they find their particular brand of continuous flow centrifugation is considerably faster than sink/float separation, can provide a good quality product with even one separation cycle, and permits the handling of more material than a conventional sink/float centrifuge separation.

  5. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  6. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
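    The validation step mentioned, comparison with the overall finite difference method, is in essence the following generic cross-check of an analytic sensitivity against central differences; nothing in this sketch is MSC/NASTRAN-specific, and the toy response function is an assumption.

    ```python
    import numpy as np

    def check_sensitivity(f, grad, x, h=1e-6):
        """Max discrepancy between analytic gradient and central differences."""
        x = np.asarray(x, dtype=float)
        fd = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            fd[i] = (f(x + e) - f(x - e)) / (2.0 * h)   # central difference
        return np.max(np.abs(fd - grad(x)))

    # Toy "structural response" and its analytic design sensitivity
    f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
    g = lambda x: np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]])
    print(check_sensitivity(f, g, [1.0, 2.0]))          # ~1e-9 or smaller
    ```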

  7. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as ``nuclear pasta'' are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³ and proton fractions 0.05

  8. Modeling the Internet's large-scale topology

    PubMed Central

    Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László

    2002-01-01

    Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484
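    The link-placement rule identified, preferential attachment competing with a linear distance penalty, can be caricatured in a few lines: each new node attaches to node i with probability proportional to k_i/d_i. The uniform random layout below stands in for the fractal, population-driven node placement found in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def grow_network(n_nodes, positions):
        """Attach each new node to an existing node with P(i) ~ degree_i / dist_i."""
        edges = [(0, 1)]
        degree = {0: 1, 1: 1}
        for new in range(2, n_nodes):
            existing = np.arange(new)
            d = np.linalg.norm(positions[existing] - positions[new], axis=1)
            w = np.array([degree[i] for i in existing]) / np.maximum(d, 1e-9)
            target = int(rng.choice(existing, p=w / w.sum()))
            edges.append((new, target))
            degree[new] = 1
            degree[target] += 1
        return edges

    positions = rng.random((200, 2))   # stand-in for population-density layout
    print(len(grow_network(200, positions)))
    ```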

  9. Suppression of large earthquakes by stress shadows: A comparison of Coulomb and rate-and-state failure

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.; Simpson, Robert W.

    1998-10-01

    Stress shadows generated by California's two most recent great earthquakes (1857 Fort Tejon and 1906 San Francisco) substantially modified 19th and 20th century earthquake history in the Los Angeles basin and in the San Francisco Bay area. Simple Coulomb failure calculations, which assume that earthquakes can be modeled as static dislocations in an elastic half-space, have done quite well at approximating how long the stress shadows, or relaxing effects, should last and at predicting where subsequent large earthquakes will not occur. There has, however, been at least one apparent exception to the predictions of such simple models. The 1911 M>6.0 earthquake near Morgan Hill, California, occurred at a relaxed site on the Calaveras fault. We examine how the more complex rate-and-state friction formalism based on laboratory experiments might have allowed the 1911 earthquake. Rate-and-state time-to-failure calculations are consistent with the occurrence of the 1911 event just 5 years after 1906 if the Calaveras fault was already close to failure before the effects of 1906. We also examine the likelihood that the entire 78 years of relative quiet (only four M≥6 earthquakes) in the bay area after 1906 is consistent with rate-and-state assumptions, given that the previous 7 decades produced 18 M≥6 earthquakes. Combinations of rate-and-state variables can be found that are consistent with this pattern of large bay area earthquakes, assuming that the rate of earthquakes in the 7 decades before 1906 would have continued had 1906 not occurred. These results demonstrate that rate-and-state offers a consistent explanation for the 78-year quiescence and the 1911 anomaly, although they do not rule out several alternate explanations.
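    The rate-and-state calculations referred to are commonly built on Dieterich's (1994) seismicity-rate response to a stress step, reproduced below for orientation; the specific parameter choices of this study are not restated here.

    ```latex
    % Seismicity rate R(t) after a Coulomb stress step \Delta S applied to a
    % population of rate-and-state faults with background rate r, constitutive
    % parameter A\sigma, and background stressing rate \dot{\tau}:
    R(t) = \frac{r}{\exp\!\left(-\dfrac{\Delta S}{A\sigma}\right) e^{-t/t_a}
                  + 1 - e^{-t/t_a}}, \qquad t_a = \frac{A\sigma}{\dot{\tau}}
    ```

    A negative ΔS (a stress shadow) suppresses R toward zero for a duration of order t_a, which is the sense in which the 78-year quiescence is rationalized; whether a particular fault such as the Calaveras fails early then depends on how close to failure it already was.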

  10. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  11. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
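    The thesis algorithm itself (a reduced-Hessian, large-scale SQP built on MINOS) is not reproduced here, but the class of problem it targets can be illustrated with SciPy's small-scale SQP solver; the toy objective and constraints below are assumptions for demonstration only.

    ```python
    from scipy.optimize import minimize

    # Minimize a smooth nonlinear objective subject to one inequality and one
    # equality constraint, using SLSQP (sequential least-squares QP, a member
    # of the SQP family discussed above).
    objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
    constraints = (
        {"type": "ineq", "fun": lambda x: x[0] - 2.0 * x[1] + 2.0},  # g(x) >= 0
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},        # h(x) = 0
    )
    res = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
                   constraints=constraints)
    print(res.x, res.fun)   # optimum with both constraints active
    ```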

  12. Study of the Seismic Cycle of large Earthquakes in central Peru: Lima Region

    NASA Astrophysics Data System (ADS)

    Norabuena, E. O.; Quiroz, W.; Dixon, T. H.

    2009-12-01

    Since historical times, the Peruvian subduction zone has been the source of large and destructive earthquakes. The most damaging one occurred on 30 May 1970 offshore Peru's northern city of Chimbote, with a death toll of 70,000 people and several hundred million US dollars in property damage. More recently, three contiguous plate interface segments in southern Peru completed their seismic cycle, generating the 1996 Nazca (Mw 7.1), the 2001 Atico-Arequipa (Mw 8.4) and the 2007 Pisco (Mw 7.9) earthquakes. GPS measurements obtained between 1994 and 2001 by IGP-CIW and University of Miami-RSMAS on the central Andes of Peru and Bolivia were used to estimate their coseismic displacements and the late stage of interseismic strain accumulation. However, we focus our interest on the central Peru-Lima region, which, with its about 9,000,000 inhabitants, is located over a locked plate interface that has not broken with magnitude Mw 8 earthquakes since May 1940, September 1966 and October 1974. We use a network of 11 GPS monuments to estimate the interseismic velocity field, infer spatial variations of interplate coupling and its relation with the background seismicity of the region.

  13. Tsunamigenic Aftershocks From Large Strike-Slip Earthquakes: An Example From the November 16, 2000 Mw=8.0 New Ireland, Papua New Guinea, Earthquake

    NASA Astrophysics Data System (ADS)

    Geist, E.; Parsons, T.; Hirata, K.

    2001-12-01

    Two reverse mechanism earthquakes (M > 7) were triggered by the November 16, 2000 Mw=8.0 New Ireland (Papua New Guinea) left-lateral, strike-slip earthquake. The mainshock rupture initiated in the Bismarck Sea and propagated unilaterally to the southeast through the island of New Ireland and into the Solomon Sea. Although the mainshock caused a local seiche in the bay near Rabaul (New Britain) with a maximum runup of 0.9 m, the main tsunami observed on the south coast of New Britain, New Ireland, and Bougainville (maximum runup approximately 2.5-3 m) appears to have been caused by the Mw=7.4 aftershock 2.8 hours after the mainshock. It is unclear whether the second aftershock, Mw=7.6 on November 17, 2000 (40 hours after the mainshock), also generated a tsunami. Analysis and modeling of the available tsunami information can constrain the source parameters of the tsunamigenic aftershock(s) and further elucidate the triggering mechanism. Preliminary stress modeling indicates that because the first Mw=7.4 aftershock is located near the rupture termination of the mainshock, stress calculations are especially sensitive to the location of both ruptures and the assumed coefficient of friction. A similar example of a triggered tsunamigenic earthquake occurred following the 1812 Wrightwood (M ~7.5) earthquake in southern California, as discussed by Deng and Sykes (1996, GRL, p. 1155-1158). In this case, they show that strike-slip rupture on the San Andreas fault produced coseismic stress changes that triggered the Santa Barbara Channel earthquake (M ~7.1) 13 days later. The mechanism for the Santa Barbara Channel event appears to have been oblique thrust. The November 2000 New Ireland earthquake sequence provides an important analog for studying the potential for tsunamigenic aftershocks following large San Andreas earthquakes in southern California.

  14. Neotectonic architecture of Taiwan and its implications for future large earthquakes

    NASA Astrophysics Data System (ADS)

    Shyu, J. Bruce H.; Sieh, Kerry; Chen, Yue-Gau; Liu, Char-Shine

    2005-08-01

    The disastrous effects of the 1999 Chi-Chi earthquake in Taiwan demonstrated an urgent need for better knowledge of the island's potential earthquake sources. Toward this end, we have prepared a neotectonic map of Taiwan. The map and related cross sections are based upon structural and geomorphic expression of active faults and folds both in the field and on shaded relief maps prepared from a 40-m resolution digital elevation model, augmented by geodetic and seismologic data. The active tandem suturing and tandem disengagement of a volcanic arc and a continental sliver to and from the Eurasian continental margin have created two neotectonic belts in Taiwan. In the southern part of the orogen both belts are in the final stage of consuming oceanic crust. Collision and suturing occur in the middle part of both belts, and postcollisional collapse and extension dominate the island's northern and northeastern flanks. Both belts consist of several distinct neotectonic domains. Seven domains (Kaoping, Chiayi, Taichung, Miaoli, Hsinchu, Ilan, and Taipei) constitute the western belt, and four domains (Lutao-Lanyu, Taitung, Hualien, and Ryukyu) make up the eastern belt. Each domain is defined by a distinct suite of active structures. For example, the Chelungpu fault (source of the 1999 earthquake) and its western neighbor, the Changhua fault, are the principal components of the Taichung Domain, whereas both its neighboring domains, the Chiayi and Miaoli Domains, are dominated by major blind faults. In most of the domains the size of the principal active fault is large enough to produce future earthquakes with magnitudes in the mid-7 range.

  15. Low frequency (<1Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    We present the analysis of simulations at low frequency (<1 Hz) of historical and hypothetical earthquakes in Central Mexico, using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding of the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in the last decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates of the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a

  16. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and design of innovative structural responses. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  17. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipoles (low ℓ). Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ.

  18. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  19. Short- and long-term earthquake triggering along the strike-slip Kunlun fault, China: Insights gained from the Ms 8.1 Kunlun earthquake and other modern large earthquakes

    NASA Astrophysics Data System (ADS)

    Xie, Chaodi; Lei, Xinglin; Wu, Xiaoping; Hu, Xionglin

    2014-03-01

    Following the 2001 Ms8.1 Kunlun earthquake, earthquake records spanning more than 10 years, in addition to more than a century's records of large earthquakes, provide us with a chance to examine short-term (days to a few years) and long-term (years to decades) seismic triggering following a magnitude ~ 8 continental earthquake along a very long strike-slip fault, the Kunlun fault system, located in northern Tibet, China. Based on calculations of coseismic Coulomb stress changes (ΔCFS) from the mainshock and post-seismic stress changes due to viscoelastic stress relaxation in the lower crust and upper mantle, we examined the temporal evolution of seismic triggering. The ETAS (epidemic type aftershock sequence) model shows that the seismic rate in the aftershock area over ~ 10 years was higher than the background seismicity before the mainshock. Moreover, we discuss long-term (years to decades) triggering and the evolution of stress changes for the sequence of five large earthquakes of M ≥ 7.0 that ruptured the Kunlun fault system since 1937. All subsequent events of M ≥ 7.0 occurred in regions of positive accumulated ΔCFS. These results show that short-term (up to 200 days in our case) triggering along the strike-slip Kunlun fault is governed by coseismic stress changes, while long-term triggering is somewhat due to post-seismic Coulomb stress changes resulting from viscoelastic relaxation.
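    For reference, the ETAS model used to establish the elevated rate has the standard conditional intensity below (Ogata 1988); the abstract does not report the fitted parameter values, so none are given here.

    ```latex
    % ETAS conditional intensity: background rate \mu plus Omori-type
    % aftershock triggering from every prior event i with magnitude M_i.
    % K, \alpha, c, p are the fitted productivity and decay parameters and
    % M_c is the catalogue completeness magnitude.
    \lambda(t) = \mu + \sum_{i:\, t_i < t}
                 \frac{K\, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}}
    ```

    Comparing the fitted background and triggered rates after the mainshock with the pre-mainshock rate is what supports the statement that the aftershock-area seismicity stayed elevated for ~10 years.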

  20. Assessment of Ionospheric Anomaly Prior to the Large Earthquake: 2D and 3D Analysis in Space and Time for the 2011 Tohoku Earthquake (Mw9.0)

    NASA Astrophysics Data System (ADS)

    Hattori, Katsumi; Hirooka, Shinji; Han, Peng

    2016-04-01

    Ionospheric anomalies possibly associated with large earthquakes have been reported by many researchers. In this paper, Total Electron Content (TEC) and tomography analyses have been applied to investigate the spatial and temporal distributions of ionospheric electron density prior to the 2011 Off the Pacific Coast of Tohoku earthquake (Mw9.0). Results show significant TEC enhancements and an interesting three-dimensional structure prior to the main shock. As for temporal TEC changes, the TEC value increased remarkably 3-4 days before the earthquake, when the geomagnetic condition was relatively quiet. In addition, the abnormal TEC enhancement area remained stationary above Japan during this period. Tomographic results show that the three-dimensional distribution of electron density decreased around 250 km altitude above the epicenter (the peak is located just east of the epicenter) and increased over most of the region between 300 and 400 km.
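    Anomalies of this kind are often flagged against a sliding median with an interquartile-range band; a minimal sketch follows. The window length and the 1.5 factor are common choices in the TEC-precursor literature, not values reported in this paper.

    ```python
    import numpy as np

    def tec_anomalies(tec, window=15, k=1.5):
        """Flag samples exceeding median + k*IQR of the preceding window.

        tec: 1-D array of TEC values on a regular (e.g. daily) grid.
        """
        flags = np.zeros(len(tec), dtype=bool)
        for i in range(window, len(tec)):
            past = tec[i - window:i]
            med = np.median(past)
            iqr = np.percentile(past, 75) - np.percentile(past, 25)
            flags[i] = tec[i] > med + k * iqr   # upper-bound exceedance
        return flags
    ```

    Geomagnetic indices (e.g. Dst, Kp) are checked over the same interval, as in the abstract, so that storm-time enhancements are not mistaken for precursors.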

  1. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea by cross-scale thermomechanical models of the seismic cycle that employ elasticity, mineral-physics constrained non-linear transient viscous rheology and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time-scale, including a narrow subduction channel with low static friction, for two settings, similar to Southern Chile in the region of the great Chile Earthquake of 1960 and Japan in the region of the Tohoku Earthquake of 2011. We next introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models start to generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformations of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to between minutes and 5 years during postseismic and interseismic processes. We show that for the case of the Chile earthquake visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations for the time range of one day to four years. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can best be distinguished.
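    The adaptive stepping described, roughly 40 s steps coseismically growing to multi-year steps interseismically, can be caricatured as a slip-rate-controlled step selector. The rule below is purely illustrative; the authors' actual criterion is not given in the abstract.

    ```python
    def adaptive_dt(slip_rate, v_seismic=1e-2, dt_co=40.0,
                    dt_max=5 * 3.156e7):
        """Candidate time step (s): ~40 s while slip is seismic (~cm/s),
        growing inversely with slip rate up to ~5 yr between events."""
        dt = dt_co * v_seismic / max(slip_rate, 1e-20)
        return min(max(dt, dt_co), dt_max)
    ```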

  2. On the generation of large amplitude spiky solitons by ultralow frequency earthquake emission in the Van Allen radiation belt

    SciTech Connect

    Mofiz, U. A.

    2006-08-15

    The parametric coupling between earthquake-emitted circularly polarized electromagnetic radiation and ponderomotively driven ion-acoustic perturbations in the Van Allen radiation belt is considered. A cubic nonlinear Schroedinger equation for the modulated radiation envelope is derived and then solved analytically. For ultralow frequency earthquake emissions, large amplitude spiky supersonic bright solitons or subsonic dark solitons are found to be generated in the Van Allen radiation belt, the detection of which could serve as a tool for predicting a massive earthquake that may follow.
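    The cubic nonlinear Schroedinger equation referred to has the standard form below for the envelope E(x, t); bright (spiky) or dark solitons emerge according to the sign of the product of the dispersion and nonlinearity coefficients. The symbols P and Q are generic names, not necessarily the paper's notation.

    ```latex
    % Cubic NLS for the modulated radiation envelope E(x,t).
    % Bright solitons require P Q > 0; dark solitons require P Q < 0.
    i\,\frac{\partial E}{\partial t}
      + P\,\frac{\partial^2 E}{\partial x^2}
      + Q\,|E|^2 E = 0
    ```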

  3. Source process of large (M~7) earthquakes in Japan Sea estimated from seismic waveforms and tsunami simulations

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Harada, T.; Satake, K.

    2014-12-01

    Inversion of teleseismic waveforms yielded fault parameters of four M~7 earthquakes that occurred between 1963 and 1983 in the Japan Sea. Tsunami waveforms were simulated based on those parameters and compared to the waveforms observed on tide gauges. The eastern margin of the Japan Sea has been considered a nascent plate boundary between the Eurasian and North American plates but not a typical subduction zone; hence the maximum magnitude (M<8) of earthquakes is smaller than those in the Pacific Ocean. Nevertheless, several large earthquakes with M > 7.5 in the last century caused seismic and tsunami damage, such as the 2007 Chuetsu-oki (Mw 6.6), 2007 Noto (Mw 6.7), 1993 South off Hokkaido (Mw 7.7), 1983 Japan Sea (Mw 7.7), 1964 Niigata (Ms 7.5), and 1940 Shakotan-oki (Mw 7.5) earthquakes. Detailed studies of the source process were performed for these earthquakes. Smaller (M~7) earthquakes also cause seismic and tsunami damage if their hypocenters are near land. However, there are few analyses for earthquakes around M7. Therefore, we study the characteristics of M~7 earthquakes in the Japan Sea. The earthquakes we studied are the 1983 West off Aomori (MJMA 7.1), 1971 West off Sakhalin (MJMA 6.9), 1964 off Oga peninsula (MJMA 6.9), and 1963 Offshore Cape Echizen (MJMA 6.9) earthquakes. From the teleseismic waveform inversions, reverse-fault mechanisms were obtained for all but the 1963 earthquake, which has a strike-slip mechanism. The fault areas are 900 km2, 2800 km2, 3600 km2, and 3600 km2, respectively. Tsunami numerical computations were made from the source models obtained by the teleseismic inversions. Tsunamis from the 1983 earthquake were recorded at 32 tide gauge stations along the Japan Sea. Amplitudes of the calculated tsunami waveforms are much smaller than the observations. For the 1971 earthquake, amplitudes of the calculated tsunami waveforms are also smaller than the observations at 18 tide gauge stations. For the 1964 earthquake, the amplitudes are

  4. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  5. I. Rupture properties of large subduction earthquakes. II. Broadband upper mantle structure of western North America

    NASA Astrophysics Data System (ADS)

    Melbourne, Timothy Ian

    This thesis contains two studies, one of which employs geodetic data bearing on large subduction earthquakes to infer complexity of rupture duration, and the other of which is a high-frequency seismological study of the upper mantle discontinuity structure under western North America and the East Pacific Rise. In the first part, we present Global Positioning System and tide gauge data which record the co-seismic deformation that accompanied the 1995 Mw8.0 Jalisco event offshore central Mexico, the 1994 Mw7.5 Sanriku event offshore Northern Honshu, Japan, and the 1995 Mw8.1 Antofagasta earthquake offshore Northern Chile. In two of the three cases we find that the mainshocks were followed by significant amounts of rapid, post-seismic deformation which is best and most easily explained by continued slip near the co-seismic rupture patch. This is the first documented case of rapid slip migration following a large earthquake, and is pertinent to earthquake prediction based on precursory deformation. As the three GPS data sets represent the best observations of large subduction earthquakes to date and two of them show significant amounts of aseismic energy release, they strongly suggest silent faulting may be common in certain types of subduction zones. This, in turn, bears on estimates of global moment release, seismic coupling, and our understanding of the natural hazards associated with convergent margins. The second part of this dissertation utilizes high-frequency body waves to infer the upper mantle structure of western North America and the East Pacific Rise. An uncharacteristically large Mw5.9 earthquake located in Western Texas provided a vivid topside reflection off the 410 km velocity discontinuity ("410"), which we model to infer the fine details of this structure. We find that, contrary to conventional wisdom, the 410 is not sharp, and our results help reconcile seismic observations of 410 structure with laboratory predictions. By analyzing differences between our

  6. Large-scale tides in general relativity

    NASA Astrophysics Data System (ADS)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  7. Large-scale autostereoscopic outdoor display

    NASA Astrophysics Data System (ADS)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  8. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  9. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  10. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  11. Multi-Scale Structure and Earthquake Properties in the San Jacinto Fault Zone Area

    NASA Astrophysics Data System (ADS)

    Ben-Zion, Y.

    2014-12-01

    I review multi-scale multi-signal seismological results on structure and earthquake properties within and around the San Jacinto Fault Zone (SJFZ) in southern California. The results are based on data of the southern California and ANZA networks covering scales from a few km to over 100 km, additional near-fault seismometers and linear arrays with instrument spacing 25-50 m that cross the SJFZ at several locations, and a dense rectangular array with >1100 vertical-component nodes separated by 10-30 m centered on the fault. The structural studies utilize earthquake data to image the seismogenic sections and ambient noise to image the shallower structures. The earthquake studies use waveform inversions and additional time domain and spectral methods. We observe pronounced damage regions with low seismic velocities and anomalous Vp/Vs ratios around the fault, and clear velocity contrasts across various sections. The damage zones and velocity contrasts produce fault zone trapped and head waves at various locations, along with time delays, anisotropy and other signals. The damage zones follow a flower-shape with depth; in places with velocity contrast they are offset to the stiffer side at depth as expected for bimaterial ruptures with persistent propagation direction. Analysis of PGV and PGA indicates clear persistent directivity at given fault sections and overall motion amplification within several km around the fault. Clear temporal changes of velocities, probably involving primarily the shallow material, are observed in response to seasonal, earthquake and other loadings. Full source tensor properties of M>4 earthquakes in the complex trifurcation area include statistically-robust small isotropic component, likely reflecting dynamic generation of rock damage in the source volumes. The dense fault zone instruments record seismic "noise" at frequencies >200 Hz that can be used for imaging and monitoring the shallow material with high space and time details, and

  12. Relationship between large slip area and static stress drop of aftershocks of inland earthquake :Example of the 2007 Noto Hanto earthquake

    NASA Astrophysics Data System (ADS)

    Urano, S.; Hiramatsu, Y.; Yamada, T.

    2013-12-01

    The 2007 Noto Hanto earthquake (MJMA 6.9; hereafter referred to as the main shock) occurred at 0:41 (UTC) on March 25, 2007 at a depth of 11 km beneath the west coast of the Noto Peninsula, central Japan. The dominant slip of the main shock was on a reverse fault with a right-lateral component, and the large slip area extended from the hypocenter to the shallow part of the fault plane (Horikawa, 2008). The aftershocks are distributed not only in the small slip area but also in the large slip area (Hiramatsu et al., 2011). In this study, we estimate the static stress drops of aftershocks on the fault plane of the main shock. We discuss the relationship between the static stress drops of the aftershocks and the large slip area of the main shock by investigating the spatial pattern of the static stress drop values. We use the waveform data obtained by the group for the joint aftershock observations of the 2007 Noto Hanto Earthquake (Sakai et al., 2007); the sampling frequency of the waveform data is 100 Hz or 200 Hz. Focusing on similar aftershocks reported by Hiramatsu et al. (2011), we analyze static stress drops with the empirical Green's function (EGF) method (Hough, 1997) as follows. The smallest earthquake (MJMA≥2.0) of each group of similar earthquakes is taken as the EGF earthquake, and the largest (MJMA≥2.5) as the target earthquake. We then deconvolve the waveform of the earthquake of interest with that of the EGF earthquake at each station and obtain the spectral ratio of the sources, which cancels the propagation effects (path and site effects). Following the procedure of Yamada et al. (2010), we finally estimate static stress drops for P- and S-waves from the corner frequencies of the spectral ratio, using the model of Madariaga (1976). The estimated average static stress drop is 8.2±1.3 MPa (8.6±2.2 MPa for P-waves and 7.8±1.3 MPa for S-waves). These values approximately coincide with the static stress drops of aftershocks of other
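
    The corner-frequency-to-stress-drop step described here can be sketched as follows; the Madariaga (1976) circular-crack constants are standard, but the shear-wave speed and the example inputs are generic values, not those of the study.

```python
import numpy as np

# Sketch of the corner-frequency-to-stress-drop step, assuming the
# Madariaga (1976) circular-crack constants; BETA and the example inputs
# are generic values, not those of the study.
BETA = 3500.0                       # near-source S-wave speed (m/s), assumed
K_PHASE = {"P": 0.32, "S": 0.21}    # Madariaga constants for P and S corners

def static_stress_drop(m0, fc, phase="S"):
    """Static stress drop (Pa) from seismic moment M0 (N m) and corner
    frequency fc (Hz), via source radius r = k * beta / fc and
    delta_sigma = (7/16) * M0 / r**3."""
    r = K_PHASE[phase] * BETA / fc
    return (7.0 / 16.0) * m0 / r**3

# Example: a magnitude ~2.5 aftershock (M0 ~ 7.9e12 N m), 10 Hz S-wave corner
print(static_stress_drop(7.9e12, 10.0, "S") / 1e6, "MPa")  # ~8.7 MPa
```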

  13. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films made by doctor blade coating are covered in this study. The first topic describes the invention of large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  14. The Richter scale: its development and use for determining earthquake source parameters

    USGS Publications Warehouse

    Boore, D.M.

    1989-01-01

    The ML scale, introduced by Richter in 1935, is the antecedent of every magnitude scale in use today. The scale is defined such that a magnitude-3 earthquake recorded on a Wood-Anderson torsion seismometer at a distance of 100 km would write a record with a peak excursion of 1 mm. To be useful, some means are needed to correct recordings to the standard distance of 100 km. Richter provides a table of correction values, which he terms -log Ao, the latest of which is contained in his 1958 textbook. A new analysis of over 9000 readings from almost 1000 earthquakes in the southern California region was recently completed to redetermine the -log Ao values. Although some systematic differences were found between this analysis and Richter's values (such that using Richter's values would lead to under- and overestimates of ML at distances less than 40 km and greater than 200 km, respectively), the accuracy of his values is remarkable in view of the small number of data used in their determination. Richter's corrections for the distance attenuation of the peak amplitudes on Wood-Anderson seismographs apply only to the southern California region, of course, and should not be used in other areas without first checking to make sure that they are applicable. Often in the past this has not been done, but recently a number of papers have been published determining the corrections for other areas. If there are significant differences in the attenuation within 100 km between regions, then the definition of the magnitude at 100 km could lead to difficulty in comparing the sizes of earthquakes in various parts of the world. To alleviate this, it is proposed that the scale be defined such that a magnitude 3 corresponds to 10 mm of motion at 17 km. This is consistent both with Richter's definition of ML at 100 km and with the newly determined distance corrections in the southern California region. Aside from the obvious (and original) use as a means of cataloguing earthquakes according
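
    A minimal sketch of the ML definition discussed here: the magnitude is the log of the peak Wood-Anderson amplitude plus a distance correction -log A0(R) anchored at 100 km. The attenuation coefficients below are invented for illustration; they are neither Richter's table nor any published -log A0 curve.

```python
import numpy as np

# Minimal sketch of ML: log peak Wood-Anderson amplitude plus a distance
# correction -log A0(R), anchored so that 1 mm at 100 km gives ML 3.
# The attenuation coefficients n and k are invented for illustration.

def log_a0(r_km, n=1.1, k=0.0030):
    """Illustrative -log A0: geometric spreading plus anelastic attenuation,
    normalized to the 100 km anchor (-log A0(100 km) = 3)."""
    return n * np.log10(r_km / 100.0) + k * (r_km - 100.0) + 3.0

def ml(peak_amp_mm, r_km):
    return np.log10(peak_amp_mm) + log_a0(r_km)

print(ml(1.0, 100.0))   # 3.0 by construction (the 100 km anchor)
print(ml(10.0, 17.0))   # near 3 under the proposed 10 mm at 17 km anchor
```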

  15. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU to accelerate computations traditionally conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions, and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test in which the model size (number of grid points) is increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
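
    The speed-up figures quoted here follow from the usual strong/weak scaling definitions; a small sketch, with hypothetical wall-clock times, is given below.

```python
# Sketch of the scaling metrics quoted above; the wall-clock times are
# hypothetical, only the formulas matter.

def strong_scaling(t_ref, t_n, n_ref, n):
    """Speed-up and parallel efficiency for a fixed-size (strong) test."""
    speedup = t_ref / t_n
    efficiency = speedup / (n / n_ref)
    return speedup, efficiency

def weak_scaling_efficiency(t_ref, t_n):
    """In a weak test the problem grows with n, so the ideal time is flat."""
    return t_ref / t_n

t1 = 100.0  # hypothetical single-GPU wall time (s) for the 22-million-grid run
for n_gpu, t in [(4, t1 / 3.2), (16, t1 / 7.3)]:
    s, e = strong_scaling(t1, t, 1, n_gpu)
    print(f"{n_gpu:2d} GPUs: speed-up {s:.1f}x, efficiency {e:.0%}")
```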

  16. Nowcasting earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Grant Ludwig, L.; Luginbuhl, M.; Gong, G.

    2016-11-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 large earthquake cycles in the region. We then compute the earthquake potential score (EPS), defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) of small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)). EPS is therefore the current level of hazard and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.
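
    A minimal sketch of the EPS computation described here, using a synthetic catalog of small-earthquake counts between successive large events:

```python
import numpy as np

# Sketch of the EPS computation: the empirical cumulative probability
# P(n < n(t)) of the small-earthquake count since the last large event,
# relative to the counts seen over past large-earthquake cycles.
# The synthetic catalog below is illustrative only.

def eps(counts_past_cycles, n_current):
    """Fraction of past cycles whose small-event count stayed below n_current."""
    return float(np.mean(np.asarray(counts_past_cycles) < n_current))

rng = np.random.default_rng(0)
past_counts = rng.poisson(120, size=25)   # counts between 25 past large events

print(f"EPS = {eps(past_counts, n_current=135):.0%}")
```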

  17. Hidden Earthquakes.

    ERIC Educational Resources Information Center

    Stein, Ross S.; Yeats, Robert S.

    1989-01-01

    Points out that large earthquakes can take place not only on faults that cut the earth's surface but also on blind faults under folded terrain. Describes four examples of fold earthquakes. Discusses the fold earthquakes using several diagrams and pictures. (YP)

  18. Seismic Strong Motion Array Project (SSMAP) to Record Future Large Earthquakes in the Nicoya Peninsula area, Costa Rica

    NASA Astrophysics Data System (ADS)

    Simila, G.; Lafromboise, E.; McNally, K.; Quintereo, R.; Segura, J.

    2007-12-01

    The seismic strong motion array project (SSMAP) for the Nicoya Peninsula in northwestern Costa Rica is composed of 10 - 13 sites including Geotech A900/A800 accelerographs (three-component), Ref-Teks (three-component velocity), and Kinemetric Episensors. The main objectives of the array are to: 1) record and locate strong subduction zone mainshocks [and foreshocks, "early aftershocks", and preshocks] in Nicoya Peninsula, at the entrance of the Nicoya Gulf, and in the Papagayo Gulf regions of Costa Rica, and 2) record and locate any moderate to strong upper plate earthquakes triggered by a large subduction zone earthquake in the above regions. Our digital accelerograph array has been deployed as part of our ongoing research on large earthquakes in conjunction with the Earthquake and Volcano Observatory (OVSICORI) at the Universidad Nacional in Costa Rica. The country-wide seismographic network has been operating continuously since the 1980's, with the first earthquake bulletin published more than 20 years ago, in 1984. The recording of seismicity and strong motion data for large earthquakes along the Middle America Trench (MAT) has been a major research project priority over these years, and this network spans nearly half the time of a "repeat cycle" (~ 50 years) for large (Ms ~ 7.5-7.7) earthquakes beneath the Nicoya Peninsula, with the last event in 1950. Our long-time co-collaborators include the seismology group OVSICORI, with coordination for this project by Dr. Ronnie Quintero and Mr. Juan Segura. The major goal of our project is to contribute unique scientific information pertaining to a large subduction zone earthquake and its related seismic activity when the next large earthquake occurs in Nicoya. We are now collecting a database of strong motion records for moderate-sized events to document this last stage prior to the next large earthquake. A recent event (08/18/06; M=4.3) located 20 km northwest of Samara was recorded by two stations (Playa Carrillo

  19. Fracture energies at the rupture nucleation points of large strike-slip earthquakes on the Xianshuihe fault, southwestern China

    NASA Astrophysics Data System (ADS)

    Xie, Yuqing; Kato, Naoyuki

    2017-02-01

    Earthquake cycles along a pure strike-slip fault were numerically simulated using a rate- and state-dependent friction law to obtain the fracture energies at the rupture nucleation points. In the model, deep aseismic slip is imposed on the fault, which generates recurrent earthquakes in the shallower velocity-weakening friction region. The fracture energy at the rupture nucleation point for each simulated earthquake was calculated using the relation between shear stress and slip, which indicates slip-weakening behavior. The simulation results show that the relation between the fracture energy at the nucleation point and other source parameters is consistent with a theoretical approach based on fracture mechanics, in that an earthquake occurs when the energy release rate at the tip of the aseismic slip zone first exceeds the fracture energy. Because the energy release rate is proportional to the square of the amount of deep aseismic slip during the interseismic period, which can be estimated from the recurrence interval of earthquakes and the deep aseismic slip rate, the fracture energies for strike-slip earthquakes can be calculated. Using this result, we estimated the fracture energies at the nucleation points of large earthquakes on selected segments of the Xianshuihe fault, southwestern China. We find that the estimated fracture energies at the rupture nucleation points are generally smaller than the values of average fracture energy for developed ruptures as estimated in previous studies, suggesting that the fracture energy tends to increase with the rupture propagation distance.
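
    The fracture-energy calculation described here amounts to integrating the excess of shear stress over its residual level against slip. A sketch under a simple linear slip-weakening assumption (not the simulation's actual friction law):

```python
import numpy as np

# Sketch of the fracture-energy calculation: integrate the excess of shear
# stress over its residual level against slip, G = int (tau(u) - tau_res) du.
# The linear slip-weakening curve below is an illustrative stand-in for the
# simulated stress-slip records.

def fracture_energy(slip, stress):
    """Fracture energy (J/m^2) from slip (m) and shear stress (Pa) arrays."""
    excess = stress - stress[-1]          # stress above the residual level
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(slip)))

# Linear weakening from 10 MPa to 6 MPa over Dc = 0.1 m -> G = 0.5*4e6*0.1
u = np.linspace(0.0, 0.1, 101)
tau = 10e6 - (4e6 / 0.1) * u
print(fracture_energy(u, tau), "J/m^2")   # ~2.0e5
```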

  20. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  1. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom, convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
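
    The ILU-preconditioned GMRES combination discussed here can be sketched with standard sparse-solver tools; below, a 2-D Poisson matrix stands in for the far harder Stokes system, so this only illustrates the solver setup, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch of ILU-preconditioned GMRES on a stand-in problem (2-D Poisson).

n = 100
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()  # 5-point Laplacian
b = np.ones(A.shape[0])

ilu = spla.spilu(A, fill_factor=10)            # more fill-in: better, costlier
M = spla.LinearOperator(A.shape, ilu.solve)    # preconditioner as an operator

x, info = spla.gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"GMRES flag {info}",
      "| residual:", np.linalg.norm(b - A @ x))
```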

  2. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoglu, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  3. Synchronization of coupled large-scale Boolean networks

    NASA Astrophysics Data System (ADS)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
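
    A toy sketch of the complete-synchronization question studied here: two copies of a small Boolean network, with network 2 driven by a node of network 1, checked from every pair of initial states. The 3-node rules and coupling are invented for illustration and are unrelated to the paper's aggregation algorithm.

```python
import itertools

# Toy complete-synchronization check for two coupled 3-node Boolean networks.
# Rules and coupling are invented for illustration.

def step(x):
    a, b, c = x
    return (b and c, not a, a or b)       # x' = f(x) for the 3-node network

def coupled_step(x, y):
    x2 = step(x)
    a, b, c = y
    y2 = (x2[0], not a, a or b)           # y's first node is overwritten by x's
    return x2, y2

def synchronizes(horizon=20):
    """True if every initial pair of states ends up with identical states."""
    states = list(itertools.product([False, True], repeat=3))
    for x0 in states:
        for y0 in states:
            x, y = x0, y0
            for _ in range(horizon):
                x, y = coupled_step(x, y)
            if x != y:
                return False
    return True

print("complete synchronization:", synchronizes())
```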

  4. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  5. Three time scales of earthquake clustering inferred from in-situ 36Cl cosmogenic dating on the Velino-Magnola fault (Central Italy)

    NASA Astrophysics Data System (ADS)

    Schlagenhauf, A.; Manighetti, I.; Benedetti, L.; Gaudemer, Y.; Malavieille, J.; Finkel, R. C.; Pou, K.

    2010-12-01

    Using in-situ 36Cl cosmogenic exposure dating, we determine the earthquake slip release pattern over the last ~ 14 kyrs along one of the major active normal fault systems in Central Italy. The ~ 40 km-long Velino-Magnola fault (VMF) is located ~ 20 km SW of the epicenter of the devastating April 2009 l'Aquila earthquake. We sampled the VMF at five well-separated sites along its length, and modeled the 36Cl concentrations measured in the 400 samples (Schlagenhauf et al. 2010). We find that the fault has broken in large earthquakes which clustered at three different time scales: monthly, centennial and millennial. More precisely, the fault sustained phases of intense seismic activity, separated by ~ 3 kyr-long periods of relative quiescence. The phases of strong activity lasted 3-4 kyrs (millennial scale) and included 3-4 'rupture events' that repeated every 0.5-1 kyr (centennial scale). Each of these 'rupture events' was likely a sequence of a few large earthquakes cascading in a very short time, a few months at most (monthly scale), to eventually break the entire VMF. Each earthquake apparently broke a 10-20 km section of the fault and produced maximum surface displacements of 2-3.5 meters. The fault seems to enter a phase of intense activity when the accumulated strain reaches a specific threshold. Based on this observation, the Velino-Magnola fault seems presently in a stage of relative quiescence. Yet, it may soon re-enter a phase of paroxysmal seismic activity. If its forthcoming earthquakes are similar to those we have documented, several may occur in cascade over a short time, each with a magnitude up to 6.5-6.9. Seismic hazard is thus high in the Lazio-Abruzzo region, especially in the Fucino area. References: Schlagenhauf A., Y. Gaudemer, L. Benedetti, I. Manighetti, L. Palumbo, I. Schimmelpfennig, R. Finkel, and K. Pou (2010). Using in-situ Chlorine-36 cosmonuclide to recover past earthquake histories on limestone normal fault scarps: A

  6. Resolution and Trade-offs in Finite Fault Inversions for Large Earthquakes Using Teleseismic Signals (Invited)

    NASA Astrophysics Data System (ADS)

    Lay, T.; Ammon, C. J.

    2010-12-01

    An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics into joint inversions. Some differences can also be attributed to choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed, but not fully described, with single model solutions being posted or published, with limited insight into solution confidence being conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking and some aspects of that challenge will be addressed.

  7. Characterizing Mega-Earthquake Related Tsunami on Subduction Zones without Large Historical Events

    NASA Astrophysics Data System (ADS)

    Williams, C. R.; Lee, R.; Astill, S.; Farahani, R.; Wilson, P. S.; Mohammed, F.

    2014-12-01

    Due to recent large tsunami events (e.g., Chile 2010 and Japan 2011), the insurance industry is very aware of the importance of managing its exposure to tsunami risk. There are currently few tools available to help establish policies for managing and pricing tsunami risk globally. As a starting point and to help address this issue, Risk Management Solutions Inc. (RMS) is developing a global suite of tsunami inundation footprints. This dataset will include representations of historical events as well as a series of M9 scenarios on subduction zones that have not historically generated mega-earthquakes. The latter set is included to address concerns about the completeness of the historical record for mega-earthquakes. This concern stems from the fact that the Tohoku, Japan earthquake was considerably larger than had been observed in the historical record. Characterizing the source and rupture pattern for subduction zones without historical events is a poorly constrained process. In many cases, the subduction zones can be segmented based on changes in the characteristics of the subducting slab or major ridge systems. For this project, the unit sources from the NOAA propagation database are utilized to leverage the basin-wide modeling included in this dataset. The length of the rupture is characterized based on subduction zone segmentation, and the slip per unit source can be determined based on the event magnitude (i.e., M9) and moment balancing. As these events have not occurred historically, there is little to constrain the slip distribution. Sensitivity tests on the potential rupture pattern have been undertaken comparing uniform slip to higher shallow slip and tapered slip models. Subduction zones examined include the Makran Trench, the Lesser Antilles and the Hikurangi Trench. The ultimate goal is to create a series of tsunami footprints to help insurers understand their exposures at risk to tsunami inundation around the world.
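
    The moment-balancing step described here reduces to dividing the scalar moment implied by the target magnitude by rigidity times rupture area. A sketch with generic subduction-zone values (not the RMS model's parameters):

```python
# Moment balancing: average slip implied by a target magnitude over a
# segmented rupture area. Rigidity and segment dimensions are generic
# assumptions, not the RMS model's values.

MU = 4.0e10                              # rigidity (Pa), typical for subduction

def moment_from_magnitude(mw):
    """Scalar seismic moment M0 (N m) from moment magnitude Mw."""
    return 10.0 ** (1.5 * mw + 9.05)

def average_slip(mw, length_km, width_km, mu=MU):
    """Average slip (m) from M0 = mu * area * slip."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    return moment_from_magnitude(mw) / (mu * area_m2)

# An M9 rupture spanning a 500 km x 150 km block of unit sources:
print(f"{average_slip(9.0, 500.0, 150.0):.1f} m of average slip")  # ~11.8 m
```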

  8. Seismic imaging of structural heterogeneity in Earth's mantle: evidence for large-scale mantle flow.

    PubMed

    Ritsema, J; Van Heijst, H J

    2000-01-01

    Systematic analyses of earthquake-generated seismic waves have resulted in models of three-dimensional elastic wavespeed structure in Earth's mantle. This paper describes the development and the dominant characteristics of one of the most recently developed models. The model is based on seismic wave travel times and wave shapes from over 100,000 ground motion recordings of earthquakes that occurred between 1980 and 1998. It shows signatures of plate tectonic processes to a depth of about 1,200 km in the mantle, and it demonstrates the presence of large-scale structure throughout the lower 2,000 km of the mantle. Seismological analyses make it increasingly convincing that the geologic processes shaping Earth's surface are intimately linked to physical processes in the deep mantle.

  9. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N^2) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N^2) for the 2-point correlation, O(N^3) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N^2). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N^2). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N^2). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N^2). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
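
    As a concrete example of the first subroutine, the AllNN query can be answered with a space-partitioning tree rather than the naive O(N^2) double loop; the single-tree sketch below (the paper's multi-tree methods go further by traversing query and reference trees together) uses synthetic data.

```python
import numpy as np
from scipy.spatial import cKDTree

# AllNN with a kd-tree instead of the naive O(N^2) double loop. Synthetic data.

rng = np.random.default_rng(1)
queries = rng.uniform(size=(100_000, 3))   # "query" catalog
refs = rng.uniform(size=(100_000, 3))      # "reference" catalog

tree = cKDTree(refs)                       # O(N log N) build
dist, idx = tree.query(queries, k=5)       # 5 nearest references per query

print("first query point's neighbours:", idx[0])
print("at distances:", np.round(dist[0], 4))
```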

  10. Structural Architecture of the Western Transverse Ranges and Potential for Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Levy, Y.; Rockwell, T. K.; Driscoll, N. W.; Shaw, J. H.; Kent, G. M.; Ucarkus, G.

    2015-12-01

    Understanding the subsurface structure of the Western Transverse Ranges (WTR) is critical to assess the seismic potential of the large thrust faults comprising this fold-and-thrust belt. Several models have been advanced over the years, building on new data and understandings of thrust belt architecture, but none of these efforts have incorporated the full range of data, including style and rates of late Quaternary deformation in conjunction with surface geology, sub-surface well data and offshore seismic data. In our models, we suggest that the nearly continuous backbone with continuous stratigraphy of the Santa Ynez Mountains is explained by a large anticlinorium over a deep structural ramp, and that the current thrust front is defined by the southward-vergent Pitas Point-Ventura fault. The Ventura Avenue anticline and trend is an actively deforming fault propagation fold over the partially blind Pitas Point-Ventura fault. Details of how this fault is resolved to the surface are not well constrained, but any deformation model must account for the several back-thrusts that ride in the hanging wall of the thrust sheet, as well as the localized subsidence in Carpinteria and offshore Santa Barbara. Our preliminary starting model is a modification of a recently published model that invokes ramp-flat structure, with a deep ramp under the Santa Ynez Mountains, a shallower "flat" with considerable complexity in the hanging wall, and a frontal ramp comprising the San Cayetano and Pitas Point thrusts. With the inferred deep ramp under the Santa Ynez Range, this model implies that large earthquakes may extend the entire length of the anticlinorium from Point Conception to the eastern Ventura Basin, suggesting that the potential for a large earthquake is significantly higher than previously assumed.

  11. Seismic Strong Motion Array Project (SSMAP) to Record Future Large Earthquakes in the Nicoya Peninsula area, Costa Rica

    NASA Astrophysics Data System (ADS)

    Simila, G.; McNally, K.; Quintero, R.; Segura, J.

    2006-12-01

    The seismic strong motion array project (SSMAP) for the Nicoya Peninsula in northwestern Costa Rica is composed of 10 - 13 sites including Geotech A900/A800 accelerographs (three-component), Ref-Teks (three-component velocity), and Kinemetric Episensors. The main objectives of the array are to: 1) record and locate strong subduction zone mainshocks [and foreshocks, "early aftershocks", and preshocks] in Nicoya Peninsula, at the entrance of the Nicoya Gulf, and in the Papagayo Gulf regions of Costa Rica, and 2) record and locate any moderate to strong upper plate earthquakes triggered by a large subduction zone earthquake in the above regions. Our digital accelerograph array has been deployed as part of our ongoing research on large earthquakes in conjunction with the Earthquake and Volcano Observatory (OVSICORI) at the Universidad Nacional in Costa Rica. The country-wide seismographic network has been operating continuously since the 1980's, with the first earthquake bulletin published more than 20 years ago, in 1984. The recording of seismicity and strong motion data for large earthquakes along the Middle America Trench (MAT) has been a major research project priority over these years, and this network spans nearly half the time of a "repeat cycle" (~ 50 years) for large (Ms ~ 7.5-7.7) earthquakes beneath the Nicoya Peninsula, with the last event in 1950. Our long-time co-collaborators include the seismology group OVSICORI, with coordination for this project by Dr. Ronnie Quintero and Mr. Juan Segura. Numerous international investigators are also studying this region with GPS and seismic stations (US, Japan, Germany, Switzerland, etc.). Also, there are various strong motion instruments operated by local engineers, for building purposes and mainly concentrated in the population centers of the Central Valley. The major goal of our project is to contribute unique scientific information pertaining to a large subduction zone earthquake and its related seismic activity when

  12. Large Historical Tsunamigenic Earthquakes in Italy: The Neglected Tsunami Research Point of View

    NASA Astrophysics Data System (ADS)

    Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.

    2015-12-01

    It is known that tsunamis are rather rare events, especially when compared to earthquakes, and the Italian coasts are no exception. Nonetheless, striking evidence is that 6 out of the 10 earthquakes that occurred in Italy in the last thousand years with equivalent moment magnitude equal to or larger than 7 were accompanied by destructive or heavily damaging tsunamis. If we extend the lower limit of the equivalent moment magnitude down to 6.5, the percentage decreases (to around 40%) but is still significant. Famous events like those of 30 July 1627 in Gargano, 11 January 1693 in eastern Sicily, and 28 December 1908 in the Messina Straits are part of this list: they were all characterized by maximum run-ups of several meters (13 m for the 1908 tsunami), significant maximum inundation distances, and large (although not precisely quantifiable) numbers of victims. Further evidence provided in the last decade by paleo-tsunami deposit analyses helps to better characterize the tsunami impact and confirms that none of the cited events can be reduced to local or secondary effects. Proper analysis and simulation of available tsunami data would then appear as an obvious part of the correct definition of the sources responsible for the largest Italian tsunamigenic earthquakes, in a process in which different datasets analyzed by different disciplines must be reconciled rather than put into contrast with each other. Unfortunately, macroseismic, seismic and geological/geomorphological observations and data are typically assigned much heavier weights, and inland faults are often given more credit than offshore ones, even when tsunami simulations provide evidence that they are not at all capable of justifying the observed tsunami effects. Tsunami generation is instead imputed a priori to supposed, and sometimes even non-existent, submarine landslides. We try to summarize the tsunami research point of view on the largest Italian historical tsunamigenic

  13. The most recent large earthquake on the Rodgers Creek fault, San Francisco bay area

    USGS Publications Warehouse

    Hecker, S.; Pantosti, D.; Schwartz, D.P.; Hamilton, J.C.; Reidy, L.M.; Powers, T.J.

    2005-01-01

    The Rodgers Creek fault (RCF) is a principal component of the San Andreas fault system north of San Francisco. No evidence appears in the historical record of a large earthquake on the RCF, implying that the most recent earthquake (MRE) occurred before 1824, when a Franciscan mission was built near the fault at Sonoma, and probably before 1776, when a mission and presidio were built in San Francisco. The first appearance of nonnative pollen in the stratigraphic record at the Triangle G Ranch study site on the south-central reach of the RCF confirms that the MRE occurred before local settlement and the beginning of livestock grazing. Chronological modeling of earthquake age using radiocarbon-dated charcoal from near the top of a faulted alluvial sequence at the site indicates that the MRE occurred no earlier than A.D. 1690 and most likely occurred after A.D. 1715. With these age constraints, we know that the elapsed time since the MRE on the RCF is more than 181 years and less than 315 years and is probably between 229 and 290 years. This elapsed time is similar to published recurrence-interval estimates of 131 to 370 years (preferred value of 230 years) and 136 to 345 years (mean of 205 years), calculated from geologic data and a regional earthquake model, respectively. Importantly, then, the elapsed time may have reached or exceeded the average recurrence time for the fault. The age of the MRE on the RCF is similar to the age of prehistoric surface rupture on the northern and southern sections of the Hayward fault to the south. This suggests possible rupture scenarios that involve simultaneous rupture of the Rodgers Creek and Hayward faults. A buried channel is offset 2.2 (+ 1.2, - 0.8) m along one side of a pressure ridge at the Triangle G Ranch site. This provides a minimum estimate of right-lateral slip during the MRE at this location. Total slip at the site may be similar to, but is probably greater than, the 2 (+ 0.3, - 0.2) m measured previously at the

  14. The Viscoelastic Effect of Triggered Earthquakes in Various Tectonic Regions On a Global Scale

    NASA Astrophysics Data System (ADS)

    Sunbul, F.

    2015-12-01

    The relation between static stress changes and earthquake triggering has important implications for seismic hazard analysis. Considering the long time differences between triggered events, viscoelastic stress transfer plays an important role in stress accumulation along faults. Developing a better understanding of triggering effects may contribute to improved quantification of seismic hazard in tectonically active regions. Parsons (2002) computed, on a global scale, the difference between the rate of earthquakes occurring in regions where shear stress increased and in regions where shear stress decreased. He found that 61% of the earthquakes occurred in regions with a shear stress increase, while 39% of events occurred in areas of shear stress decrease. Here, we test whether the inclusion of viscoelastic stress transfer affects the results obtained by Parsons (2002) for static stress transfer. For this systematic analysis, we use the Global Centroid Moment Tensor (CMT) catalog, selecting 289 Ms>7 main shocks with their ~40,500 aftershocks located in ±2° circles over 5-year periods. For the viscoelastic postseismic calculations, we adapt 12 different published rheological models for 5 different tectonic regions. To minimise the uncertainties in the CMT catalog, we simultaneously use the statistical approach of Frohlich and Davis (1999). Our results show that 5590 aftershocks were triggered by the 289 Ms>7 earthquakes: 3419 of them are associated with a calculated shear stress increase, while 2171 are associated with a shear stress decrease. Including the viscoelastic stress contribution, of the 5840 events, 3530 are associated with shear stress increases and 2312 with shear stress decreases. This corresponds to an average 4.5% increase in total; the rates of increase in positive and negative areas are 3.2% and 6.5%, respectively. Therefore, over long time periods viscoelastic relaxation represents a considerable contribution to the total stress on

  15. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, the historical record reveals that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate the potential seismic hazard, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of a seismogram recorded on a broadband seismometer shows less scatter than its peak velocity, whereas for accelerograms the scatter of the peak displacement and that of the peak velocity are similar. The peak displacement of a seismogram differs from that of an accelerogram, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated after applying two low-pass filters, at 3 Hz and 10 Hz; the 10 Hz filter yields a better estimate than the 3 Hz one. Most of the peak amplitudes and maximum predominant periods are obtained within 1 sec after triggering.
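
    At its core, the scaling-relation fit described here is a least-squares regression of magnitude on the logs of the early P-wave peak amplitude and distance. A sketch on synthetic data, with a functional form and coefficients invented for illustration (not the paper's relationships):

```python
import numpy as np

# Ordinary least squares for M = a*log10(Pd) + b*log10(R) + c, with Pd the
# early P-wave peak displacement and R the distance. Synthetic data only.

rng = np.random.default_rng(2)
n = 350
M_true = rng.uniform(2.0, 5.0, n)
R = rng.uniform(10.0, 100.0, n)                  # epicentral distance (km)
# Synthesize log10(Pd) from an assumed relation, with scatter:
log_pd = (M_true - 5.0 * np.log10(R) - 6.0) / 1.4 + rng.normal(0, 0.2, n)

G = np.column_stack([log_pd, np.log10(R), np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(G, M_true, rcond=None)
print(f"M = {a:.2f} log10(Pd) + {b:.2f} log10(R) + {c:.2f}")  # ~1.4, 5.0, 6.0
```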

  16. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  17. Large-Scale Processing of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Finn, John; Sridhar, K. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1998-01-01

    Scale-up difficulties and high energy costs are two of the more important factors that limit the availability of various types of nanotube carbon. While several approaches are known for producing nanotube carbon, the high-powered reactors typically produce nanotubes at rates measured in only grams per hour and operate at temperatures in excess of 1000 C. These scale-up and energy challenges must be overcome before nanotube carbon can become practical for high-consumption structural and mechanical applications. This presentation examines the issues associated with using various nanotube production methods at larger scales, and discusses research being performed at NASA Ames Research Center on carbon nanotube reactor technology.

  18. Repetition of large stress drop earthquakes on Wairarapa fault, New Zealand, revealed by LiDAR data

    NASA Astrophysics Data System (ADS)

    Delor, E.; Manighetti, I.; Garambois, S.; Beaupretre, S.; Vitard, C.

    2013-12-01

    We have acquired high-resolution LiDAR topographic data over most of the onland trace of the 120 km-long Wairarapa strike-slip fault, New Zealand. The Wairarapa fault broke in a large earthquake in 1855, and this historical earthquake is suggested to have produced up to 18 m of lateral slip at the ground surface. This would make it a remarkable event, with a stress drop much higher than commonly observed for earthquakes worldwide. The LiDAR data allowed us to examine the ground surface morphology along the fault at < 50 cm resolution, including in the many places covered with vegetation. In doing so, we identified more than 900 alluvial features of various natures and sizes that are clearly laterally offset by the fault. We measured the ~670 clearest lateral offsets, along with their uncertainties. Most offsets are smaller than 100 m. Each measurement was weighted by a quality factor that quantifies the confidence level in the correlation of the paired markers. Since slip is expected to vary along the fault, we analyzed the measurements in short, 3-5 km-long fault segments. The PDF statistical analysis of the cumulative offsets per segment reveals that, at every step along the fault, the alluvial morphology has recorded no more than a few (3-6) clearly distinct cumulative slips, all smaller than 80 m. Plotted along the entire fault, the statistically defined cumulative slip values document four fairly continuous slip profiles that we attribute to the four most recent large earthquakes on the Wairarapa fault. The four slip profiles have a roughly triangular and asymmetric envelope shape that is similar to the coseismic slip distributions described for most large earthquakes worldwide. The four slip profiles have their maximum slip at the same place, in the northeastern third of the fault trace. The maximum slips vary from one event to another in the range 7-15 m; the most recent, 1855 earthquake produced a maximum coseismic slip
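
    The per-segment PDF analysis described here can be sketched as a quality-weighted stack of Gaussian kernels over the measured offsets, whose local maxima mark the distinct cumulative slips; all numbers below are invented.

```python
import numpy as np

# Quality-weighted Gaussian stack over offset measurements; local maxima of
# the resulting PDF are read as the segment's distinct cumulative slips.

def offset_pdf(x, offsets, sigmas, weights):
    pdf = np.zeros_like(x)
    for o, s, w in zip(offsets, sigmas, weights):
        pdf += w * np.exp(-0.5 * ((x - o) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return pdf / np.sum(weights)

x = np.linspace(0.0, 40.0, 2001)
offsets = [7.2, 7.8, 15.1, 14.6, 22.9, 30.4]   # measured offsets (m), one segment
sigmas  = [0.8, 1.0, 1.2, 0.9, 1.5, 2.0]       # their uncertainties (m)
weights = [1.0, 0.7, 1.0, 0.9, 0.6, 0.8]       # quality factors

pdf = offset_pdf(x, offsets, sigmas, weights)
is_peak = (pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:])
print("distinct cumulative slips (m):", np.round(x[1:-1][is_peak], 1))
```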

  19. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    NASA Astrophysics Data System (ADS)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials in the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news cycle, where all but the most devastating events are quickly out of the public eye, the shelf life of an event is quite limited. Maximizing the learning potential of these events requires both that authoritative information be available and that course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, so that one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation: they provide no warning, yet context is critical to student learning. Attempting to implement real-time materials in large-enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large-enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS's National Earthquake Information Center (NEIC) to develop efficient means of incorporating their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  20. Large scale structure from viscous dark matter

    SciTech Connect

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale k_m for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale k_m, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  1. Large scale structure from viscous dark matter

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale km for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale km, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  2. Global Behavior in Large Scale Systems

    DTIC Science & Technology

    2013-12-05

    Sinopoli, and J. Moura, "Distributed detection over time varying networks: Large deviations analysis," in Communication, Control, and Computing... and J. Moura, "Distributed detection via Gaussian running consensus: Large deviations asymptotic analysis," Signal Processing, IEEE Transactions on... "Distributed detection via Gaussian running consensus: large deviations asymptotic analysis," IEEE Transactions on Signal Processing, vol. 59, no. 9, pp

  3. Earthquakes and their influence on the large aquatic ecosystems (taking Lake Sevan as an example).

    NASA Astrophysics Data System (ADS)

    Gulakyan, S.; Wilkinson, I.

    2003-04-01

    Lake dynamics and earthquakes. The model includes hydrothermodynamic equations and boundary conditions at the surface and on the lake bed, including the configuration and depth of the lake, the temperature regime, wind velocity and direction, etc. Ground motion during an earthquake is variable but depends on the sediment type. The following questions are considered: earthquakes and the hypolimnion and stratification; inertia waves, earthquake aftershocks, and lake dynamics; earthquakes and groundwater circulation in aquatic ecosystems; earthquakes and the water chemistry of an ecosystem (phosphorus, calcium carbonate, gases, etc.); and earthquakes and benthos species. These models were implemented with data from Lake Sevan (Armenia). We acknowledge, with thanks, NATO for supporting this program through Linkage Grant No. 9-975530.

  4. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10 exp 6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  5. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  6. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  7. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  8. Study of the ionospheric response to surface waves generated by large earthquakes

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Lognonné, Ph.; Occhipinti, G.; Kherani, E. A.; Munekane, H.

    2009-04-01

    Global Positioning System (GPS) is a powerful technique for monitoring the ionosphere. The quantity measured is the Total Electron Content (TEC), that is, the integrated electron density along the satellite-beacon ray path. Dense GPS networks like the Japanese network GEONET provide the proper sampling (~10 km) for finely imaging seismogenic ionospheric waves (whose wavelengths are hundreds of kilometers). In instrumented regions, changes in the surrounding GPS-TEC map are observed for almost all large earthquakes (magnitude greater than 6) within ten minutes after the rupture. This time corresponds to the propagation delay needed by the forced atmospheric waves to reach ionospheric altitudes. In some favourable configurations, an integrated "seismo-ionospheric" radiative pattern can be visualized. In the vicinity of a large earthquake, the measured ionospheric seismic waves are of two kinds: a slow wave in the near field and a faster one in the near and far field. They are easily identified by their group velocities, about 1 km/s and 3.4 km/s respectively. The first kind is directly related to the acoustic pulse generated by the vertical displacements of the source. It takes the form of a plume, easily modelled by a ray tracing technique. Following this model, source information such as the spatial distribution of the fault can be extracted, as illustrated by the case of the giant Sumatra earthquake of 26 December 2004 [Heki et al., 2006]. The wave trains of the second kind are excited by the Rayleigh surface waves up to teleseismic distances. We aim here to investigate the potential of extracting information on the source from this last type of wave. Their coupling with the atmosphere is very efficient, and at ionospheric level we observed the surface wave signal overlapping the sound wave just after the Tokachi-Oki earthquake of 25 September 2003 (Mw = 8.1). 3D synthetics of the atmospheric waves generated by surface waves are modelled by a
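
    The two wave types above are separable by apparent group velocity alone. A minimal Python sketch of that classification; the function name and cutoff values are illustrative assumptions, not part of the study:

```python
def classify_tec_arrival(distance_km: float, travel_time_s: float) -> str:
    """Label an ionospheric TEC disturbance by apparent group velocity.

    The abstract quotes ~1 km/s for the direct acoustic wave and ~3.4 km/s
    for the Rayleigh-coupled signal; the cutoffs below are assumptions.
    """
    v = distance_km / travel_time_s  # apparent group velocity, km/s
    if v > 2.5:
        return "Rayleigh-coupled wave (~3.4 km/s)"
    if v > 0.5:
        return "direct acoustic wave (~1 km/s)"
    return "unidentified slower disturbance"

# A TEC perturbation seen 680 km from the epicenter, 200 s after origin time:
print(classify_tec_arrival(680.0, 200.0))  # -> Rayleigh-coupled wave
```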

  9. Modifications of the ionosphere prior to large earthquakes: report from the Ionosphere Precursor Study Group

    NASA Astrophysics Data System (ADS)

    Oyama, K.-I.; Devi, M.; Ryu, K.; Chen, C. H.; Liu, J. Y.; Liu, H.; Bankov, L.; Kodama, T.

    2016-12-01

    The current status of ionospheric precursor studies associated with large earthquakes (EQ) is summarized in this report. It is a joint endeavor of the "Ionosphere Precursor Study Task Group," which was formed with the support of the Mitsubishi Foundation in 2014-2015. The group promotes the study of ionosphere precursors (IP) to EQs and aims to prepare for a future EQ dedicated satellite constellation, which is essential to obtain the global morphology of IPs and hence demonstrate whether the ionosphere can be used for short-term EQ predictions. Following a review of the recent IP studies, the problems and specific research areas that emerged from the one-year project are described. Planned or launched satellite missions dedicated (or suitable) for EQ studies are also mentioned.

  10. Earthquake source scaling and self-similarity estimation from stacking P and S spectra

    NASA Astrophysics Data System (ADS)

    Prieto, GermáN. A.; Shearer, Peter M.; Vernon, Frank L.; Kilb, Debi

    2004-08-01

    We study the scaling relationships of source parameters and the self-similarity of earthquake spectra by analyzing a cluster of over 400 small earthquakes (ML = 0.5 to 3.4) recorded by the Anza seismic network in southern California. We compute P, S, and pre-event noise spectra from each seismogram using a multitaper technique and approximate source and receiver terms by iteratively stacking the spectra. To estimate scaling relationships, we average the spectra in size bins based on their relative moment. We correct for attenuation by using the smallest moment bin as an empirical Green's function (EGF) for the stacked spectra in the larger moment bins. The shapes of the log spectra agree within their estimated uncertainties after shifting along the ω^-3 line expected for self-similarity of the source spectra. We also estimate corner frequencies and radiated energy from the relative source spectra using a simple source model. The ratio between radiated seismic energy and seismic moment (proportional to apparent stress) is nearly constant with increasing moment over the magnitude range of our EGF-corrected data (ML = 1.8 to 3.4). Corner frequencies vary inversely as the cube root of moment, as expected from the observed self-similarity in the spectra. The ratio between P and S corner frequencies is observed to be 1.6 ± 0.2. We obtain values for absolute moment and energy by calibrating our results to local magnitudes for these earthquakes. This yields an S-to-P energy ratio of 9 ± 1.5 and an apparent stress of about 1 MPa.
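
    The self-similar scaling reported above (corner frequency varying as the inverse cube root of moment at constant stress drop) can be written out directly. A hedged sketch using the Brune source model; the shear speed, Brune constant, and stress drop are generic assumptions, not values from this study:

```python
import numpy as np

beta = 3500.0        # shear-wave speed, m/s (assumed)
k = 0.37             # Brune constant for S waves (assumed)
stress_drop = 3e6    # Pa; held constant, which is what self-similarity implies

def corner_frequency(m0: float) -> float:
    """Brune corner frequency [Hz] for seismic moment m0 [N*m]."""
    return k * beta * (16.0 / 7.0 * stress_drop / m0) ** (1.0 / 3.0)

# Three decades in moment -> one decade in corner frequency:
for m0 in (1e13, 1e16):
    print(f"M0 = {m0:.0e} N*m -> fc = {corner_frequency(m0):5.2f} Hz")
```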

  11. Consistency and factorial invariance of the Davidson trauma scale in heterogeneous populations: results from the 2010 Chilean earthquake.

    PubMed

    Kevan, Bryan

    2016-07-25

    This investigation seeks to validate an application of a standardized post-traumatic stress symptom self-report survey, the Davidson Trauma Scale (DTS), with a large, heterogeneous population of earthquake victims. While previous studies have focused primarily on small samples, this investigation uses a unique dataset to assess the validity of this application of the DTS while accounting for heterogeneity and sample size. We use concurrent validity and reliability analysis tests to confirm the validity of the scale. Further, confirmatory factor analysis is used to test the fit of the data's factor structure against previously established trauma models. Finally, these fit tests are repeated across different mutually exclusive vulnerability subsets of the data in order to investigate how the invariance of the scale is affected by sample heterogeneity. We find that this particular application of the scale is, on the whole, reliable and valid, showing good concurrent validity. However, evidence of variability is found across specific vulnerability subsets, indicating that a heterogeneous sample can have a measurable impact on model fit. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
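
    A uniform CHD is straightforward to simulate. The sketch below draws hypocenters with probability proportional to slip on a 1-D fault and evaluates a simplified directivity proxy (twice the normalized hypocenter-centroid distance: 0 for a bilateral, 1 for a unilateral rupture) rather than the full second-moment definition; the slip distribution and the proxy are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, 201)      # normalized along-strike position
slip = np.ones_like(x)              # uniform moment release (assumed)
p = slip / slip.sum()               # uniform CHD: hypocenter pdf = slip pdf

x_centroid = float(np.sum(x * p))
hypocenters = rng.choice(x, size=100_000, p=p)
D = 2.0 * np.abs(x_centroid - hypocenters)   # 0 = bilateral, 1 = unilateral

print(f"mean directivity proxy under a uniform CHD: {D.mean():.2f}")  # ~0.5
```

    A centroid-biased CHD concentrates the hypocenter density near the centroid, pulling the mean of D below this uniform-CHD value, which is the sense of the observation reported above.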

  13. Real or virtual large-scale structure?

    PubMed Central

    Evrard, August E.

    1999-01-01

    Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own. PMID:10200243

  14. Algorithms for Large-Scale Astronomical Problems

    DTIC Science & Technology

    2013-08-01

    Modern astronomical datasets are getting larger and larger, which raises the following question: how can we use modern computer science techniques to help astronomers better analyze large datasets? Indexing and sorting further reduce the processing time of user queries, and the large data volumes are processed using modern distributed computing frameworks. [Abstract recovered only in fragments; report form fields omitted.]

  15. Paleoseismicity of the Intermountain Seismic Belt from Late Quaternary faulting and parameter scaling of normal faulting earthquakes

    SciTech Connect

    Mason, D.B.; Smith, R.B. . Dept. of Geology and Geophysics)

    1993-04-01

    The eastern Basin-Range, 1,300-km-long Intermountain Seismic Belt (ISB) is reflected by a ~100-km-wide zone of scattered earthquakes that in general do not correlate with the mapped Quaternary faults. Yet this region has experienced two of the largest historic earthquakes in the western US, the Ms = 7.3 Borah Peak, Idaho, and the Ms = 7.5 Hebgen Lake, Montana, events, which occurred in areas of previously low historical seismicity. These observations indicate the lack of spatial and temporal uniformity between the historical and Holocene seismic records. The authors have studied this problem by first investigating fault-magnitude scaling relationships using a global set of 16 large normal- to oblique-slip earthquakes, then applying the scaling laws to data from a compilation of well-studied Late Quaternary faults of the ISB. Several regression models were evaluated, but the authors found that magnitudes predicted by displacement alone were consistently 20% larger than those determined from lengths. They suggest that the best estimator is given by: Ms = 0.47 log(d_s L_s) + 6.1. These results revealed at least 24 large multiple-segment paleoearthquakes, 6.3 ≤ Ms ≤ 7.3, associated with faults within the dual-branched seismicity belt which surrounds the aseismic Snake River Plain in the central ISB. They believe this unusual bow-wave pattern of seismicity and faulting is related to plume-plate interaction associated with the Yellowstone hotspot, with an additional component of concomitant Basin-Range extension. In the southern ISB, the 370-km-long Wasatch fault, Utah, experienced at least 7 multiple-segment paleoearthquakes, 7.1 ≤ Ms ≤ 7.3, contrasting with a historic record of seismic quiescence. Intraplate crustal extension is thought to be the primary mode of regional strain release for this region of the ISB.
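
    For concreteness, the preferred estimator quoted above is directly computable. A minimal sketch; the units assumed for displacement and rupture length (meters and kilometers) are a plausible reading of the abstract, not confirmed by it:

```python
import math

def ms_estimate(d_m: float, L_km: float) -> float:
    """Surface-wave magnitude from the regression quoted in the abstract:
    Ms = 0.47 * log10(d * L) + 6.1 (units assumed: d in m, L in km)."""
    return 0.47 * math.log10(d_m * L_km) + 6.1

# Example: 2 m of displacement on a 30-km-long normal-fault rupture
print(f"Ms ~ {ms_estimate(2.0, 30.0):.1f}")   # ~6.9
```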

  16. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  17. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is on Lanczos and subspace-iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
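
    On a modern software stack the analogous computation, a few dominant singular triplets of a large sparse matrix via a Lanczos-type iteration, takes a few lines; in this sketch a random sparse matrix stands in for a term-document or Jacobian matrix:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Random 10,000 x 5,000 sparse matrix as a stand-in test problem
A = sparse_random(10_000, 5_000, density=1e-3, format="csr", random_state=0)

u, s, vt = svds(A, k=6)          # six largest singular triplets (Lanczos-based)
order = np.argsort(s)[::-1]      # svds returns singular values in ascending order
print(s[order])
```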

  18. Modified gravity and large scale flows, a review

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  19. Slip in the 1857 and Earlier Large Earthquakes Along the Carrizo Plain, San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; Arrowsmith, J. Ramón; Ludwig, Lisa Grant; Akçiz, Sinan O.

    2010-02-01

    The moment magnitude (Mw) 7.9 Fort Tejon earthquake of 1857, with a ~350-kilometer-long surface rupture, was the most recent major earthquake along the south-central San Andreas Fault, California. Based on previous measurements of its surface slip distribution, rupture along the ~60-kilometer-long Carrizo segment was thought to control the recurrence of 1857-like earthquakes. New high-resolution topographic data show that the average slip along the Carrizo segment during the 1857 event was 5.3 ± 1.4 meters, eliminating the core assumption for a linkage between Carrizo segment rupture and recurrence of major earthquakes along the south-central San Andreas Fault. Earthquake slip along the Carrizo segment may recur in earthquake clusters with cumulative slip of ~5 meters.

  20. Slip in the 1857 and earlier large earthquakes along the Carrizo Plain, San Andreas Fault.

    PubMed

    Zielke, Olaf; Arrowsmith, J Ramón; Grant Ludwig, Lisa; Akçiz, Sinan O

    2010-02-26

    The moment magnitude (Mw) 7.9 Fort Tejon earthquake of 1857, with a approximately 350-kilometer-long surface rupture, was the most recent major earthquake along the south-central San Andreas Fault, California. Based on previous measurements of its surface slip distribution, rupture along the approximately 60-kilometer-long Carrizo segment was thought to control the recurrence of 1857-like earthquakes. New high-resolution topographic data show that the average slip along the Carrizo segment during the 1857 event was 5.3 +/- 1.4 meters, eliminating the core assumption for a linkage between Carrizo segment rupture and recurrence of major earthquakes along the south-central San Andreas Fault. Earthquake slip along the Carrizo segment may recur in earthquake clusters with cumulative slip of approximately 5 meters.

  1. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs of mostly the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  2. Acoustic Emission Patterns and the Transition to Ductility in Sub-Micron Scale Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Ghaffari, H.; Xia, K.; Young, R.

    2013-12-01

    We report observation of a transition from the brittle to ductile regime in precursor events from different rock materials (Granite, Sandstone, Basalt, and Gypsum) and Polymers (PMMA, PTFE and CR-39). Acoustic emission patterns associated with sub-micron scale laboratory earthquakes are mapped into network parameter spaces (functional damage networks). The sub-classes hold nearly constant timescales, indicating dependency of the sub-phases on the mechanism governing the previous evolutionary phase, i.e., deformation and failure of asperities. Based on our findings, we propose that the signature of the non-linear elastic zone around a crack tip is mapped into the details of the evolutionary phases, supporting the formation of a strongly weak zone in the vicinity of crack tips. Moreover, we recognize sub-micron to micron ruptures with signatures of 'stiffening' in the deformation phase of acoustic-waveforms. We propose that the latter rupture fronts carry critical rupture extensions, including possible dislocations faster than the shear wave speed. Using 'template super-shear waveforms' and their network characteristics, we show that the acoustic emission signals are possible super-shear or intersonic events. Ref. [1] Ghaffari, H. O., and R. P. Young. "Acoustic-Friction Networks and the Evolution of Precursor Rupture Fronts in Laboratory Earthquakes." Nature Scientific reports 3 (2013). [2] Xia, Kaiwen, Ares J. Rosakis, and Hiroo Kanamori. "Laboratory earthquakes: The sub-Rayleigh-to-supershear rupture transition." Science 303.5665 (2004): 1859-1861. [3] Mello, M., et al. "Identifying the unique ground motion signatures of supershear earthquakes: Theory and experiments." Tectonophysics 493.3 (2010): 297-326. [4] Gumbsch, Peter, and Huajian Gao. "Dislocations faster than the speed of sound." Science 283.5404 (1999): 965-968. [5] Livne, Ariel, et al. "The near-tip fields of fast cracks." Science 327.5971 (2010): 1359-1363. [6] Rycroft, Chris H., and Eran Bouchbinder

  3. Practical guidelines to select and scale earthquake records for nonlinear response history analysis of structures

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2010-01-01

    Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-"mode" inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-"mode" pushover analysis. Appropriate for first-"mode" dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of the second-"mode" SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span "ordinary standard" bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
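
    The scaling step at the heart of the method is a one-dimensional search: find the ground-motion scale factor at which the first-"mode" inelastic SDF system reaches the target deformation. A hedged sketch; `peak_deformation` stands in for a nonlinear response-history solver, and the toy response model in the usage line is purely illustrative:

```python
def find_scale_factor(peak_deformation, target, lo=0.1, hi=10.0, tol=0.01):
    """Bisect for a scale factor s such that peak_deformation(s) ~= target.

    peak_deformation: callable mapping a scale factor to the peak inelastic
    deformation of the first-"mode" SDF system (hypothetical solver).
    """
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        resp = peak_deformation(mid)
        if abs(resp - target) / target < tol:
            return mid
        if resp < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy monotone response model (illustrative only): 0.08 * s**0.9 meters
print(find_scale_factor(lambda s: 0.08 * s**0.9, target=0.25))  # ~3.5
```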

  4. A regional surface wave magnitude scale for the earthquakes of Russia's Far East

    NASA Astrophysics Data System (ADS)

    Chubarova, O. S.; Gusev, A. A.

    2017-01-01

    The modified scale Ms(20R) is developed for the magnitude classification of the earthquakes of Russia's Far East based on the surface wave amplitudes at regional distances. It extends the applicability of the classical Gutenberg scale Ms(20) towards small epicentral distances (0.7°-20°). The magnitude is determined from the amplitude of the signal that is preliminarily bandpassed to extract the components with periods close to 20 s. The amplitude is measured either for the surface waves or, at fairly short distances of 0.7°-3°, for the inseparable wave group of the surface and shear waves. The main difference of the Ms(20R) scale from the traditional Ms(BB) Soloviev-Vanek scale is its firm spectral anchoring. This approach practically eliminates the problem of the significant (up to -0.5) regional and station anomalies characteristic of the Ms(BB) scale in the conditions of the Far East. The absence of significant station and regional anomalies, as well as the strict spectral anchoring, makes the Ms(20R) scale advantageous for prompt decision making in tsunami warnings for the coasts of Russia's Far East.
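
    The "firm spectral anchoring" amounts to band-passing the record around a 20-s period before picking the amplitude. A sketch of that measurement style; the Prague-type distance correction below is a generic stand-in, since the actual Ms(20R) calibration terms are not given in the abstract:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def ms20_sketch(trace_um: np.ndarray, fs: float, distance_deg: float) -> float:
    """trace_um: ground displacement in micrometers; fs: sampling rate, Hz."""
    # Pass band of roughly 17-23 s period (about 0.043-0.059 Hz)
    sos = butter(4, [1 / 23.0, 1 / 17.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trace_um)
    A = np.max(np.abs(filtered))   # peak amplitude near the 20-s period
    T = 20.0                       # anchored period, s
    # Generic Prague-type formula as a placeholder calibration:
    return np.log10(A / T) + 1.66 * np.log10(distance_deg) + 3.3
```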

  5. Migration of large earthquakes along the San Jacinto Fault; Stress diffusion from the 1857 Fort Tejon Earthquake

    NASA Astrophysics Data System (ADS)

    Rydelek, Paul A.; Sacks, I. Selwyn

    Historic and modern catalogs of seismicity in California suggest a migration of earthquakes (M ≥ 5.6) along the San Jacinto Fault; these events appear to travel down the fault with a migration speed of 1.7 km/year (Sanders, 1993). This migration is explained by postseismic strain diffusion due to viscoelastic relaxation from the great Fort Tejon earthquake in 1857. We model this postseismic effect and find that significant stress diffuses down the San Jacinto fault for distances in excess of 200 km and the corresponding migration may be a result of Coulomb triggering from this stress perturbation. The level of postseismic stress that seems to be the trigger level for most of the events is of order 1 bar. Since the temporal evolution of the postseismic strain field is mainly dependent on the inelastic properties of the lower crust and uppermost mantle, the observed migration enables a viscosity estimate of ~4 × 10^18 Pa·s for this region of California.

  6. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  7. Large scale scientific computing - future directions

    NASA Astrophysics Data System (ADS)

    Patterson, G. S.

    1982-06-01

    Every new generation of scientific computers has opened up new areas of science for exploration through the use of more realistic numerical models or the ability to process ever larger amounts of data. Concomitantly, scientists, because of the success of past models and the wide range of physical phenomena left unexplored, have pressed computer designers to strive for the maximum performance that current technology will permit. This encompasses not only increased processor speed, but also substantial improvements in processor memory, I/O bandwidth, secondary storage, and facilities to augment the scientist's ability both to program and to understand the results of a computation. Over the past decade, performance improvements for scientific calculations have come from algorithm development and a major change in the underlying architecture of the hardware, not from significantly faster circuitry. It appears that this trend will continue for another decade. A future architectural change for improved performance will most likely be multiple processors coupled together in some fashion. Because the demand for a significantly more powerful computer system comes from users with single large applications, it is essential that an application be efficiently partitionable over a set of processors; otherwise, a multiprocessor system will not be effective. This paper explores some of the constraints on multiple processor architecture posed by these large applications. In particular, the trade-offs between large numbers of slow processors and small numbers of fast processors are examined. Strategies for partitioning range from partitioning at the language statement level (in-the-small) to partitioning at the program module level (in-the-large). Some examples of partitioning in-the-large are given and a strategy for efficiently executing a partitioned program is explored.
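
    The many-slow versus few-fast trade-off raised above can be made concrete with Amdahl's law; the serial fraction and machine configurations below are illustrative assumptions:

```python
def speedup(n_procs: int, serial_fraction: float) -> float:
    """Amdahl's law: speedup = 1 / (serial + parallel / n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Many slow processors vs. few fast ones (per-processor speed in parentheses):
for n, rel_speed in [(1024, 1.0), (16, 32.0)]:
    eff = rel_speed * speedup(n, serial_fraction=0.02)
    print(f"{n:5d} processors (speed {rel_speed:4.0f}x): effective speedup {eff:6.1f}")
```

    With even 2% of the work unpartitionable, the few-fast configuration wins despite having half the aggregate peak, which is exactly the partitioning constraint the paper explores.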

  8. Modeling of slow slip events and their interaction with large earthquakes along the subduction interfaces beneath Guerrero and Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Shibazaki, B.; Cotton, F.; Matsuzawa, T.

    2013-12-01

    Recent high-resolution geodetic observations have revealed the occurrence of slow slip events (SSEs) along the Mexican subduction zone. In the Guerrero gap, large SSEs of around Mw 7.5 repeat every 3-4 years (Lowry et al., 2001; Kostoglodov et al., 2003; Radiguet et al., 2012). The 2006 Guerrero slow slip was analyzed in detail (Radiguet et al., 2011): the average velocity of propagation was 0.8 km/day, and the maximum slip velocity was 1.0 × 10^-8 m/s. In the Oaxaca region, on the other hand, SSEs of Mw 7.0-7.3 repeat every 1-2 years and last for 3 months (Brudzinski et al., 2007; Correa-Mora et al., 2008). These SSEs in the Mexican subduction zone are categorized as long-term (long-duration) SSEs; however, their recurrence intervals are relatively short. It is important to investigate how the SSEs in Mexico can be reproduced by a theoretical model, and how the friction-law parameters differ from those of SSEs in other subduction zones. An Mw 7.4 subduction earthquake occurred beneath the Oaxaca-Guerrero border on March 20, 2012, and the 2012 SSE coincided with this thrust earthquake (Graham et al., 2012). SSEs in Mexico can trigger large earthquakes because their magnitudes are close to those of the earthquakes themselves. The interaction between SSEs and large earthquakes is therefore an important problem that needs to be investigated. We model SSEs and large earthquakes along the subduction interfaces beneath Guerrero and Oaxaca. To reproduce SSEs, we use a rate- and state-dependent friction law with a small cut-off velocity for the evolution effect, based on the model proposed by Shibazaki and Shimamoto (2007). We also consider the 3D plate interface, which dips at a very shallow angle at a horizontal distance of 50-150 km from the trench. We set the unstable zone from a depth of 10 to 20 km. Referring to analytical results, we set a Guerrero SSE zone that extends to the shallow Guerrero gap. Because the maximum slip velocity is around 1.0 × 10^-8 m/s, we set the cut-off velocity
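
    At steady state the friction law named above can be sketched in a few lines. One common form puts the cut-off velocity Vc in the evolution effect, giving mu_ss = mu0 + a ln(V/V0) + b ln(Vc/V + 1): velocity-weakening below Vc and velocity-strengthening above it, which is what permits slow slip. Parameter values here are illustrative assumptions, not those of the study:

```python
import numpy as np

mu0, a, b = 0.6, 0.008, 0.012   # friction parameters (assumed)
V0, Vc = 1e-6, 1e-8             # reference and cut-off slip velocities, m/s

def mu_steady(V: float) -> float:
    """Steady-state friction with a cut-off velocity in the evolution term."""
    return mu0 + a * np.log(V / V0) + b * np.log(Vc / V + 1.0)

for V in (1e-10, 1e-8, 1e-6):
    print(f"V = {V:.0e} m/s -> mu_ss = {mu_steady(V):.4f}")
# Friction first drops, then rises with V: weakening gives way to strengthening.
```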

  9. Large-scale preparation of plasmid DNA.

    PubMed

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  10. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2009-09-30

    Only figure-caption fragments of this abstract are recoverable: a COAMPS visibility forecast for a dust storm beginning 9 October and ending 12 October (Walker et al., 2009), and a comparison of COAMPS forecasts of dust storms in areas downwind of the large deserts of the world: Arabian Gulf, Sea of Japan, China Sea, Mediterranean Sea, and the Tropical

  11. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2010-09-30

    Only fragments of this abstract are recoverable: NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large deserts of the world; the DSD, covering dust source regions in NAAPS, has been crucial for high-resolution dust forecasting in SW Asia using COAMPS (Walker et al., 2009); and a four-panel product is used to compare multiple model forecasts of visibility in SW Asia dust storms.

  12. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    Only fragments of this abstract are recoverable: forecasts are produced up to six days in advance anywhere on the globe; NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large deserts; NAAPS forecasts of CONUS dust storms and long-range dust transport to CONUS were further evaluated in collaboration with CSU; the regional model (COAMPS/Aerosol) became operational during OIF; and the global model is the Navy Aerosol Analysis and Prediction System (NAAPS).

  13. Seismic hazard assessment based on the Unified Scaling Law for Earthquakes: the Greater Caucasus

    NASA Astrophysics Data System (ADS)

    Nekrasova, A.; Kossobokov, V. G.

    2015-12-01

    Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers, and the public, of the three components of Risk: Hazard, Exposure, and Vulnerability. Contemporary Science has not coped with the challenging changes in Exposures and their Vulnerability inflicted by a growing population and its concentration, which result in a steady increase of Losses from Natural Hazards. Scientists owe Society better knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e., log N(M,L) = A - B·(M-6) + C·log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. The parameters A, B, and C of the USLE are used to estimate, first, the expected maximum magnitude in a time interval in a seismically prone cell of a uniform grid that covers the region of interest, and then the corresponding expected ground shaking parameters, including macroseismic intensity. After rigorous testing against the available seismic evidence from the past (e.g., historically reported macroseismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks (e.g., those based on the density of exposed population). The methodology of seismic hazard and risk assessment based on the USLE is illustrated by application to the seismic region of the Greater Caucasus.
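
    The USLE recurrence relation quoted above is directly computable; the coefficients in this sketch are placeholders, not values estimated for the Greater Caucasus:

```python
import math

def annual_rate(M: float, L_km: float,
                A: float = -2.0, B: float = 0.9, C: float = 1.0) -> float:
    """Expected annual number of magnitude-M events in a cell of linear size
    L_km, from log10 N(M, L) = A - B*(M - 6) + C*log10(L)."""
    return 10.0 ** (A - B * (M - 6.0) + C * math.log10(L_km))

# Toy coefficients: rate of magnitude-7 events in a 100-km cell
print(f"{annual_rate(7.0, 100.0):.3f} events/yr")   # 0.126
```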

  14. Near-Source Recordings of Small and Large Earthquakes: Magnitude Predictability only for Medium and Small Events

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2015-12-01

    The feasibility of Earthquake Early Warning (EEW) applications has revived the discussion on whether earthquake rupture development follows deterministic principles or not. If it does, it may be possible to predict final earthquake magnitudes while the rupture is still developing. EEW magnitude estimation schemes, most of which are based on 3-4 seconds of near-source p-wave data, have been shown to work well for small to moderate size earthquakes. In this magnitude range, the used time window is larger than the source durations of the events. Whether the magnitude estimation schemes also work for events in which the source duration exceeds the estimation time window, however, remains debated. In our study we have compiled an extensive high-quality data set of near-source seismic recordings. We search for waveform features that could be diagnostic of final event magnitudes in a predictive sense. We find that the onsets of large (M7+) events are statistically indistinguishable from those of medium sized events (M5.5-M7). Significant differences arise only once the medium size events terminate. This observation suggests that EEW relevant magnitude estimates are largely observational, rather than predictive, and that whether a medium size event becomes a large one is not determined at the rupture onset. As a consequence, early magnitude estimates for large events are minimum estimates, a fact that has to be taken into account in EEW alert messaging and response design.
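
    The closing point, that early estimates for large events are minimum estimates, can be quantified with source-duration scaling. A hedged sketch assuming rupture duration grows as the cube root of moment (constant stress drop); the calibration constant is an assumption tuned so that a Mw 7 rupture lasts roughly 30 s:

```python
import math

C = 8.8e-6   # s per (N*m)^(1/3); illustrative calibration (Mw 7 -> ~30 s)

def minimum_magnitude(elapsed_s: float) -> float:
    """Lower bound on Mw if the rupture is still ongoing after elapsed_s."""
    m0_min = (elapsed_s / C) ** 3                 # minimum moment, N*m
    return (math.log10(m0_min) - 9.1) / 1.5       # moment magnitude

for t in (3.0, 10.0, 30.0):
    print(f"still rupturing after {t:4.1f} s -> Mw >= {minimum_magnitude(t):.1f}")
```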

  15. The large earthquake of 8 August 1303 in Crete: seismic scenario and tsunami in the Mediterranean area

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Comastri, Alberto

    By conducting a historical review of this large seismic event in the Mediterranean, it has been possible to identify both the epicentral area and the area in which its effects were principally felt. Ever since the nineteenth century, the seismological tradition has offered a variety of partial interpretations of the earthquake, depending on whether the main sources used were Arabic, Greek or Latin texts. Our systematic research has involved the analysis not only of Arab, Byzantine and Italian chronicle sources, but also and in particular of a large number of never previously used official and public authority documents, preserved in Venice in the State Archive, in the Marciana National Library and in the Library of the Museo Civico Correr. As a result, it has been possible to establish not only chronological parameters for the earthquake (they were previously uncertain) but also its overall effects (epicentral area in Crete, Imax XI MCS). Sources containing information in 41 affected localities and areas were identified. The earthquake also gave rise to a large tsunami, which scholars have seen as having certain interesting elements in common with that of 21 July 365, whose epicentre was also in Crete. As regards methodology, this research made it clear that knowledge of large historical earthquakes in the Mediterranean is dependent upon developing specialised research and going beyond the territorial limits of current national catalogues.

  16. Micro-Scale Anatomy of the 1999 Chi-Chi Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Boullier, A.-M.; Yeh, E.-C.; Boutareaud, S.; Song, S.-R.; Tsai, C.-H.

    2009-04-01

    Two TCDP bore-holes A and B were drilled in the northern part of the Chelungpu thrust fault where the Chi-Chi earthquake (September 21, 1999, Mw 7.6) showed large displacement, low ground acceleration and high slip velocity. In this paper, we describe the microstructures of the Chi-Chi Principal Slip Zone (PSZ) within black gouges localized at 1111m depth in Hole A and at 1136m depth in Hole B. In the FZA1111 the PSZ is a 2 cm-thick isotropic clay-rich gouge which contains aggregates formed by central clasts coated by clay cortex (Clay Clast Aggregates, CCAs), and fragments of older gouges segregated in the top third of the PSZ. In FZB1136 the PSZ is 3 mm-thick and is characterized by a foliated gouge displaying an alternation of clay-rich and clast-rich layers. The presence of CCAs, plucked underlying gouge fragments, gouge injections, and the occurrence of reverse grain size segregation of large clasts in the FZA1111 isotropic gouge suggest that the gouge was fluidized as a result of frictional heating and thermal pressurization. The foliated gouge in FZB1136 may be one locus of strain localization and related heat production. Small calcite veins present above the isotropic FZA1111 PSZ gouge, and characterized by an increasing strain with increasing distance away from the PSZ, are attributed to co-seismic fluid escape from the pressurized gouge. The observed microstructures are interpreted in view of their seismic implications for the Chi-Chi earthquake in terms of slip weakening mechanisms by thermal pressurization, gouge fluidization, co-seismic fluid distribution and post-seismic slip. Above the PSZ, several layers of compacted gouges containing deformed CCAs and gouge fragments correspond to several PSZ of past earthquakes similar to the Chi-Chi earthquake, and display a fault-parallel cleavage resulting from a low strain-rate pressure solution deformation mechanism that may be correlated to the inter-seismic periods.

  17. Global scale deposition of radioactivity from a large scale exchange

    SciTech Connect

    Knox, J.B.

    1983-10-01

    The global impact of radioactivity pertains to the continental-scale and planetary-scale deposition of radioactivity in a delayed mode; it affects all peoples. Global deposition is distinct and separate from close-in fallout. Close-in fallout is delivered in a matter of a few days or less and is much studied in the civil defense literature, but the matter of global deposition is much less studied. The global deposition of radioactivity from the reference strategic exchange (5300 MT) leads to an estimated average whole-body, total integrated dose of 20 rem for the latitudes of 30 to 50° in the Northern Hemisphere. Hotspots of deposited radioactivity can occur with doses of about 70 rem (winter) to 40 to 110 rem (summer) in regions like Europe, western Asia, the western North Pacific, the southeastern US, the northeastern US, and Canada. The neighboring countries within a few hundred kilometers of areas under strategic nuclear attack can be impacted by the normal (termed close-in) fallout due to gravitational sedimentation, with lethal radiation doses to unsheltered populations. In the strategic scenario, about 40% of the megatonnage is assumed to be in a surface-burst mode and the rest in a free-air-burst mode.

  18. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  19. Large-scale GW software development

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab

    Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter Equation (GW-BSE) beyond-DFT methods have proved successful in describing excited states in many materials. However, the heavy computational loads and large memory requirements have hindered their routine applicability by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software "OpenAtom", which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.

  20. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on learnings from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large volume turbidites. The frequency distribution form of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to proximity and availability of sediment. While mean turbidite recurrence is similar to the seismogenic El Asnam fault in Algeria, geochemical analysis reveals not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 ya. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment
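
    The constant-hazard-rate inference is testable on any dated turbidite sequence: under a Poisson process, inter-event times are exponentially distributed. A sketch with synthetic ages standing in for a core record; note the KS test against a fitted exponential is only approximate, since the rate is estimated from the same data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
inter_event_kyr = rng.exponential(scale=2.5, size=60)   # toy dated record

rate = 1.0 / inter_event_kyr.mean()                     # events per kyr
D, p = stats.kstest(inter_event_kyr, "expon", args=(0.0, 1.0 / rate))
print(f"rate ~ {rate:.2f}/kyr, KS p-value = {p:.2f}")   # high p: Poisson-like
```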

  1. Pervasive Deformation of Sediments and Basement Within the Wharton Basin, Indian Ocean, and Relationship to Large > Mw 8 Intraplate Earthquakes

    NASA Astrophysics Data System (ADS)

    Bull, J. M.; Geersen, J.; McNeill, L. C.; Henstock, T.; Gaedicke, C.; Chamot-Rooke, N. R. A.; Delescluse, M.

    2015-12-01

    Large-magnitude intraplate earthquakes within the ocean basins are not well understood. The Mw 8.6 and Mw 8.2 strike-slip intraplate earthquakes on 11 April 2012, while clearly occurring in the equatorial Indian Ocean diffuse plate boundary zone, are a case in point, with disagreement on the nature of the focal mechanisms and the faults that ruptured. We use bathymetric and seismic reflection data from the rupture area of the earthquakes in the northern Wharton Basin to demonstrate pervasive brittle deformation between the Ninetyeast Ridge and the Sunda subduction zone. In addition to evidence of recent strike-slip deformation along approximately north-south-trending fossil fracture zones, we identify a new type of deformation structure in the Indian Ocean: conjugate Riedel shears limited to the sediment section and oriented oblique to the north-south fracture zones. The Riedel shears developed in the Miocene, at a similar time to the onset of diffuse deformation in the central Indian Ocean. However, left-lateral strike-slip reactivation of existing fracture zones started earlier, in the Paleocene to early Eocene, and compartmentalizes the Wharton Basin. Modeled rupture during the 11 April 2012 intraplate earthquakes is consistent with the location of two reactivated, closely spaced, approximately north-south-trending fracture zones. However, we find no evidence for WNW-ESE-trending faults in the shallow crust, which is at variance with most of the earthquake fault models.

  2. Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults.

    PubMed

    Cowie, P A; Phillips, R J; Roberts, G P; McCaffrey, K; Zijerveld, L J J; Gregory, L C; Faure Walker, J; Wedmore, L N J; Dunai, T J; Binnie, S A; Freeman, S P H T; Wilcken, K; Shanks, R P; Huismans, R S; Papanikolaou, I; Michetti, A M; Wilkinson, M

    2017-03-21

    Many areas of the Earth's crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic 36Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The 36Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long spatial or temporal scales (10^4 yr; 10^2 km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting.

  3. LDRD LW Project Final Report:Resolving the Earthquake Source Scaling Problem

    SciTech Connect

    Mayeda, K; Felker, S; Gok, R; O'Boyle, J; Walter, W R; Ruppert, S

    2004-02-10

    The scaling behavior of basic earthquake source parameters such as the energy release per unit area of fault slip, quantitatively measured as the apparent stress, is currently in dispute. There are compelling studies that show apparent stress is constant over a wide range of moments (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001, Ide et al. 2003). Other equally compelling studies find the apparent stress increases with moment (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001; Richardson and Jordan, 2002). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities, bandwidth and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. As one part of our LDRD project we convened a one-day workshop on July 24, 2003 in Livermore to review the current state of knowledge on this topic and discuss possible methods of resolution with many of the world's foremost experts.
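
    The disputed quantity has a one-line definition: apparent stress is rigidity times the radiated-energy-to-moment ratio. A sketch with illustrative values:

```python
MU = 3.0e10   # crustal rigidity, Pa (assumed)

def apparent_stress(radiated_energy_j: float, moment_nm: float) -> float:
    """Apparent stress sigma_a = mu * E_R / M0, in Pa."""
    return MU * radiated_energy_j / moment_nm

# A Mw ~5 event (M0 ~ 4e16 N*m) with E_R/M0 ~ 3e-5 (constant-scaling case):
m0 = 4.0e16
print(f"{apparent_stress(3e-5 * m0, m0) / 1e6:.1f} MPa")   # ~0.9 MPa
```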

  4. Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults

    NASA Astrophysics Data System (ADS)

    Cowie, P. A.; Phillips, R. J.; Roberts, G. P.; McCaffrey, K.; Zijerveld, L. J. J.; Gregory, L. C.; Faure Walker, J.; Wedmore, L. N. J.; Dunai, T. J.; Binnie, S. A.; Freeman, S. P. H. T.; Wilcken, K.; Shanks, R. P.; Huismans, R. S.; Papanikolaou, I.; Michetti, A. M.; Wilkinson, M.

    2017-03-01

    Many areas of the Earth’s crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic 36Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The 36Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long spatial or temporal scales (10^4 yr; 10^2 km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting.

  5. Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults

    PubMed Central

    Cowie, P. A.; Phillips, R. J.; Roberts, G. P.; McCaffrey, K.; Zijerveld, L. J. J.; Gregory, L. C.; Faure Walker, J.; Wedmore, L. N. J.; Dunai, T. J.; Binnie, S. A.; Freeman, S. P. H. T.; Wilcken, K.; Shanks, R. P.; Huismans, R. S.; Papanikolaou, I.; Michetti, A. M.; Wilkinson, M.

    2017-01-01

    Many areas of the Earth’s crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic 36Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The 36Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long spatial or temporal scales (10^4 yr; 10^2 km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting. PMID:28322311

  6. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g., a grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (99TcO4-) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for 99Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO4-) to Tc(IV) by reaction with the ferrous

  7. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Cowlard, Adam J.; Rouvreau, Sebastien; Toth, Balazs; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  8. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Rouvreau, Sebastien; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus, after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  9. Seismic hazard and risks based on the Unified Scaling Law for Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir; Nekrasova, Anastasia

    2014-05-01

    Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers, and the public, of the three components of Risk, i.e., Hazard, Exposure, and Vulnerability. Contemporary Science is responsible for not coping with the challenging changes of Exposures and their Vulnerability inflicted by a growing population and its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists owe Society better knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. Any kind of risk estimate R(g) at location g results from a convolution of the natural hazard H(g) with the exposed object under consideration O(g) along with its vulnerability V(O(g)). Note that g could be a point, a line, or a cell on or under the Earth's surface, and that the distribution of hazards, as well as of the objects of concern and their vulnerability, could be time-dependent. There exist many different risk estimates even if the same object of risk and the same hazard are involved. This may result from different laws of convolution, as well as from different kinds of vulnerability of an object of risk under specific environments and conditions. Both conceptual issues must be resolved in multidisciplinary, problem-oriented research performed by specialists in the fields of hazard, objects of risk, and object vulnerability, i.e., specialists in earthquake engineering, social sciences, and economics. To illustrate this general concept, we first construct seismic hazard assessment maps based on the Unified Scaling Law for Earthquakes (USLE). The parameters A, B, and C of the USLE, i.e., log N(M,L) = A - B•(M-6) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within an area of linear size L, are used to estimate the expected maximum
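
    The USLE recurrence relation above can be evaluated directly once A, B, and C are known. A minimal sketch in Python, assuming illustrative parameter values (real A, B, and C must be fitted to a regional catalog):

      import math

      def usle_annual_rate(M, L, A, B, C):
          """Expected annual number of earthquakes of magnitude M within an
          area of linear size L (km), per log N(M,L) = A - B*(M-6) + C*log L."""
          return 10.0 ** (A - B * (M - 6.0) + C * math.log10(L))

      # Illustrative (hypothetical) parameters, not fitted values.
      print(usle_annual_rate(M=6.0, L=100.0, A=-1.0, B=1.0, C=1.2))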

  10. Evidence for large prehistoric earthquakes in the northern New Madrid Seismic Zone, central United States

    USGS Publications Warehouse

    Li, Y.; Schweig, E.S.; Tuttle, M.P.; Ellis, M.A.

    1998-01-01

    We surveyed the area north of New Madrid, Missouri, for prehistoric liquefaction deposits and uncovered two new sites with evidence of pre-1811 earthquakes. At one site, located about 20 km northeast of New Madrid, Missouri, radiocarbon dating indicates that an upper sand blow was probably deposited after A.D. 1510 and a lower sand blow was deposited prior to A.D. 1040. A sand blow at another site about 45 km northeast of New Madrid, Missouri, is dated as likely being deposited between A.D. 55 and A.D. 1620 and represents the northernmost recognized expression of prehistoric liquefaction likely related to the New Madrid seismic zone. This study, taken together with other data, supports the occurrence of at least two earthquakes strong enough to induce liquefaction or faulting after A.D. 400 and before A.D. 1811. One earthquake probably occurred around A.D. 900 and a second around A.D. 1350. The data are not yet sufficient to estimate the magnitudes of the causative earthquakes for these liquefaction deposits, although we conclude that all of the earthquakes were at least moment magnitude M ~6.8, the size of the 1895 Charleston, Missouri, earthquake. A more rigorous estimate of the number and sizes of prehistoric earthquakes in the New Madrid seismic zone awaits evaluation of additional sites.

  11. Large Scale CW ECRH Systems: Some considerations

    NASA Astrophysics Data System (ADS)

    Erckmann, V.; Kasparek, W.; Plaum, B.; Lechte, C.; Petelin, M. I.; Braune, H.; Gantenbein, G.; Laqua, H. P.; Lubiako, L.; Marushchenko, N. B.; Michel, G.; Turkin, Y.; Weissgerber, M.

    2012-09-01

    Electron Cyclotron Resonance Heating (ECRH) is a key component in the heating arsenal for next-step fusion devices like W7-X and ITER. These devices are equipped with superconducting coils and are designed to operate steady state. ECRH must thus operate in CW mode with large flexibility to comply with various physics demands such as plasma start-up, heating and current drive, as well as configuration and MHD control. The request for many different sophisticated applications results in a growing complexity, which is in conflict with the request for high availability, reliability, and maintainability. 'Advanced' ECRH systems must, therefore, comply with both the complex physics demands and operational robustness and reliability. The W7-X ECRH system is the first CW facility of an ITER-relevant size and is used as a test bed for advanced components. Proposals for future developments are presented together with improvements of gyrotrons, transmission components, and launchers.

  12. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  13. Large-Scale Pattern Discovery in Music

    NASA Astrophysics Data System (ADS)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split into two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition, which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
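
    The 2DFTM feature mentioned above is essentially the magnitude of a two-dimensional Fourier transform taken over a chromagram patch; discarding the phase makes the feature invariant to circular shifts in both pitch (transposition) and time. A minimal sketch, assuming a 12 x 75 beat-aligned patch (the patch length here is an illustrative choice, not necessarily the thesis's):

      import numpy as np

      def two_dftm(chroma_patch):
          """Magnitude of the 2-D FFT of a chromagram patch; invariant to
          circular pitch shifts (transposition) and time offsets."""
          return np.abs(np.fft.fft2(chroma_patch))

      patch = np.random.rand(12, 75)   # stand-in for a real chroma patch
      feature = two_dftm(patch).flatten()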

  14. Large-Scale Structures of Planetary Systems

    NASA Astrophysics Data System (ADS)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  15. Vulnerability of Eastern Caribbean Islands Economies to Large Earthquakes: The Trinidad and Tobago Case Study

    NASA Astrophysics Data System (ADS)

    Lynch, L.

    2015-12-01

    The economies of most of the Anglophone Eastern Caribbean islands have tripled or quadrupled in size since independence from England. There has also been commensurate growth in human and physical development, as indicated by macro-economic indices such as the Human Development Index and Fixed Capital Formation. A significant proportion of the accumulated wealth is invested in buildings and infrastructure, which are highly susceptible to strong ground motion since the region is located along an active plate boundary. In the case of Trinidad and Tobago, Fixed Capital Formation accumulated since 1980 amounts to almost US$200 billion. Recent studies have indicated that this twin-island state is at significant risk from several seismic sources, both on land and offshore. To effectively mitigate the risk it is necessary to prescribe long-term measures such as the development and implementation of building codes and standards, structural retrofitting, land-use planning, preparedness planning, and risk-transfer mechanisms. The record has shown that Trinidad and Tobago has been slow in prescribing such measures, which has consequently compounded its vulnerability to large earthquakes. This assessment reveals that a large (magnitude 7+) event on land or an extreme (magnitude 8+) event could result in losses of up to US$28 billion, and that current risk-transfer measures would cover less than ten percent of such losses.

  16. Regional and stress drop effects on aftershock productivity of large megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Wetzler, Nadav; Brodsky, Emily E.; Lay, Thorne

    2016-12-01

    The total number of aftershocks increases with main shock magnitude, resulting in an overall well-defined relationship. Observed variations from this trend prompt questions regarding influences of regional environment and individual main shock rupture characteristics. We investigate how aftershock productivity varies regionally and with main shock source parameters for large (Mw ≥ 7.0) circum-Pacific megathrust earthquakes within the past 25 years, drawing on extant finite-fault rupture models. Aftershock productivity is found to be higher for subduction zones of the western circum-Pacific than for subduction zones in the eastern circum-Pacific. This appears to be a manifestation of differences in faulting susceptibility between island arcs and continental arcs. Surprisingly, events with relatively large static stress drop tend to produce fewer aftershocks than comparable magnitude events with lower stress drop; however, for events with similar coseismic rupture area, aftershock productivity increases with stress drop and radiated energy, indicating a significant impact of source rupture process on productivity.
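
    The productivity trend described above is commonly modeled as log10 N = alpha * Mw + const. A minimal fitting sketch (not the authors' code; the magnitudes and counts below are invented):

      import numpy as np

      mw = np.array([7.0, 7.3, 7.8, 8.1, 8.8])        # hypothetical mainshock magnitudes
      n_aft = np.array([120, 260, 900, 1800, 9000])   # hypothetical aftershock counts

      # Least-squares fit of the productivity relation log10 N = alpha*Mw + c.
      alpha, c = np.polyfit(mw, np.log10(n_aft), 1)
      print(f"productivity exponent alpha ~ {alpha:.2f}")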

  17. Rare, large earthquakes at the laramide deformation front - Colorado (1882) and Wyoming (1984)

    USGS Publications Warehouse

    Spence, W.; Langer, C.J.; Choy, G.L.

    1996-01-01

    The largest historical earthquake known in Colorado occurred on 7 November 1882. Knowledge of its size, location, and specific tectonic environment is important for the design of critical structures in the rapidly growing region of the Southern Rocky Mountains. More than one century later, on 18 October 1984, an mb 5.3 earthquake occurred in the Laramie Mountains, Wyoming. By studying the 1984 earthquake, we are able to provide constraints on the location and size of the 1882 earthquake. Analysis of broadband seismic data shows the 1984 mainshock to have nucleated at a depth of 27.5 ± 1.0 km and to have ruptured ∼2.7 km updip, with a corresponding average displacement of about 48 cm and average stress drop of about 180 bars. This high stress drop may explain why the earthquake was felt over an area about 3.5 times that expected for a shallow earthquake of the same magnitude in this region. A microearthquake survey shows aftershocks to be just above the mainshock's rupture, mostly in a volume measuring 3 to 4 km across. Focal mechanisms for the mainshock and aftershocks have NE-SW-trending T axes, a feature shared by most earthquakes in western Colorado and by the induced Denver earthquakes of 1967. The only data for the 1882 earthquake were intensity reports from a heterogeneously distributed population. Interpretation of these reports also might be affected by ground-motion amplification from fluvial deposits and a possibly significant focal depth for the mainshock. The primary aftershock of the 1882 earthquake was felt most strongly in the northern Front Range, leading Kirkham and Rogers (1985) to locate the epicenters of the aftershock and mainshock there. The Front Range is a geomorphic extension of the Laramie Mountains. Both features are part of the eastern deformation front of the Laramide orogeny. Based on knowledge of regional tectonics and using intensity maps for the 1984 and the 1967 Denver earthquakes, we reinterpret prior intensity maps for the 1882
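
    As a rough consistency check on the reported source parameters, a circular-crack (Eshelby) model relates stress drop to average slip and crack radius. A sketch with an assumed rigidity, treating the ~2.7 km updip rupture extent as the crack diameter (an assumption made here purely for illustration):

      import math

      mu = 3.3e10          # rigidity, Pa (assumed crustal value)
      slip = 0.48          # average displacement, m (from the abstract)
      radius = 2700.0 / 2  # crack radius, m, assuming the 2.7 km extent is a diameter

      # Eshelby circular crack: delta_sigma = (7*pi/16) * mu * slip / radius
      delta_sigma = (7.0 * math.pi / 16.0) * mu * slip / radius
      print(f"stress drop ~ {delta_sigma / 1e5:.0f} bars")  # ~160 bars, same order as the ~180 reported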

  18. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    USGS Publications Warehouse

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.

  19. Scaling relationship between corner frequencies and seismic moments of ultra micro earthquakes estimated with coda-wave spectral ratio - the Mponeng mine in South Africa

    NASA Astrophysics Data System (ADS)

    Wada, N.; Kawakata, H.; Murakami, O.; Doi, I.; Yoshimitsu, N.; Nakatani, M.; Yabe, Y.; Naoi, M. M.; Miyakawa, K.; Miyake, H.; Ide, S.; Igarashi, T.; Morema, G.; Pinder, E.; Ogasawara, H.

    2011-12-01

    The scaling relationship between corner frequencies, fc, and seismic moments, Mo, is an important clue to understanding seismic source characteristics. Aki (1967) showed that Mo is proportional to fc^-3 for large earthquakes (the cubic law). Iio (1986) claimed a breakdown of the cubic law between fc and Mo for smaller earthquakes (Mw < 2), and Gibowicz et al. (1991) also showed the breakdown for ultra micro and small earthquakes (Mw < -2). However, it has been reported that the cubic law holds even for micro earthquakes (-1 < Mw < 4) when using high quality data observed in a deep borehole (Abercrombie, 1995; Ogasawara et al., 2001; Hiramatsu et al., 2002; Yamada et al., 2007). In order to clarify the scaling relationship for smaller earthquakes (Mw < -1), we analyzed ultra micro earthquakes using very high sampling-rate records (48 kHz) from borehole seismometers installed within hard rock at the Mponeng mine in South Africa. We used four three-component accelerometers that have a flat response up to 25 kHz. They were installed 10 to 30 meters apart from each other at 3,300 meters depth. During the period from 2008/10/14 to 2008/10/30 (17 days), 8,927 events were recorded. We estimated fc and Mo for 60 events (-3 < Mw < -1) within 200 meters of the seismometers. Assuming the Brune source model, we estimated fc and Mo from spectral ratios. Common practice is to use direct waves from adjacent events. However, there were only 5 event pairs with a separation of less than 20 meters and an Mw difference over one. In addition, the observation array is very small (radius less than 30 m), which means that the effects of directivity and radiation pattern on direct waves are similar at all stations. Hence, we used spectral ratios of coda waves, since these effects are averaged and effectively reduced (Mayeda et al., 2007; Somei et al., 2010). Coda analysis was attempted only for the 20 relatively large events (hereafter called "coda events") that have coda energy
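
    Under the Brune omega-squared model, the displacement spectrum is proportional to Mo / (1 + (f/fc)^2), so the spectral ratio of a larger to a smaller co-located event has a closed form that can be fitted for both corner frequencies and the moment ratio. A minimal fitting sketch (not the authors' code; the synthetic data are invented):

      import numpy as np
      from scipy.optimize import curve_fit

      def brune_ratio(f, moment_ratio, fc1, fc2):
          """Spectral ratio of two Brune omega-squared sources (event 1 / event 2)."""
          return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

      f = np.logspace(1, 4, 200)                       # 10 Hz - 10 kHz
      obs = brune_ratio(f, 100.0, 300.0, 3000.0)       # synthetic "observed" ratio
      obs *= np.random.lognormal(sigma=0.05, size=f.size)

      popt, _ = curve_fit(brune_ratio, f, obs, p0=(10.0, 100.0, 1000.0))
      print("moment ratio, fc1, fc2 =", popt)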

  20. Change in paleo-stress state before and after large earthquake, in the Chelung-pu fault, Taiwan

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Kota, T.; Yeh, E. C.; Lin, W.

    2014-12-01

    The stress state close to a seismogenic fault is a key parameter for understanding earthquake mechanics. Changes in stress state after large earthquakes were documented recently for the 1999 Chi-Chi earthquake, Taiwan, and the 2011 Tohoku-Oki earthquake, Northeast Japan. If such temporal changes are common in the past and in the future, changes in paleostress related to large earthquakes should be recoverable from micro-faults preserved in outcrops or drilled cores. In this study, we show a change in paleostress from micro-fault slip data observed around the Chelung-pu fault in the Taiwan Chelung-pu fault Drilling Project (TCDP), which is possibly associated with the stress drop of large earthquakes along the Chelung-pu fault. By combining the obtained stress orientations and stress ratios with stress polygons, we estimated the stress magnitude for each stress state and the differences in stress magnitude between the obtained stresses. For the stress inversion analysis, the multiple inversion method (MIM; Yamaji et al., 2000) was carried out. To estimate the centers of clusters automatically, K-means clustering (Otsubo et al., 2006) was conducted on the result of the MIM. As a result, four stress states were estimated, named C1, C2, C3, and C4 in ascending order of stress ratio (Φ), where the stress ratio is defined as (σ1-σ2) / (σ1-σ3). To constrain the stress magnitudes, stress polygons were employed in combination with the inverted stress states. The principal stress vectors for the four stress states (C1-C4) were projected onto the SHmax or Shmin and vertical stress directions; SHmax is larger than Shmin by definition. The stress ratio was estimated by the inversion method. Combining those conditions, a linear function in SHmax-Shmin space with respect to Sv is obtained from the inverted stress states. We obtained two groups of stress state from the slip data in the TCDP core. One stress state has WNW-ESE horizontal σ1 and larger stress magnitude, including a reverse-fault regime. Another stress state
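
    For reference, the stress ratio used above, Φ = (σ1-σ2)/(σ1-σ3), runs from 0 (σ2 = σ1) to 1 (σ2 = σ3). A trivial helper with made-up principal stresses:

      def stress_ratio(s1, s2, s3):
          """Phi = (s1 - s2) / (s1 - s3), for s1 >= s2 >= s3."""
          return (s1 - s2) / (s1 - s3)

      print(stress_ratio(100.0, 60.0, 40.0))  # 0.67: sigma2 lies nearer sigma3, so Phi approaches 1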

  1. Uplifted marine terraces in Davao Oriental Province, Mindanao Island, Philippines and their implications for large prehistoric offshore earthquakes along the Philippine trench

    NASA Astrophysics Data System (ADS)

    Ramos, Noelynna T.; Tsutsumi, Hiroyuki; Perez, Jeffrey S.; Bermas, Percival P.

    2012-02-01

    We conducted systematic mapping of Holocene marine terraces in eastern Mindanao Island, Philippines for the first time. Raised marine platforms along the 80-km-long coastline of eastern Davao Oriental Province are geomorphic evidence of tectonic deformation resulting from the westward subduction of the Philippine Sea plate along the Philippine trench. Holocene coral platforms consist of up to four terrace steps, from lowest to highest: T1: 1-5 m, T2: 3-6 m, T3: 6-10 m, and T4: 8-12 m amsl. Terraces are subhorizontal, exposing cemented coral shingle and eroded coral heads, while terrace risers are 1-3 m high. Radiocarbon ages of 8080-4140 cal yr BP reveal that the erosional surfaces were carved into a Holocene transgressive reef complex which grew upward until ˜8000 years ago. The maximum uplift rate is ˜1.5 mm/yr based on the highest Holocene terrace at <11.4 m amsl. The staircase topography and meter-scale terrace risers imply that at least four large earthquakes have uplifted the coast in the past ˜8000 years. The deformation pattern of the terraces further suggests that the seismic sources are probably located offshore. However, historical earthquakes as large as MW 7.5 along the Philippine trench were not large enough to produce meter-scale coastal uplift, suggesting that much larger earthquakes occurred in the past. A long-term tectonic uplift rate of ˜1.3 mm/yr was also estimated from Late Pleistocene terraces.
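
    The quoted maximum uplift rate follows from terrace elevation divided by terrace age; a one-line check using the values in the abstract (rounding and age uncertainty account for the ~1.5 mm/yr quoted):

      elev_m = 11.4     # highest Holocene terrace, m amsl (upper bound)
      age_yr = 8000.0   # approximate terrace age, years
      print(f"uplift rate < {elev_m / age_yr * 1000:.1f} mm/yr")  # ~1.4 mm/yr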

  2. Evaluating the role of large earthquakes on aquifer dynamics using data fusion and knowledge discovery techniques

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Cox, Simon; Williams, Charles; Holden, Caroline

    2016-04-01

    Artificial adaptive systems are evaluated for their usefulness in modeling the earthquake hydrology of the Canterbury region, NZ. For example, an unsupervised machine-learning technique, the self-organizing map, is used to fuse about 200 disparate and sparse data variables (such as well pressure response; ground acceleration, intensity, shaking, stress, and strain; and aquifer and well characteristics) associated with the M7.1 Darfield earthquake in 2010 and the M6.3 Christchurch earthquake in 2011. The strength of correlations, determined using cross-component plots, varied between earthquakes, with pressure changes more strongly related to dynamic- than static-stress-related variables during the M7.1 earthquake, and vice versa during the M6.3. The method highlights the importance of data distribution and shows that the driving mechanisms of earthquake-induced pressure change in the aquifers are not straightforward to interpret. In many cases, data mining revealed that confusion and reduction in correlations are associated with multiple trends in the same plot: one for confined and one for unconfined earthquake response. The auto-contractive map and minimum-spanning-tree techniques are used for grouping variables of similar influence on earthquake hydrology. K-means clustering of neural information identified 5 primary regions influenced by the two earthquakes. The application of genetic doping to a genetic algorithm is used for identifying optimal subsets of variables in formulating predictions of well pressures. Predictions of well pressure changes are compared and contrasted using machine-learning network and symbolic regression models, with prediction uncertainty quantified using a leave-one-out cross-validation strategy. These preliminary results provide impetus for subsequent analysis with information from another 100 earthquakes that occurred across the South Island.
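
    Leave-one-out cross-validation, used above to quantify prediction uncertainty, refits the model once per sample and tests on the held-out point. A generic scikit-learn sketch (not the authors' pipeline; the data are invented stand-ins for the fused variables):

      import numpy as np
      from sklearn.model_selection import LeaveOneOut
      from sklearn.ensemble import RandomForestRegressor

      X = np.random.rand(40, 6)   # invented predictor variables
      y = np.random.rand(40)      # invented well-pressure changes

      errors = []
      for train_idx, test_idx in LeaveOneOut().split(X):
          model = RandomForestRegressor(n_estimators=100, random_state=0)
          model.fit(X[train_idx], y[train_idx])
          errors.append(float(y[test_idx][0] - model.predict(X[test_idx])[0]))

      print("LOOCV RMSE:", np.sqrt(np.mean(np.square(errors))))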

  3. Demonstration of Mobile Auto-GPS for Large Scale Human Mobility Analysis

    NASA Astrophysics Data System (ADS)

    Horanont, Teerayut; Witayangkurn, Apichon; Shibasaki, Ryosuke

    2013-04-01

    The growing affordability of digital devices and advances in positioning and tracking capabilities have ushered in today's age of geospatial Big Data. In addition, the emergence of massive mobile location data and the rapid increase in computational capabilities open up new opportunities for modeling large-scale urban dynamics. In this research, we demonstrate a new type of mobile location data called "Auto-GPS" and its potential use cases for urban applications. More than one million Auto-GPS mobile phone users in Japan were observed nationwide, in a completely anonymous form, for an entire year from August 2010 to July 2011 for this analysis. A spate of natural disasters and other emergencies during the past few years has prompted new interest in how mobile location data can help enhance our security, especially in urban areas, which are highly vulnerable to these impacts. New insights gleaned from mining the Auto-GPS data suggest a number of promising directions for modeling human movement during a large-scale crisis. We ask how people react under critical situations and how their movement changes during severe disasters. Our results demonstrate the case of a major earthquake and explain how people living in the Tokyo metropolitan area and its vicinity behaved and returned home after the Great East Japan Earthquake on March 11, 2011.

  4. Information Tailoring Enhancements for Large-Scale Social Data

    DTIC Science & Technology

    2016-06-15

    Intelligent Automation Incorporated, Progress Report No. 3: Information Tailoring Enhancements for Large-Scale Social Data. Submitted in accordance with ... The system also gathers information about entities from all news articles and displays it on over one million entity pages [5][6], and the information is made

  5. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable.

  6. Large scale static and dynamic friction experiments

    SciTech Connect

    Bakhtar, K.; Barton, N.

    1984-12-31

    A series of nineteen shear tests were performed on fractures 1 m² in area, generated in blocks of sandstone, granite, tuff, hydrostone, and concrete. The tests were conducted under quasi-static and dynamic loading conditions. A vertical-stress-assisted fracturing technique was developed to create the fractures through the large test blocks. Prior to testing, the fractured surface of each block was characterized using the Barton JRC-JCS concept. The results of the characterization were used to generate the peak strength envelope for each fractured surface. Attempts were made to model the stress path based on the classical transformation equations, which assume a theoretical plane, elastic isotropic properties, and therefore no slip. However, this approach gave rise to a stress path passing above the strength envelope, which is clearly unacceptable. The results of the experimental investigations indicated that the actual stress path is affected by the dilatancy due to fracture roughness, as well as by the side friction imposed by the boundary conditions. By introducing corrections for dilation and boundary conditions into the stress transformation equations, the fully corrected stress paths for predicting the strength of fractured blocks were obtained.
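
    The classical transformation the authors start from resolves the principal stresses onto a plane inclined to the loading axes; the dilation and boundary corrections are then applied on top of it. A sketch of the uncorrected equations only (the corrections are test-specific):

      import math

      def stress_on_plane(sigma1, sigma3, theta_deg):
          """Normal and shear stress on a plane whose normal makes angle
          theta with the sigma1 axis (classical 2-D transformation)."""
          t = math.radians(2.0 * theta_deg)
          sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(t)
          tau = 0.5 * (sigma1 - sigma3) * math.sin(t)
          return sn, tau

      print(stress_on_plane(10.0, 2.0, 30.0))  # MPa, illustrative values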

  7. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  8. Nonlinear large-scale optimization with WORHP

    NASA Astrophysics Data System (ADS)

    Nikolayzik, Tim; Büskens, Christof; Gerdts, Matthias

    Nonlinear optimization has grown into a key technology in many areas of the aerospace industry, e.g. satellite control, shape optimization, aerodynamics, trajectory planning, reentry problems, and interplanetary flights. One of the most extensive areas is the optimization of trajectories for aerospace applications. These problems typically are discretized optimal control problems, which lead to large sparse nonlinear optimization problems. In the end all these different problems from different areas can be described in the general formulation of a nonlinear optimization problem. WORHP is designed to solve nonlinear optimization problems with more than one million variables and one million constraints. WORHP uses many different advanced techniques, e.g. reverse communication, to organize the optimization process as efficiently and as controllably by the user as possible. The solver has nine different interfaces, e.g. to MATLAB/SIMULINK and AMPL. Tests of WORHP have shown that it is a very robust and promising solver. Several examples from space applications will be presented.
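
    WORHP itself is not shown here, but the standard sparse nonlinear program it targets, minimize f(x) subject to bounds on x and on the constraint functions g(x), can be illustrated with any NLP solver. A generic sketch using SciPy (an illustration of the problem form, not of WORHP):

      import numpy as np
      from scipy.optimize import minimize

      # Minimize f(x) subject to one inequality constraint and simple bounds.
      f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
      cons = ({'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2},)  # SciPy convention: fun(x) >= 0
      res = minimize(f, x0=np.array([2.0, 0.0]),
                     bounds=[(0, None), (0, None)], constraints=cons)
      print(res.x)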

  9. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper depicts a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures, and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift, and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline-based) method. The data-generation toolbox is implemented in the open-source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10,000 sensels), which are employed in the testing and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
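
    A minimal sketch of the kind of synthetic response such a toolbox generates: a sensitivity term, linear drift, and additive noise (an illustrative model written here in Python, not the package's actual R equations):

      import numpy as np

      def synthetic_response(conc, sens, drift_rate, noise_sd, t):
          """One sensor's synthetic response: sensitivity * concentration
          plus linear drift over time and Gaussian noise."""
          return sens * conc + drift_rate * t + np.random.normal(0.0, noise_sd, size=conc.shape)

      t = np.arange(1000.0)             # sample index (time)
      conc = np.abs(np.sin(t / 50.0))   # invented analyte concentration profile
      r = synthetic_response(conc, sens=2.0, drift_rate=1e-3, noise_sd=0.05, t=t)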

  10. Large-scale structural monitoring systems

    NASA Astrophysics Data System (ADS)

    Solomon, Ian; Cunnane, James; Stevenson, Paul

    2000-06-01

    Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault-tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems are used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.

  11. Large-Scale Magnetic Connectivity in CMEs

    NASA Astrophysics Data System (ADS)

    Zhang, Yuzong; Wang, Jingxiu; Attrill, Gemma; Harra, Louise K.

    Five flare/CME events were selected in this study. One is the May 12, 1997 event, for which there were only two active regions on the visible solar disc and the magnetic configuration was rather simple. For the other cases, many active regions were visible. They are the flare/CME events that occurred on Bastille Day of 2000, Oct. 28, 2003, Nov. 7, 2004, and Jan. 20, 2005. By tracing the spread of EUV dimming, obtained from SOHO/EIT 195 Å fixed-difference images, we studied the CME initiation and development on the solar disc. At the same time we reconstructed the 3D structure of the coronal magnetic fields, extrapolated from the photospheric magnetograms observed by SOHO/MDI. By scrutinizing the EUV brightening and dimming propagation from CME initiation sites to large areas with different magnetic connectivities, we determine the overall coupling and interaction of multiple flux systems in the CME processes. Several typical patterns of magnetic connectivity are described and discussed in view of CME initiation mechanisms.

  12. Impact Cratering Physics at Large Planetary Scales

    NASA Astrophysics Data System (ADS)

    Ahrens, Thomas J.

    2007-06-01

    Present understanding of the physics controlling the formation of ˜10^3 km diameter, multi-ringed impact structures on planets derives from the ideas of Scripps oceanographer W. Van Dorn, the University of London's W. Murray, and Caltech's D. O'Keefe, who modeled the vertical oscillations (with gravity and elasticity as restoring forces) of shock-induced melt and damaged rock within the transient crater immediately after the downward propagating hemispheric shock has processed the rock (both lining, and substantially below, the transient cavity crater). The resulting very large surface wave displacements produce the characteristic concentric, multi-ringed basins, as stored energy is radiated away and also dissipated by inducing further cracking. Initial calculational description of the above oscillation scenario has focused on properly predicting the resulting density of cracks and their orientations. A new numerical version of the Ashby-Sammis crack damage model is coupled to an existing shock hydrodynamics code to predict impact-induced damage distributions in a series of 15-70 cm rock targets from high speed impact experiments for a range of impactor types and velocities. These are compared to crack damage distributions induced in crustal rocks with small arms impactors and mapped ultrasonically in recent Caltech experiments (Ai and Ahrens, 2006).

  13. Complexities of the San Andreas fault near San Gorgonio Pass: Implications for large earthquakes

    NASA Astrophysics Data System (ADS)

    Yule, Doug; Sieh, Kerry

    2003-11-01

    Geologic relationships and patterns of crustal seismicity constrain the three-dimensional geometry of the active portions of the San Andreas fault zone near San Gorgonio Pass, southern California. Within a 20-km-wide contractional stepover between two segments of the fault zone, the San Bernardino and Coachella Valley segments, folds and dextral-reverse and dextral-normal faults form an east-west belt of active structures. The dominant active structure within the stepover is the San Gorgonio Pass-Garnet Hill fault system, a dextral-reverse system that dips moderately northward. Within the hanging-wall block of the San Gorgonio Pass-Garnet Hill fault system are subsidiary active dextral and dextral-normal faults. These faults relate in complex but understandable ways to the strike-slip faults that bound the stepover. The pattern of crustal seismicity beneath these structures includes a 5-8 km high, east-west striking step in the base of crustal seismicity, which corresponds to the downdip limit of rupture of the 1986 North Palm Springs earthquake. We infer that this step has been produced by slip on the linked San Gorgonio Pass-Garnet Hill-Coachella Valley Banning (SGP-GH-CVB) fault. This association enables us to construct a structure contour map of the fault plane. The large step in the base of seismicity downdip from the SGP-GH-CVB fault system probably reflects an offset of several kilometers in the midcrustal brittle-plastic transition. (U/Th)/He thermochronometry supports our interpretation that this south-under-north thickening of the crust has created the region's 3 km of topographic relief. We conclude that future large earthquakes generated along the San Andreas fault in this region will have a multiplicity of mostly specifiable sources with dimensions of 1-20 km. Two tasks in seismic hazard evaluation may now be attempted with greater confidence: first, the construction of synthetic seismograms that make useful predictions of ground shaking, and second

  14. Geomorphic Evidence for Multiple Large Post-glacial Earthquakes on the Western Seattle Fault

    NASA Astrophysics Data System (ADS)

    Haugerud, R. A.; Tabor, R. W.

    2008-12-01

    An apparently warped late-glacial outwash surface west of Bremerton suggests at least 23 meters of post-16 ka differential uplift across the western end of the Seattle fault. If the 7-9 meter vertical offset during the A.D. 900 earthquake farther east is typical of large Seattle fault events, deformation of this magnitude indicates 3 large earthquakes on this segment of the Seattle fault in the last 16,000 years. Geomorphic mapping from lidar topography (6-ft DEM from 1 pulse/m2 data acquired in leaf-off conditions, 2000 and 2001; survey contracted by the Puget Sound Lidar Consortium; data and DEM available at http://pugetsoundlidar.ess.washington.edu) outlines extensive relict alluvial flats. These flats were formed by meltwater that flowed south from the decaying Puget Lobe of the Cordilleran Ice Sheet during its last retreat about 16,000 years ago. In the Wildcat Lake 7.5-minute quadrangle, west of Bremerton, one of these flats extends south from upper Big Beef Creek, past William Symington Lake, and into the headwaters of the Tahuya River. This flat slopes southwards except for the part that extends from Symington Lake to the Big Beef-Tahuya divide, which slopes gently north. Projection of alluvial-flat elevations onto a north-south cross-section, corrected for 0.1% up-to-the-north tilting by post-glacial isostatic rebound, closely defines a smooth surface with a 23 meter elevation difference between the low at Symington Lake and the high at the Big Beef-Tahuya divide 5 km to the SW. Inclusion of the (unknown) paleo-gradient of the outwash stream would increase the amount of offset. No surface scarps are evident in this region, which suggests that young surface deformation records folding above a buried fault. The primary uncertainty in this analysis is the inference that the Big Beef-Symington-Tahuya flat was formed by a continuous south-flowing outwash stream. If the flat included a paleo-divide that separated N- from S-flowing streams, the observed

  15. Large Scale Interconnections Using Dynamic Gratings

    NASA Astrophysics Data System (ADS)

    Pauliat, Gilles; Roosen, Gerald

    1987-01-01

    Optics is attractive for interconnects because of the possibility of crossing multiple light beams without any interaction. A crossbar network can be achieved using holographic elements, which permit connecting all inputs and all outputs independently. The incorporation of dynamic holographic materials is enticing as this renders the interconnection changeable. However, it is necessary to find, first, a passive method for achieving beam deflection and, second, a photosensitive material of high optical quality requiring low power levels to optically induce the refractive index changes. We first describe an optical method that produces very large deflections of light beams, enabling random addressing of any spot on a plane. Such a technique appears applicable both to interconnections of VLSI chips and to random access of optical memories. Our scheme for realizing dynamic optical interconnects is based on Bragg diffraction of the beam to be steered by a dynamic phase grating whose spacing and orientation are changeable in real time. This is achieved in a passive way by acting on the optical frequency of the control beams used to record the dynamic grating. Deflection angles of 15° have been experimentally demonstrated for a 27 nm shift in the control wavelength. For a larger wavelength scan (50 nm), 28° deflections are anticipated while keeping the Bragg condition satisfied. We then discuss some issues related to photosensitive materials able to dynamically record the optically induced refractive index change. The specific example of Bi12SiO20 or Bi12GeO20 photorefractive crystals is presented. Indeed these materials are very attractive as they require low driving energy and exhibit a memory effect. This latter property permits numerous iterations between computing cells before reconfiguration of the interconnect network.
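
    For orientation, the first-order Bragg condition λ = 2Λ sin θ links the readout wavelength to the diffraction angle, which is why retuning the control wavelength (and hence the grating spacing) steers the beam. A toy calculation with invented values (the paper's actual grating parameters are not reproduced here):

      import math

      def bragg_angle_deg(wavelength_nm, grating_period_nm):
          """First-order Bragg angle from lambda = 2 * Lambda * sin(theta)."""
          return math.degrees(math.asin(wavelength_nm / (2.0 * grating_period_nm)))

      print(f"{bragg_angle_deg(633.0, 1500.0):.1f} deg")  # ~12.2 deg for a 633 nm beam, 1.5 um grating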

  16. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes.

    PubMed

    Passarelli, L; Rivalta, E; Shuler, A

    2014-01-28

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process.
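
    The Gutenberg-Richter analog fitted above has the form log10 N(>=M) = a - b*M; the maximum-likelihood (Aki) estimator is a standard way to obtain b. A generic sketch with invented magnitudes (for dikes, a geodetic-moment magnitude would play the role of M):

      import numpy as np

      def aki_b_value(mags, m_min):
          """Maximum-likelihood (Aki 1965) Gutenberg-Richter b-value for
          magnitudes at or above the completeness threshold m_min."""
          m = np.asarray(mags)
          m = m[m >= m_min]
          return np.log10(np.e) / (m.mean() - m_min)

      # Synthetic magnitudes drawn with a true b of 1.0, for illustration.
      mags = 4.0 + np.random.exponential(scale=1.0 / np.log(10), size=500)
      print(f"b ~ {aki_b_value(mags, 4.0):.2f}")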

  17. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    PubMed Central

    L., Passarelli; E., Rivalta; A., Shuler

    2014-01-01

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process. PMID:24469260

  18. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiation; in particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulation (DNS), because it provides precise control over the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiation to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean velocities

  19. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiation; in particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulation (DNS), because it provides precise control over the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiation to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean

  20. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) reactivation. Only a few scientific publications have addressed large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters, and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area: the area under the receiver operating characteristic curve for the landslide distribution probability in the new area is 0.699, and the distribution probability could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
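
    The distribution-probability equation is a logistic regression on geomorphological/geological predictors, validated by the area under the ROC curve. A generic scikit-learn sketch (invented data, not the paper's fitted parameters):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      # Invented predictors (e.g., slope, relief, a lithology score) and
      # presence/absence labels for large-scale landslides.
      X = np.random.rand(500, 3)
      y = (X @ np.array([2.0, 1.0, -1.5]) + np.random.normal(0, 0.5, 500) > 0.8).astype(int)

      model = LogisticRegression().fit(X, y)
      prob = model.predict_proba(X)[:, 1]   # landslide distribution probability
      print(f"AUC = {roc_auc_score(y, prob):.3f}")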

  1. What caused a large number of fatalities in the Tohoku earthquake?

    NASA Astrophysics Data System (ADS)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which was a "tsunami earthquake" that resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced in their lives. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the interviewees' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  2. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain a set of 48 and 20 modified Mercalli intensity values for the two aftershocks, respectively. For the dawn aftershock, we infer an Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer an Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.
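
    A minimal sketch of an intensity-based magnitude estimate in the spirit of the Bakun et al. (2002) method: each modified Mercalli intensity observation is inverted through an intensity-versus-distance model, and the trial magnitudes are averaged. The attenuation coefficients and observations below are placeholders, not the published central/eastern North America calibration.

    ```python
    import math

    # Placeholder attenuation model: MMI = a + b*M - c*d - g*log10(d), d in km.
    a, b, c, g = 1.41, 1.68, 0.00345, 2.08

    def magnitude_from_intensities(obs):
        """obs: (MMI, epicentral distance in km) pairs from felt reports."""
        trial = [(mmi - a + c * d + g * math.log10(d)) / b for mmi, d in obs]
        mean = sum(trial) / len(trial)
        sd = (sum((t - mean) ** 2 for t in trial) / (len(trial) - 1)) ** 0.5
        return mean, sd

    # e.g. a handful of hypothetical intensity observations for one aftershock:
    print(magnitude_from_intensities([(6, 80), (5, 200), (4, 400), (3, 650)]))
    ```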

  3. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics, with a focus on solving large-scale problems accurately and in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied in terms of parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems, including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements, since careful memory management is needed to solve such massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm; these were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data were calculated on a missile-like target using both high-frequency methods and MLFMA, then compared and analyzed to determine possible strategies for increasing data acquisition speed and accuracy through hybridization of the computation methods.
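
    A back-of-the-envelope version of the memory estimation mentioned above, for a sphere of diameter D wavelengths meshed into triangles carrying RWG basis functions. The sampling density (~λ/7 edges, chosen so the count matches the quoted 20 million unknowns) and the bytes-per-unknown figure are assumptions for illustration, not values from the dissertation.

    ```python
    import math

    def estimate_unknowns(diameter_wavelengths, edges_per_wavelength=7):
        area = math.pi * diameter_wavelengths ** 2    # sphere surface area, in wavelengths^2
        edge = 1.0 / edges_per_wavelength             # triangle edge length (wavelengths)
        tri_area = (math.sqrt(3) / 4) * edge ** 2     # equilateral-triangle area
        n_triangles = area / tri_area
        return 1.5 * n_triangles                      # ~1.5 RWG unknowns per triangle

    n = estimate_unknowns(200)                 # the 200-wavelength sphere solved with LSSP
    print(f"unknowns ~ {n:.1e}")               # ~2e7, matching the quoted 20 million
    # MLFMA memory grows roughly as O(N log N); a flat ~1 kB/unknown proxy gives
    # a rough total to be spread across processors.
    print(f"memory ~ {n * 1e3 / 2**30:.0f} GiB assuming ~1 kB per unknown")
    ```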

  4. Apparent break in earthquake scaling due to path and site effects on deep borehole recordings

    USGS Publications Warehouse

    Ide, S.; Beroza, G.C.; Prejean, S.G.; Ellsworth, W.L.

    2003-01-01

    We reexamine the scaling of stress drop and apparent stress (rigidity times the ratio of seismically radiated energy to seismic moment) with earthquake size for a set of microearthquakes recorded in a deep borehole in Long Valley, California. In the first set of calculations, we assume a constant Q and solve for the corner frequency and seismic moment. In the second set, we model the spectral ratio of nearby events to determine the same quantities. We find that the spectral ratio technique, which can account for path and site effects or nonconstant Q, yields higher stress drops, particularly for the smaller events in the data set. The measurements determined from spectral ratios indicate no departure from constant stress-drop scaling down to the smallest events in our data set (Mw 0.8). Our results indicate that propagation effects can contaminate measurements of source parameters even in the relatively clean recording environment of a deep borehole, just as they do at the Earth's surface. The scaling of source properties of microearthquakes determined from deep borehole recordings may therefore need to be reevaluated.
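
    The quantities in this abstract map onto the standard Brune-model relations, sketched below: stress drop from seismic moment and corner frequency, and apparent stress from radiated energy. The formulas are textbook; the input values are illustrative, not the paper's measurements.

    ```python
    import math

    def brune_stress_drop(m0, fc, beta=3500.0):
        """Stress drop (Pa) from seismic moment (N*m), corner frequency (Hz),
        and shear-wave speed (m/s), via the Brune source radius."""
        r = 2.34 * beta / (2.0 * math.pi * fc)   # source radius (m)
        return 7.0 * m0 / (16.0 * r ** 3)

    def apparent_stress(radiated_energy, m0, rigidity=3e10):
        """Apparent stress (Pa): rigidity times radiated energy over moment."""
        return rigidity * radiated_energy / m0

    m0 = 2e10    # N*m, roughly Mw 0.8, the smallest events in the data set
    fc = 60.0    # Hz, an assumed corner frequency for a microearthquake
    print(f"stress drop ~ {brune_stress_drop(m0, fc) / 1e6:.2f} MPa")
    print(f"apparent stress ~ {apparent_stress(3e4, m0) / 1e6:.3f} MPa")
    ```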

  5. Scaling earthquake ground motions for performance-based assessment of buildings

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.

    2011-01-01

    The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated, and recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single-degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter, ε, is presented, and a scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
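
    For concreteness, the sketch below implements the scale-factor arithmetic behind methods (1) and (3), assuming the 5%-damped response spectra have already been computed; the numbers are illustrative.

    ```python
    import math

    def first_mode_scale_factor(sa_record_t1, sa_target_t1):
        """Method (3): amplitude-scale a record so its 5%-damped Sa at the
        first-mode period T1 matches the target spectral acceleration."""
        return sa_target_t1 / sa_record_t1

    def geomean_scale_factor(sa_x_t1, sa_y_t1, sa_target_t1):
        """Method (1): scale a pair of horizontal components so the geometric
        mean of their Sa(T1) values matches the target."""
        return sa_target_t1 / math.sqrt(sa_x_t1 * sa_y_t1)

    # e.g. a component pair with Sa(T1) of 0.31 g and 0.42 g and a 0.50 g target:
    k = geomean_scale_factor(0.31, 0.42, 0.50)
    print(f"scale both components by {k:.2f}")   # ~1.39
    ```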

  6. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  7. Large mid-Holocene and late Pleistocene earthquakes on the Oquirrh fault zone, Utah

    USGS Publications Warehouse

    Olig, S.S.; Lund, W.R.; Black, B.D.

    1994-01-01

    The Oquirrh fault zone is a range-front normal fault that bounds the east side of Tooele Valley, and it has long been recognized as a potential source for large earthquakes that pose a significant hazard to population centers along the Wasatch Front in central Utah. Scarps of the Oquirrh fault zone offset the Provo shoreline of Lake Bonneville, and previous studies of scarp morphology suggested that the most recent surface-faulting earthquake occurred between 9000 and 13,500 years ago. Based on a potential rupture length of 12 to 21 km from previous mapping, moment magnitude (Mw) estimates for this event range from 6.3 to 6.6. In contrast, our results from detailed mapping and trench excavations at two sites indicate that the most recent event actually occurred between 4300 and 6900 yr B.P. (4800 and 7900 cal B.P.) and net vertical displacements were 2.2 to 2.7 m, much larger than expected considering estimated rupture lengths for this event. Empirical relations between magnitude and displacement yield Mw 7.0 to 7.2. A few short, discontinuous fault scarps as far south as Stockton, Utah, have been identified in a recent mapping investigation, and our results suggest that they may be part of the Oquirrh fault zone, increasing the total fault length to 32 km. These results emphasize the importance of integrating stratigraphic and geomorphic information in fault investigations for earthquake hazard evaluations. At both the Big Canyon and Pole Canyon sites, trenches exposed faulted Lake Bonneville sediments and thick wedges of fault-scarp-derived colluvium associated with the most recent event. Bulk sediment samples from a faulted debris-flow deposit at the Big Canyon site yield radiocarbon ages of 7650 ± 90 yr B.P. and 6840 ± 100 yr B.P. (all lab errors are ±1σ). A bulk sediment sample from unfaulted fluvial deposits that bury the fault scarp yields a radiocarbon age estimate of 4340 ± 60 yr B.P. Stratigraphic evidence for a pre-Bonneville lake cycle penultimate
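
    A sketch of the empirical magnitude-from-displacement step, using commonly cited Wells and Coppersmith (1994) all-slip-type coefficients; treat them as illustrative and check the published tables before any real use.

    ```python
    import math

    def mw_from_avg_displacement(ad_m):
        return 6.93 + 0.82 * math.log10(ad_m)    # AD: average displacement (m)

    def mw_from_rupture_length(srl_km):
        return 5.08 + 1.16 * math.log10(srl_km)  # SRL: surface rupture length (km)

    for d in (2.2, 2.7):   # net vertical displacements measured in the trenches
        print(f"AD = {d} m -> Mw ~ {mw_from_avg_displacement(d):.1f}")
    print(f"SRL = 32 km -> Mw ~ {mw_from_rupture_length(32):.1f}")
    ```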

  8. Coulomb stress changes in the South Iceland Seismic Zone due to two large earthquakes in June 2000

    NASA Astrophysics Data System (ADS)

    Arnadottir, Th.; Jonsson, S.; Pedersen, R.; Gudmundsson, G.

    2003-04-01

    The South Iceland Seismic Zone experienced its largest earthquakes in 88 years in June 2000. The earthquake sequence started with an M_S = 6.6 earthquake on June 17, 2000 (15:40:41 UTC), located at 63.975°N, 20.370°W and 6.3 km depth. A second large event (M_S = 6.6) occurred on June 21, 2000 (00:51:47 UTC), located 17 km west of the June 17 rupture, at 63.977°N, 20.713°W and 5.1 km depth. The June 17 and 21 mainshocks ruptured two parallel N-S striking, right-lateral strike-slip faults. Seismicity increased over a large area in SW Iceland following the June 17 mainshock, with most of the off-fault activity located west and north of the epicenter. Surface waves from the June 17 mainshock probably triggered significant slip on three faults on the Reykjanes Peninsula. Less activity appears to have been triggered in the Hengill area and on the Reykjanes Peninsula following the June 21 earthquake, although it occurred closer to these areas than the June 17 event. Coseismic crustal deformation due to these earthquakes was observed with continuous and network GPS and interferometric synthetic aperture radar (InSAR). The geodetic data have been combined to estimate fault geometries and distributed slip models for the June 17 and 21 mainshocks. In this study we use these slip models to calculate the static Coulomb failure stress (CFS) change for the June 2000 earthquakes. We find that the static CFS change caused by the June 17 event is about 0.1 MPa at the location of the June 21 hypocenter, promoting failure on the second fault. The locations of aftershocks agree well with areas of increased CFS. Seismicity in areas where the CFS increase was less than 0.01 MPa, such as on the Reykjanes Peninsula and in the Hengill volcanic area, may have been dynamically triggered. Our calculations indicate a positive CFS change in the area west of the southern end of the June 21 rupture, due to the two June 2000 mainshocks, which correlates well with a significant increase in seismicity.
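
    The static Coulomb failure stress change used in this study has the standard form ΔCFS = Δτ + μ′Δσn; a minimal sketch follows, with illustrative stress values rather than the paper's slip-model output.

    ```python
    def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
        """dCFS = d_tau + mu' * d_sigma_n, stresses in MPa. d_shear is the shear
        stress change resolved in the receiver's slip direction, d_normal the
        normal stress change (unclamping positive), mu_eff the effective
        friction coefficient (an assumed value here)."""
        return d_shear + mu_eff * d_normal

    # e.g. a receiver patch at the June 21 hypocenter loaded by the June 17 slip:
    print(f"dCFS = {coulomb_stress_change(0.08, 0.05):.2f} MPa")  # ~0.1 MPa
    ```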

  9. Revised dates of large earthquakes along the Carrizo section of the San Andreas Fault, California, since A.D. 1310 ± 30

    NASA Astrophysics Data System (ADS)

    Akciz, Sinan O.; Grant Ludwig, Lisa; Arrowsmith, J. Ramon

    2009-01-01

    Precise knowledge of the age and magnitude of past earthquakes is essential for characterizing models of earthquake recurrence and key to forecasting future earthquakes. We present 28 new radiocarbon analyses that refine the chronology of the last five earthquakes at the Bidart Fan site along the Carrizo section of the south central San Andreas Fault, which last ruptured during the Fort Tejon earthquake in A.D. 1857. The new data show that the penultimate earthquake in the Carrizo Plain occurred not earlier than A.D. 1640 and the modeled 95th percentile ranges of the three earlier earthquakes (and their mean) are A.D. 1540-1630 (1585), A.D. 1360-1425 (1393), and A.D. 1280-1340 (1310), indicating an average time interval of 137 ± 44 years between large earthquakes since A.D. 1310 ± 30. A robust earthquake recurrence model of the Carrizo section will require even more well-dated earthquakes for thorough characterization. However, these new data imply that since A.D. 1310 ± 30, the Carrizo section has failed more regularly and more often than previously thought.
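
    A quick arithmetic check of the quoted ~137-yr mean interval, taking A.D. 1640 as an approximate date for the penultimate event (the paper only bounds it as not earlier than 1640); the ±44 yr uncertainty comes from the paper's dating model, not from this simple mean.

    ```python
    dates = [1310, 1393, 1585, 1640, 1857]        # modeled means, 1640 bound, 1857 rupture
    intervals = [b - a for a, b in zip(dates, dates[1:])]
    print(intervals)                              # [83, 192, 55, 217]
    print(sum(intervals) / len(intervals))        # 136.75, i.e. the quoted ~137 yr
    ```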

  10. Large-scale velocity structures in turbulent thermal convection.

    PubMed

    Qiu, X L; Tong, P

    2001-09-01

    A systematic study of large-scale velocity structures in turbulent thermal convection is carried out in three different aspect-ratio cells filled with water. Laser Doppler velocimetry is used to measure the velocity profiles and statistics over varying Rayleigh numbers Ra and at various spatial positions across the whole convection cell. Large velocity fluctuations are found both in the central region and near the cell boundary. Despite the large velocity fluctuations, the flow field still maintains a large-scale quasi-two-dimensional structure, which rotates in a coherent manner. This coherent single-roll structure scales with Ra and can be divided into three regions in the rotation plane: (1) a thin viscous boundary layer, (2) a fully mixed central core region with a constant mean velocity gradient, and (3) an intermediate plume-dominated buffer region. The experiment reveals a unique driving mechanism for the large-scale coherent rotation in turbulent convection.

  11. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of large-scale systems in order to facilitate the development of a true large-systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering, and manufacturing/operations research for their ideas concerning large-scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large-scale systems research. He was also requested to convene a conference, with three experts from each area as panel members, to discuss the general area of large-scale systems research. The conference was held on March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  12. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins, and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Centre (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region, where the spatial coverage and magnitude scales of different local recording networks are compared and their relation to global magnitude scales explored. In the second application, the tools are used on a global scale to create an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
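
    A minimal sketch of the harmonization step such tools automate: fit an empirical conversion from a native magnitude scale (here Ms) to Mw on events common to both bulletins, then apply it to events lacking a direct Mw. Ordinary least squares stands in for the more careful regression options real tools provide, and the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    ms = rng.uniform(4.5, 7.5, 300)                   # events with both Ms and Mw
    mw = 0.67 * ms + 2.1 + rng.normal(0, 0.15, 300)   # synthetic paired Mw values

    b, a = np.polyfit(ms, mw, 1)                      # fit Mw ~ a + b * Ms
    print(f"Mw = {a:.2f} + {b:.2f} * Ms")

    ms_only = np.array([5.2, 6.1, 6.8])               # events lacking a direct Mw
    print("proxy Mw:", np.round(a + b * ms_only, 2))  # harmonized magnitudes
    ```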

  13. Possible minimum depths of large historical earthquakes in eastern North America

    SciTech Connect

    Acharya, H.

    1980-08-01

    For instrumentally recorded earthquakes of magnitude >5.0 in northeastern North America, focal depth has been estimated using corner periods. Agreement has been noted between these estimates and estimates obtained by assuming a spherical source and using absence of surface faulting as a boundary condition. This suggests that in eastern North America the minimum depth of earthquakes with magnitude m_bLg > 6.0 would be about 10 km.
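
    A minimal sketch of the reasoning: a spherical source whose radius follows from the corner period must stay buried if no surface faulting is observed, so the radius bounds the focal depth from below. The Brune-style constant, shear-wave speed, and 8 s corner period are assumptions for illustration.

    ```python
    import math

    def min_depth_km(corner_period_s, beta=3.5):
        """Spherical-source radius (km) from the corner period, taken as a lower
        bound on focal depth when no surface faulting is observed."""
        fc = 1.0 / corner_period_s                 # corner frequency (Hz)
        return 2.34 * beta / (2.0 * math.pi * fc)  # Brune-style source radius (km)

    print(f"min depth ~ {min_depth_km(8.0):.0f} km")  # ~10 km for an 8 s corner period
    ```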

  14. Analog earthquakes

    SciTech Connect

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.