Science.gov

Sample records for large scale earthquakes

  1. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well-determined source parameters shows that these earthquakes obey a scaling law similar to large interplate earthquakes, in which M0 varies as L^2, or u = αL, where L is rupture length and u is slip. In contrast to interplate earthquakes, for which α is approximately 1 × 10^-5, for the intraplate events α is approximately 6 × 10^-5, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. It implies that intraplate faults have a higher frictional strength than plate boundaries, and hence that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.
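
    As a quick, purely illustrative check of what the quoted scaling implies, the sketch below evaluates u = αL and a scalar moment M0 = μ·u·L·W for the two α values in the abstract; the rigidity and the assumed aspect ratio W = L/2 are not from the paper:

      # Illustrative only: slip and moment implied by u = alpha * L for the
      # interplate vs. intraplate alpha values quoted in the abstract.
      MU = 3.0e10                                   # assumed crustal rigidity, Pa
      ALPHA = {"interplate": 1.0e-5, "intraplate": 6.0e-5}

      def moment_from_length(length_m, alpha, aspect=0.5):
          """M0 = mu * u * L * W, with u = alpha*L and an assumed W = aspect*L."""
          slip = alpha * length_m                   # average slip, m
          width = aspect * length_m                 # assumed rupture width, m
          return MU * slip * length_m * width       # scalar moment, N m

      for regime, alpha in ALPHA.items():
          L = 100e3                                 # 100 km rupture, for illustration
          print(regime, f"slip = {alpha * L:.1f} m,",
                f"M0 = {moment_from_length(L, alpha):.2e} N m")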

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large-scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.

  3. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger and more detailed problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
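
    The strong scaling referred to above (fixed problem size, increasing processor count) is conventionally summarized as parallel efficiency; the helper below sketches that bookkeeping with placeholder timings, since no run times are given in the abstract:

      # Strong-scaling efficiency: speedup relative to a baseline run, divided by
      # the increase in processor count. Timings are placeholders, not TS-AWP data.
      def strong_scaling_efficiency(t_base, p_base, t_run, p_run):
          speedup = t_base / t_run
          return speedup / (p_run / p_base)

      runs = {256: 1000.0, 512: 520.0, 1024: 270.0}   # procs -> wall time (s), assumed
      t0, p0 = runs[256], 256
      for procs, t in runs.items():
          print(procs, f"efficiency = {strong_scaling_efficiency(t0, p0, t, procs):.2f}")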

  4. Earthquake triggering and large-scale geologic storage of carbon dioxide

    PubMed Central

    Zoback, Mark D.; Gorelick, Steven M.

    2012-01-01

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185–5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions. PMID:22711814

  5. Large scale dynamic rupture scenario of the 2004 Sumatra-Andaman megathrust earthquake

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Madden, Elizabeth H.; Wollherr, Stephanie; Gabriel, Alice A.

    2016-04-01

    The Great Sumatra-Andaman earthquake of 26 December 2004 is one of the strongest and most devastating earthquakes in recent history. Most of the damage and the ~230,000 fatalities were caused by the tsunami generated by the Mw 9.1-9.3 event. Various finite-source models of the earthquake have been proposed, but poor near-field observational coverage has led to distinct differences in source characterization. Even the fault dip angle and depth extent are subject to debate. We present a physically realistic dynamic rupture scenario of the earthquake using state-of-the-art numerical methods and seismotectonic data. Due to the lack of near-field observations, our setup is constrained by the overall characteristics of the rupture, including the magnitude, propagation speed, and extent along strike. In addition, we incorporate the detailed geometry of the subducting fault using Slab1.0 to the south and aftershock locations to the north, combined with high-resolution topography and bathymetry data. The possibility of inhomogeneous background stress, resulting from the curved shape of the slab along strike and the large fault dimensions, is discussed. The possible activation of thrust faults splaying off the megathrust in the vicinity of the hypocenter is also investigated. Dynamic simulation of this 1300 to 1500 km rupture is a computational and geophysical challenge. In addition to capturing the large-scale rupture, the simulation must resolve the process zone at the rupture tip, whose characteristic length is comparable to that of smaller earthquakes and which shrinks with propagation distance. Thus, the fault must be finely discretised. Moreover, previously published inversions agree on a rupture duration of ~8 to 10 minutes, suggesting an overall slow rupture speed. Hence, both long temporal scales and large spatial dimensions must be captured. We use SeisSol, a software package based on an ADER-DG scheme solving the spontaneous dynamic earthquake rupture problem with high

  6. Optimization and Scalability of a Large-scale Earthquake Simulation Application

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Olsen, K. B.; Hu, Y.; Day, S.; Dalguer, L. A.; Minster, B.; Moore, R.; Zhu, J.; Maechling, P.; Jordan, T.

    2006-12-01

    In 2004, the Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation, called TeraShake. The TeraShake simulations propagated seismic waves across a domain of 600 km by 300 km by 80 km at 200 meter resolution and 1.8 billion grid points, some of the largest and most detailed earthquake simulations of the southern San Andreas fault. The TeraShake 1 code is based on a 4th order FD Anelastic Wave Propagation Model (AWM), developed by K. Olsen, using a kinematic source description. The enhanced TeraShake 2 then added a new physics-based dynamic component, with the new capability to perform very-large-scale earthquake simulations. A high 100 m resolution was used to generate a physically realistic earthquake source description for the San Andreas fault. The executions of very-large-scale TeraShake 2 simulations with the high-resolution dynamic source used up to 1024 processors on the TeraGrid, adding more than 60 TB of simulation output to the 168 TB SCEC digital library, managed by the SDSC Storage Resource Broker (SRB) at SDSC. The execution of these large simulations requires high levels of expertise and resource coordination. We examine the lessons learned in enabling the execution of the TeraShake application. In particular, we look at challenges imposed for the single-processor optimization of the application performance, optimization of the I/O handling and optimization of the run initialization, and the execution of the data-intensive simulations. The TeraShake code was optimized to improve scalability to 2048 processors, with a parallel efficiency of 84%. Our latest TeraShake simulation sustains 1 Teraflop/s performance, completing a simulation in less than 9 hours on the SDSC DataStar. This is more than 10 times faster than previous TeraShake simulations. Some of the TeraShake production simulations were carried out using grid computing resources, including the execution on NCSA TeraGrid resources, and run-time archiving outputs onto SDSC
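
    The grid size and efficiency figures quoted above can be checked with simple arithmetic; the snippet below uses only the domain dimensions, spacing, processor count, and efficiency stated in the abstract (the speedup figure assumes efficiency is measured relative to a single processor):

      # TeraShake domain: 600 km x 300 km x 80 km at 200 m spacing (from the abstract).
      nx, ny, nz = 600_000 // 200, 300_000 // 200, 80_000 // 200
      print(f"{nx * ny * nz / 1e9:.1f} billion grid points")   # ~1.8 billion

      # If efficiency is relative to one processor, 84% on 2048 processors
      # corresponds to an effective speedup of roughly 0.84 * 2048 ~ 1720.
      print(round(0.84 * 2048))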

  7. Rupture propagation speed during earthquake faulting reproduced by large-scale biaxial friction experiments

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.; Fukuyama, E.; Yamashita, F.; Takizawa, S.; Kawakata, H.

    2013-12-01

    Earthquakes are generated by unstable frictional slip along pre-existing faults. Both laboratory experiments and numerical simulations have shown that the rupture process involves an initial quasi-static phase, a subsequent accelerating phase and a main dynamic rupture phase. During the main phase, the rupture front propagates at either subshear or supershear velocity, which affects the seismic wave radiation pattern. Examining what controls the rupture speed is crucial for improving earthquake hazard mitigation. We therefore conducted stick-slip experiments on meter-scale Indian gabbro rocks to observe the rupture process of unstable periodic slip events and to measure the rupture speed along the fault. The simulated fault plane is 1.5 m in length and 0.1 m in width, ground with #200-300 grit. The fault is loaded at a constant normal stress of 6.7 MPa and sheared parallel to the longitudinal direction of the fault at a slip rate of 0.1 mm/s, up to a displacement of 40 cm. The long, narrow fault geometry leads to in-plane shear rupture (mode II), in which the rupture front propagates in the direction of slip, which mimics large strike-slip earthquake faulting. Compressional- (Vp) and shear- (Vs) wave velocities of the rock sample are calculated to be 6.92 km/s and 3.62 km/s, respectively, based on the elastic properties (Young's modulus, 103 GPa; Poisson's ratio, 0.331; shear modulus, 38 GPa). 32 biaxial strain gauges for shear strain and 16 single-axis strain gauges for normal strain were attached along the longitudinal direction of the fault at intervals of 5 cm and 10 cm, respectively. The local strain data were recorded at a sampling rate of 1 MHz with 16-bit resolution. Load cells attached outside the fault plane measured the whole normal and shear forces applied on the fault plane, which were recorded by the same recording system. We have confirmed that the rupture process of unstable slip events consists of 1) an initial quasi-static phase where the slipped area
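
    The quoted Vp and Vs can be approximately recovered from the listed elastic moduli once a density is assumed; in the sketch below the gabbro density is an assumed value (it is not given in the abstract), so exact agreement with the quoted velocities is not expected:

      from math import sqrt

      # Elastic properties from the abstract; density is an assumed value for gabbro.
      G = 38e9        # shear modulus, Pa
      nu = 0.331      # Poisson's ratio
      rho = 2900.0    # assumed density, kg/m^3

      vs = sqrt(G / rho)                                     # shear-wave speed
      vp = vs * sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))    # isotropic Vp/Vs relation
      print(f"Vs ~ {vs / 1e3:.2f} km/s, Vp ~ {vp / 1e3:.2f} km/s")
      # Compare with the quoted 3.62 km/s and 6.92 km/s.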

  8. Reconsidering earthquake scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Wech, A.; Creager, K.; Obara, K.; Agnew, D.

    2016-06-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.
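
    As a generic illustration of a change in moment-duration scaling of the kind described above, the sketch below switches from M0 ∝ T^3 (growth in two dimensions) to M0 ∝ T (bounded growth); the exponents are the standard crack-model values and the crossover values are placeholders, none of which are taken from the paper:

      # Piecewise moment-duration scaling: 2-D growth below a crossover duration,
      # 1-D (bounded) growth above it. All numbers are illustrative placeholders.
      def moment_from_duration(T, T_cross=10.0, M_cross=1e19):
          """Return M0 (N m) for duration T (s), continuous across the crossover."""
          if T <= T_cross:
              return M_cross * (T / T_cross) ** 3    # slip area grows in two dimensions
          return M_cross * (T / T_cross)             # growth confined to one dimension

      for T in (1.0, 10.0, 100.0):
          print(f"T = {T:6.1f} s  ->  M0 ~ {moment_from_duration(T):.1e} N m")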

  9. Earthquake Scaling, Simulation and Forecasting

    NASA Astrophysics Data System (ADS)

    Sachs, Michael Karl

    Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand on February 22, and the magnitude 9.0 Tohoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear, so that very small differences in the systems now result in very large differences in the future, making forecasting difficult. In spite of this, there are patterns that exist in earthquake data. These patterns are often in the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed. In many cases these scaling relations show consistent behavior over a wide range of scales. This consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of the earthquake catalogs which, especially in the case of large events, are fairly small and limited to a few hundred years of events. In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems. The focus of this area is to understand how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California designed to extend the observed catalog of earthquakes in California. This simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional
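
    The frequency-magnitude scaling relations mentioned above are usually written in Gutenberg-Richter form, log10 N(>=M) = a - b*M; the sketch below evaluates that relation with placeholder a and b values, not estimates from any particular catalog:

      # Gutenberg-Richter relation: log10 N(>= M) = a - b * M.
      # The a and b values below are placeholders chosen for illustration only.
      a, b = 5.0, 1.0

      def expected_count(magnitude):
          """Expected number of events at or above `magnitude` per unit time."""
          return 10.0 ** (a - b * magnitude)

      for m in (3, 5, 7):
          print(f"M >= {m}: ~{expected_count(m):.0f} events")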

  10. A parallel implementation of the Lattice Solid Model for large scale simulation of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Abe, S.; Place, D.; Mora, P.

    2001-12-01

    The particle-based lattice solid model has been used successfully as a virtual laboratory to simulate the dynamics of faults, earthquakes and gouge processes. The phenomena investigated with the lattice solid model range from the stick-slip behavior of faults, localization phenomena in gouge and the evolution of stress correlation in multi-fault systems, to the influence of rate- and state-dependent friction laws on the macroscopic behavior of faults. However, the results from those simulations also show that in order to make a next step towards more realistic simulations it will be necessary to use three-dimensional models containing a large number of particles with a range of sizes, thus requiring a significantly increased amount of computing resources. Whereas the computing power provided by a single processor can be expected to double every 18 to 24 months, parallel computers which provide hundreds of times the computing power are available today, and there are several efforts underway to construct dedicated parallel computers and associated simulation software systems for large-scale earth science simulation (e.g. the Australian Computational Earth Systems Simulator[1] and the Japanese Earth Simulator[2]). In order to use the computing power made available by those large parallel computers, a parallel version of the lattice solid model has been implemented. To guarantee portability over a wide range of computer architectures, a message passing approach based on MPI has been used in the implementation. Particular care has been taken to eliminate serial bottlenecks in the program, thus ensuring high scalability on systems with a large number of CPUs. Measures taken to achieve this objective include the use of asynchronous communication between the parallel processes and the minimization of communication with and work done by a central ``master'' process. Benchmarks using models with up to 6 million particles on a parallel computer with 128 CPUs show that the
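
    A minimal sketch of the master-free, asynchronous message passing the abstract alludes to is shown below using mpi4py; the ring-neighbor pattern and buffer contents are invented for illustration and do not reproduce the lattice solid model's actual communication scheme:

      from mpi4py import MPI
      import numpy as np

      # Each rank exchanges boundary data with its ring neighbors using non-blocking
      # sends/receives, so no central "master" process mediates the exchange.
      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      left, right = (rank - 1) % size, (rank + 1) % size

      send_buf = np.full(8, float(rank))     # stand-in for boundary particle data
      recv_buf = np.empty_like(send_buf)

      reqs = [comm.Isend(send_buf, dest=right, tag=0),
              comm.Irecv(recv_buf, source=left, tag=0)]
      MPI.Request.Waitall(reqs)              # local computation could overlap here
      print(f"rank {rank} received boundary data from rank {left}: {recv_buf[0]:.0f}")

    Run, for example, with "mpiexec -n 4 python exchange.py" (the script name is assumed).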

  11. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.; Ruppert, S.

    2002-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds that the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by analyzing aftershock sequences in the Western U.S. and Turkey using two different techniques. First, we examine the observed regional S-wave spectra by fitting with a parametric model (Walter and Taylor, 2002) with and without variable stress drop scaling. Because the aftershock sequences have common stations and paths, we can examine the S-wave spectra of events by size to determine what type of apparent stress scaling, if any, is most consistent with the data. Second, we use regional coda envelope techniques (e.g. Mayeda and Walter, 1996; Mayeda et al., 2002) on the same events to directly measure energy and moment. The coda technique corrects for path and site effects using an empirical Green function technique and independent calibration with surface-wave-derived moments. Our hope is that by carefully analyzing a very large number of events in a consistent manner using two different techniques we can start to resolve this apparent stress scaling issue. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.
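
    The quantity at issue above, apparent stress, is the radiated energy per unit seismic moment scaled by rigidity, σa = μ·ER/M0; the helper below evaluates it with an assumed rigidity and placeholder energy and moment values, not measurements from this study:

      # Apparent stress: sigma_a = mu * E_R / M0.
      MU = 3.0e10   # assumed crustal rigidity, Pa

      def apparent_stress(radiated_energy_j, seismic_moment_nm, mu=MU):
          """Return apparent stress in Pa."""
          return mu * radiated_energy_j / seismic_moment_nm

      # Placeholder example: E_R = 1e13 J, M0 = 1e17 N m  ->  3.0 MPa
      print(f"{apparent_stress(1e13, 1e17) / 1e6:.1f} MPa")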

  12. Large-scale mapping of landslides in the epicentral area of the Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  13. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. On the basis of the estimated cost of damage, one is most suitable for domestic events; the other, on the basis of estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lend to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
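
    The alert thresholds quoted above translate directly into a lookup; the sketch below encodes that mapping using only the fatality and loss thresholds given in the abstract (it is an illustration, not PAGER code):

      # Map estimated fatalities or dollar losses to the EIS alert colors using the
      # thresholds quoted in the abstract (illustrative helper, not PAGER code).
      def alert_level(estimated_fatalities=None, estimated_losses_usd=None):
          thresholds = [(1_000, 1e9, "red"), (100, 1e8, "orange"), (1, 1e6, "yellow")]
          for fatality_cut, loss_cut, color in thresholds:
              if estimated_fatalities is not None and estimated_fatalities >= fatality_cut:
                  return color
              if estimated_losses_usd is not None and estimated_losses_usd >= loss_cut:
                  return color
          return "green"

      print(alert_level(estimated_fatalities=250))      # orange
      print(alert_level(estimated_losses_usd=5e6))      # yellow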

  14. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Mayeda, K.; Walter, W. R.

    2003-04-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds that the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by applying the same methodology to a series of datasets that spans roughly 10 orders of magnitude in seismic moment, M0. We will summarize recent results using the coda envelope methodology of Mayeda et al. (2003), which provides the most stable source spectral estimates to date. This methodology eliminates the complicating effects of lateral path heterogeneity, source radiation pattern, directivity, and site response (e.g., amplification, f-max and kappa). We find that in tectonically active continental crustal areas the total radiated energy scales as M0^0.25, whereas in regions of relatively younger oceanic crust, the stress drop is generally lower and exhibits a 1-to-1 scaling with moment. In addition to answering a fundamental question in earthquake source dynamics, this study addresses how one would scale small earthquakes in a particular region up to a future, more damaging earthquake. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  15. Earthquake sequence simulation of a multi-scale asperity model following rate and state friction - occurrence of large earthquakes by cascade up vs. own nucleation

    NASA Astrophysics Data System (ADS)

    Noda, H.; Nakatani, M.; Hori, T.

    2012-12-01

    Seismological observations [e.g., Abercrombie and Rice, 2005] suggest that a larger earthquake has larger fracture energy Gc. One way to realize such scaling is to assume a hierarchical patchy distribution of Gc on a fault; there are patches of different sizes with different Gc so that a larger patch has larger Gc. Ide and Aochi [2005] conducted dynamic rupture simulations with such a distribution of weakening distance Dc in a linear slip-weakening law, initiating ruptures on the smallest patch, which sometimes grow by cascading into a larger scale. They suggested that the initial phase of a large earthquake is indistinguishable from that of a small earthquake. In the present study we simulate a similar multi-scale asperity model but following rate and state friction (RSF), where the stress and strength distribution resulting from the history of coseismic and aseismic slip influences the way of rupture initiation, growth, and arrest of a forthcoming earthquake. Multi-scale asperities were represented by a distribution of the state evolution distance dc in the aging version of the RSF evolution law. The numerical scheme adopted [Noda and Lapusta, 2010] is fully dynamic and 3D. We have modeled a circular rate-weakening patch, Patch L (radius R), which contains a smaller patch, Patch S (radius r), at its rim. The ratio of the radii, α = R/r, expresses the gap between the two scales. Patch L and Patch S respectively have nucleation sizes Rc and rc. The same brittleness β = R/Rc = r/rc is assumed for simplicity. We shall call an earthquake which ruptures only Patch S an S-event, and one which ruptures Patch L, an L-event. We have conducted a series of simulations with α from 2 to 5 while keeping β = 3 until the end of the 20th L-event. If Patch S was relatively large (α = 2 and 2.5), only L-events occurred and they always dynamically cascaded up from a Patch S rupture following small quasi-static nucleation there. If Patch S was small enough (α = 5), in
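
    For reference, the aging form of the state evolution used above is dθ/dt = 1 - Vθ/dc, and nucleation sizes such as Rc and rc are commonly estimated as proportional to G·dc/((b - a)·σ); the sketch below uses that generic estimate with placeholder parameter values, since the abstract does not give the specific expression or numbers used in the study:

      # Aging-law state evolution and a generic nucleation-size estimate.
      # All parameter values below are placeholders for illustration only.
      G = 30e9                      # shear modulus, Pa
      a_rs, b_rs = 0.010, 0.014     # rate-and-state parameters (b > a: velocity weakening)
      sigma = 50e6                  # effective normal stress, Pa

      def dtheta_dt(theta, slip_rate, dc):
          """Aging law: dtheta/dt = 1 - V * theta / dc."""
          return 1.0 - slip_rate * theta / dc

      def nucleation_radius(dc, geometry_factor=1.0):
          """Generic estimate: R ~ C * G * dc / ((b - a) * sigma)."""
          return geometry_factor * G * dc / ((b_rs - a_rs) * sigma)

      for dc in (1e-3, 1e-2):       # state evolution distances for the two patch scales
          print(f"dc = {dc:.0e} m  ->  nucleation radius ~ {nucleation_radius(dc):.0f} m")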

  16. Earthquake Scaling Relations

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Boettcher, M.; Richardson, E.

    2002-12-01

    Using scaling relations to understand nonlinear geosystems has been an enduring theme of Don Turcotte's research. In particular, his studies of scaling in active fault systems have led to a series of insights about the underlying physics of earthquakes. This presentation will review some recent progress in developing scaling relations for several key aspects of earthquake behavior, including the inner and outer scales of dynamic fault rupture and the energetics of the rupture process. The proximate observations of mining-induced, friction-controlled events obtained from in-mine seismic networks have revealed a lower seismicity cutoff at a seismic moment Mmin near 10^9 N m and a corresponding upper frequency cutoff near 200 Hz, which we interpret in terms of a critical slip distance for frictional drop of about 10^-4 m. Above this cutoff, the apparent stress scales as M^1/6 up to magnitudes of 4-5, consistent with other near-source studies in this magnitude range (see special session S07, this meeting). Such a relationship suggests a damage model in which apparent fracture energy scales with the stress intensity factor at the crack tip. Under the assumption of constant stress drop, this model implies an increase in rupture velocity with seismic moment, which successfully predicts the observed variation in corner frequency and maximum particle velocity. Global observations of oceanic transform faults (OTFs) allow us to investigate a situation where the outer scale of earthquake size may be controlled by dynamics (as opposed to geologic heterogeneity). The seismicity data imply that the effective area for OTF moment release, AE, depends on the thermal state of the fault but is otherwise independent of the fault's average slip rate; i.e., AE ~ AT, where AT is the area above a reference isotherm. The data are consistent with β = 1/2 below an upper cutoff moment Mmax that increases with AT and yield the interesting scaling relation Amax ~ AT^1/2. Taken together, the OTF

  17. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  18. Simulating Large-Scale Earthquake Dynamic Rupture Scenarios On Natural Fault Zones Using the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2014-05-01

    In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well documented and extensively studied event, we intend to understand the ground motion, including the relevant high frequency content, generated by complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.

  19. Scaling behavior of the earthquake intertime distribution: influence of large shocks and time scales in the Omori law.

    PubMed

    Lippiello, Eugenio; Corral, Alvaro; Bottiglieri, Milena; Godano, Cataldo; de Arcangelis, Lucilla

    2012-12-01

    We present a study of the earthquake intertime distribution D(Δt) for a California catalog in temporal periods of short duration T. We compare experimental results with theoretical predictions and analytical approximate solutions. For the majority of intervals, rescaling intertimes by the average rate leads to collapse of the distributions D(Δt) on a universal curve, whose functional form is well fitted by a Gamma distribution. The remaining intervals, exhibiting a more complex D(Δt), are all characterized by the presence of large shocks. These results can be understood in terms of the relevance of the ratio between the characteristic time c in the Omori law and T: Intervals with Gamma-like behavior are indeed characterized by a vanishing c/T. The above features are also investigated by means of numerical simulations of the Epidemic Type Aftershock Sequence (ETAS) model. This study shows that collapse of D(Δt) is also observed in numerical catalogs; however, the fit with a Gamma distribution is possible only assuming that c depends on the main-shock magnitude m. This result confirms that the dependence of c on m, previously observed for m>6 main shocks, extends also to small m>2.
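
    The rescaling-and-collapse test described above amounts to dividing interevent times by their mean (equivalently, multiplying by the average rate) and fitting a Gamma distribution to the rescaled values; the sketch below does this on synthetic data with scipy, which stands in for, but is not, the authors' fitting procedure:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Synthetic interevent times (seconds); a stand-in for one catalog interval.
      dt = rng.gamma(shape=0.7, scale=900.0, size=5000)

      rescaled = dt / dt.mean()                  # unit-mean rescaling by the average rate
      shape, loc, scale = stats.gamma.fit(rescaled, floc=0.0)
      print(f"fitted Gamma shape ~ {shape:.2f}, scale ~ {scale:.2f}")
      # For a Gamma-like collapse, shape * scale should be close to 1 (unit mean).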

  20. From M8 to CyberShake: Using Large-Scale Numerical Simulations to Forecast Earthquake Ground Motions (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Cui, Y.; Olsen, K. B.; Graves, R. W.; Maechling, P. J.; Day, S. M.; Callaghan, S.; Milner, K.; Scec/Cme Collaboration

    2010-12-01

    Large earthquakes cannot be reliably and skillfully predicted in terms of their location, time, and magnitude. However, numerical simulations of seismic radiation from complex fault ruptures and wave propagation through 3D crustal structures have now advanced to the point where they can usefully predict the strong ground motions from anticipated earthquake sources. We describe a set of four computational pathways employed by the Southern California Earthquake Center (SCEC) to execute and validate these simulations. The methods are illustrated using the largest earthquakes anticipated on the southern San Andreas fault system. A dramatic example is the recent M8 dynamic-rupture simulation by Y. Cui, K. Olsen et al. (2010) of a magnitude-8 “wall-to-wall” earthquake on the southern San Andreas fault, calculated to seismic frequencies of 2 Hz on a computational grid of 436 billion elements. M8 is the most ambitious earthquake simulation completed to date; the run took 24 hours on 223K cores of the NCCS Jaguar supercomputer, sustaining 220 teraflops. High-performance simulation capabilities have been implemented by SCEC in the CyberShake hazard model for the Los Angeles region. CyberShake computes over 400,000 earthquake simulations, managed through a scientific workflow system, to represent the probabilistic seismic hazard at a particular site up to seismic frequencies of 0.3 Hz. CyberShake shows substantial differences with conventional probabilistic seismic hazard analysis based on empirical ground-motion prediction. At the probability levels appropriate for long-term forecasting, these differences are most significant (and worrisome) in sedimentary basins, where the population is densest and the regional seismic risk is concentrated. The higher basin amplification obtained by CyberShake is due to the strong coupling between rupture directivity and basin-mode excitation. The simulations show that this coupling is enhanced by the tectonic branching structure of the San

  1. Large-scale numerical modeling of hydro-acoustic waves generated by tsunamigenic earthquakes

    NASA Astrophysics Data System (ADS)

    Cecioni, C.; Abdolali, A.; Bellotti, G.; Sammarco, P.

    2015-03-01

    Tsunamigenic fast movements of the seabed generate pressure waves in weakly compressible seawater, namely hydro-acoustic waves, which travel at the sound celerity in water (about 1500 m s^-1). These waves travel much faster than the corresponding long free-surface gravity waves and contain significant information on the source. Measurement of hydro-acoustic waves can therefore anticipate the tsunami arrival and significantly improve the capability of tsunami early warning systems. In this paper a novel numerical model for the reproduction of hydro-acoustic waves is applied to analyze the generation and propagation, over real bathymetry, of these pressure perturbations for two historical catastrophic earthquake scenarios in the Mediterranean Sea. The model is based on the solution of a depth-integrated equation and is therefore computationally efficient in reconstructing hydro-acoustic wave propagation scenarios.
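
    The warning-time advantage described above comes from the speed contrast between hydro-acoustic waves (about 1500 m/s) and long surface gravity waves (about sqrt(g·h)); the back-of-the-envelope comparison below uses an assumed depth and source-to-sensor distance, which are not values from the paper:

      from math import sqrt

      # Compare arrival times of a hydro-acoustic wave and a tsunami over the same path.
      g = 9.81
      c_acoustic = 1500.0        # m/s, sound speed in water (from the abstract)
      depth = 3000.0             # m, assumed mean water depth
      distance = 300e3           # m, assumed source-to-sensor distance

      c_tsunami = sqrt(g * depth)                # shallow-water (long-wave) speed
      t_acoustic = distance / c_acoustic
      t_tsunami = distance / c_tsunami
      print(f"acoustic: {t_acoustic / 60:.1f} min, tsunami: {t_tsunami / 60:.1f} min, "
            f"lead time ~ {(t_tsunami - t_acoustic) / 60:.1f} min")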

  2. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years.

  3. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years. PMID:25156190

  4. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay of up to several years. PMID:25156190

  5. Large-scale aseismic creep in the areas of the strong earthquakes revealed from the GRACE data on the time variations of the Earth's gravity field

    NASA Astrophysics Data System (ADS)

    Mikhailov, V. O.; Diament, M.; Lyubushin, A. A.; Timoshkina, E. P.; Khairetdinov, S. A.

    2016-09-01

    ruptured fault plane zone. The data demonstrating the increasing depth of the aftershocks since March 2007 and the approximately simultaneous change in the direction and average velocity of the horizontal surface displacements at the sites of the regional GPS network indicate that this earthquake induced postseismic displacements in a huge area extending to depths below 100 km. The total displacement since the beginning of the growth of the gravity anomaly up to July 2012 is estimated at 3.0 m in the upper part of the plate contact and 1.5 m in the lower part, down to a depth of 100 km. With allowance for the size of the region captured by the deformations, the released total energy is equivalent to an earthquake with magnitude Mw = 8.5. In our opinion, the growth of the gravity anomaly in these regions indicates a large-scale aseismic creep over areas much more extensive than the source zone of the earthquake. These processes have not been previously revealed by the ground-based techniques. Hence, the time series of the GRACE gravity models are an important source of new data about the locations and evolution of the locked segments of the subduction zones and their seismic potential.

  6. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
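
    Test (1) above rests on a standard property of Gutenberg-Richter sampling: with b-value b and N events above a completeness magnitude Mc, the largest observed magnitude is expected to grow roughly as Mc + log10(N)/b. The sketch below evaluates that expression with placeholder b and Mc values, not parameters fitted to any injection site:

      from math import log10

      # Expected largest sampled magnitude under an unbounded Gutenberg-Richter law.
      # The b-value and completeness magnitude are placeholders for illustration.
      def expected_max_magnitude(n_events, b=1.0, m_complete=0.5):
          return m_complete + log10(n_events) / b

      for n in (100, 1_000, 10_000):
          print(f"N = {n:6d}  ->  expected largest M ~ {expected_max_magnitude(n):.1f}")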

  7. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  8. Large Rock Slope Failures Induced by Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Aydan, Ö.

    2016-06-01

    Recent earthquakes caused many large-scale rock slope failures. The scale and impact of rock slope failures are very large, and the form of failure differs depending upon the geological structures of the slopes. First, the author briefly describes some model experiments to investigate the effects of shaking or faulting due to earthquakes on rock slopes. Then, fundamental characteristics of the rock slope failures induced by the earthquakes are described and evaluated according to some empirical and theoretical models. Furthermore, the observations of slope failures in relation to earthquake magnitude and epicenter or hypocenter distance were compared with several empirical relations available in the literature. Some major rock slope failures induced by earthquakes are selected, and the post-failure motions are simulated and compared with observations. In addition, the effects of tsunamis on rock slopes, in view of observations from reconnaissance of the recent mega-earthquakes, are explained and discussed.

  9. Patterns of seismic activity preceding large earthquakes

    NASA Technical Reports Server (NTRS)

    Shaw, Bruce E.; Carlson, J. M.; Langer, J. S.

    1992-01-01

    A mechanical model of seismic faults is employed to investigate the seismic activity that occurs prior to major events. The block-and-spring model dynamically generates a statistical distribution of smaller slipping events that precede large events, and the results satisfy the Gutenberg-Richter law. The scaling behavior during a loading cycle suggests small but systematic variations in space and time, with maximum activity acceleration near the future epicenter. Activity patterns inferred from data on seismicity in California demonstrate a regional aspect; increased activity in certain areas is found to precede major earthquake events. One example is given regarding the Loma Prieta earthquake of 1989, which was located near a fault section associated with increased activity levels.

  10. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  11. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.

  12. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K.

    1996-01-01

    This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662

  13. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10^18 Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10^18 Pa s.
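
    The steady-state viscosities quoted above set the characteristic Maxwell relaxation time τ = η/G; the conversion below assumes a typical upper-mantle shear modulus, which is not a value given in the paper:

      # Maxwell relaxation time tau = eta / G, converted to years.
      G = 70e9                        # assumed upper-mantle shear modulus, Pa
      SECONDS_PER_YEAR = 3.15576e7

      for eta in (4.75e18, 1e18):     # viscosities quoted in the abstract, Pa s
          tau_years = eta / G / SECONDS_PER_YEAR
          print(f"eta = {eta:.2e} Pa s  ->  tau ~ {tau_years:.1f} yr")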

  14. Sea-level changes before large earthquakes

    USGS Publications Warehouse

    Wyss, M.

    1978-01-01

    Changes in sea level have long been used as a measure of local uplift and subsidence associated with large earthquakes. For instance, in 1835, the British naturalist Charles Darwin observed that sea level dropped by 2.7 meters during the large earthquake in Concepcion, Chile. From this piece of evidence and the terraces along the beach that he saw, Darwin concluded that the Andes had grown to their present height through earthquakes. Much more recently, George Plafker and James C. Savage of the U.S. Geological Survey have shown, from barnacle lines, that the great 1960 Chile and the 1964 Alaska earthquakes caused several meters of vertical displacement of the shoreline.

  15. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually yielded by clustered events, such as events with foreshocks and events that occur within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and the probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, whereas the probability obtained from the activation model increases as large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.
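
    As an illustration of how such an ROC analysis works on a gridded probability forecast, the sketch below ranks grid cells by forecast probability and accumulates hit and false-alarm rates. The function name, array layout, and random test data are assumptions for illustration only, not the study's actual implementation.

```python
import numpy as np

def roc_curve(forecast_prob, occurred):
    """Hit rate vs. false-alarm rate for a gridded probability forecast.

    forecast_prob : 1-D array, forecast probability in each grid cell
    occurred      : 1-D boolean array, True where a target earthquake occurred
    """
    order = np.argsort(forecast_prob)[::-1]        # rank cells from highest to lowest probability
    hits = np.cumsum(occurred[order])              # cumulative true positives as the alarm area grows
    falses = np.cumsum(~occurred[order])           # cumulative false alarms
    hit_rate = hits / max(occurred.sum(), 1)
    false_rate = falses / max((~occurred).sum(), 1)
    return false_rate, hit_rate

# illustrative data: 1000 cells, forecasts loosely correlated with outcomes
rng = np.random.default_rng(0)
prob = rng.random(1000)
occ = rng.random(1000) < 0.1 * prob
x, y = roc_curve(prob, occ)
auc = np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)  # area under the curve; 0.5 would be a random forecast
print("AUC ~", round(float(auc), 3))
```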

  16. Basics for Testing Large Earthquake Precursors

    NASA Astrophysics Data System (ADS)

    Romashkova, L. L.; Kossobokov, V. G.; Peresan, A.

    2008-12-01

    Earthquakes, the large or significant ones in particular, are extreme events that, by definition, are rare. Testing candidate precursors of large earthquakes implies investigating a small sample of case histories with the support of specific and sensitive statistical methods and data of different quality, collected in various conditions. Regretfully, in many seismological studies the methods of mathematical statistics are used outside their applicability: earthquakes are evidently not independent events and have a heterogeneous, perhaps fractal, distribution in space and time. Moreover, naïve or, conversely, delicately designed models are considered as a full replacement of seismic phenomena. Although there are many claims of earthquake precursors, most of them should remain in the list of precursor candidates, which have never been tested in any rigorous way and are, in fact, anecdotal cases of coincidental occurrence. To establish a precursory link between sequences of events of the same or different phenomena, it is necessary to accumulate enough statistics in a rigorous forecast/prediction test, whose results, i.e. success-to-failure scores and the space-time volume of alarms, must support rejection of the hypothesis of random coincidental appearance. We reiterate our suggestion to use the so-called "Seismic Roulette" null hypothesis as the most adequate random alternative accounting for the empirical spatial distribution of the earthquakes in question, and we illustrate a few outcomes of Testing Large Earthquake Precursors.

  17. Afterslip and Viscoelastic Relaxation Model Inferred from the Large Scale Postseismic Deformation Following the 2010 Mw 8,8 Maule Earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Vigny, C.; Klein, E.; Fleitout, L.; Garaud, J. D.

    2015-12-01

    Postseismic deformation following the large subduction earthquake of Maule (Chile, Mw 8.8, February 27th 2010) has been closely monitored with GPS from 70 km up to 2000 km away from the trench. It exhibits a behavior generally similar to that already observed after the Aceh and Tohoku-Oki earthquakes. Vertical uplift is observed on the volcanic arc, and a moderate large-scale subsidence is associated with sizeable horizontal deformation in the far field (500-2000 km from the trench). In addition, near-field data (70-200 km from the trench) feature a rather complex deformation pattern. A 3D FE code (Zebulon Zset) is used to relate this deformation to slip on the plate interface and relaxation in the mantle. The mesh features a spherical shell-portion from the core-mantle boundary to the Earth's surface, extending over more than 60 degrees in latitude and longitude. The overriding and subducting plates are elastic, and the asthenosphere is viscoelastic. A viscoelastic Low Viscosity Channel (LVC) is also introduced along the plate interface. Both the asthenosphere and the channel feature Burgers rheologies, and we invert for their mechanical properties and geometrical characteristics simultaneously with the afterslip distribution. The horizontal deformation pattern requires relaxation both in (i) the asthenosphere extending down to 270 km, with a 'long-term' viscosity of the order of 4.8 × 10^18 Pa s, and (ii) the channel, which has to extend from depths of 50 to 150 km with viscosities slightly below 10^18 Pa s, to fit well the vertical velocity pattern (intense and quick uplift over the Cordillera). Aseismic slip on the plate interface, at shallow depth, is necessary to explain all the characteristics of the near-field displacements. We then detect two main patches of high slip, one updip of the coseismic slip distribution in the northernmost part of the rupture zone, and the other one downdip, at the latitude of Constitucion (35°S). We finally study the temporal

  18. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn; Borchardt, Glenn; Hirschfeld, Sue E.; Lienkaemper, James J.; McClellan, Patrick H.; Williams, Patrick L.; Wong, Ivan G.

    1992-01-01

    The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868, but was not well measured. Witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). The creep rate seems nearly constant over many decades, with short-term variations. The creep rate mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event, as indicated by offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes, which originate below the surficial creeping zone, may now be ≥1.1 m along the southern (1868) segment and ≥1.4 m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large magnitude earthquakes (magnitude 6.8 in the northern (1836?) segment, 6.7 in the southern (1868) segment, and 7.0 for both). Thus despite surficial creep, the fault may be

  19. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562

  20. Scaling of seismic memory with earthquake size

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene

    2012-07-01

    It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
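
    For readers unfamiliar with DFA, the sketch below shows the standard first-order algorithm (integrate the demeaned series, detrend fixed-size windows with a linear fit, and fit the slope of log fluctuation versus log window size). The window sizes and the white-noise test series are illustrative assumptions, not the paper's processing parameters.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """First-order detrended fluctuation analysis: returns the exponent alpha."""
    y = np.cumsum(x - np.mean(x))                    # integrated (profile) series
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:                             # remove a linear trend in each window
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # slope of log F(n) vs log n is the DFA scaling exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# uncorrelated noise should give alpha close to 0.5; correlated series give larger values
print(round(dfa_exponent(np.random.default_rng(1).normal(size=4096)), 2))
```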

  1. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2016-05-01

    The main objective of this paper was to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for a realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes the background and construction principles of the scale and presents some case studies in different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In such cases, and in unpopulated areas, ESI offers a unique way of assessing a reliable earthquake intensity. Finally, yet importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  2. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in such systems. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near-mean-field systems having long-range interactions, an example of which is earthquakes with their elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
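
    The count-to-probability step described here can be sketched as follows; the expected count per large event and the Weibull shape parameter are illustrative assumptions, not the values calibrated in the paper.

```python
import numpy as np

def weibull_probability(n_small, n_expected=1000.0, beta=1.4):
    """Conditional probability of the next large event from the number of small
    events counted since the last one, P = 1 - exp(-(n / n_expected)**beta).

    n_expected : count at which one large event is expected on average
                 (roughly 1000 small events per large one for a GR b-value of 1
                  and a three-unit magnitude gap)
    beta       : Weibull shape parameter (illustrative)
    """
    return 1.0 - np.exp(-(np.asarray(n_small, float) / n_expected) ** beta)

for n in (100, 500, 1000, 2000):
    print(n, round(float(weibull_probability(n)), 3))
```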

  3. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating the predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. In this analysis, we make an attempt
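
    A toy version of the dual-criterion alerting logic, using only the thresholds quoted above, might look like the following; the function name and structure are hypothetical and are not the operational PAGER code.

```python
def eis_alert(fatalities=None, loss_usd=None):
    """Map estimated fatalities and/or economic losses to a color alert using
    the thresholds stated in the abstract (1/100/1000 fatalities; $1M/$100M/$1B)."""
    def level(value, thresholds):
        for color, t in zip(("red", "orange", "yellow"), reversed(thresholds)):
            if value >= t:
                return color
        return "green"

    levels = []
    if fatalities is not None:
        levels.append(level(fatalities, (1, 100, 1000)))
    if loss_usd is not None:
        levels.append(level(loss_usd, (1e6, 1e8, 1e9)))
    rank = {"green": 0, "yellow": 1, "orange": 2, "red": 3}
    return max(levels, key=rank.get) if levels else "green"   # report the more severe criterion

print(eis_alert(fatalities=150))              # orange
print(eis_alert(loss_usd=5e6))                # yellow
print(eis_alert(fatalities=0, loss_usd=2e9))  # red
```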

  4. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  5. Rapid Characterization of Large Earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Barrientos, S. E.; Team, C.

    2015-12-01

    Chile, along 3000 km of its 4200-km-long coast, is regularly affected by very large earthquakes (up to magnitude 9.5) resulting from the convergence and subduction of the Nazca plate beneath the South American plate. These megathrust earthquakes exhibit long rupture regions reaching several hundreds of km, with fault displacements of several tens of meters. Minimum-delay characterization of these giant events to establish their rupture extent and slip distribution is of the utmost importance for rapid estimation of the shaking area and evaluation of their corresponding tsunamigenic potential, particularly when there are only a few minutes to warn the coastal population for immediate action. The task of rapid evaluation of large earthquakes is accomplished in Chile through a network of sensors being implemented by the National Seismological Center of the University of Chile. The network is mainly composed of approximately one hundred broadband and strong-motion instruments and 130 GNSS devices, all of which will be connected in real time. Forty units have an optional RTX capability, in which satellite orbit and clock corrections are sent to the field device, producing a 1-Hz stream at the 4-cm level. Tests are being conducted to stream the real-time raw data to be later processed at the central facility. Hypocentral locations and magnitudes are estimated within a few minutes by automatic processing software based on wave arrivals; for magnitudes less than 7.0 the rapid estimation works within acceptable bounds. For larger events, we are currently developing automatic detectors and amplitude estimators of displacement coming out of the real-time GNSS streams. This software has been tested for several cases, showing that, for plate interface events, the minimum magnitude detection threshold lies between 6.2 and 6.5 (1-2 cm coastal displacement), providing an excellent tool for early earthquake characterization from a tsunamigenic perspective.

  6. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating its trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  7. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, measured a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  8. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  9. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and the close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales (whole country, East Turkey, and West Turkey) are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of the scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
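
    The recommendation to measure on the vertical channel and add the logarithm of the horizontal-to-vertical factor can be written out as a simple magnitude calculation. The distance-correction coefficients below are the familiar Hutton and Boore (1987) southern California values, used here only as placeholders for the Turkey-specific calibration derived in the paper.

```python
import math

def local_magnitude(amp_mm_vertical, dist_km, station_corr=0.0, horiz_to_vert=1.8):
    """Richter-style Ml from a vertical-component Wood-Anderson amplitude (mm).

    log10 of the horizontal/vertical amplitude factor (~1.8) is added so the
    measurement on the vertical channel stands in for the horizontal maximum.
    """
    dist_term = 1.110 * math.log10(dist_km / 100.0) + 0.00189 * (dist_km - 100.0) + 3.0
    return math.log10(amp_mm_vertical) + math.log10(horiz_to_vert) + dist_term + station_corr

# 1 mm vertical amplitude at 100 km epicentral distance -> Ml of about 3.3
print(round(local_magnitude(1.0, 100.0), 2))
```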

  10. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault length does not saturate with earthquake magnitude, while fault width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy a relatively larger rupture area than shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip
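
    The statement that the scale parameter of the truncated exponential slip law is set by the average and maximum slip can be made concrete with a small root-finding sketch; the numerical values and the bracketing interval are illustrative assumptions, not fits from the SRCMOD analysis.

```python
import numpy as np
from scipy.optimize import brentq

def truncated_exp_scale(mean_slip, max_slip):
    """Scale parameter of an exponential distribution truncated at max_slip,
    chosen so that its mean equals the observed average slip.  The mean of the
    truncated law is lam - max_slip / (exp(max_slip / lam) - 1)."""
    def mean_of(lam):
        return lam - max_slip / np.expm1(max_slip / lam)
    # mean_slip must lie between ~0 and max_slip/2 for a root to exist
    return brentq(lambda lam: mean_of(lam) - mean_slip, 0.01 * max_slip, 100.0 * max_slip)

# illustrative rupture with 2 m average slip and 8 m peak slip
print(round(truncated_exp_scale(2.0, 8.0), 2), "m")
```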

  11. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  12. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
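
    The stacking step behind the COPD plots can be illustrated by summing a probability density for each offset measurement and reading off peaks; here simple Gaussian PDFs stand in for the cross-correlation-derived PDFs of the study, and all values are invented for illustration.

```python
import numpy as np

def cumulative_offset_distribution(offsets, sigmas, n_grid=2000):
    """Stack per-measurement Gaussian PDFs into a cumulative offset probability
    distribution (COPD); peaks flag common single- and multiple-event offsets."""
    offsets = np.asarray(offsets, float)
    sigmas = np.asarray(sigmas, float)
    grid = np.linspace(0.0, offsets.max() + 3.0 * sigmas.max(), n_grid)
    pdfs = np.exp(-0.5 * ((grid[:, None] - offsets) / sigmas) ** 2) / (sigmas * np.sqrt(2.0 * np.pi))
    return grid, pdfs.sum(axis=1)

# offsets clustered near ~3.3 m (one event) and ~7 m (two events), with 1-sigma errors
g, copd = cumulative_offset_distribution([3.1, 3.4, 3.3, 6.9, 7.2], [0.4, 0.5, 0.3, 0.6, 0.5])
print("most common offset ~", round(float(g[np.argmax(copd)]), 2), "m")
```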

  13. Large Earthquake Potential in the Southeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Mora-Paez, H.; Bilham, R. G.; Lafemina, P.; Mattioli, G. S.; Molnar, P. H.; Audemard, F. A.; Perez, O. J.

    2015-12-01

    The axis of rotation describing relative motion of the Caribbean plate with respect to South America lies in Canada near Hudson Bay, such that the Caribbean plate moves nearly due east relative to South America [DeMets et al. 2010]. The plate motion is absorbed largely by pure strike-slip motion along the El Pilar Fault in northeastern Venezuela, but in northwestern Venezuela and northeastern Colombia, the relative motion is distributed over a wide zone that extends from offshore to the northeasterly trending Mérida Andes, with the resolved component of convergence between the Caribbean and South American plates estimated at ~10 mm/yr. Recent densification of GPS networks through COLOVEN and COCONet, including access to private GPS data maintained by Colombia and Venezuela, allowed the development of a new GPS velocity field. The velocity field, processed with JPL's GOA 6.2, JPL non-fiducial final orbit and clock products and VMF tropospheric products, includes over 120 continuous and campaign stations. This new velocity field, along with enhanced seismic reflection profiles and earthquake location analysis, strongly suggests the existence of an active oblique subduction zone. We have also been able to use broadband data from Venezuela to search for slow-slip events as an indicator of an active subduction zone. There are caveats to this hypothesis, however, including the absence of volcanism that is typically concurrent with active subduction zones and a weak historical record of great earthquakes. A single tsunami deposit dated at 1500 years before present has been identified on the southeast Yucatan peninsula. Our simulations indicate its probable origin is within our study area. We present a new GPS-derived velocity field, which has been used to improve a regional block model [based on Mora and LaFemina, 2009-2012], and discuss the earthquake and tsunami hazards implied by this model. Based on the new geodetic constraints and our updated block model, if part of the

  14. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high magnitude seismic events. In fact, since 1950, all large magnitude earthquakes have been followed by volcanic eruptions in the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7 and 2011 Japan M9.0. At a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005 following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is limited by the fact that <50% of all volcanoes are monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has, therefore, provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  15. Time-Dependent Earthquake Forecasts on a Global Scale

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Graves, W. R.

    2014-12-01

    We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged ("Poisson") forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. An example is the Working Group on California Earthquake Probabilities, which has been using fault-based models to compute conditional probabilities in California since 1988. Another example is the Epidemic-Type Aftershock Sequence (ETAS) forecast, which is based on the Gutenberg-Richter (GR) magnitude-frequency law, the Omori aftershock law, and Poisson statistics. The method discussed in this talk is based on the observation that GR statistics characterize seismicity for all space and time. Small magnitude event counts (quake counts) are used as "markers" for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes, one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In previous research, we used conditional Weibull statistics to convert event counts into a temporal probability for a given fixed region. In the present paper, we move beyond a fixed region and develop a method to compute these Natural Time Weibull (NTW) forecasts on a global scale, using an internally consistent method, in regions of arbitrary shape and size. We develop and implement these methods on a modern web-service computing platform, which can be found at www.openhazards.com and www.quakesim.org. We also discuss constraints on the User Interface (UI) that follow from practical considerations of site usability.
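
    The "natural time" bookkeeping described above, counting small events since the last large one as a clock, can be sketched in a few lines; the magnitude thresholds and toy catalogue are illustrative assumptions.

```python
import numpy as np

def natural_time_count(magnitudes, times, m_small=3.0, m_large=6.0):
    """Number of M >= m_small events since the most recent M >= m_large event.
    For a GR b-value of 1, roughly 1000 such counts are expected per M >= 6
    event, so the count serves as a natural-time clock for the next large one."""
    count = 0
    for m in np.asarray(magnitudes)[np.argsort(times)]:
        if m >= m_large:
            count = 0                      # reset the clock at each large event
        elif m >= m_small:
            count += 1
    return count

# toy catalogue: three small events have occurred since the last M >= 6 earthquake
print(natural_time_count([3.2, 3.5, 6.1, 3.1, 4.0, 3.3], [0, 1, 2, 3, 4, 5]))
```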

  16. Absence of remotely triggered large earthquakes beyond the mainshock region

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2011-01-01

    Large earthquakes are known to trigger earthquakes elsewhere. Damaging large aftershocks occur close to the mainshock and microearthquakes are triggered by passing seismic waves at significant distances from the mainshock. It is unclear, however, whether bigger, more damaging earthquakes are routinely triggered at distances far from the mainshock, heightening the global seismic hazard after every large earthquake. Here we assemble a catalogue of all possible earthquakes greater than M 5 that might have been triggered by every M 7 or larger mainshock during the past 30 years. We compare the timing of earthquakes greater than M 5 with the temporal and spatial passage of surface waves generated by large earthquakes using a complete worldwide catalogue. Whereas small earthquakes are triggered immediately during the passage of surface waves at all spatial ranges, we find no significant temporal association between surface-wave arrivals and larger earthquakes. We observe a significant increase in the rate of seismic activity at distances confined to within two to three rupture lengths of the mainshock. Thus, we conclude that the regional hazard of larger earthquakes is increased after a mainshock, but the global hazard is not.
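
    The comparison between candidate-event times and the surface-wave passage reduces, in its simplest form, to checking whether an event falls inside a travel-time window defined by fast and slow group velocities; the velocities and padding below are illustrative assumptions, not the study's measured arrivals.

```python
def in_surface_wave_window(distance_km, dt_s, v_fast=4.5, v_slow=3.0, pad_s=600.0):
    """True if an event occurring dt_s seconds after the mainshock, at
    distance_km from it, lies inside the expected surface-wave passage window
    (group velocities in km/s)."""
    return distance_km / v_fast <= dt_s <= distance_km / v_slow + pad_s

# a candidate M > 5 event 4000 km away, 20 minutes after the mainshock origin time
print(in_surface_wave_window(4000.0, 1200.0))   # True: inside the passage window
```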

  17. Detection of hydrothermal precursors to large northern california earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes.

  18. Detection of hydrothermal precursors to large northern california earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes. PMID:17738277

  19. Evidence for a difference in rupture initiation between small and large earthquakes.

    PubMed

    Colombelli, S; Zollo, A; Festa, G; Picozzi, M

    2014-01-01

    The process of earthquake rupture nucleation and propagation has been investigated through laboratory experiments and theoretical modelling, but a limited number of observations exist at the scale of earthquake fault zones. Distinct models have been proposed, and whether the magnitude can be predicted while the rupture is ongoing represents an unsolved question. Here we show that the evolution of P-wave peak displacement with time is informative regarding the early stage of the rupture process and can be used as a proxy for the final size of the rupture. For the analysed earthquake set, we found a rapid initial increase of the peak displacement for small events and a slower growth for large earthquakes. Our results indicate that earthquakes occurring in a region with a large critical slip distance have a greater likelihood of growing into a large rupture than those originating in a region with a smaller slip-weakening distance. PMID:24887597
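
    The quantity analysed here, the growth of P-wave peak displacement with time, can be extracted from a displacement record with a few lines of array code; the window lengths and synthetic trace are assumptions for illustration.

```python
import numpy as np

def peak_displacement_growth(displacement, dt, windows=(1.0, 2.0, 3.0, 4.0)):
    """Peak absolute displacement within expanding windows after the P onset.
    A rapid early rise is characteristic of small events, a slower one of
    large events, per the abstract."""
    disp = np.abs(np.asarray(displacement, float))
    return [float(disp[: max(int(w / dt), 1)].max()) for w in windows]

# synthetic displacement that keeps growing after the onset
dt = 0.01
t = np.arange(0.0, 5.0, dt)
print(peak_displacement_growth(1e-3 * t ** 2, dt))
```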

  20. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, the NDSHA, defines the hazard as the envelope ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In this way, the standard NDSHA maps permit to account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's function for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of seismic sources used in the simulation, which is based on information from earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities, a rough estimate of the error in peak values of ground motion can

  1. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  2. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  3. Can observations of earthquake scaling constrain slip weakening?

    NASA Astrophysics Data System (ADS)

    Abercrombie, Rachel E.; Rice, James R.

    2005-08-01

    We use observations of earthquake source parameters over a wide magnitude range (MW ~ 0-7) to place constraints on constitutive fault weakening. The data suggest a scale dependence of apparent stress and stress drop; both may increase slightly with earthquake size. We show that this scale dependence need not imply any difference in fault zone properties for different sized earthquakes. We select 30 earthquakes well-recorded at 2.5 km depth at Cajon Pass, California. We use individual and empirical Green's function spectral analysis to improve the resolution of source parameters, including static stress drop (Δσ) and total slip (S). We also measure radiated energy ES. We compare the Cajon Pass results with those from larger California earthquakes including aftershocks of the 1994 Northridge earthquake and confirm the results of Abercrombie (1995): μES/M0 << Δσ (where μ = rigidity) and both ES/M0 and Δσ increase as M0 (and S) increases. Uncertainties remain large due to model assumptions and variations between possible models, and earthquake scale independence is possible within the resolution. Assuming that the average trends are real, we define a quantity G' = (Δσ - 2μES/M0)S/2, which is the total energy dissipation in friction and fracture minus σ1S, where σ1 is the final static stress. If σ1 = σd, the dynamic shear strength during the last increments of seismic slip, then G' = G, the fracture energy in a slip-weakening interpretation of dissipation. We find that G' increases with S, from ~10^3 J m^-2 at S = 1 mm (M1 earthquakes) to 10^6-10^7 J m^-2 at S = 1 m (M6). We tentatively interpret these results within slip-weakening theory, assuming G' ~ G. We consider the common assumption of a linear decrease of strength from the yield stress (σp) with slip (s), up to a slip Dc. In this case, if either Dc, or more generally (σp - σd)Dc, increases with the final slip S, we can match the observations, but this implies the unlikely result that the early weakening behaviour of
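
    The quantity G' defined in the abstract can be evaluated directly from the quoted source parameters; the rigidity and the sample values below are generic illustrations, not the Cajon Pass measurements.

```python
def g_prime(stress_drop_pa, energy_to_moment, slip_m, rigidity_pa=3.0e10):
    """G' = (stress drop - 2*mu*Es/M0) * S / 2: dissipation in friction and
    fracture minus the work done against the final static stress, per unit area."""
    return (stress_drop_pa - 2.0 * rigidity_pa * energy_to_moment) * slip_m / 2.0

# 3 MPa stress drop, Es/M0 = 3e-5 and 1 m of slip give G' of order 10^5-10^6 J/m^2
print(f"{g_prime(3.0e6, 3.0e-5, 1.0):.1e} J/m^2")
```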

  4. International Technical Communication after a Large Earthquake.

    ERIC Educational Resources Information Center

    Klein, Fred

    1994-01-01

    Discusses, in the context of southern California's severe earthquake in January 1994, attitudes to technology and the information superhighway. Argues that technology should not be worshipped as a solution. (SR)

  5. A Large Scale Automatic Earthquake Location Catalog in the San Jacinto Fault Zone Area Using An Improved Shear-Wave Detection Algorithm

    NASA Astrophysics Data System (ADS)

    White, M. C. A.; Ross, Z.; Vernon, F.; Ben-Zion, Y.

    2015-12-01

    UC San Diego's ANZA network began archiving event-triggered data in 1982. As a result of improved recording technology, continuous waveform data archives are available starting in 1998. This continuous dataset, from 1998 to present, represents a wealth of potential insight into spatio-temporal seismicity patterns, earthquake physics and the mechanics of the San Jacinto Fault Zone. However, the volume of data renders manual analysis costly. In order to investigate the characteristics of the data in space and time, an automatic earthquake location catalog is needed. To this end, we apply standard earthquake signal processing techniques to the continuous data to detect first-arriving P-waves, in combination with a recently developed S-wave detection algorithm. The resulting dataset of arrival time observations is processed using a grid association algorithm to produce initial absolute locations, which are refined using a location inversion method that accounts for 3-D velocity heterogeneities. Precise relative locations are then derived from the refined absolute locations using the HypoDD double-difference algorithm. Moment magnitudes for the events are estimated from multi-taper spectral analysis. A >650% increase in the S:P pick ratio is achieved using the updated S-wave detection algorithm, when compared to the currently available catalog for the ANZA network. The increased number of S-wave observations leads to improved earthquake location accuracy and reliability (i.e., fewer false event detections). Various aspects of spatio-temporal seismicity patterns and size distributions are investigated. Updated results will be presented at the meeting.
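
    A classic STA/LTA characteristic function is one of the "standard earthquake signal processing techniques" of the kind referred to for P-wave detection. The sketch below is a generic single-trace version with invented window lengths and a synthetic trace; it omits the S-wave detector, grid association, relocation, and HypoDD steps of the actual pipeline.

```python
import numpy as np

def sta_lta(trace, dt, sta_win=0.5, lta_win=10.0):
    """Short-term-average / long-term-average ratio of signal energy; declare a
    pick where the ratio exceeds a threshold (commonly ~5)."""
    energy = np.asarray(trace, float) ** 2

    def trailing_mean(x, n):
        c = np.concatenate(([0.0], np.cumsum(x)))
        out = np.full(len(x), np.nan)
        out[n - 1:] = (c[n:] - c[:-n]) / n           # mean of the trailing n samples
        return out

    sta = trailing_mean(energy, max(int(sta_win / dt), 1))
    lta = trailing_mean(energy, max(int(lta_win / dt), 1))
    return sta / lta

# synthetic noise trace with an impulsive arrival 20 s into the record
dt = 0.01
rng = np.random.default_rng(2)
trace = 0.1 * rng.normal(size=3000)
trace[2000:2100] += 2.0 * rng.normal(size=100)
print("peak STA/LTA near", round(np.nanargmax(sta_lta(trace, dt)) * dt, 2), "s")
```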

  6. Large earthquake rupture process variations on the Middle America megathrust

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa
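
    The spectral-ratio analysis mentioned here rests on the omega-square source model: the ratio of two Brune spectra is flat at the moment ratio below both corner frequencies and falls off between them. The corner frequencies and moment ratio below are invented for illustration.

```python
import numpy as np

def brune_spectral_ratio(freqs, moment_ratio, fc_large, fc_small):
    """Ratio of two omega-square (Brune) displacement source spectra,
    larger event over smaller event."""
    f = np.asarray(freqs, float)
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

freqs = np.logspace(-2, 1, 7)                      # 0.01 to 10 Hz
print(brune_spectral_ratio(freqs, moment_ratio=100.0, fc_large=0.2, fc_small=2.0))
```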

  7. The foreshock sequence of large earthquakes: slow slip or cascade triggering?

    NASA Astrophysics Data System (ADS)

    Huang, H.; Meng, L.

    2014-12-01

    Large earthquakes such as the 2011 Mw 9.0 Tohoku-Oki earthquake and the 2014 Mw 8.1 Iquique earthquake are often preceded by foreshock sequences migrating toward the hypocenters of the mainshocks. Understanding the underlying physical processes is crucial for imminent seismic hazard assessment. Some of these foreshock sequences are accompanied by repeating earthquakes, which are thought to be a manifestation of a large-scale background slow slip transient. The alternative interpretation is that the migrating seismicity is simply produced by the cascade triggering of mainshock-aftershock sequences following Omori's law. In this case the repeating earthquakes are driven by the afterslip of the moderate to large foreshocks instead of an independent slow slip event. As an initial effort to discriminate between these two hypotheses, we made a detailed analysis of the repeating earthquakes among the foreshock sequences of the 2014 Mw 8.1 Iquique earthquake. We observed that some significant foreshocks (M >= 5.5) are followed by the rapid occurrence of local repeaters, suggesting the contribution of afterslip. However, the repeaters are distributed over a wide area (~40*80 km), which is difficult to drive with only a few moderate to large foreshocks. Furthermore, the estimated repeater-inferred aseismic moment during the foreshock period is at least 3.041e19 Nm (5*5 km grid), which is of the same order as the total seismic moment of all foreshocks (2.251e19 Nm). This comparison again supports the slow-slip model, since the ratio of post-seismic to coseismic moment is small in most earthquakes. To estimate the contributions of transient slow slip and cascade triggering in the initiation of large earthquakes, we propose to systematically search for and analyze repeating earthquakes in all foreshock sequences preceding large earthquakes. The next effort will be devoted to the long precursory phase of large interplate earthquakes such as the 1999 Mw 7.6 Izmit earthquake and the
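
    The moment comparison made above uses the standard moment-magnitude relation; the sketch below simply expresses the quoted moments as equivalent Mw values and is not the repeater scaling analysis itself.

```python
import math

def moment_from_mw(mw):
    """Seismic moment in N m from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def mw_from_moment(m0):
    """Moment magnitude from seismic moment in N m."""
    return (math.log10(m0) - 9.1) / 1.5

# the ~3.0e19 N m aseismic moment and ~2.3e19 N m total foreshock moment quoted
# above each correspond to roughly a single Mw ~6.9 earthquake
print(round(mw_from_moment(3.041e19), 2), round(mw_from_moment(2.251e19), 2))
```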

  8. Detection capability of global earthquakes influenced by large intermediate-depth and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2011-12-01

    This study examined the detection capability of the global CMT catalogue immediately after a large intermediate-depth (70 < depth ≤ 300 km) or deep (300 km < depth) earthquake. Iwata [2008, GJI] has revealed that the detection capability is remarkably lower than the ordinary one for several hours after the occurrence of a large shallow (depth ≤ 70 km) earthquake. Since the global CMT catalogue plays an important role in studies on global earthquake forecasting and seismicity patterns [e.g., Kagan and Jackson, 2010, Pageoph], this characteristic of the catalogue should be investigated carefully. We stacked global shallow earthquake sequences, taken from the global CMT catalogue from 1977 to 2010, after a large intermediate-depth or deep earthquake. Then, we utilized a statistical model representing an observed magnitude-frequency distribution of earthquakes [e.g., Ringdal, 1975, BSSA; Ogata and Katsura, 1993, GJI]. The applied model is the product of the Gutenberg-Richter law and a detection rate function q(M). Following previous studies, the cumulative distribution function of the normal distribution was used as q(M). This model enables us to estimate μ, the magnitude at which the detection rate of earthquakes is 50 per cent. Finally, a Bayesian approach with a piecewise linear approximation [Iwata, 2008, GJI] was applied to these stacked data to estimate the temporal change of μ. Consequently, we found a significantly lowered detection capability after an intermediate-depth or deep earthquake of magnitude 6.5 or larger. The lowered detection capability lasts for several hours to half a day. During this period of low detection capability, a few per cent of M ≥ 6.0 earthquakes and a few tens of per cent of M ≥ 5.5 earthquakes are undetected in the global CMT catalogue, while the magnitude completeness threshold of the catalogue was estimated to be around 5.5 [e.g., Kagan, 2003, PEPI].
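
    The statistical model described, the Gutenberg-Richter law multiplied by a detection-rate function q(M) taken as a normal CDF with 50% detection at magnitude mu, can be written down directly; the parameter values below are illustrative, not the estimates from the stacked catalogue.

```python
import numpy as np
from scipy.stats import norm

def detected_density(mags, b=1.0, mu=5.0, sigma=0.3):
    """Un-normalised magnitude-frequency density of *detected* events:
    Gutenberg-Richter density times the detection-rate function q(M)."""
    beta = b * np.log(10.0)
    gr = beta * 10.0 ** (-b * np.asarray(mags, float))   # GR density (up to a constant)
    return gr * norm.cdf(mags, loc=mu, scale=sigma)      # q(M): 50% detection at M = mu

mags = np.arange(4.0, 7.01, 0.1)
dens = detected_density(mags)
print("detected density peaks near M", round(float(mags[np.argmax(dens)]), 1))
```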

  9. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  10. The velocity effects of large historical earthquakes in Chinese mainland

    NASA Astrophysics Data System (ADS)

    Tan, Weijie; Dong, Danan; Wu, Bin

    2016-04-01

    Accompanying the collision between the Indian and Eurasian plates, China has experienced repeated large earthquakes over the past 100 years. These large earthquakes are mainly located along several seismic belts in Tien Shan, the Tibet Plateau, and Northern China. The postseismic deformation and stress accumulation induced by these historical earthquakes are important for assessing contemporary seismic hazard. The postseismic deformation induced by historical large earthquakes also influences the observed present-day velocity field. The relaxation of the viscoelastic asthenosphere is modeled on a layered, spherically symmetric earth with Maxwell rheology. The layer thicknesses, densities ρ, and P-wave velocities Vp are taken from PREM, and the shear moduli are derived from ρ and Vp. The viscosity between the lower crust and upper mantle adopted in this study is 1×10^19 Pa·s. Viscoelastic relaxation contributions due to 34 historical large earthquakes in China from 1900 to 2001 are calculated using the VISCO1D-v3 program developed by Pollitz (1997). We calculated the model-predicted velocity field in China in 2015 caused by these historical large earthquakes. The pattern of the predicted velocity field is consistent with the present movement of the crust, with peak velocities reaching 6 mm yr^-1. Southwestern China moves northeastward, and a significant rotation appears at the edge of the Tibetan Plateau. The velocity field caused by historical large earthquakes provides a basis for isolating the velocity field caused by contemporary tectonic movement from the geodetic observations. It also provides critical information for investigating regional stress accumulation and assessing mid-term to long-term earthquake risk.
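
    As a hedged illustration of the quantities named above, the sketch below derives a shear modulus from density and Vp under a Poisson-solid assumption (Vp/Vs = sqrt(3)) and converts the quoted viscosity into a Maxwell relaxation time. The Poisson-solid relation and the numerical values are assumptions for illustration, not necessarily those used by the authors.

    # Illustrative upper-mantle values; only the viscosity is quoted in the abstract.
    rho = 3380.0          # density, kg/m^3
    vp = 8.0e3            # P-wave velocity, m/s
    eta = 1.0e19          # viscosity, Pa*s

    vs = vp / 3 ** 0.5    # Poisson-solid assumption: Vp/Vs = sqrt(3)
    mu = rho * vs ** 2    # shear modulus, Pa
    tau = eta / mu        # Maxwell relaxation time, s
    print(f"mu ~ {mu:.2e} Pa, Maxwell relaxation time ~ {tau / 3.15e7:.1f} yr")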

  11. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  12. Deeper penetration of large earthquakes on seismically quiescent faults.

    PubMed

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  13. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
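
    The flavor of such a test can be illustrated with a small Monte Carlo experiment: simulate catalogs from a constant-rate Poisson process and ask how often a cluster at least as dense as the observed one appears by chance. The rate, window length, and cluster count below are illustrative assumptions, not the values or the specific test statistics used in the study.

    import numpy as np

    rng = np.random.default_rng(0)
    years, rate = 111.0, 0.07          # catalog length (yr) and assumed annual rate of great events
    window, observed_in_window = 10.0, 5

    def max_count_in_window(times, window):
        # largest number of events falling in any sliding window of the given length
        times = np.sort(times)
        return max(np.searchsorted(times, t + window) - i
                   for i, t in enumerate(times)) if times.size else 0

    n_sims, exceed = 20000, 0
    for _ in range(n_sims):
        n = rng.poisson(rate * years)
        times = rng.uniform(0.0, years, n)
        if max_count_in_window(times, window) >= observed_in_window:
            exceed += 1
    print("fraction of random catalogs with an equally dense cluster:", exceed / n_sims)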

  15. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  16. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-07-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of the scale invariance of stress drop with source dimension and examine their interpretation in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, in which only the latter is constrained by observations.
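
    Most of the stress-drop estimates discussed above rest on the standard circular-crack (Eshelby) relation; the worked example below uses generic, assumed values rather than results from the study.

    # Circular-crack relation: delta_sigma = (7/16) * M0 / a^3.
    M0 = 1.0e18                        # seismic moment, N*m (roughly Mw 6)
    a = 4.0e3                          # rupture radius, m (assumed)
    delta_sigma = 7.0 / 16.0 * M0 / a ** 3
    print(f"stress drop ~ {delta_sigma / 1e6:.1f} MPa")
    # Self-similarity (a scale-invariant stress drop) then implies M0 proportional to a^3.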

  17. Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults

    USGS Publications Warehouse

    Harris, R.A.

    2004-01-01

    Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation, at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of the elements, geometry, material, and stress, can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.

  18. Power-law time distribution of large earthquakes.

    PubMed

    Mega, Mirko S; Allegrini, Paolo; Grigolini, Paolo; Latora, Vito; Palatella, Luigi; Rapisarda, Andrea; Vinciguerra, Sergio

    2003-05-01

    We study the statistical properties of time distribution of seismicity in California by means of a new method of analysis, the diffusion entropy. We find that the distribution of time intervals between a large earthquake (the main shock of a given seismic sequence) and the next one does not obey Poisson statistics, as assumed by the current models. We prove that this distribution is an inverse power law with an exponent mu=2.06+/-0.01. We propose the long-range model, reproducing the main properties of the diffusion entropy and describing the seismic triggering mechanisms induced by large earthquakes.
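
    As a complement, the inverse power-law tail quoted above can be illustrated with the standard continuous power-law maximum-likelihood estimator applied to synthetic waiting times; this is not the diffusion-entropy method used in the paper, and the sample below is simulated, not the California catalog.

    import numpy as np

    rng = np.random.default_rng(1)
    mu_true, t_min = 2.06, 1.0
    # sample waiting times from p(t) ~ t^(-mu) for t >= t_min (inverse-transform sampling)
    waits = t_min * (1.0 - rng.uniform(size=2000)) ** (-1.0 / (mu_true - 1.0))

    # continuous power-law (Hill-type) maximum-likelihood estimate of the exponent
    mu_hat = 1.0 + waits.size / np.sum(np.log(waits / t_min))
    print(f"recovered exponent mu ~ {mu_hat:.2f}")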

  19. An earthquake strength scale for the media and the public

    USGS Publications Warehouse

    Johnston, A.C.

    1990-01-01

    A local engineer, E.P. Hailey, pointed this problem out to me shortly after the Loma Prieta earthquake. He felt that three problems limited the usefulness of magnitude in describing an earthquake to the public: (1) most people don't understand that it is not a linear scale; (2) of those who do realize the scale is not linear, very few understand the difference of a factor of ten in ground motion and 32 in energy release between points on the scale; and (3) even those who understand the first two points have trouble putting a given magnitude value into terms they can relate to. In summary, Mr. Hailey wondered why seismologists can't come up with an earthquake scale that doesn't confuse everyone and that conveys a sense of true relative size. Here, then, is my attempt to construct such a scale.
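
    The factors quoted in point (2) follow from the definitions of the magnitude and energy scales; a one-line check:

    dM = 1.0                                                     # one unit of magnitude
    print("ground-motion amplitude ratio:", 10 ** dM)            # 10
    print("radiated energy ratio:", round(10 ** (1.5 * dM), 1))  # ~31.6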

  20. Large Earthquakes in Developing Countries: Estimating and Reducing their Consequences

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2003-12-01

    Recent efforts to reduce the risk of earthquakes in developing countries have been diverse, earnest, and inadequate. The earthquake risk in developing countries is large and growing rapidly. It is largely ignored. Unless something is done - quickly - to reduce it, both developing and developed countries will suffer human and economic losses far greater than have been experienced in the past. GeoHazards International (GHI) is a nonprofit organization that has attempted to reduce the death and suffering caused by earthquakes in the world's most vulnerable communities, through preparedness, mitigation and prevention. Its approach has included raising awareness, strengthening local institutions and launching mitigation activities, particularly for schools. GHI and its partners around the world have achieved some success: thousands of school children are safer, hundreds of cities are aware of their risk, tens of cities have been assessed and advised, and some local organizations have been strengthened. But there is disturbing evidence that what is being done is insufficient. The problem outpaces the cure. A new program is now being considered that would attempt to improve earthquake-resistant construction of schools, internationally, by publicizing well-managed programs around the world that design, construct and maintain earthquake-resistant schools. While focused on schools, this program might have broader applications in the future.

  1. Scaling of earthquake models with inhomogeneous stress dissipation.

    PubMed

    Dominguez, Rachele; Tiampo, Kristy; Serino, C A; Klein, W

    2013-02-01

    Natural earthquake fault systems are highly nonhomogeneous. The inhomogeneities occur because the earth is made of a variety of materials which hold and dissipate stress differently. In this work, we study scaling in earthquake fault models which are variations of the Olami-Feder-Christensen and Rundle-Jackson-Brown models. We use the scaling to explore the effect of spatial inhomogeneities due to damage and inhomogeneous stress dissipation in the earthquake-fault-like systems when the stress transfer range is long, but not necessarily longer than the length scale associated with the inhomogeneities of the system. We find that the scaling depends not only on the amount of damage, but also on the spatial distribution of that damage.
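
    A toy version of this model class is sketched below: an Olami-Feder-Christensen-type cellular automaton with a spatially varying (inhomogeneous) dissipation parameter. The lattice size, dissipation range, and event count are arbitrary illustrative choices, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(2)
    L, F_c, n_events = 32, 1.0, 2000
    stress = rng.uniform(0.0, F_c, size=(L, L))
    # fraction of a failing site's stress passed to each of its 4 neighbours;
    # values below 0.25 dissipate stress, and the spatial spread mimics inhomogeneity
    alpha = rng.uniform(0.15, 0.25, size=(L, L))

    sizes = []
    for _ in range(n_events):
        # zero-velocity-limit drive: raise the most loaded site exactly to failure
        i0, j0 = np.unravel_index(np.argmax(stress), stress.shape)
        stress += F_c - stress[i0, j0]
        stress[i0, j0] = F_c                      # guard against round-off
        failing, size = [(i0, j0)], 0
        while failing:
            i, j = failing.pop()
            if stress[i, j] < F_c:
                continue
            give = alpha[i, j] * stress[i, j]
            stress[i, j] = 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # open boundaries also dissipate
                    stress[ni, nj] += give
                    if stress[ni, nj] >= F_c:
                        failing.append((ni, nj))
        sizes.append(size)

    # the avalanche-size distribution plays the role of the magnitude-frequency law
    print("mean avalanche size:", np.mean(sizes))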

  2. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
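
    A hedged sketch of a classical exterior penalty method of the kind described above (not the BIGDOT implementation): the constrained problem is replaced by a sequence of unconstrained minimizations with an increasing penalty parameter. The toy problem and parameter values are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimize (x0 - 2)^2 + (x1 - 1)^2  subject to  x0 + x1 <= 2.
    def objective(x):
        return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

    def violation(x):                    # constraint in g(x) <= 0 form
        return max(0.0, x[0] + x[1] - 2.0)

    x, r = np.array([0.0, 0.0]), 1.0
    for _ in range(8):
        pseudo = lambda y: objective(y) + r * violation(y) ** 2
        x = minimize(pseudo, x, method="BFGS").x
        r *= 10.0                        # stiffen the penalty each cycle
    print("approximate constrained optimum:", x)   # tends toward (1.5, 0.5)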

  3. Geologic evidence for recurrent moderate to large earthquakes near Charleston, South Carolina

    USGS Publications Warehouse

    Obermeier, S.F.; Gohn, G.S.; Weems, R.E.; Gelinas, R.L.; Rubin, M.

    1985-01-01

    Multiple generations of earthquake-induced sand blows in Quaternary sediments and soils near Charleston, South Carolina, are evidence of recurrent moderate to large earthquakes in that area. The large 1886 earthquake, the only historic earthquake known to have produced sand blows at Charleston, probably caused the youngest observed blows. Older (late Quaternary) sand blows in the Charleston area indicate at least two prehistoric earthquakes with shaking severities comparable to the 1886 event.

  4. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity in the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  5. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: State of Gujarat, India

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Kossobokov, Vladimir; Parvez, Imtiyaz

    2016-04-01

    The Gujarat state of India is one of the most seismically active intercontinental regions of the world. Historically, it has experienced many damaging earthquakes, including the devastating 1819 Rann of Kutch and 2001 Bhuj earthquakes. The effect of the latter is grossly underestimated by the Global Seismic Hazard Assessment Program (GSHAP). To assess a more adequate earthquake hazard for the state of Gujarat, we apply the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter recurrence relation by taking into account the naturally fractal distribution of earthquake loci. USLE has evident implications, since any estimate of seismic hazard depends on the size of the territory considered and, therefore, may differ dramatically from the actual one when scaled down to the proportion of the area of interest (e.g. of a city) from the enveloping area of investigation. We cross-compare the seismic hazard maps compiled for the same standard regular 0.2°×0.2° grid (i) in terms of design ground acceleration (DGA) based on the neo-deterministic approach, (ii) in terms of probabilistic exceedance of peak ground acceleration (PGA) by GSHAP, and (iii) the one resulting from the USLE application. Finally, we present maps of seismic risk for the state of Gujarat integrating the obtained seismic hazard, population density based on 2011 census data, and a few model assumptions of vulnerability.
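
    The USLE is commonly written in the form log10 N(M, L) = A + B*(5 - M) + C*log10(L), where N is the expected annual rate of magnitude-M events in an area of linear size L and C plays the role of a fractal dimension of the epicentre set. The coefficients in the sketch below are assumed illustrative values, not the estimates obtained for Gujarat.

    import numpy as np

    A, B, C = -1.0, 0.9, 1.2

    def annual_rate(M, L_km):
        # expected yearly number of magnitude-M events in a region of linear size L_km
        return 10.0 ** (A + B * (5.0 - M) + C * np.log10(L_km))

    # scaling a hazard estimate from a 100-km region down to a 10-km, city-sized cell
    print(annual_rate(6.0, 100.0), annual_rate(6.0, 10.0))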

  6. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M < ~3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance less than ~2 km) tend to have very similar focal mechanisms, implying that essentially the same faulting is occurring contemporaneously at these locations. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  7. Afterslip Distribution of Large Earthquakes Using Viscoelastic Media

    NASA Astrophysics Data System (ADS)

    Sato, T.; Higuchi, H.

    2009-12-01

    One of the important parameters in simulations of earthquake generation is the frictional properties of faults. To investigate these frictional properties, many authors have studied the coseismic slip and afterslip distributions of large plate-interface earthquakes using coseismic and postseismic surface deformation from GPS data. Most of these studies used elastic media to estimate the afterslip distribution. However, the effect of viscoelastic relaxation in the asthenosphere on postseismic surface deformation is important (Matsu'ura and Sato, GJI, 1989; Sato and Matsu'ura, GJI, 1992). Therefore, studies using elastic media did not estimate the correct afterslip distribution, because they attributed all of the surface deformation to afterslip on the plate interface. We estimate the afterslip distribution of large interplate earthquakes using viscoelastic media, considering not only the viscoelastic response to coseismic slip but also the viscoelastic response to afterslip. Because many studies have suggested that the magnitude of afterslip is comparable to that of coseismic slip, the viscoelastic response to afterslip cannot be neglected. The surface displacement data therefore include the viscoelastic response to coseismic slip, the viscoelastic response to afterslip that occurred from just after the coseismic period until just before the present, and the elastic response to present-day afterslip. We estimate the afterslip distribution for the 2003 Tokachi-oki earthquake, Hokkaido, Japan, using GPS data from GSI, Japan. We use the CAMP model (Hashimoto et al., PAGEOPH, 2004) as the plate interface between the Pacific plate and the North American plate. The viscoelastic results show more clearly that afterslip is distributed in areas where coseismic slip did not occur. They also show that the afterslip concentrates on deeper parts of the plate interface in the area adjoining the 2003 Tokachi-oki earthquake to the east.

  8. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  9. Probabilities of large earthquakes in the San Francisco Bay region, California

    SciTech Connect

    Not Available

    1990-01-01

    This book evaluates long-term probabilities of large earthquakes (magnitude 7 or greater) in the San Francisco Bay region by identifying fault segments expected to produce large earthquakes and then estimating the time to the next earthquake on each segment. The probability of one or more large earthquakes in the region in the coming 30 years is estimated at 67 percent. This report contains detailed, technical descriptions of the data and methods used to derive the estimates.

  10. Scale-free networks of earthquakes and aftershocks.

    PubMed

    Baiesi, Marco; Paczuski, Maya

    2004-06-01

    We propose a metric to quantify correlations between earthquakes. The metric consists of a product involving the time interval and spatial distance between two events, as well as the magnitude of the first one. According to this metric, events typically are strongly correlated to only one or a few preceding ones. Thus a classification of events as foreshocks, main shocks, or aftershocks emerges automatically without imposing predetermined space-time windows. In the simplest network construction, each earthquake receives an incoming link from its most correlated predecessor. The number of aftershocks for any event, identified by its outgoing links, is found to be scale free with exponent gamma=2.0(1). The original Omori law with p=1 emerges as a robust feature of seismicity, holding up to years even for aftershock sequences initiated by intermediate magnitude events. The broad distribution of distances between earthquakes and their linked aftershocks suggests that aftershock collection with fixed space windows is not appropriate.
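
    A sketch of the correlation metric described above, in the Baiesi-Paczuski form n_ij = t_ij * r_ij**d_f * 10**(-b * m_i) for a predecessor i of event j, with each event linked to the predecessor that minimizes n_ij. The synthetic catalog and the parameter values are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    n, b, d_f = 200, 1.0, 1.6
    t = np.sort(rng.uniform(0.0, 1000.0, n))             # event times, days
    xy = rng.uniform(0.0, 100.0, (n, 2))                  # epicentres, km
    mag = 2.0 + rng.exponential(1.0 / np.log(10.0), n)    # Gutenberg-Richter-like magnitudes

    parent = np.full(n, -1)
    for j in range(1, n):
        dt = t[j] - t[:j]                                 # times to all predecessors
        dr = np.linalg.norm(xy[:j] - xy[j], axis=1) + 1e-3
        n_ij = dt * dr ** d_f * 10.0 ** (-b * mag[:j])
        parent[j] = int(np.argmin(n_ij))                  # most correlated predecessor
    print("events linked to event 0:", int(np.sum(parent == 0)))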

  11. Investigating earthquake scaling relationships from a 15 year archive of InSAR-derived earthquake models

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Ferreira, A. M.; Parsons, B. E.

    2008-12-01

    In the 15 years since the first InSAR study of an earthquake, the 1992 Landers event, over 50 events have been studied wholly or jointly using InSAR. This constitutes a rich archive of published studies that can be mined for information on earthquake phenomenology. Empirical earthquake scaling relationships, as can be inferred from estimates of fault dimensions, slip and moment for multiple earthquakes, are extensively used in seismic hazard forecasting, and also constitute a means of placing constraints on the bulk mechanical behaviour of the seismogenic upper crust. As a source of such data, studies that utilise information from InSAR have an advantage over seismic methods in that in many cases a key parameter, the fault length, can be measured directly from the observations. In addition, in cases of good interferogram coherence, the high spatial density of surface deformation observations that InSAR affords can place tight constraints on fault width and other important parameters. We present here a preliminary survey of earthquake scaling relationships as supported by the existing archive of InSAR earthquake studies. We find that for events with Mw > 6, the data support moment scaling with the square of fault length, in keeping with the studies of Scholz and others, and imply proportionality between fault average slip and fault length. There are currently too few data points for great earthquakes (Mw > 8) to assess any proposed change in scaling for such events. Scatterplots of average slip versus fault length show two broad fields -- an area of high slip-to-length ratios (> 2 × 10-5), predominantly associated with faults with low long-term slip rates from intraplate settings, and an area of lower slip-to-length ratios (< 2 × 10-5), typically larger events from faults with higher long-term slip rates (e.g. the North Anatolian and Kunlun faults, and the Peru-Chile subduction zone). In addition
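
    The L-squared moment scaling mentioned above follows from M0 = mu * L * W * u with average slip proportional to length (u = alpha * L) and a saturated seismogenic width W; the worked example below uses generic assumed values.

    mu, W, alpha = 3.0e10, 15.0e3, 5.0e-5    # rigidity (Pa), width (m), slip-to-length ratio
    for L_km in (20.0, 40.0, 80.0):
        L = L_km * 1.0e3
        M0 = mu * L * W * (alpha * L)        # doubling L quadruples M0
        print(f"L = {L_km:4.0f} km  ->  M0 = {M0:.2e} N*m")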

  12. Low-frequency source parameters of twelve large earthquakes

    NASA Astrophysics Data System (ADS)

    Harabaglia, Paolo

    1993-06-01

    We present a global survey of the low-frequency (1-21 mHz) source characteristics of large earthquakes. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and 1983 Akita-Oki, Japan, earthquakes, are shallow complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupture processes, characterized by continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake, which was first recognized as a precursive event by Jordan. We model it with a smooth rupture process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  13. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100h km s^-1 Mpc^-1; 1 pc = 3.09 × 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.

  14. Earthquake Clustering and Triggering of Large Events in Simulated Catalogs

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Dieterich, J. H.; Richards-Dinger, K. B.; Xu, H.

    2013-12-01

    We investigate large event clusters (e.g. earthquake doublets and triplets) wherein secondary events in a cluster are triggered by stress transfer from previous events. We employ the 3D boundary element code RSQSim with a California fault model to generate synthetic catalogs spanning from tens of thousands up to a million years. The simulations incorporate rate-state fault constitutive properties, and the catalogs include foreshocks, aftershocks and occasional clusters of large events. Here we define a large event cluster as two or more M≥7 events within a few years. Most clustered events are closely grouped in space as well as time. Large event clusters show highly productive aftershock sequences where the aftershock locations of the first event in a cluster appear to correlate with the location of the next large event in the cluster. We find that the aftershock productivity of the first events in large event clusters is roughly double that of the unrelated, non-clustered events and that aftershock rate is a proxy for the stress state of the faults. The aftershocks of the first event in a large-event cluster migrate toward the point of nucleation of the next event in a large-event cluster. Furthermore, following a normal aftershock sequence, the average event rate increases prior to the second event in a large-event cluster. These increased event rates prior to the second event in a cluster follow an inverse Omori's law, which is characteristic of foreshocks. Clustering probabilities based on aftershock rates are higher than expected from Omori aftershock and Gutenberg-Richter magnitude frequency laws, which suggests that the high aftershock rates indicate near-critical stresses for failure in a large earthquake.

  15. Premonitory patterns of seismicity months before a large earthquake: Five case histories in Southern California

    PubMed Central

    Keilis-Borok, V. I.; Shebalin, P. N.; Zaliapin, I. V.

    2002-01-01

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with “intermediate” lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1–7 months five large (M ≥ 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns. PMID:12482945

  16. Earthquake Apparent Stress Scaling for the 1999 Hector Mine Sequence

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.

    2003-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of studies finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Other studies find that the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities, and bandwidth, and of determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We try to improve upon earlier results by using consistent techniques over common paths for a wide range of sizes and seismic phases. We have examined about 130 earthquakes from the Hector Mine earthquake sequence in Southern California. These earthquakes range in size from the October 16, 1999 Mw=7.1 mainshock down to ML=3.0 aftershocks into 2000. The mainshock has unclipped Pg and Lg phases at a number of high quality regional stations (e.g. CMB, ELK, TUC) where we can use the common path to examine apparent stress scaling relations directly. We are careful to avoid any event selection bias that would be related to apparent stress values. We fix each station's path correction using the independent moment and energy estimates for the mainshock. We then use those corrections to determine the seismic energy for each event based on regional Lg spectra. We use a modeling technique (MDAC) based on a modified Brune (1970) spectral shape but without any assumptions of corner-frequency scaling (Walter and Taylor, 2002). We perform similar analysis using the Pg spectra. We find the energy estimates for the same events are consistent for Lg estimates, Pg estimates and the estimates using the independent regional coda envelope technique (Mayeda and Walter, 1996; Mayeda et al

  17. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.

  18. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors that will be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  19. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
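
    A back-of-the-envelope check of the frequency ranges such structures must target: the first Bragg band gap of a periodic array with spacing a in a medium with shear-wave speed v falls near f ~ v / (2a). The soil velocity and spacings below are generic assumptions, not the paper's design values; they illustrate why decametre-scale spacings (or locally resonant units) are needed to reach the 1-10 Hz band relevant to seismic waves.

    v_s = 200.0                      # soft-soil shear-wave speed, m/s (assumed)
    for a in (5.0, 10.0, 20.0):      # lattice spacing, m
        print(f"a = {a:4.1f} m  ->  Bragg-gap frequency ~ {v_s / (2.0 * a):.1f} Hz")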

  20. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  1. How large is the fault slip at trench in the M=9 Tohoku-oki earthquake?

    NASA Astrophysics Data System (ADS)

    Wang, Kelin; Sun, Tianhaozhe; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2015-04-01

    It is widely known that coseismic slip breached the trench during the 2011 Mw=9 Tohoku-oki earthquake and was responsible for generating a devastating tsunami. For understanding both the mechanics of megathrust rupture and the mechanism of tsunami generation, it is important to know how much fault slip actually occurred at the trench. But the answer has remained elusive because most of the data from this earthquake do not provide adequate near-trench resolution. Seafloor GPS sites were located > 30 km from the trench. Near-trench seafloor pressure records suffered from complex vertical deformation at local scales. Seismic inversion does not have adequate accuracy at the trench. Inversion of tsunami data is highly dependent on the parameterization of the fault near the trench. The severity of the issue is demonstrated by our compilation of rupture models for this earthquake published by ~40 research groups using multiple sets of coseismic observations: in the peak slip area, fault slip at the trench depicted by these models ranges from zero to >90 m, and the faults in many models do not reach the trench because of simplification of fault geometry. In this study, we use high-resolution differential bathymetry, that is, bathymetric differences before and after the earthquake, to constrain coseismic slip at and near the trench along a corridor in the area of largest moment release. We use a 3D elastic finite element model including real fault geometry and surface topography to produce Synthetic Differential Bathymetry (SDB) and compare it with the observed differential bathymetry. Earthquakes induce bathymetric changes by shifting the sloping seafloor seaward and by warping the seafloor through internal deformation of rocks. These effects are simulated by our SDB modeling, except for permanent (inelastic) deformation of the upper plate, which is likely to be limited and localized. Bathymetry data were collected by JAMSTEC in 1999, 2004, and in 2011 right after the M=9 earthquake. Our SDB
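
    The geometric idea behind synthetic differential bathymetry can be shown with a one-dimensional sketch: at a fixed location, moving a sloping seafloor horizontally by u_x and uplifting it by u_z changes the measured depth by roughly dh ~ u_z - u_x * dh0/dx. This is only an illustration of the kinematics, not the authors' 3D finite-element calculation, and the profiles below are made up.

    import numpy as np

    x = np.linspace(0.0, 50.0e3, 501)          # distance from the trench, m
    h0 = -7500.0 + 0.05 * x                    # pre-event seafloor elevation profile, m
    u_x = 50.0 * np.exp(-x / 20.0e3)           # horizontal coseismic displacement, m
    u_z = 5.0 * np.exp(-x / 15.0e3)            # coseismic uplift, m

    slope = np.gradient(h0, x)                 # local seafloor slope dh0/dx
    dh = u_z - u_x * slope                     # modeled bathymetry change at fixed x
    print(f"peak modeled bathymetry change: {dh.max():.1f} m")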

  2. Access Time of Emergency Vehicles Under the Condition of Street Blockages after a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Hirokawa, N.; Osaragi, T.

    2016-09-01

    Previous studies of accessibility have largely addressed daily life; however, improving the accessibility of emergency vehicles after a large earthquake is also an important issue. In this paper, we analyzed the accessibility of firefighters using a microscopic simulation model of the period immediately after a large earthquake. More specifically, we constructed a simulation model that describes property damage (collapsed buildings, street blockages, outbreaks of fire, and fire spreading) and the movement of firefighters from fire stations to the locations of fires in a large-scale earthquake. Using this model, we analyzed the influence of street blockages on the access time of firefighters. When streets are blocked according to the property-damage simulation, the average access time is more than 10 minutes in the outskirts of the 23 wards of Tokyo, and some firefighters take more than 20 minutes to arrive. Additionally, we focused on alternative routes and proposed that volunteers collect information on street blockages to improve the accessibility of firefighters. Finally, we demonstrated that the access time of firefighters can be reduced to nearly the same level as the case in which no streets were blocked if 0.3% of residents collect information on blockages within 10 minutes.

  3. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
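
    For orientation, the parallel-plate electrostatic actuation mentioned above produces an attractive pressure P = eps0 * V^2 / (2 * d^2) between the electrode and the suspended foil; the gap and drive voltage below are assumed, illustrative values rather than the device's specifications.

    eps0 = 8.854e-12                 # permittivity of free space, F/m
    V, d = 100.0, 10.0e-6            # drive voltage (V) and plate gap (m), assumed
    pressure = eps0 * V ** 2 / (2.0 * d ** 2)
    print(f"electrostatic pressure ~ {pressure:.0f} Pa")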

  4. Using DART-recorded Rayleigh waves for rapid CMT and finite fault analyses of large megathrust earthquakes.

    NASA Astrophysics Data System (ADS)

    Thio, H. K.; Polet, J.; Ryan, K. J.

    2015-12-01

    We study the use of long-period Rayleigh waves recorded by DART-type ocean bottom pressure sensors. The determination of accurate moment and slip distributions after a megathrust subduction zone earthquake is essential for tsunami early warning. The two main reasons why the DART data are of interest to this problem are: (1) contrary to the broadband data used in the early stages of earthquake analysis, the DART data do not saturate for large magnitude earthquakes, and (2) DART stations are located offshore and thus often fill gaps in the instrumental coverage at local and regional distances. Thus, by including DART-recorded Rayleigh waves in rapid response systems we may be able to gain valuable time in determining the accurate moment estimates and slip distributions needed for tsunami warning and other rapid response products. Large megathrust earthquakes are among the most destructive natural disasters in history but also pose a significant challenge for real-time analysis. The scales involved in such large earthquakes, with ruptures as long as a thousand kilometers and durations of several minutes, are formidable. There are still issues with rapid analysis at short timescales, that is, minutes after the event, since many of the nearby seismic stations will saturate due to the large ground motions. Also, on the seaward side of megathrust earthquakes, the nearest seismic stations are often thousands of kilometers away on oceanic islands. The deployment of DART buoys can fill this gap, since these instruments do not saturate and are located close in on the seaward side of the megathrusts. We are evaluating the use of DART-recorded Rayleigh waves by including them in the dataset used for Centroid Moment Tensor analyses, and by using the near-field DART stations to constrain source finiteness for megathrust earthquakes such as the recent Tohoku, Haida Gwaii and Chile earthquakes.

  5. Local near instantaneously dynamically triggered aftershocks of large earthquakes

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; Shearer, Peter M.

    2016-09-01

    Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by earthquakes with magnitudes of 7 to 8, within a few fault lengths (approximately 300 kilometers), during times when high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks.

  6. Local near instantaneously dynamically triggered aftershocks of large earthquakes.

    PubMed

    Fan, Wenyuan; Shearer, Peter M

    2016-09-01

    Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by earthquakes with magnitudes of 7 to 8, within a few fault lengths (approximately 300 kilometers), during times when high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks.

  8. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models.

    PubMed

    Landes, François P; Lippiello, E

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.

  10. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models

    NASA Astrophysics Data System (ADS)

    Landes, François P.; Lippiello, E.

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.

  11. EVIDENCE FOR THREE MODERATE TO LARGE PREHISTORIC HOLOCENE EARTHQUAKES NEAR CHARLESTON, S. C.

    USGS Publications Warehouse

    Weems, Robert E.; Obermeier, Stephen F.; Pavich, Milan J.; Gohn, Gregory S.; Rubin, Meyer; Phipps, Richard L.; Jacobson, Robert B.

    1986-01-01

    Earthquake-induced liquefaction features (sand blows), found near Hollywood, S.C., have yielded abundant clasts of humate-impregnated sand and sparse pieces of wood. Radiocarbon ages for the humate and wood provide sufficient control on the timing of the earthquakes that produced the sand blows to indicate that at least three prehistoric liquefaction-producing earthquakes (mb approximately 5.5 or larger) have occurred within the last 7,200 years. The youngest documented prehistoric earthquake occurred around A.D. 800. A few fractures filled with virtually unweathered sand, but no large sand blows, can be assigned confidently to the historic 1886 Charleston earthquake.

  12. Earthquake triggering by slow earthquake propagation: the case of the large 2014 slow slip event in Guerrero, Mexico.

    NASA Astrophysics Data System (ADS)

    Radiguet, M.; Perfettini, H.; Cotte, N.; Gualandi, A.; Kostoglodov, V.; Lhomme, T.; Walpersdorf, A.; Campillo, M.; Valette, B.

    2015-12-01

    Since their discovery nearly two decades ago, the importance of slow slip events (SSEs) in the processes of strain accommodation in subduction zones has been revealed. Nevertheless, the influence of slow aseismic slip on the nucleation of large earthquakes remains unclear. In this study, we focus on the Guerrero region of the Central American subduction zone in Mexico, where large SSEs have been observed since 1998 with a recurrence period of about 4 years, producing aseismic slip in the Guerrero seismic gap. We investigate the large 2014 SSE (equivalent Mw=7.7), which initiated in early 2014 and lasted until the end of October 2014. During this time period, the 18 April Papanoa earthquake (Mw 7.2) occurred on the western limit of the Guerrero gap. We invert the continuous GPS time series using the PCAIM (Principal Component Analysis Inversion Method) to assess the space and time evolution of slip on the subduction interface. To focus on the aseismic processes, we correct the cGPS time series for the co-seismic offsets. Our results show that the slow slip event initiated in the Guerrero gap region, as already observed during the previous SSEs. The Mw 7.2 Papanoa earthquake occurred on the western limit of the region that was slipping aseismically before the earthquake. After the Papanoa earthquake, the aseismic slip rate increased. This geodetic signal consists of both the ongoing SSE and the postseismic (afterslip) response to the Papanoa earthquake. The majority of the post-earthquake aseismic slip is concentrated downdip of the main earthquake asperity, but significant slip is also observed in the Guerrero gap region. Compared to previous SSEs in that region, the 2014 SSE produced larger aseismic slip, and the maximum slip is located downdip of the main brittle asperity corresponding to the Papanoa earthquake, a region that was not identified as active during the previous SSEs. Since the Mw 7.2 Papanoa earthquake occurred about 2 months after the onset of the
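
    The record above relies on decomposing continuous GPS time series into a few principal components before inverting each spatial component for slip (the PCAIM approach). The sketch below shows only the decomposition step on a synthetic station-by-epoch displacement matrix; the data, noise level, and retained component count are invented for illustration, and the slip inversion itself is omitted.

    ```python
    # PCA of a synthetic cGPS displacement matrix via SVD (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_stations, n_epochs = 30, 400

    # Hypothetical displacement matrix: one smooth transient shared by all stations
    # (an SSE-like signal) plus white noise.
    t = np.linspace(0.0, 1.0, n_epochs)
    transient = 1.0 / (1.0 + np.exp(-(t - 0.6) / 0.05))       # common time function
    spatial = rng.normal(size=n_stations)                       # station amplitudes
    data = np.outer(spatial, transient) + 0.1 * rng.normal(size=(n_stations, n_epochs))

    # Remove the per-station mean, then decompose.
    data -= data.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(data, full_matrices=False)

    n_comp = 1                                  # number of components retained (illustrative)
    spatial_funcs = U[:, :n_comp] * S[:n_comp]  # what a PCAIM-style scheme would invert for slip
    time_funcs = Vt[:n_comp]                    # associated time evolution

    explained = S[:n_comp] ** 2 / np.sum(S ** 2)
    print("variance explained by first component: %.2f" % explained[0])
    ```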

  13. Spectral scaling of the aftershocks of the Tocopilla 2007 earthquake in northern Chile

    NASA Astrophysics Data System (ADS)

    Lancieri, M.; Madariaga, R.; Bonilla, F.

    2012-04-01

    We study the scaling of spectral properties of a set of 68 aftershocks of the 2007 November 14 Tocopilla (M 7.8) earthquake in northern Chile. These are all subduction events with similar reverse-faulting focal mechanisms that were recorded by a homogeneous network of continuously recording strong-motion instruments. The seismic moment and the corner frequency are obtained assuming that the aftershocks satisfy an inverse omega-square spectral decay; radiated energy is computed by integrating the squared velocity spectrum corrected for attenuation at high frequencies and for the finite-bandwidth effect. Using a graphical approach, we test the scaling of the seismic spectrum and the scale invariance of the apparent stress drop with earthquake size. To test whether the Tocopilla aftershocks scale with a single parameter, we introduce a non-dimensional number, Cr, that should be constant if earthquakes are self-similar. For the Tocopilla aftershocks, Cr varies by a factor of 2. More interestingly, Cr for the aftershocks is close to 2, the value that is expected for events that are approximately modelled by a circular crack. Thus, in spite of obvious differences in waveforms, the aftershocks of the Tocopilla earthquake are self-similar. The main shock is different because its records contain large near-field waves. Finally, we investigate the scaling of the energy release rate, Gc, with slip. We estimated Gc from our previous estimates of the source parameters, assuming a simple circular crack model. We find that Gc values scale with slip, and are in good agreement with those found by Abercrombie and Rice for the Northridge aftershocks.
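
    The two processing steps described above (fitting an omega-square spectral model for seismic moment and corner frequency, then integrating the squared velocity spectrum for radiated energy) can be sketched as follows. The synthetic spectrum, the density and shear-wave speed, and the energy prefactor are illustrative assumptions, not values from the Tocopilla study; distance, radiation-pattern and free-surface corrections are omitted.

    ```python
    # Omega-square (Brune-type) spectral fit plus a radiated-energy proxy (illustrative).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.integrate import trapezoid

    def brune(f, omega0, fc):
        """Omega-square source model for the displacement amplitude spectrum."""
        return omega0 / (1.0 + (f / fc) ** 2)

    # Hypothetical observed displacement spectrum with 10% multiplicative noise.
    f = np.linspace(0.05, 20.0, 500)
    rng = np.random.default_rng(2)
    obs = brune(f, 1.0e-3, 2.0) * (1.0 + 0.1 * rng.normal(size=f.size))

    # Fit in log amplitude so the high-frequency fall-off is not swamped by the plateau.
    (omega0, fc), _ = curve_fit(lambda x, o, c: np.log(brune(x, o, c)),
                                f, np.log(obs), p0=(obs[0], 1.0))

    # Radiated-energy proxy from the squared velocity spectrum (velocity = 2*pi*f * displacement).
    rho, beta = 2700.0, 3500.0                  # kg/m^3, m/s (placeholder values)
    vel_spec = 2.0 * np.pi * f * brune(f, omega0, fc)
    energy_proxy = 4.0 * np.pi * rho * beta * trapezoid(vel_spec ** 2, f)

    print("Omega0 = %.3e, fc = %.2f Hz, energy proxy = %.3e" % (omega0, fc, energy_proxy))
    ```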

  14. Source Scaling and Ground Motion of the 2008 Wells, Nevada, earthquake sequence

    NASA Astrophysics Data System (ADS)

    Yoo, S.; Dreger, D. S.; Mayeda, K. M.; Walter, W. R.

    2011-12-01

    Dynamic source parameters, such as corner frequency, stress drop, and radiated energy, are among the most critical factors controlling ground motions at higher frequencies (generally greater than 1 Hz), which may cause damage to nearby surface structures. Hence, the scaling relations of these parameters can play an important role in assessing the seismic hazard for regions in which records of ground motions from potentially damaging earthquakes are not available. On February 21, 2008 at 14:16 (UTC), a magnitude 6 earthquake occurred near Wells, Nevada, in an area characterized by a low rate of seismicity. For its aftershocks, a marked discrepancy between the observed ground motions and those predicted by empirical ground motion prediction equations was reported (Petersen et al., 2011). To evaluate and understand these observed ground motions, we investigate the dynamic source parameters and their scaling relations for this earthquake sequence. We estimate the source parameters of the earthquakes using the coda spectral ratio method (Mayeda et al., 2007) and examine the estimates against the observed spectral accelerations at higher frequencies. From the derived source parameters and scaling relation, we compute synthetic ground motions of the earthquakes using a fractal composite source model (e.g., Zeng et al., 1994) and compare these synthetic ground motions with the observed ground motions and with synthetic ground motions obtained from a self-similar source scaling relation. In our preliminary results, we find the stress drops of the aftershocks are systematically 2-5 times lower than the stress drop of the mainshock. This agrees well with the systematic overestimation of the predicted ground motions for the aftershocks. The simulated ground motions from the coda-derived scaling relation explain the observed weak and strong ground motions better than those from the size-independent stress drop scaling relation. Assuming that the scale-dependent stress drop is real, at least in some
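
    Since the record above compares aftershock stress drops with the mainshock's, a compact reference point is the standard Brune/Eshelby route from seismic moment and corner frequency to stress drop. This is a generic textbook relation, not the coda spectral ratio workflow of Mayeda et al. (2007), and the numerical values are illustrative.

    ```python
    # Stress drop from moment and corner frequency under a Brune circular-crack model.
    import math

    def brune_stress_drop(m0_newton_m, fc_hz, beta_ms=3500.0, k=0.372):
        """Stress drop (Pa) from seismic moment (N m) and corner frequency (Hz):
        source radius r = k * beta / fc (Brune model, k ~ 0.372 for S waves),
        stress drop = (7/16) * M0 / r^3 (Eshelby circular crack)."""
        r = k * beta_ms / fc_hz
        return 7.0 / 16.0 * m0_newton_m / r ** 3

    # Example: a hypothetical Mw ~ 4 aftershock (M0 ~ 1.3e15 N m) with a 2 Hz corner frequency.
    print("stress drop ~ %.1f MPa" % (brune_stress_drop(1.3e15, 2.0) / 1e6))
    ```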

  15. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  16. A bilinear source-scaling model for M-log a observations of continental earthquakes

    USGS Publications Warehouse

    Hanks, T.C.; Bakun, W.H.

    2002-01-01

    The Wells and Coppersmith (1994) M-log A data set for continental earthquakes (where M is moment magnitude and A is fault area) and the regression lines derived from it are widely used in seismic hazard analysis for estimating M, given A. Their relations are well determined, whether for the full data set of all mechanism types or for the subset of strike-slip earthquakes. Because the coefficient of the log A term is essentially 1 in both their relations, they are equivalent to constant stress-drop scaling, at least for M ≤ 7, where most of the data lie. For M > 7, however, both relations increasingly underestimate the observations with increasing M. This feature, at least for strike-slip earthquakes, is strongly suggestive of L-model scaling at large M. Using constant stress-drop scaling (Δσ = 26.7 bars) for M ≤ 6.63 and L-model scaling (average fault slip ū = αL, where L is fault length and α = 2.19 × 10^-5) at larger M, we obtain the relations M = log A + 3.98 ± 0.03 for A ≤ 537 km² and M = 4/3 log A + 3.07 ± 0.04 for A > 537 km². These prediction equations of our bilinear model fit the Wells and Coppersmith (1994) data set well in their respective ranges of validity, the transition magnitude corresponding to A = 537 km² being M = 6.71.
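
    The two prediction equations quoted above can be applied directly; the short sketch below simply evaluates the bilinear model for a few rupture areas and reproduces the M = 6.71 transition at A = 537 km².

    ```python
    # Evaluation of the bilinear M-log A model quoted in the abstract.
    import math

    def moment_magnitude_from_area(area_km2):
        """Moment magnitude M from rupture area A (km^2) using the bilinear model."""
        if area_km2 <= 537.0:
            return math.log10(area_km2) + 3.98            # constant stress-drop branch
        return (4.0 / 3.0) * math.log10(area_km2) + 3.07  # L-model branch

    for a in (100.0, 537.0, 2000.0, 10000.0):
        print("A = %6.0f km^2  ->  M = %.2f" % (a, moment_magnitude_from_area(a)))
    ```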

  17. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    PubMed

    duPont, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake.
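
    The counterfactual in the study above comes from the synthetic control method. The following is a minimal sketch of the idea, fitting nonnegative donor weights that sum to one against the treated unit's pre-event trajectory and then using the weighted donors as the post-event counterfactual; the panel data here are synthetic placeholders rather than the 1,719-unit Japanese dataset.

    ```python
    # Minimal synthetic control sketch on invented panel data.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n_donors, n_pre, n_post = 20, 15, 15

    donors_pre = rng.normal(size=(n_pre, n_donors)).cumsum(axis=0)      # donor outcomes, pre-event
    true_w = rng.dirichlet(np.ones(n_donors))
    treated_pre = donors_pre @ true_w + 0.05 * rng.normal(size=n_pre)   # treated unit, pre-event

    def loss(w):
        return np.sum((treated_pre - donors_pre @ w) ** 2)

    w0 = np.full(n_donors, 1.0 / n_donors)
    res = minimize(loss, w0, bounds=[(0.0, 1.0)] * n_donors,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    weights = res.x

    donors_post = rng.normal(size=(n_post, n_donors)).cumsum(axis=0) + donors_pre[-1]
    counterfactual = donors_post @ weights   # synthetic "no-disaster" trajectory
    print("pre-event fit RMSE: %.3f" % np.sqrt(loss(weights) / n_pre))
    ```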

  18. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    PubMed

    duPont, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998

  19. Nucleation of Laboratory Earthquakes: Observation, Characterization, and Scaling up to the Natural Earthquakes Dimensions

    NASA Astrophysics Data System (ADS)

    Latour, S.; Schubnel, A.; Nielsen, S. B.; Madariaga, R. I.; Vinciguerra, S.

    2013-12-01

    In this work we observe the nucleation phase of in-plane ruptures in the laboratory and characterize its dynamics. We use a laboratory toy model, where mode II shear ruptures are produced on a pre-cut fault in a plate of polycarbonate. The fault is cut at the critical angle that allows stick-slip behavior under uniaxial loading, so the ruptures are naturally nucleated. The material is birefringent under stress, so that rupture propagation can be followed by ultra-rapid photoelasticity. A network of acoustic sensors and accelerometers deployed on the plate measures the radiated wavefield and records laboratory near-field accelerograms. The far-field stress level is also measured using strain gages. We show that nucleation is composed of two distinct phases, a quasi-static and an acceleration stage, followed by dynamic propagation. We propose an empirical model which describes the rupture length evolution: the quasi-static phase is described by an exponential growth, while the acceleration phase is described by an inverse power law of time. The transition from quasi-static to accelerating rupture is related to the critical nucleation length, which scales inversely with normal stress in accordance with theoretical predictions, and to a critical power per unit surface, which may be an intrinsic property of the interface. Finally, we discuss these results in the framework of previous studies and propose a scaling up to natural earthquake dimensions. (Figure caption: Three spontaneously nucleated laboratory earthquakes at increasingly higher normal pre-stresses, visualized by photoelasticity. The red curves highlight the position of the rupture tips as a function of time. We propose an empirical model that describes the dynamics of rupture nucleation and discuss its scaling with the initial normal stress.)
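
    The empirical nucleation model described above (exponential growth of rupture length during the quasi-static stage, an inverse power law of the time to failure during the acceleration stage) can be written down compactly. The constants, exponent, and switch time in the sketch below are illustrative, not the fitted laboratory values.

    ```python
    # Two-stage empirical rupture-length model (illustrative parameters).
    import numpy as np

    def rupture_length(t, L0=1e-3, tau=0.5, t_switch=2.0, tc=2.5, p=1.0):
        """Rupture length (arbitrary units) versus time t (s): exponential growth up to
        t_switch, then an inverse power law of the time to failure tc - t."""
        t = np.asarray(t, dtype=float)
        L_switch = L0 * np.exp(t_switch / tau)          # keeps the two branches continuous
        quasi_static = L0 * np.exp(t / tau)
        accel = L_switch * ((tc - t_switch) / np.clip(tc - t, 1e-9, None)) ** p
        return np.where(t < t_switch, quasi_static, accel)

    times = np.linspace(0.0, 2.45, 50)
    print(rupture_length(times)[::10])
    ```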

  20. Small Earthquake Scaling Revisited: Can it Constrain Slip Weakening?

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.; Rice, J. R.

    2001-12-01

    final slip s. This can match the observations, but implies the unlikely result that the weakening behavior of the fault depends on the final size of the earthquake. We also find that a single slip-weakening function τ(s) is able to match the observations, requiring no such correlation. Fitting G over s = 1 mm to 0.5 m with G ~ s^(1+n), we find n ~ 0.1 to 0.2. We show that this implies a strength drop from peak τ_p - τ(s) ~ s^n. This model also implies that the slip weakening continues beyond the final slip s of typical events smaller than ~M6, and that the total strength drop τ_p - τ_d for large earthquakes is typically > 20 MPa and notably larger than Δτ. The latter suggests that on average the fault is initially stressed well below the peak strength, requiring stress concentration at the rupture front to propagate slipping. Other interpretations need to be explored outside the context of slip-weakening and allowing for dynamic over/undershoot.

  1. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  2. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Shuler, Ashley

    2014-05-01

    Rifting episodes accommodate the relative motion of mature divergent plate boundaries with sequences of magma-filled dikes that compensate for the missing volume due to crustal splitting. Two major rifting episodes have been recorded since modern monitoring techniques became available: the 1975-1984 Krafla (Iceland) and the 2005-2010 Manda-Hararo (Ethiopia) dike sequences. The statistical properties of the frequency of dike intrusions during rifting have never been investigated in detail, but it has been suggested that they may have similarities with earthquake mainshock-aftershock sequences; for example, they start with a large intrusion followed by several events of smaller magnitude. The scaling relationships of earthquakes have, on the contrary, been widely investigated: earthquakes have been found to follow a power law, the Gutenberg-Richter relation, from local to global scale, while the decay of aftershocks with time has been found to follow the Omori law. These statistical laws for earthquakes are the basis for hazard evaluation, and the physical mechanisms behind them are the object of wide interest and debate. Here we investigate in detail the statistics of dikes from the Krafla and Manda-Hararo rifting episodes, including their frequency-magnitude distribution, the release of geodetic moment in time, and the correlation between interevent times and intruded volumes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, the long-term release of geodetic moment is governed by a relationship consistent with the Omori law, and the intrusions are roughly time-predictable. The need for magma availability, however, affects the timing of secondary dike intrusions: such timing is longer after large-volume intrusions, contrary to aftershock sequences, where interevent times shorten after large events.
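
    Since the record above fits a Gutenberg-Richter-like power law to dike dimensions, a minimal sketch of the standard maximum-likelihood b-value estimator (Aki, 1965) is given below. The catalog is synthetic; for dike intrusions the magnitudes would be replaced by an equivalent intrusion size derived from geodetic moment.

    ```python
    # Maximum-likelihood b-value estimate for a Gutenberg-Richter distribution.
    import numpy as np

    def b_value_mle(magnitudes, m_complete, dm=0.0):
        """Maximum-likelihood b-value above the completeness magnitude m_complete.
        For a binned catalog, set dm to the bin width (Utsu correction)."""
        m = np.asarray(magnitudes, dtype=float)
        m = m[m >= m_complete]
        return np.log10(np.e) / (m.mean() - (m_complete - dm / 2.0))

    # Synthetic, unbinned catalog with a true b-value of 1.0 (exponential magnitudes above m_c).
    rng = np.random.default_rng(4)
    m_c = 2.0
    mags = m_c + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
    print("estimated b-value: %.2f" % b_value_mle(mags, m_c))
    ```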

  3. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems that were uncovered during development testing; reviews test data and evaluates weaknesses of the design; and discusses areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring loads of 440 to 1500 newtons.

  4. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and more references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed by microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust to play a significant role in modifying crustal conditions over the long and short term. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  5. Very short-term earthquake precursors from GPS signal interference: Case studies on moderate and large earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, Yu-Lien; Cheng, Kai-Chien; Wang, Wei-Hau; Yu, Shui-Beih

    2016-04-01

    We set up a GPS network with 17 Continuous GPS (CGPS) stations in southwestern Taiwan to monitor real-time crustal deformation. We found that systematic perturbations in GPS signals occurred just a few minutes prior to the occurrence of several moderate and large earthquakes, including the recent 2013 Nantou (ML = 6.5) and Rueisuei (ML = 6.4) earthquakes in Taiwan. The anomalous pseudorange readings were several millimeters higher or lower than those in the background time period. These systematic anomalies were found as a result of interference of GPS L-band signals by electromagnetic emissions (EMs) prior to the mainshocks. The EMs may occur in the form of harmonic or ultra-wide-band radiation and can be generated during the formation of Mode I cracks at the final stage of earthquake nucleation. We estimated the directivity of the likely EM sources by calculating the inner product of the position vector from a GPS station to a given satellite and the vector of anomalous ground motions recorded by the GPS. The results showed that the predominant inner product generally occurred when the satellite was in the direction either toward or away from the epicenter with respect to the GPS network. Our findings suggest that the GPS network may serve as a powerful tool to detect very short-term earthquake precursors and presumably to locate a large earthquake before it occurs.

  6. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks, while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be
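
    One widely used way to turn the "well-constrained equations" mentioned above into aftershock probabilities is the Reasenberg and Jones (1989) rate model, which combines the Gutenberg-Richter and Omori laws. The sketch below uses generic illustrative parameter values, not values taken from this fact sheet.

    ```python
    # Aftershock rate and probability in the Reasenberg-Jones form (illustrative parameters).
    import math

    def aftershock_rate(t_days, mag, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
        """Expected rate (per day) of aftershocks with magnitude >= mag at time t after the mainshock."""
        return 10.0 ** (a + b * (mainshock_mag - mag)) / (t_days + c) ** p

    def aftershock_probability(t1, t2, mag, mainshock_mag, n_steps=10000):
        """Probability of at least one aftershock with magnitude >= mag between t1 and t2 days,
        treating the sequence as a nonstationary Poisson process."""
        dt = (t2 - t1) / n_steps
        expected = sum(aftershock_rate(t1 + (i + 0.5) * dt, mag, mainshock_mag) * dt
                       for i in range(n_steps))
        return 1.0 - math.exp(-expected)

    # Example: chance of an M >= 6 aftershock in the week following an M 7.0 mainshock.
    print("P = %.2f" % aftershock_probability(0.01, 7.0, 6.0, 7.0))
    ```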

  7. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  8. Calibration of the landsliding numerical model SLIPOS and prediction of the seismically induced erosion for several large earthquakes scenarios

    NASA Astrophysics Data System (ADS)

    Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark

    2016-04-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. While the scaling between earthquake magnitude and the volume of sediment eroded is well known, the geomorphic consequences, such as divide migration or valley infilling, remain poorly understood. Predicting the location of landslide sources and deposits is therefore a challenging issue. To make progress on this topic, algorithms that correctly resolve the interaction between landsliding and ground shaking are needed. Peak ground acceleration (PGA) has been shown to control landslide density at first order, but it can trigger landslides through two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of both effects on slope stability is not well understood. We use SLIPOS, an algorithm of bedrock landsliding based on a simple stability analysis applied at the local scale. The model is capable of reproducing the area/volume scaling and area distribution of natural landslides. We aim to include the effects of earthquakes in SLIPOS by simulating the PGA effect via a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine Fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand fault case that simulating ground acceleration by cohesion decrease leads to a realistic scaling between the volume of sediments and the earthquake magnitude.
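
    To illustrate the idea of representing ground shaking as a transient cohesion decrease, the sketch below applies it to a textbook infinite-slope factor of safety. This is not the SLIPOS formulation itself, and all material parameters are invented for illustration.

    ```python
    # Effect of a transient cohesion decrease on an infinite-slope factor of safety.
    import math

    def factor_of_safety(slope_deg, cohesion_pa, friction_deg, thickness_m=2.0,
                         rho=2400.0, g=9.81):
        """Infinite-slope factor of safety (dry case); thickness is measured normal to the slope."""
        beta = math.radians(slope_deg)
        phi = math.radians(friction_deg)
        weight = rho * g * thickness_m          # weight per unit slope-parallel area
        resisting = cohesion_pa + weight * math.cos(beta) * math.tan(phi)
        driving = weight * math.sin(beta)
        return resisting / driving

    c_static = 20e3                              # Pa, static cohesion (illustrative)
    for reduction in (0.0, 0.3, 0.6):            # fraction of cohesion lost during shaking
        fs = factor_of_safety(35.0, c_static * (1.0 - reduction), 30.0)
        print("cohesion reduced by %2.0f%% -> FS = %.2f" % (100 * reduction, fs))
    ```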

  9. The quest for better quality-of-life - learning from large-scale shaking table tests

    NASA Astrophysics Data System (ADS)

    Nakashima, M.; Sato, E.; Nagae, T.; Kunio, F.; Takahito, I.

    2010-12-01

    Earthquake engineering has its origins in the practice of “learning from actual earthquakes and earthquake damages.” That is, we recognize serious problems by witnessing the actual damage to our structures, and then we develop and apply engineering solutions to solve these problems. This tradition in earthquake engineering, i.e., “learning from actual damage,” was an obvious engineering response to earthquakes and arose naturally as a practice in a civil and building engineering discipline that traditionally places more emphasis on experience than do other engineering disciplines. But with the rapid progress of urbanization, as society becomes denser, and as the many components that form our society interact with increasing complexity, the potential damage with which earthquakes threaten the society also increases. In such an era, the approach of “learning from actual earthquake damages” becomes unacceptably dangerous and expensive. Among the practical alternatives to the old practice is to “learn from quasi-actual earthquake damages.” One tool for experiencing earthquake damages without attendant catastrophe is the large shaking table. E-Defense, the largest one we have, was developed in Japan after the 1995 Hyogoken-Nanbu (Kobe) earthquake. Since its inauguration in 2005, E-Defense has conducted over forty full-scale or large-scale shaking table tests, applied to a variety of structural systems. The tests supply detailed data on actual behavior and collapse of the tested structures, offering the earthquake engineering community opportunities to experience and assess the actual seismic performance of the structures, and to help society prepare for earthquakes. Notably, the data were obtained without having to wait for the aftermaths of actual earthquakes. Earthquake engineering has always been about life safety, but in recent years maintaining the quality of life has also become a critical issue. Quality-of-life concerns include nonstructural

  10. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  11. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    PubMed Central

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual—i.e., the Kobe economy without the earthquake—we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998

  12. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    NASA Astrophysics Data System (ADS)

    Parsons, Tom

    2002-09-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ˜39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ˜7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
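
    The modified Omori decay invoked above is straightforward to evaluate directly; the sketch below computes the expected fraction of triggered events falling in the first year of a roughly decade-long decay, using illustrative K, c, and p values rather than the global fit of the study.

    ```python
    # Modified Omori law rate and expected counts (illustrative parameters).
    import numpy as np

    def omori_rate(t_days, K=50.0, c=0.1, p=1.0):
        """Modified Omori law: aftershock rate K / (t + c)^p (events per day)."""
        return K / (t_days + c) ** p

    def expected_count(t1, t2, K=50.0, c=0.1, p=1.0):
        """Expected number of triggered events between t1 and t2 days (closed form)."""
        if abs(p - 1.0) < 1e-9:
            return K * (np.log(t2 + c) - np.log(t1 + c))
        return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

    # Fraction of a ~10-year-long decay occurring in the first year.
    total = expected_count(0.0, 10 * 365.25)
    first_year = expected_count(0.0, 365.25)
    print("fraction of triggered events in year 1: %.2f" % (first_year / total))
    ```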

  13. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, Tom

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7–11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  14. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  15. Imaging of Large Earthquake Rupture Processes Using Multiple Teleseismic Arrays: Application to the Sumatra-Andaman Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Ohrnberger, M.; Krüger, F.

    2005-12-01

    The spatial extent of large earthquake ruptures is usually indirectly inferred from aftershock distributions or by waveform inversion techniques. In this work we present a method which allows the direct estimation of the spatio-temporal characteristics of large earthquake rupture processes. The technique exploits the high-quality records from the stations of the global broadband network using a simple, yet efficient, migration technique. In particular, we combine coherency and beam-power measures which are obtained from curved-wavefront stacking of the direct P wave at multiple large-aperture arrays surrounding the source region at teleseismic distances. Applying this method to the Mw=9.3 Sumatra earthquake of 26/12/2004 and the subsequent Nias earthquake of 28/03/2005 (Mw=8.7), we show that it is possible to track the focus of the most coherent/largest energy release in space and time. For the Sumatra event, we confirm the overall extent of the rupture length being on the order of 1150 km. The rupture front propagated during a time span of at least 480-500 s, following the trench geometry from the northern tip of Sumatra to the Andaman Islands region. A visualization of the coherent energy accumulation over time suggests the existence of slow after-slip in the northern part of the rupture after the main rupture front has passed. However, due to the interference of large later phases it is not possible to determine whether this afterslipping event persists much longer than the overall duration of the rupture. The final areal estimate of cumulative energy release is in full agreement with the aftershock distribution observed in the months following this earthquake. By including a number of additional seismic phases (e.g. pP, sP) in the migration scheme, it seems feasible for this event to constrain the depth extent of the rupture. For the Nias earthquake we observe unilateral propagation of the rupture in a south-eastern direction starting from an area south

  16. Long-Term Prediction of Large Earthquakes: When Does Quasi-Periodic Behavior Occur?

    NASA Astrophysics Data System (ADS)

    Sykes, L. R.

    2003-12-01

    I argue that the prediction of large earthquakes for time scales of a few decades is possible for a number of fault segments along transform and subduction plate boundaries. A key parameter in ascertaining whether forecasting is feasible is the size of the coefficient of variation, CV, the standard deviation of inter-event times of large earthquakes that rupture all or most of a given fault segment divided by T, the average repeat time. I address only large events, ones that rupture all or most of the downdip width of the seismogenic zone where velocity-weakening behavior occurs. Historic and paleoseismic data indicate that the segment that ruptured in the great 1946 Nankaido, Japan, earthquake broke 9 times in the previous 1060 years, yielding T=118 years and CV=0.16. The adjacent zone that broke in 1944 exhibits similar behavior, as does the Copper River delta, the site of 8 paleoseismic events dated by Plafker and Rubin (1994) above the rupture zone of the 1964 Alaska earthquake. Lindh (preceding abstract) finds that many fault segments in California have similarly small values of CV. Paleoseismic data for inter-event times at Pallett Creek and Wrightwood, however, indicate a large CV. Those sites are situated along the San Andreas fault near the end of the 1857 rupture zone, where slip was much smaller than in the Carrizo Plain, where ruptures of large events to the northwest and southeast overlap, and where deformation is multibranched as plate motion is transferred in part to the San Jacinto fault. Plate-boundary slip is confined to narrow zones along the 1944 and 1946 segments of the Nankai trough but is more diffuse in the Tokai-Suruga Bay region, where the Izu Peninsula is colliding with the rest of Honshu and repeat times appear to be longer (and CV is perhaps larger). Dates of uplifted terraces likely give repeat times of inter-plate thrust events that are too long, and estimates of CV that are too large, since imbricate faults within the upper plate that generate terraces do not rupture in
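
    The forecasting parameter discussed above, the coefficient of variation CV of inter-event times, is straightforward to compute from a dated event series. The sketch below uses invented event dates for illustration, not the Nankaido or Copper River paleoseismic records.

    ```python
    # Mean repeat time T and coefficient of variation CV from a list of event dates.
    import numpy as np

    def repeat_time_stats(event_years):
        """Return (mean repeat time T, coefficient of variation CV) from event dates."""
        intervals = np.diff(np.sort(np.asarray(event_years, dtype=float)))
        return intervals.mean(), intervals.std(ddof=1) / intervals.mean()

    events = [900, 1010, 1105, 1240, 1350, 1460, 1600, 1707, 1820, 1946]   # hypothetical dates
    T, cv = repeat_time_stats(events)
    print("T = %.0f yr, CV = %.2f" % (T, cv))
    ```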

  17. The characteristic of the building damage from historical large earthquakes in Kyoto

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2016-04-01

    The city of Kyoto, located in the northern part of the Kyoto basin in Japan, has a history of more than 1,200 years since it was initially constructed. The city has been a densely populated area with many buildings and the center of Japanese politics, economy and culture for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. Kyoto has experienced six damaging large earthquakes during the historical period: in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the elapsed years since construction or last repair, in order to estimate the seismic intensity distribution from historical earthquakes in Kyoto more accurately and reliably. The obtained seismic intensity map would be helpful for reducing and mitigating disaster from future large earthquakes.

  18. Some Considerations on a Large Landslide at the Left Bank of the Aratozawa Dam Caused by the 2008 Iwate-Miyagi Intraplate Earthquake

    NASA Astrophysics Data System (ADS)

    Aydan, Ömer

    2016-06-01

    The scale and impact of rock slope failures are very large, and the form of failure differs depending upon the geological structure of the slopes. The 2008 Iwate-Miyagi intraplate earthquake induced many large-scale slope failures, despite the magnitude of the earthquake being of intermediate scale. Among these large-scale slope failures, the landslide at the left bank of the Aratozawa Dam site is of great interest to specialists in rock mechanics and rock engineering. Although the slope failure was of planar type, the direction of sliding was fortunately towards the sub-valley, so that the landslide did not cause great tsunami-like motion of the reservoir fluid. In this study, the author attempts to describe the characteristics of the landslide, and the strong motion and permanent ground displacement induced by the 2008 Iwate-Miyagi intraplate earthquake, which had great effects on the triggering and evolution of the landslide.

  19. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  20. Quiet zone within a seismic gap near western Nicaragua: Possible location of a future large earthquake

    USGS Publications Warehouse

    Harlow, D.H.; White, R.A.; Cifuentes, I.L.; Aburto, Q.A.

    1981-01-01

    A 5700-square-kilometer quiet zone occurs in the midst of the locations of more than 4000 earthquakes off the Pacific coast of Nicaragua. The region is indicated by the seismic gap technique to be a likely location for an earthquake of magnitude larger than 7. The quiet zone has existed since at least 1950; the last large earthquake originating from this area occurred in 1898 and was of magnitude 7.5. A rough estimate indicates that the magnitude of an earthquake rupturing the entire quiet zone could be as large as that of the 1898 event. It is not yet possible to forecast a time frame for the occurrence of such an earthquake in the quiet zone. Copyright © 1981 AAAS.

  1. W-phase Source Inversion Using High-rate Regional GPS Data for Large Earthquakes.

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F. J.; Melgar, D.; Benavente, R. F.; Campos, J. A.

    2015-12-01

    W-phase moment tensor inversion has been proven to be a reliable method for the rapid characterization of large earthquakes. The W-phase is a long-period (100-1000 s) seismic wave that arrives between the P and S waves and can be synthesized by normal-mode summation. For global purposes it has been used at the USGS, PTWC and IPGS. These implementations provide moment tensor solutions within 30-60 min after the origin time of moderate and large worldwide earthquakes. W-phase inversion has been successfully implemented at the Chilean National Seismological Center (CSN) for regional distances (5°-12°), obtaining the first solution ~6 minutes after the earthquake. However, until now it has been used only with broadband instruments, which saturate in the near field. Therefore, we use near-field records from high-rate regional GPS data for some large earthquakes that have occurred in the past five years with relatively dense azimuthal and station coverage. Originally the inversion takes the time interval between Tp and Tp + 15Δ, where Δ is the distance from the epicenter in degrees. In the near field the W-phase does not develop as fully as in the intermediate or far field; therefore, we increased the time window for these events. Here we tried different time windows to find the most accurate result for each earthquake and to reduce the response time for tsunami early warning purposes. We used near-field GPS records for the following earthquakes: the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.1 Tohoku earthquake, the 2014 Mw 8.2 Iquique earthquake, and the 2014 Mw 7.8 Iquique aftershock. The solutions for the examples tested here are potentially available 5 min after the origin time. The calculated magnitude for each earthquake is: Mw 8.9 for the Maule earthquake, Mw 9.1 for the Tohoku earthquake, Mw 7.9 for the Iquique earthquake, and Mw 7.8 for the Iquique aftershock. The mechanisms, as expected, are thrust, with some variations with respect to the W-CMT from the National Earthquake Information Center

  2. The AD 365 earthquake: high resolution tsunami inundation for Crete and full scale simulation exercise

    NASA Astrophysics Data System (ADS)

    Kalligeris, N.; Flouri, E.; Okal, E.; Synolakis, C.

    2012-04-01

    In the eastern Mediterranean, historical and archaeological records document major earthquake and tsunami events in the past 2000 years (Ambraseys and Synolakis, 2010). The 1200-km-long Hellenic Arc has allegedly caused the strongest reported earthquakes and tsunamis in the region. Among them, the AD 365 and AD 1303 tsunamis have been extensively documented. They are likely due to ruptures of the central and eastern segments of the Hellenic Arc, respectively. Both events had widespread impact due to ground shaking and triggered tsunami waves that reportedly affected the entire eastern Mediterranean. The seismic mechanism of the AD 365 earthquake, located in western Crete, has recently been assigned a magnitude ranging from 8.3 to 8.5 by Shaw et al. (2008), using historical, sedimentological, geomorphic and archaeological evidence. Shaw et al. (2008) have inferred that such large earthquakes occur in the Arc every 600 to 800 years, the last known being the AD 1303 event. We report on a full-scale simulation exercise that took place in Crete on 24-25 October 2011, based on a scenario sufficiently large to overwhelm the emergency response capability of Greece, necessitating the invocation of the Monitoring and Information Centre (MIC) of the EU and triggering help from other nations. A repeat of the AD 365 earthquake would likely overwhelm the civil defense capacities of Greece. Immediately following rupture initiation, it would cause substantial damage even to well-designed reinforced concrete structures in Crete. Minutes after initiation, the tsunami generated by the rapid displacement of the ocean floor would strike nearby coastal areas, inundating great distances in areas of low topography. The objective of the exercise was to help managers plan search and rescue operations and to identify measures useful for inclusion in the coastal resiliency index of Ewing and Synolakis (2011). For the scenario design, the tsunami hazard for the AD 365 event was assessed for

  3. Coseismic Slip Distributions of Great or Large Earthquakes in the Northern Japan to Kurile Subduction Zone

    NASA Astrophysics Data System (ADS)

    Harada, T.; Satake, K.; Ishibashi, K.

    2011-12-01

    Slip distributions of great and large earthquakes since 1963 along the northern Japan and Kuril trenches are examined to study the recurrence of interplate, intraslab and outer-rise earthquakes. The main findings are that the large earthquakes in 1991 and 1995 re-ruptured the source of the 1963 great Urup earthquake, and that the 2006, 2007 and 2009 Simushir earthquakes were all of different types. We also identify three seismic gaps. The northern Japan to southern Kurile trenches have been regarded as a typical subduction zone with spatially and temporally regular recurrence of great (M>8) interplate earthquakes. The source regions were grouped into six segments by Utsu (1972; 1984). The Headquarters for Earthquake Research Promotion of the Japanese government (2004) divided the southern Kurile subduction zone into four regions and evaluated future probabilities of great interplate earthquakes. Besides great interplate events, however, many large (M>7) interplate, intraslab, outer-rise and tsunami earthquakes have also occurred in this region. Harada, Ishibashi, and Satake (2010, 2011) depicted the space-time pattern of M>7 earthquakes along the northern Japan to Kuril trenches, based on relocated mainshock-aftershock distributions of all types of earthquakes that occurred since 1913. The space-time pattern is more complex than had conventionally been assumed: each region has been ruptured by an M8-class interplate earthquake or by multiple M7-class events. In this study, in order to examine in more detail the spatial pattern, or rupture areas, of M>7 earthquakes since 1963 (the year from which WWSSN waveform data are available), we estimated coseismic slip distributions with the teleseismic body-wave inversion method of Kikuchi and Kanamori (2003). The WWSSN waveform data were used for earthquakes before 1990, and digital teleseismic waveform data compiled by IRIS were used for events after 1990. Mainshock hypocenters that had been relocated in our previous study were used as

  4. Interseismic Coupling Models and their interactions with the Sources of Large and Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Chlieh, M.; Perfettini, H.; Avouac, J. P.

    2009-04-01

    Recent observations of heterogeneous strain build-up reported from subduction zones, together with the seismic sources of large and great interplate earthquakes, indicate that seismic asperities are probably persistent features of the megathrust. The Peru megathrust recurrently produces large seismic events such as the 2001 Mw 8.4 Arequipa earthquake or the 2007 Mw 8.0 Pisco earthquake. The Peruvian subduction zone provides an exceptional opportunity to understand the possible relationship between interseismic coupling, large megathrust ruptures and the frictional properties of the megathrust. An emerging concept is a megathrust with strong locked fault patches surrounded by aseismic slip. The 2001 Mw 8.4 Arequipa earthquake ruptured only the northern portion of the patch that had already ruptured during the great 1868 Mw~8.8 earthquake and that had remained locked in the interseismic period. The 2007 Mw 8.0 Pisco earthquake ruptured the southern portion of the 1746 Mw~8.5 event. The moment released in 2007 amounts to only a small fraction of the moment deficit that had accumulated since the great 1746 earthquake. The potential for future large megathrust events in the central and southern Peru area therefore remains large. These recent earthquakes indicate that the same portion of a megathrust can rupture in different ways, depending on whether asperities break as isolated events or jointly to produce a larger rupture. The spatial distribution of the frictional properties of the megathrust could be the cause of a more complex earthquake sequence from one seismic cycle to another. The subduction of geomorphologic structures such as the Nazca Ridge could be the cause of lower coupling there.

  5. Three-dimensional distribution of ionospheric anomalies prior to three large earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    He, Liming; Heki, Kosuke

    2016-07-01

    Using regional Global Positioning System (GPS) networks, we studied the three-dimensional spatial structure of ionospheric total electron content (TEC) anomalies preceding three recent large earthquakes in Chile, South America, i.e., the 2010 Maule (Mw 8.8), the 2014 Iquique (Mw 8.2), and the 2015 Illapel (Mw 8.3) earthquakes. Both positive and negative TEC anomalies, with areal extent dependent on the earthquake magnitudes, appeared simultaneously 20-40 min before the earthquakes. For the two midlatitude earthquakes (2010 Maule and 2015 Illapel), positive anomalies occurred to the north of the epicenters at altitudes of 150-250 km. The negative anomalies occurred farther to the north at higher altitudes of 200-500 km. The epicenter and the positive and negative anomalies thus align parallel to the local geomagnetic field, which is a typical structure of ionospheric anomalies occurring in response to positive surface electric charges.

  6. Preliminary investigation of some large landslides triggered by the 2008 Wenchuan earthquake, Sichuan Province, China

    USGS Publications Warehouse

    Wang, F.; Cheng, Q.; Highland, L.; Miyajima, M.; Wang, Hongfang; Yan, C.

    2009-01-01

    The Ms 8.0 Wenchuan earthquake or "Great Sichuan Earthquake" occurred at 14:28 local time on 12 May 2008 in Sichuan Province, China. Damage by earthquake-induced landslides was an important part of the total earthquake damage. This report presents preliminary observations on the Hongyan Resort slide located southwest of the main epicenter, shallow mountain surface failures in Xuankou village of Yingxiu Town, the Jiufengchun slide near Longmenshan Town, the Hongsong Hydro-power Station slide near Hongbai Town, the Xiaojiaqiao slide in Chaping Town, two landslides in Beichuan County-town which destroyed a large part of the town, and the Donghekou and Shibangou slides in Qingchuan County which formed the second-largest landslide lake created by this earthquake. The influences of seismic, topographic, geologic, and hydro-geologic conditions are discussed. © 2009 Springer-Verlag.

  7. The role of fracture energy in earthquake stress drop: Should apparent stress scale with seismic moment?

    NASA Astrophysics Data System (ADS)

    Circone, S.; Beeler, N. M.; Wong, T.

    2001-12-01

    To model dissipated and radiated energy during earthquake stress drop, we calculate dynamic fault slip using a single degree of freedom spring-slider. The slider-block model is scaled to earthquake size assuming a circular rupture; stiffness varies inversely with rupture radius, and rupture duration is proportional to radius. We first use a laboratory-based static/kinetic fault strength relation, with a dynamic stress drop Δτ_d proportional to effective normal stress but with no fracture energy. Calculated seismic efficiency η, the ratio of radiated to total energy expended during stress drop, and overshoot ξ, a measure of how much the static stress drop exceeds the dynamic stress drop, are constant, independent of normal stress and scale. Calculated η is small and in good agreement with laboratory measurements and field observations from small mining- and borehole-induced earthquakes. If instead a linear slip-weakening fault strength, with a well defined apparent fracture energy G, is used in the calculation, the apparent stress τ_a, the stress measure of radiated energy, is τ_a = Δτ_d (0.5 - ξ)/(1 - ξ) - τ_c. Here τ_c = k d* (1 - ξ)/2 is the "fracture stress", the stress measure of apparent fracture energy, d* is the slip-weakening distance, and k is the ratio of static stress drop to total slip. Since k scales with earthquake size, apparent stress is a function of event size. For our slider-block model, k is simply the unloading stiffness. If constant G is used, as in the classic Griffith fracture criterion, τ_a for small earthquakes varies systematically with event size due to changes in the relative contribution from G. The predictions are similar to the variation of radiated energy with event size for small earthquakes recorded in the Cajon Pass borehole by Abercrombie [JGR, 100, 1995]. Large events have constant τ_a. However, the typical ratio of apparent stress to static stress drop τ_a/Δτ_s = 0.02 for the Cajon Pass data is an order of
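    The closing relations above translate directly into a few lines of code. The sketch below (Python; all numerical values are illustrative assumptions, not values from the study) evaluates the fracture stress τ_c = k d*(1 - ξ)/2 and the apparent stress τ_a = Δτ_d(0.5 - ξ)/(1 - ξ) - τ_c, taking the stiffness k to fall off inversely with rupture radius as for a circular crack. It reproduces the qualitative size dependence described in the abstract: τ_a is depressed for small events, where the fracture term matters, and approaches a constant for large ones.

    import numpy as np

    def apparent_stress(dtau_d, xi, k, d_star):
        # Relations quoted in the abstract:
        #   tau_c = k * d_star * (1 - xi) / 2           (fracture stress)
        #   tau_a = dtau_d * (0.5 - xi) / (1 - xi) - tau_c
        tau_c = k * d_star * (1.0 - xi) / 2.0
        return dtau_d * (0.5 - xi) / (1.0 - xi) - tau_c

    mu = 3.0e10                                  # shear modulus (Pa), assumed
    for radius in [200.0, 2000.0, 20000.0]:      # rupture radii (m), assumed
        k = 7.0 * np.pi * mu / (16.0 * radius)   # circular-crack stiffness ~ mu / r
        tau_a = apparent_stress(dtau_d=3.0e6, xi=0.2, k=k, d_star=0.005)
        print(f"r = {radius:7.0f} m   tau_a = {tau_a / 1.0e6:5.3f} MPa")
    # As r shrinks, k and hence tau_c grow, pulling tau_a down for small events;
    # for large r the fracture term vanishes and tau_a approaches a constant.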

  8. Pattern informatics and its application for forecasting large earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.; Rundle, J. B.; Holliday, J. R.

    2004-12-01

    The 17 January 1995 Kobe, Japan, earthquake was only a magnitude 7.2 event and yet produced an estimated $200 billion loss. The magnitude of potential loss of life and property is so great that reliable earthquake forecasting should be at the forefront of research goals, especially in Japan. An approach to earthquake forecasting is Pattern Informatics (PI). The PI technique can be used to detect precursory seismic activation or quiescence and make earthquake forecasts. Application to earthquake data from southern California shows that this method is a powerful technique for forecasting large events. Here, we attempt to forecast earthquakes in Japan using the PI method. To ensure the completeness of the earthquake catalog maintained by the Japan Meteorological Agency, events in 1955-1994 around the epicenter of the Kobe event are used for our analyses. This is done to forecast the occurrence of future large events, i.e., earthquakes of magnitude greater than 5, for the period 1995-present, including the Kobe event. The parameters of the PI method need to be optimized. We also change the extent of our study area to determine the optimal application of the method. Our results show that the method has skill for forecasting the spatial and temporal distribution of future large earthquakes. Specifically, we find that the occurrence of the Kobe event can be associated with a seismically anomalous region. We further use two statistical tests to evaluate the accuracy of forecasting future large events. The results of these tests also support that the method has some forecast skill.
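    As a rough illustration of the kind of gridded rate-change measure the PI technique is built on, the sketch below bins a catalog onto spatial cells, compares normalized rates in a reference window and a change window, and squares the normalized difference so that both activation and quiescence map to large values. This is a deliberately simplified stand-in, not the published PI algorithm (which, among other refinements, averages over many reference start times); all parameter values and the synthetic catalog are hypothetical.

    import numpy as np

    def pi_hotspot_map(times, lon, lat, grid_lon, grid_lat, t0, t1, t2):
        # Compare normalized seismicity rates in a reference window [t0, t1)
        # and a change window [t1, t2) on a grid; large squared, normalized
        # differences flag either activation or quiescence.
        def binned_rate(mask):
            counts, _, _ = np.histogram2d(lon[mask], lat[mask],
                                          bins=[grid_lon, grid_lat])
            total = counts.sum()
            return counts / total if total > 0 else counts
        ref = binned_rate((times >= t0) & (times < t1))
        chg = binned_rate((times >= t1) & (times < t2))
        dI = chg - ref                        # rate change per cell
        dI = (dI - dI.mean()) / dI.std()      # normalize over the study area
        return dI ** 2

    # Usage with a synthetic catalog (decimal years, degrees); hotspots would be
    # cells whose value exceeds some chosen percentile of the map.
    rng = np.random.default_rng(0)
    t = rng.uniform(1955, 2005, 5000)
    x = rng.uniform(134, 137, 5000)
    y = rng.uniform(33, 36, 5000)
    pmap = pi_hotspot_map(t, x, y, np.arange(134, 137.1, 0.1),
                          np.arange(33, 36.1, 0.1), 1955, 1995, 2005)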

  9. Investigation of the Seismic Nucleation Phase of Large Earthquakes Using Broadband Teleseismic Data

    NASA Astrophysics Data System (ADS)

    Burkhart, Eryn Therese

    The dynamic motion of an earthquake begins abruptly, but is often initiated by a short interval of weak motion called the seismic nucleation phase (SNP). Ellsworth and Beroza [1995, 1996] concluded that the SNP was detectable in near-source records of 48 earthquakes with moment magnitudes (Mw) ranging from 1.1 to 8.1. They found that the SNP accounted for approximately 0.5% of the total moment and 1/6 of the duration of the earthquake. Ji et al. [2010] investigated the SNP of 19 earthquakes with Mw greater than 8.0 using teleseismic broadband data. This study concluded that roughly half of the earthquakes had detectable SNPs, inconsistent with the findings of Ellsworth and Beroza [1995]. Here 69 earthquakes of Mw 7.5-8.0 from 1994 to 2011 are further examined. The SNP is clearly detectable using teleseismic data in 32 events, while 35 events show no nucleation phase and 2 events have insufficient data for stacking, consistent with the previous analysis. Our study also reveals that the percentage of SNP events is correlated with focal mechanism and hypocenter depth. Strike-slip earthquakes are more likely to exhibit a clear SNP than normal or thrust earthquakes. Eleven of 14 strike-slip earthquakes (78.6%) have detectable SNPs. In contrast, only 16 of 40 (40%) thrust earthquakes have detectable SNPs. This percentage also became smaller for deep events (33% for events with hypocenter depth > 250 km). To understand why certain thrust earthquakes have a visible SNP, we examined the sediment thickness, age, and angle of the subducting plate of all thrust earthquakes in the study. We found that thrust events with shallow (600 m) on the subducting plate tend to have clear SNPs. If the SNP can be better understood in the future, it may help seismologists better understand the rupture dynamics of large earthquakes. Potential applications of this work could attempt to predict the magnitude of an earthquake seconds before it begins by measuring the SNP, vastly

  10. Exploring the uncertainty range of co-seismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2016-10-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average co-seismic stress drop \overline{Δτ_E} of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a \overline{Δτ_E} matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and \overline{Δτ_E}, which allows one to define the range of \overline{Δτ_E} that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of \overline{Δτ_E} (5-10 MPa) for this earthquake is successfully constrained. However, the same dataset exhibits no sensitivity to the upper bound of \overline{Δτ_E} because there is limited resolution to the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of \overline{Δτ_E} cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of \overline{Δτ_E} leads to the conclusions that 1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated; 2) the upper bound of the average fracture energy E_G cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of \overline{Δτ_E} can be used as an energy-based smoothing scheme during finite fault inversions.
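    For readers unfamiliar with the energy-based average, one definition commonly used in the literature is a slip-weighted average of the local stress change over the fault; whether this is exactly the functional used in the paper is an assumption here. The toy example below illustrates why data of limited resolution may not pin down its upper bound: a rough slip model can raise the slip-weighted average substantially while carrying the same total slip (and hence nearly the same long-wavelength signal) as a smooth model.

    import numpy as np

    def slip_weighted_stress_drop(stress_drop, slip):
        # One common definition (assumed here):
        #   dtau_E = sum(dtau_i * D_i) / sum(D_i)
        stress_drop = np.asarray(stress_drop, float)
        slip = np.asarray(slip, float)
        return np.sum(stress_drop * slip) / np.sum(slip)

    # Smooth and rough models with the same total slip over the same subfaults
    # (hypothetical numbers) give very different energy-based averages.
    smooth = slip_weighted_stress_drop([3e6, 3e6, 3e6, 3e6], [2.0, 2.0, 2.0, 2.0])
    rough  = slip_weighted_stress_drop([1e6, 9e6, 1e6, 9e6], [0.5, 3.5, 0.5, 3.5])
    print(smooth / 1e6, rough / 1e6)   # ~3 MPa vs ~8 MPa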

  11. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  12. Scaling characteristics of ULF geomagnetic fields at the Guam seismoactive area and their dynamics in relation to the earthquake

    NASA Astrophysics Data System (ADS)

    Smirnova, N.; Hayakawa, M.; Gotoh, K.; Volobuev, D.

    The long-term evolution of scaling (fractal) characteristics of the ULF geomagnetic fields in the seismoactive region of Guam Island is studied in relation to the strong (Ms = 8.0) nearby earthquake of 8 August 1993. The selected period covers 10 months before and 10 months after the earthquake. The FFT procedure, the Burlaga-Klein approach, and the Higuchi method have been applied to calculate the scaling exponents and fractal dimensions of the ULF time series. It is found that the spectrum of ULF emissions exhibits, on average, a power-law behaviour S(f) ∝ f^(-b), which is a fingerprint of a typical fractal (self-affine) time series. The spectrum slope b fluctuates quasi-periodically during the course of time in the range b = 0.7-2.5, which corresponds to fractional Brownian motion with both persistent and antipersistent behaviour. A tendency is also found for the spectrum slope to decrease gradually when approaching the earthquake date. Such a tendency manifests itself at all local times, showing a gradual evolution of the structure of the ULF noise to a typical flicker-noise structure in proximity to the large earthquake event. We suggest considering such a peculiarity as an earthquake precursory signature. One more effect related to the earthquake is revealed: the longest quasi-period, which is 27 days, disappeared from the variations of the ULF emission spectrum slope during the earthquake, and it reappeared three months after the event. A physical interpretation of the revealed peculiarities is given on the basis of the SOC (self-organized criticality) concept.
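    A minimal version of the spectral-slope estimate is sketched below: compute the power spectrum of the record with an FFT and fit a straight line in log-log coordinates to obtain b in S(f) ∝ f^(-b). The windowing, detrending, and local-time selection of the actual study are omitted, and the synthetic input is just an ordinary Brownian walk, for which b should come out near 2.

    import numpy as np

    def spectral_slope(x, dt):
        # Generic FFT estimate of b in S(f) ~ f**(-b); not the exact
        # processing chain of the study.
        x = np.asarray(x, dtype=float) - np.mean(x)
        spec = np.abs(np.fft.rfft(x)) ** 2
        freq = np.fft.rfftfreq(len(x), d=dt)
        keep = freq > 0                      # drop the zero-frequency bin
        slope, _ = np.polyfit(np.log10(freq[keep]), np.log10(spec[keep]), 1)
        return -slope                        # b > 0 for a red, fractal spectrum

    rng = np.random.default_rng(1)
    walk = np.cumsum(rng.standard_normal(4096))   # Brownian walk, b ~ 2
    print(spectral_slope(walk, dt=1.0))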

  13. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
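    The practical consequence of quasi-periodic recurrence lies in the conditional probabilities used by renewal-model forecasts. The sketch below compares the probability of an event in the next 30 yr, given the 153 yr open interval, under a memoryless (Poisson/exponential) model and under a lognormal renewal model. The ~100 yr mean recurrence comes from the abstract, while the coefficient of variation of 0.5, the 30 yr horizon, and the lognormal form are assumptions chosen only for illustration; the renewal probability comes out roughly twice the Poisson one, which is the sense in which quasi-periodicity raises the estimated hazard for a long open interval.

    import numpy as np
    from scipy import stats

    def conditional_prob(dist, t_open, dt):
        # P(event within dt | no event for t_open) for a renewal distribution.
        return (dist.cdf(t_open + dt) - dist.cdf(t_open)) / dist.sf(t_open)

    mean_T, cv, t_open, horizon = 100.0, 0.5, 153.0, 30.0   # assumed values

    sigma = np.sqrt(np.log(1.0 + cv**2))            # lognormal shape from CV
    mu = np.log(mean_T) - 0.5 * sigma**2            # so the mean equals mean_T
    lognorm = stats.lognorm(s=sigma, scale=np.exp(mu))
    poisson_like = stats.expon(scale=mean_T)        # memoryless reference model

    print("lognormal renewal:", conditional_prob(lognorm, t_open, horizon))
    print("Poisson (exponential):", conditional_prob(poisson_like, t_open, horizon))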

  14. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  15. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology in recent years. Pietronero and his co-workers have strongly advocated that the fractal behaviour of the galaxy distribution extends to the largest scales observed (~1000 h^-1 Mpc) with a fractal dimension D ≈ 2. Most cosmologists who hold the standard model, however, maintain that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await the results of the next generation of galaxy redshift surveys.

  16. Benefits of Earthquake Early Warning to Large Municipalities (Invited)

    NASA Astrophysics Data System (ADS)

    Featherstone, J.

    2013-12-01

    The City of Los Angeles has been involved in the testing of the Caltech ShakeAlert earthquake early warning (EQEW) system since February 2012. This system accesses a network of seismic monitors installed throughout California. The system analyzes and processes seismic information, and transmits a warning (audible and visual) when an earthquake occurs. In late 2011, the City of Los Angeles Emergency Management Department (EMD) was approached by Caltech regarding EQEW, and immediately recognized the value of the system. Simultaneously, EMD was in the process of finalizing a report by a multi-discipline team that visited Japan in December 2011, which spoke to the effectiveness of EQEW for the March 11, 2011 earthquake that struck that country. Information collected by the team confirmed that the EQEW systems proved to be very effective in alerting the population to the impending earthquake. The EQEW system in Japan is also tied to mechanical safeguards, such as the stopping of high-speed trains. For a city the size and complexity of Los Angeles, the implementation of a reliable EQEW system will save lives, reduce loss, ensure effective and rapid emergency response, and greatly enhance the ability of the region to recover from a damaging earthquake. The current ShakeAlert system is being tested at several governmental organizations and private businesses in the region. EMD, in cooperation with Caltech, identified several locations internal to the City where the system would have an immediate benefit. These include the staff offices within EMD, the Los Angeles Police Department's Real Time Analysis and Critical Response Division (24-hour crime center), and the Los Angeles Fire Department's Metropolitan Fire Communications (911 Dispatch). All three of these agencies routinely manage the collaboration and coordination of citywide emergency information and response during times of crisis. Having these three key public safety offices connected and included in the

  17. Real-time monitoring of fine-scale changes in fault and earthquake properties

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2014-12-01

    The high-resolution back-processing and re-analysis of long-term seismic archives has generated new data that provide insight into the fine-scale structures of active faults and the seismogenic processes that control them. Such high-precision studies are typically carried out retroactively, for a specific time period and/or fault of interest. For the last 5 years we have been operating a real-time system, DD-RT, that uses waveform cross-correlation and double-difference algorithms to automatically compute high-precision (tens to hundreds of meters) locations of new earthquakes recorded by the Northern California Seismic System. These locations are computed relative to a high-resolution, 30-year-long background archive that includes over half a million earthquakes, 20 million seismograms, and 1.7 billion correlation measurements. In this paper we present results from using the DD-RT system and its relational database to monitor changes in earthquake and fault properties at the scale of individual events. We developed baseline characteristics for repeating earthquakes, fore- and aftershock sequences, and fault zone properties, against which we evaluate new events in near real-time. We developed these baseline characteristics from a comprehensive analysis of the double-difference archive, and developed real-time modules that plug into the DD-RT system for monitoring deviations from these baselines. For example, we defined baseline characteristics for 8,500 repeating earthquake sequences, including more than 25,000 events, that were found in an extensive search across Northern California. Precise measurements of relative hypocenter positions, differential magnitudes, and waveform similarity are used to automatically associate new member events to existing sequences. This allows us to monitor changes relative to baseline parameters such as recurrence intervals and their coefficient of variation (CV). Alerts on such changes are especially important for large sequences of
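    A stripped-down example of the kind of baseline-and-deviation check described for repeating earthquake sequences is given below: compute the mean recurrence interval and its coefficient of variation (CV) for a sequence, then flag a newly associated member whose interval departs from the baseline by more than a chosen number of standard deviations. The alerting rule and threshold are assumptions for illustration, not the DD-RT system's actual logic.

    import numpy as np

    def recurrence_baseline(event_times):
        # Baseline statistics for one repeating-earthquake sequence
        # (event_times in years, any order).
        t = np.sort(np.asarray(event_times, dtype=float))
        intervals = np.diff(t)
        mean_ri = intervals.mean()
        cv = intervals.std(ddof=1) / mean_ri
        return mean_ri, cv

    def deviates_from_baseline(last_event, new_event, mean_ri, cv, n_sigma=2.0):
        # Hypothetical alerting rule: flag an interval outside
        # mean_ri +/- n_sigma * (cv * mean_ri).
        interval = new_event - last_event
        return abs(interval - mean_ri) > n_sigma * cv * mean_ri

    # Example: a sequence recurring every ~2 yr; a 5 yr gap before the next
    # member event would be flagged against the baseline.
    times = [2001.1, 2003.0, 2005.2, 2007.1, 2009.0]
    mean_ri, cv = recurrence_baseline(times)
    print(deviates_from_baseline(times[-1], 2014.0, mean_ri, cv))   # True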

  18. Geodetic characteristic of the postseismic deformation following the interplate large earthquake along the Japan Trench (Invited)

    NASA Astrophysics Data System (ADS)

    Ohta, Y.; Hino, R.; Ariyoshi, K.; Matsuzawa, T.; Mishina, M.; Sato, T.; Inazu, D.; Ito, Y.; Tachibana, K.; Demachi, T.; Miura, S.

    2013-12-01

    On March 9, 2011 at 2:45 (UTC), an M7.3 interplate earthquake (hereafter foreshock) occurred ~45 km northeast of the epicenter of the M9.0 2011 Tohoku earthquake. This foreshock preceded the 2011 Tohoku earthquake by 51 hours. Ohta et al. (2012, GRL) estimated the coseismic slip and postseismic afterslip distributions based on a dense GPS network and ocean-bottom pressure gauge sites. They found the afterslip distribution was mainly concentrated in the up-dip extension of the coseismic slip. The coseismic slip and afterslip distribution of the foreshock were also located in the slip deficit region (20-40 m slip) of the coseismic slip of the M9.0 mainshock. The slip amount for the afterslip is roughly consistent with that determined by repeating earthquake analysis carried out in a previous study (Kato et al., 2012, Science). The estimated moment release for the afterslip reached magnitude 6.8, even within a short time period of 51 hours. They also pointed out that a volumetric strainmeter time series suggests that this event advanced with a rapid decay time constant (4.8 h) compared with other typical large earthquakes. The decay time constant of the afterslip may reflect the frictional property of the plate interface, especially effective normal stress controlled by fluid. To verify the short decay time constant of the foreshock, we investigated the postseismic deformation characteristics following the 1989 and 1992 Sanriku-Oki earthquakes (M7.1 and M6.9), the 2003 and 2005 Miyagi-Oki earthquakes (M6.8 and M7.2), and the 2008 Fukushima-Oki earthquake (M6.9). We used a four-component extensometer at Miyako (39.59N, 141.98E) on the Sanriku coast for the 1989 and 1992 events. For the 2003, 2005, and 2008 events, we used volumetric strainmeters at Kinka-zan (38.27N, 141.58E) and Enoshima (38.27N, 141.60E). To extract the characteristics of the postseismic deformation, we fitted a logarithmic function. The estimated decay time constants for each earthquake fell in a similar range (1
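    Extracting a decay time constant by fitting a logarithmic function can be done with a standard least-squares routine. The sketch below fits u(t) = a log(1 + t/τ) + c to a synthetic strain time series; the functional form is the generic afterslip logarithm, and the synthetic τ of 4.8 h simply echoes the value quoted for the foreshock, so the numbers are illustrative rather than taken from the data.

    import numpy as np
    from scipy.optimize import curve_fit

    def afterslip_log(t, a, tau, c):
        # Generic logarithmic postseismic decay; the exact parameterization
        # used in the study is not specified in the abstract.
        return a * np.log(1.0 + t / tau) + c

    # Synthetic strain record (hours after the mainshock) with noise,
    # generated with tau = 4.8 h to mimic the rapid decay of the foreshock.
    rng = np.random.default_rng(2)
    t_obs = np.linspace(0.1, 51.0, 200)              # 51 h of postseismic data
    u_obs = afterslip_log(t_obs, 1.0, 4.8, 0.0) + 0.02 * rng.standard_normal(t_obs.size)

    popt, pcov = curve_fit(afterslip_log, t_obs, u_obs, p0=(1.0, 1.0, 0.0))
    a_fit, tau_fit, c_fit = popt
    print(f"decay time constant ~ {tau_fit:.2f} h")   # close to 4.8 h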

  19. Repeating and not so Repeating Large Earthquakes in the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Singh, S.; Iglesias, A.; Perez-Campos, X.

    2013-12-01

    The rupture area and recurrence interval of large earthquakes in the Mexican subduction zone are relatively small, and almost the entire length of the zone has experienced a large (Mw≥7.0) earthquake in the last 100 years (Singh et al., 1981). Several segments have experienced multiple large earthquakes in this time period. However, as the rupture areas of events prior to 1973 are only approximately known, the recurrence periods are uncertain. Large earthquakes occurred in the Ometepec, Guerrero, segment in 1937, 1950, 1982 and 2012 (Singh et al., 1981). In 1982, two earthquakes (Ms 6.9 and Ms 7.0) occurred about 4 hours apart, one apparently downdip from the other (Astiz & Kanamori, 1984; Beroza et al. 1984). The 2012 earthquake on the other hand had a magnitude of Mw 7.5 (globalcmt.org), breaking approximately the same area as the 1982 doublet, but with a total scalar moment about three times larger than the 1982 doublet combined. It therefore seems that 'repeat earthquakes' in the Ometepec segment are not necessarily very similar to one another. The Central Oaxaca segment broke in large earthquakes in 1928 (Mw7.7) and 1978 (Mw7.7). Seismograms for the two events, recorded at the Wiechert seismograph in Uppsala, show remarkable similarity, suggesting that in this area large earthquakes can repeat. The extent to which the near-trench part of the fault plane participates in the ruptures is not well understood. In the Ometepec segment, the updip portion of the plate interface broke during the 25 Feb 1996 earthquake (Mw7.1), which was a slow earthquake and produced anomalously low PGAs (Iglesias et al., 2003). Historical records indicate that a great tsunamigenic earthquake, M~8.6, occurred in the Oaxaca region in 1787, breaking the Central Oaxaca segment together with several adjacent segments (Suarez & Albini 2009). Whether the updip portion of the fault broke in this event remains speculative, although plausible based on the large tsunami. Evidence from the

  20. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  1. Dynamic Response and Ground-Motion Effects of Building Clusters During Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Isbiliroglu, Y. D.; Taborda, R.; Bielak, J.

    2012-12-01

    The objective of this study is to analyze the response of building clusters during earthquakes, the effect that they have on the ground motion, and how individual buildings interact with the surrounding soil and with each other. We conduct a series of large-scale, physics-based simulations that synthesize the earthquake source and the response of entire building inventories. The configuration of the clusters, defined by the total number of buildings, their number of stories, dynamic properties, and spatial distribution and separation, is varied for each simulation. In order to perform these simulations efficiently while recurrently modifying these characteristics without redoing the entire "source to building structure" simulation every time, we use the Domain Reduction Method (DRM). The DRM is a modular two-step finite-element methodology for modeling wave propagation problems in regions with localized features. It allows one to store and reuse the background motion excitation of subdomains without loss of information. Buildings are included in the second step of the DRM. Each building is represented by a block model composed of additional finite elements in full contact with the ground. These models are adjusted to emulate the general geometric and dynamic properties of real buildings. We conduct our study in the greater Los Angeles basin, using the main shock of the 1994 Northridge earthquake for frequencies up to 5 Hz. In the first step of the DRM we use a domain of 82 km x 82 km x 41 km. Then, for the second step, we use a smaller sub-domain of 5.12 km x 5.12 km x 1.28 km, with the buildings. The results suggest that site-city interaction effects are more prominent for building clusters in soft-soil areas. These effects consist of changes in the amplitude of the ground motion and the dynamic response of the buildings. The simulations are done using Hercules, the parallel octree-based finite-element earthquake simulator developed by the Quake Group at Carnegie

  2. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  3. Comparison between scaling law and nonparametric Bayesian estimate for the recurrence time of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2009-04-01

    According to the unified scaling theory, the probability distribution function of the recurrence time T is a scaled version of a base function, and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and at different scale levels, the best model is the (truncated) generalized gamma distribution for Corral (2005) and the Weibull distribution for German (2006). The scaling approach should overcome the difficulty of estimating distribution functions over small areas, but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution, we have evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then gathered them into eight tectonically coherent regions, each dominated by a well-characterized geodynamic process. To address problems such as the paucity of data, the presence of outliers, and uncertainty in the choice of the functional expression for the distribution of T, we have followed a nonparametric approach (Rotondi (2009)) in which: (a) maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; (c) the Bayesian methodology allows different information sources to be exploited, so that the model fit may be good even for scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approaches. References: Corral A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89
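    The scaling hypothesis being tested can be sketched numerically: divide inter-event times by their mean so that the mean acts as the scale parameter, pool the rescaled intervals, and fit candidate base functions by maximum likelihood. The snippet below compares a gamma fit (a stand-in for Corral's truncated generalized gamma) with a Weibull fit on synthetic data; it illustrates the parametric scaling approach only, not the nonparametric Bayesian estimator developed in the paper, and all catalog numbers are invented.

    import numpy as np
    from scipy import stats

    def rescaled_intervals(event_times):
        # Inter-event times divided by their mean, so the mean recurrence
        # acts as the scale parameter of the distribution.
        dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
        return dt / dt.mean()

    # Pool rescaled intervals from several synthetic "regions" and compare two
    # candidate base functions by their maximized log-likelihoods.
    rng = np.random.default_rng(3)
    pooled = np.concatenate([rescaled_intervals(np.cumsum(rng.gamma(0.7, 1.0, 200)))
                             for _ in range(8)])

    gamma_params = stats.gamma.fit(pooled, floc=0.0)
    weibull_params = stats.weibull_min.fit(pooled, floc=0.0)
    ll_gamma = stats.gamma.logpdf(pooled, *gamma_params).sum()
    ll_weibull = stats.weibull_min.logpdf(pooled, *weibull_params).sum()
    print("log-likelihoods:", ll_gamma, ll_weibull)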

  4. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than by exponential distributions, although they exhibit very high b values ≥ ~5. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (Mw < 1.5, Mw ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by

  5. Seismicity trends and potential for large earthquakes in the Alaska-Aleutian region

    USGS Publications Warehouse

    Bufe, C.G.; Nishenko, S.P.; Varnes, D.J.

    1994-01-01

    The high likelihood of a gap-filling thrust earthquake in the Alaska subduction zone within this decade is indicated by two independent methods: analysis of historic earthquake recurrence data and time-to-failure analysis applied to recent decades of instrumental data. Recent (May 1993) earthquake activity in the Shumagin Islands gap is consistent with previous projections of increases in seismic release, indicating that this segment, along with the Alaska Peninsula segment, is approaching failure. Based on this pattern of accelerating seismic release, we project the occurrence of one or more M≥7.3 earthquakes in the Shumagin-Alaska Peninsula region during 1994-1996. Different segments of the Alaska-Aleutian seismic zone behave differently in the decade or two preceding great earthquakes, some showing acceleration of seismic release (type "A" zones), while others show deceleration (type "D" zones). The largest Alaska-Aleutian earthquakes, in 1957, 1964, and 1965, originated in zones that exhibit type D behavior. Type A zones currently showing accelerating release are the Shumagin, Alaska Peninsula, Delarof, and Kommandorski segments. Time-to-failure analysis suggests that the large earthquakes could occur in these latter zones within the next few years. © 1994 Birkhäuser Verlag.

  6. Observational constraints on earthquake source scaling: Understanding the limits in resolution

    USGS Publications Warehouse

    Hough, S.E.

    1996-01-01

    I examine the resolution of the type of stress drop estimates that have been used to place observational constraints on the scaling of earthquake source processes. I first show that apparent stress and Brune stress drop are equivalent to within a constant given any source spectral decay between ω^-1.5 and ω^-3 (i.e., any plausible value), and so consistent scaling is expected for the two estimates. I then discuss the resolution and scaling of Brune stress drop estimates, in the context of empirical Green's function results from recent earthquake sequences, including the 1992 Joshua Tree, California, mainshock and its aftershocks. I show that no definitive scaling of stress drop with moment is revealed over the moment range 10^19-10^25; within this sequence, however, there is a tendency for moderate-sized (M 4-5) events to be characterized by high stress drops. However, well-resolved results for recent M > 6 events are inconsistent with any extrapolated stress increase with moment for the aftershocks. Focusing on corner frequency estimates for smaller (M < 3.5) events, I show that resolution is extremely limited even after empirical Green's function deconvolutions. A fundamental limitation to resolution is the paucity of good signal-to-noise at frequencies above 60 Hz, a limitation that will affect nearly all surficial recordings of ground motion in California and many other regions. Thus, while the best available observational results support a constant stress drop for moderate- to large-sized events, very little robust observational evidence exists to constrain the quantities that bear most critically on our understanding of source processes: stress drop values and stress drop scaling for small events.
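    For context, the two quantities being compared can be written down in a few lines: a Brune-type stress drop follows from the seismic moment and corner frequency through the source radius, and the apparent stress is the rigidity times the ratio of radiated energy to moment. The constants below (shear-wave speed, the 0.37 Brune coefficient, rigidity) are standard textbook values used for illustration, not values adopted in the study, and the example event is invented.

    import numpy as np

    def brune_stress_drop(M0, fc, beta=3500.0, k=0.37):
        # Brune-type stress drop from moment M0 (N m) and corner frequency
        # fc (Hz): source radius r = k * beta / fc, stress drop 7 M0 / (16 r^3).
        # Other source models change the coefficient k by tens of percent.
        r = k * beta / fc
        return 7.0 * M0 / (16.0 * r**3)

    def apparent_stress(Er, M0, mu=3.0e10):
        # Apparent stress = rigidity * (radiated energy / seismic moment).
        return mu * Er / M0

    # Hypothetical M ~4.3 event: M0 = 3.5e15 N m, fc = 2 Hz, Er = 5e10 J.
    M0 = 3.5e15
    print(brune_stress_drop(M0, fc=2.0) / 1e6, "MPa stress drop")   # ~5-6 MPa
    print(apparent_stress(5.0e10, M0) / 1e6, "MPa apparent stress")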

  7. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-08-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by realizing that not all earthquakes in the region need belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well-determined confidence bounds. This paper presents improvements on the above-mentioned sequence identification and forecasting method: the influence of earthquake size on the spectral analysis and its importance in identifying semi-periodic events are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of non-randomness probability is used; a better estimation of appropriate upper limit uncertainties to use in forecasts is introduced; and Bayesian analysis is applied to evaluate the posterior forecast performance. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0 and high non-randomness probability was identified. We compare the results of this analysis with those of the unlabeled point process analysis.
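    The core of the identification step, stripped of the improvements the paper describes, is a Fourier-type scan of the occurrence times: for each trial period, measure how tightly the (weighted) event phases cluster. The sketch below does this for four synthetic M ≥ 8 occurrence times with roughly 37 yr spacing; the weighting slot is where magnitude 'labels' would enter, and the whole thing is a simplified illustration rather than the published method.

    import numpy as np

    def labeled_periodicity_spectrum(times, weights, trial_periods):
        # For each trial period T, compute the weight-normalized resultant
        # length of the phases 2*pi*t/T; values near 1 mean the weighted
        # events fall near a common phase, i.e. a semi-periodic sequence.
        times = np.asarray(times, float)
        weights = np.asarray(weights, float)
        spec = []
        for T in trial_periods:
            phases = np.exp(2j * np.pi * times / T)
            spec.append(np.abs(np.sum(weights * phases)) / weights.sum())
        return np.array(spec)

    # Four hypothetical M >= 8 events spaced roughly 37 yr apart, equal weights.
    t = np.array([1900.0, 1938.5, 1974.0, 2011.0])
    periods = np.linspace(10.0, 80.0, 701)
    spec = labeled_periodicity_spectrum(t, np.ones_like(t), periods)
    print(periods[spec.argmax()])    # peaks near the ~37 yr spacing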

  8. Maximum Magnitude and Recurrence Interval for the Large Earthquakes in the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Hu, C.

    2012-12-01

    Maximum magnitude and recurrence interval of the large earthquakes are key parameters for seismic hazard assessment in the central and eastern United States. Determination of these two parameters is quite difficult in the region, however. For example, the estimated maximum magnitudes of the 1811-12 New Madrid sequence are in the range of M6.6 to M8.2, whereas the estimated recurrence intervals are in the range of about 500 to several thousand years. These large variations of maximum magnitude and recurrence interval for the large earthquakes lead to significant variation of estimated seismic hazards in the central and eastern United States. There are several approaches being used to estimate the magnitudes and recurrence intervals, such as historical intensity analysis, geodetic data analysis, and paleo-seismic investigation. We will discuss the approaches that are currently being used to estimate maximum magnitude and recurrence interval of the large earthquakes in the central United States.

  9. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  10. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large- from small-scale instabilities and (ii) to study modes of wave number q at arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative-eddy-viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.

  11. Triggering of tsunamigenic aftershocks from large strike-slip earthquakes: Analysis of the November 2000 New Ireland earthquake sequence

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2005-10-01

    The November 2000 New Ireland earthquake sequence started with an Mw = 8.0 left-lateral main shock on 16 November and was followed by a series of aftershocks with primarily thrust mechanisms. The earthquake sequence was associated with a locally damaging tsunami on the islands of New Ireland and nearby New Britain, Bougainville, and Buka. Results from numerical tsunami-propagation models of the main shock and two of the largest thrust aftershocks (Mw > 7.0) indicate that the largest tsunami was caused by an aftershock located near the southeastern termination of the main shock, off the southern tip of New Ireland (Aftershock 1). Numerical modeling and tide gauge records at regional and far-field distances indicate that the main shock also generated tsunami waves. Large horizontal displacements associated with the main shock in regions of steep bathymetry accentuated tsunami generation for this event. Most of the damage on Bougainville and Buka Islands was caused by focusing and amplification of tsunami energy from a ridge wave between the source region and these islands. Modeling of changes in the Coulomb failure stress field caused by the main shock indicates that Aftershock 1 was likely triggered by static stress changes, provided the fault was on or synthetic to the New Britain interplate thrust as specified by the Harvard CMT mechanism. For other possible focal mechanisms of Aftershock 1 and the regional occurrence of thrust aftershocks in general, evidence for static stress change triggering is not as clear. Other triggering mechanisms such as changes in dynamic stress may also have been important. The 2000 New Ireland earthquake sequence provides evidence that tsunamis caused by thrust aftershocks can be triggered by large strike-slip earthquakes. Similar tectonic regimes that include offshore accommodation structures near large strike-slip faults are found in southern California, the Sea of Marmara, Turkey, along the Queen Charlotte fault in British Columbia

  12. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller-magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear probability density of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models. PMID:23152938

  13. Constraining depth range of S wave velocity decrease after large earthquakes near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Wu, Chunquan; Delorey, Andrew; Brenguier, Florent; Hadziioannou, Celine; Daub, Eric G.; Johnson, Paul

    2016-06-01

    We use noise correlation and surface wave inversion to measure the S wave velocity changes at different depths near Parkfield, California, after the 2003 San Simeon and 2004 Parkfield earthquakes. We process continuous seismic recordings from 13 stations to obtain the noise cross-correlation functions and measure the Rayleigh wave phase velocity changes over six frequency bands. We then invert the Rayleigh wave phase velocity changes using a series of sensitivity kernels to obtain the S wave velocity changes at different depths. Our results indicate that the S wave velocity decreases caused by the San Simeon earthquake are relatively small (~0.02%) and access depths of at least 2.3 km. The S wave velocity decreases caused by the Parkfield earthquake are larger (~0.2%), and access depths of at least 1.2 km. Our observations can be best explained by material damage and healing resulting mainly from the dynamic stress perturbations of the two large earthquakes.
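    The depth inversion described here amounts to a small linear system: observed fractional phase-velocity changes in several frequency bands equal a kernel matrix times the fractional S-velocity changes in depth layers. The sketch below solves such a system by damped least squares; the kernel values, layer depths, damping, and data vector are all placeholders invented so that the example runs, not quantities from the study.

    import numpy as np

    # Hypothetical sensitivity kernels K[i, j]: sensitivity of the Rayleigh
    # phase-velocity change in frequency band i to the S-velocity change in
    # depth layer j. Real kernels would come from a 1-D velocity model.
    layers_km = np.array([0.5, 1.5, 3.0, 6.0])            # layer-center depths
    K = np.array([[0.60, 0.30, 0.08, 0.02],               # highest-frequency band
                  [0.40, 0.40, 0.15, 0.05],
                  [0.20, 0.35, 0.30, 0.15],
                  [0.10, 0.25, 0.35, 0.30],
                  [0.05, 0.15, 0.35, 0.45],
                  [0.02, 0.10, 0.30, 0.58]])              # lowest-frequency band

    dcc_obs = np.array([-0.15, -0.12, -0.08, -0.05, -0.03, -0.02])  # dc/c, percent

    # Damped least squares: minimize |K m - d|^2 + eps^2 |m|^2 to stabilize the
    # inversion when the kernels overlap strongly.
    eps = 0.1
    A = np.vstack([K, eps * np.eye(K.shape[1])])
    b = np.concatenate([dcc_obs, np.zeros(K.shape[1])])
    dvs, *_ = np.linalg.lstsq(A, b, rcond=None)
    for z, m in zip(layers_km, dvs):
        print(f"depth {z:4.1f} km : dVs/Vs = {m:+.3f} %")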

  14. Seismic sequences, swarms, and large earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Amato, Alessandro; Piana Agostinetti, Nicola; Selvaggi, Giulio; Mele, Franco

    2016-04-01

    In recent years, particularly after the 2009 L'Aquila earthquake and the 2012 Emilia sequence, the issue of earthquake predictability has been at the center of the discussion in Italy, not only within the scientific community but also in the courtrooms and in the media. Among the noxious effects of the L'Aquila trial there was an increase in scaremongering and false alerts during earthquake sequences and swarms, culminating in a groundless one-night evacuation in northern Tuscany in 2013. We have analyzed the Italian seismicity of the last decades in order to determine the rate of seismic sequences and investigate some of their characteristics, including frequencies, min/max durations, maximum magnitudes, main shock timing, etc. Selecting only sequences with an equivalent magnitude of 3.5 or above, we find an average of 30 sequences/year. Although there is an extreme variability in the examined parameters, we could set some boundaries, useful to obtain some quantitative estimates of the ongoing activity. In addition, the historical catalogue is rich in complex sequences in which one main shock is followed, seconds, days or months later, by another event with similar or higher magnitude. We also analysed the Italian CPT11 catalogue (Rovida et al., 2011) between 1950 and 2006 to highlight the foreshock-mainshock event couples that were suggested in previous studies to exist (e.g. six couples, Marzocchi and Zhuang, 2011). Moreover, to investigate the probability of having random foreshock-mainshock couples over the investigated period, we produced 1000 synthetic catalogues, randomly distributing in time the events that occurred in that period. Preliminary results indicate that: (1) all but one of the so-called foreshock-mainshock pairs found in Marzocchi and Zhuang (2011) fall inside previously well-known and studied seismic sequences (Belice, Friuli and Umbria-Marche), meaning that suggested foreshocks are also aftershocks; and (2) due to the high rate of the Italian
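    The synthetic-catalogue test lends itself to a short Monte Carlo sketch: count candidate foreshock-mainshock couples in the observed catalogue with some pairing rule, repeat the count on catalogues whose occurrence times are randomized over the same period, and read off how often chance alone matches the observed count. The pairing rule below (a larger event within 10 days, ignoring location) and all catalogue numbers are assumptions for illustration; the study's actual selection criteria differ.

    import numpy as np

    def count_foreshock_mainshock_pairs(times, mags, dt_days=10.0, dm=0.5):
        # Count events followed within dt_days by an event at least dm
        # magnitude units larger (hypothetical pairing rule, no spatial test).
        order = np.argsort(times)
        t, m = np.asarray(times)[order], np.asarray(mags)[order]
        n_pairs = 0
        for i in range(len(t)):
            later = (t > t[i]) & (t <= t[i] + dt_days) & (m >= m[i] + dm)
            n_pairs += int(later.any())
        return n_pairs

    # Synthetic stand-in for a 1950-2006 catalogue (times in days, G-R-like
    # magnitudes); compare with 1000 catalogues of randomized occurrence times.
    rng = np.random.default_rng(4)
    obs_t = rng.uniform(0, 365.25 * 56, 300)
    obs_m = 4.0 + rng.exponential(0.4, 300)

    observed = count_foreshock_mainshock_pairs(obs_t, obs_m)
    random_counts = [count_foreshock_mainshock_pairs(
                         rng.uniform(obs_t.min(), obs_t.max(), obs_t.size), obs_m)
                     for _ in range(1000)]
    p_value = np.mean(np.asarray(random_counts) >= observed)
    print(observed, p_value)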

  15. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  16. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 CAL yr. BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely pre-historic, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (earthquake not associated with a known fault) for normal faults in the Intermountain West.

  17. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  18. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. With the benefit of hindsight on earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km [1]. Here we present results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer

  19. Introduction and Overview: Counseling Psychologists' Roles, Training, and Research Contributions to Large-Scale Disasters

    ERIC Educational Resources Information Center

    Jacobs, Sue C.; Leach, Mark M.; Gerstein, Lawrence H.

    2011-01-01

    Counseling psychologists have responded to many disasters, including the Haiti earthquake, the 2001 terrorist attacks in the United States, and Hurricane Katrina. However, as a profession, their responses have been localized and nonsystematic. In this first of four articles in this contribution, "Counseling Psychology and Large-Scale Disasters,…

  20. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  1. Time-Reversal Imaging of seismic sources and application to recent large Earthquakes

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Larmat, C.; Fink, M.; Capdeville, Y.; Tourin, A.

    2006-12-01

    The occurrence of the disastrous Sumatra-Andaman earthquake on December 26, 2004 makes it necessary to develop innovative techniques for studying the complex spatio-temporal characteristics of rupture. The concept of time reversal (hereafter referred to as TR) was previously successfully applied to acoustic waves in many fields such as medical imaging, underwater acoustics and non-destructive testing. The increasing power of computers and numerical methods (such as spectral element methods) enables one to simulate more and more accurately the propagation of seismic waves in heterogeneous media and to develop new applications, in particular time reversal in the three-dimensional Earth. We present here the first applications of TR at the global scale, with associated reverse movies of seismic wave propagation obtained by sending back time-reversed seismograms. We show that seismic wave energy is refocused at the right location and the right time of the earthquake. When TR is applied to the Sumatra-Andaman earthquake (26 December 2004), the migration of the rupture from the south towards the north is retrieved. All corresponding movies can be downloaded at the following webpage: http://www.gps.caltech.edu/~carene Other applications to recent smaller earthquakes will also be shown. Therefore, the technique of TR is potentially interesting for automatically locating earthquakes in space and time and for constraining the spatio-temporal history of complex earthquakes.

  2. Typical Scenario of Preparation, Implementation, and Aftershock Sequence of a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, Mikhail

    2016-04-01

    We have attempted to construct and examine a typical scenario of large earthquake occurrence. The Harvard GCMT seismic moment catalog was used to construct the large earthquake generalized space-time vicinity (LEGV) and to investigate seismicity behavior within it. The LEGV was composed of earthquakes falling into the zone of influence of any of a considerable number (100, 300, or 1,000) of the largest earthquakes. The LEGV construction is intended to enlarge the available statistics, diminish the strong random component, and thereby reveal the typical features of pre- and post-shock seismic activity in more detail. As a result, the character of fore- and aftershock cascades could be examined in more detail than is possible without the LEGV approach. It was also shown that the mean earthquake magnitude tends to increase, while b-values, mean mb/mw ratios, apparent stress values, and mean depth tend to decrease. The amplitudes of all these anomalies increase on approach to the moment of the generalized large earthquake (GLE), scaling as the logarithm of the time interval from the GLE occurrence. Most of the discussed anomalies agree well with a common scenario of the development of instability. Besides such precursors of a common character, one earthquake-specific precursor was found: the observed decrease of mean earthquake depth during large earthquake preparation probably points to the involvement of deep fluids in the process. The typical features of the development of shear instability revealed in the LEGV agree well with results obtained in laboratory acoustic emission (AE) studies. The majority of the revealed anomalies appear to be of secondary character and are connected mainly with an increase in mean earthquake magnitude in the LEGV. The mean magnitude increase was shown to be connected mainly with a decrease in the proportion of moderate-size events (Mw 5.0 - 5.5) in the closer GLE vicinity. We believe that this deficit of moderate-size events hardly can be

  3. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  4. W phase source inversion using high-rate regional GPS data for large earthquakes

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F.; Melgar, D.; Benavente, R.; Geng, J.; Barrientos, S.; Campos, J.

    2016-04-01

    W phase moment tensor inversion has proven to be a reliable method for rapid characterization of large earthquakes. For global purposes it is used at the United States Geological Survey, Pacific Tsunami Warning Center, and Institut de Physique du Globe de Strasbourg. These implementations provide moment tensors within 30-60 min after the origin time of moderate and large worldwide earthquakes. Currently, the method relies on broadband seismometers, which clip in the near field. To ameliorate this, we extend the algorithm to regional records from high-rate GPS data and retrospectively apply it to six large earthquakes that occurred in the past 5 years in areas with relatively dense station coverage. These events show that the solutions could potentially be available 4-5 min from origin time. Continuously improving GPS station availability and real-time positioning solutions will provide significant enhancements to the algorithm.

  5. The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine whether this law is valid for 0 < Ms ≤ 5.5, using earthquakes occurring in different regions. A comparison of the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to the conclusion that the law is still valid for earthquakes with 0 < Ms ≤ 5.5.
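
    For reference, the Gutenberg-Richter energy-magnitude relation discussed here is commonly written as log10(Es) = 11.8 + 1.5 Ms with Es in erg. The short sketch below simply evaluates it across the 0 < Ms ≤ 5.5 range examined in the study; the magnitude values are arbitrary examples, not data from the paper.

        def radiated_energy_ergs(ms):
            """Gutenberg-Richter (1956) energy-magnitude law: log10(Es) = 11.8 + 1.5*Ms, Es in erg."""
            return 10.0 ** (11.8 + 1.5 * ms)

        for ms in (0.5, 2.0, 4.0, 5.5):
            es_erg = radiated_energy_ergs(ms)
            print(f"Ms {ms:3.1f}: Es ~ {es_erg:.2e} erg ({es_erg * 1e-7:.2e} J)")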

  6. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Astrophysics Data System (ADS)

    Eimer, Joseph; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Marriage, T.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an array of telescopes designed to search for the signature of inflation in the polarization of the Cosmic Microwave Background (CMB). By combining the strategy of targeting large scales (>2 deg) with novel front-end polarization modulation and novel detectors at multiple frequencies, CLASS will pioneer a new frontier in ground-based CMB polarization surveys. In this talk, I give an overview of the CLASS instrument, survey, and outlook on setting important new limits on the energy scale of inflation.

  7. 3D Spontaneous Rupture Models of Large Earthquakes on the Hayward Fault, California

    NASA Astrophysics Data System (ADS)

    Barall, M.; Harris, R. A.; Simpson, R. W.

    2008-12-01

    We are constructing 3D spontaneous rupture computer simulations of large earthquakes on the Hayward and central Calaveras faults. The Hayward fault has a geologic history of producing many large earthquakes (Lienkaemper and Williams, 2007), with its most recent large event a M6.8 earthquake in 1868. Future large earthquakes on the Hayward fault are not only possible, but probable (WGCEP, 2008). Our numerical simulation efforts use information about the complex 3D fault geometry of the Hayward and Calaveras faults and information about the geology and physical properties of the rocks that surround the Hayward and Calaveras faults (Graymer et al., 2005). Initial stresses on the fault surface are inferred from geodetic observations (Schmidt et al., 2005), seismological studies (Hardebeck and Aron, 2008), and from rate-and- state simulations of the interseismic interval (Stuart et al., 2008). In addition, friction properties on the fault surface are inferred from laboratory measurements of adjacent rock types (Morrow et al., 2008). We incorporate these details into forward 3D computer simulations of dynamic rupture propagation, using the FaultMod finite-element code (Barall, 2008). The 3D fault geometry is constructed using a mesh-morphing technique, which starts with a vertical planar fault and then distorts the entire mesh to produce the desired fault geometry. We also employ a grid-doubling technique to create a variable-resolution mesh, with the smallest elements located in a thin layer surrounding the fault surface, which provides the higher resolution needed to model the frictional behavior of the fault. Our goals are to constrain estimates of the lateral and depth extent of future large Hayward earthquakes, and to explore how the behavior of large earthquakes may be affected by interseismic stress accumulation and aseismic slip.

  8. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires only order n rather than the usual order n² floating point operations.
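
    ARPACK implements implicitly restarted Arnoldi/Lanczos iterations and is exposed in SciPy through scipy.sparse.linalg.eigs and eigsh. The sketch below applies that wrapper to an assumed sparse test matrix to compute a few eigenpairs, which is the typical usage pattern the package is designed for; the matrix and parameters are illustrative only.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh   # ARPACK-based implicitly restarted Lanczos

        # Assumed example: a large sparse symmetric matrix (1-D Laplacian stencil),
        # for which the matrix-vector product costs order n operations.
        n = 10000
        main = 2.0 * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

        # Ask ARPACK for the six eigenvalues closest to zero (shift-invert mode).
        vals, vecs = eigsh(A, k=6, sigma=0.0)
        print(np.sort(vals))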

  9. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  10. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  11. Unexpected geological impacts associated with large earthquakes and tsunamis in northern Honshu, Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Goff, J. R.

    2013-12-01

    Palaeoseismic research in areas adjacent to subduction zones has traditionally been concerned with identifying geological or geomorphological features associated with the immediate effects of past earthquakes, such as tsunamis, uplift or subsidence, with the aim of estimating earthquake magnitude and/or frequency. However, there are also other features in the landscape that can offer insights into the past earthquake and tsunami history of a region. The study of coastal dune systems as palaeoseismic indicators is still in its infancy, but can provide useful evidence of past large earthquakes and, by association, the tsunamis they generated. On a catchment-wide basis, past research has linked a sequence of environmental changes such as forest disturbance, landslides, river aggradation and rapid coastal dune building as geomorphological after-effects (in addition to tsunamis) of a large earthquake. In this model, large pulses of sediment created by co-seismic landsliding in the upper catchment are moved rapidly to the coast, where they leave a clear signature in the landscape. Coarser sediments form aggradation surfaces and finer sediments form a new coastal dune or beach ridge. Coastal dune ridge systems are not exclusively associated with seismically active areas, but where they do occur in such places their potential use as palaeoseismic indicators is often ignored. Data are first presented on the beach ridges of the Sendai Plain, where investigations have been carried out following the 2011 Tohoku-oki earthquake and tsunami. A wider regional picture of palaeoseismicity, palaeotsunamis and beach ridge formation is then discussed. Existing data indicate a strong correlation between past earthquakes and the timing of beach ridge formation over the past 5,000 years; however, it seems likely that a far more detailed record is still preserved in Japan's beach ridges, and suggestions are offered on directions for future research in this area.

  12. Large-scale motions in the universe

    SciTech Connect

    Rubin, V.C.; Coyne, G.V.

    1988-01-01

    The present conference on the large-scale motions of the universe discusses topics on the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low Z. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and n-body simulations of universe dominated by cold dark matter.

  13. Observations of large earthquakes in the Mexican subduction zone over 110 years

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, Vala; Krishna Singh, Shri; Martínez-Peláez, Liliana; Garza-Girón, Ricardo; Lund, Björn; Ji, Chen

    2016-04-01

    Fault slip during an earthquake is observed to be highly heterogeneous, with areas of large slip interspersed with areas of smaller or even no slip. The cause of the heterogeneity is debated. One hypothesis is that the frictional properties on the fault are heterogeneous. The parts of the rupture surface that have large slip during earthquakes are coupled more strongly, whereas the areas in between and around them creep continuously or episodically. The continuously or episodically creeping areas can partly release strain energy through aseismic slip during the interseismic period, resulting in relatively lower prestress than on the coupled areas. This would lead to subsequent earthquakes having large slip in the same place, or persistent asperities. A second hypothesis is that in the absence of creeping sections, the prestress is governed mainly by the accumulated stress change associated with previous earthquakes. Assuming homogeneous frictional properties on the fault, a larger prestress results in larger slip, i.e. the next earthquake may have large slip where there was little or no slip in the previous earthquake, which translates to non-persistent asperities. The study of earthquake cycles is hampered by the short time period for which high-quality broadband seismological and accelerographic records, needed for detailed studies of slip distributions, are available. The earthquake cycle in the Mexican subduction zone is relatively short, with about 30 years between large events in many places. We are therefore entering a period for which we have good records for two subsequent events occurring in the same segment of the subduction zone. In this study we compare seismograms recorded either on the Wiechert seismograph or on a modern broadband seismometer located in Uppsala, Sweden, for subsequent earthquakes in the Mexican subduction zone rupturing the same patch. The Wiechert seismograph is unique in the sense that it recorded continuously for more than 80 years

  14. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.

  15. Earthquake geology of Kashmir Basin and its implications for future large earthquakes

    NASA Astrophysics Data System (ADS)

    Shah, A. A.

    2013-02-01

    Two major traces of active thrust faults were identified in the Kashmir Basin (KB) using satellite images and by mapping active geomorphic features. The ~N130°E strike of the mapped thrust faults is consistent with the regional ~NE-SW convergence along the Indian-Eurasian collision zone. The ~NE dipping thrust faults have uplifted the young alluvial fan surfaces at the SW side of the KB. This created a major tectono-geomorphic boundary along the entire strike length of the KB that is characterised by (1) a low relief with sediment-filled sluggish streams to the SE and (2) an uplifted region with actively flowing streams to the SW. The overall tectono-geomorphic expression suggests that recent activity along these faults has tilted the entire Kashmir valley towards the NE. Further, the Mw 7.6 earthquake, which struck Northern Pakistan and Kashmir on 8 October 2005, also suggests a similar strike and NE dipping fault plane, which could indicate that the KB fault is continuous over a distance of ~210 km and connects on the west with the Balakot Bagh fault. However, the geomorphic and structural evidence for such a structure is not very apparent to the north-west, which thus suggests that it is not a contiguous structure with the Balakot Bagh fault. Therefore, it is more likely that the KB fault is an independent thrust, a possible ramp on the Main Himalayan Thrust, which has uplifted the SW portion of the KB and drowned everything to the NE (e.g. Madden et al. 2011). Furthermore, it seems very likely that the KB fault could be a right-stepping segment of the Balakot Bagh fault, similar to the Riasi Thrust, as proposed by Thakur et al. (2010). The earthquake magnitude can be estimated from the fault rupture parameters (e.g. Wells and Coppersmith in Bull Seismol Soc Am 84:974-1002, 1994). Therefore, the total strike length of the mapped KB fault is ~120 km and, by assuming a dip of 29° (Avouac et al. in Earth Planet Sci Lett 249:514-528, 2006) and a down-dip limit
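
    The magnitude estimate sketched above follows the usual path from rupture geometry to seismic moment to moment magnitude, Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m. The sketch below applies that chain to the ~120 km fault length and 29° dip quoted in the abstract; the seismogenic depth, average slip, and rigidity are purely illustrative assumptions, not values from the paper.

        import math

        def moment_magnitude(length_km, depth_km, dip_deg, slip_m, mu_pa=3.0e10):
            """Mw from rupture geometry: M0 = mu * area * slip; Mw = (2/3)(log10 M0 - 9.1)."""
            width_m = (depth_km * 1e3) / math.sin(math.radians(dip_deg))  # down-dip width
            area_m2 = length_km * 1e3 * width_m
            m0 = mu_pa * area_m2 * slip_m
            return (2.0 / 3.0) * (math.log10(m0) - 9.1)

        # Abstract values: ~120 km strike length, 29 degree dip.
        # Assumed values: 15 km seismogenic depth, 4 m average slip, rigidity 30 GPa.
        print(round(moment_magnitude(length_km=120.0, depth_km=15.0, dip_deg=29.0, slip_m=4.0), 2))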

  16. Precursory measure of interoccurrence time associated with large earthquakes in the Burridge-Knopoff model

    SciTech Connect

    Hasumi, Tomohiro

    2008-11-13

    We studied the statistical properties of interoccurrence times, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the critical phase of a fault system in nature changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
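
    As a rough illustration of the fitting step, the survivor function of interoccurrence times can be fitted with a Zipf-Mandelbrot-type power law of the form S(t) = (1 + t/b)^(-a). The functional form, the synthetic data, and the parameter names below are assumptions for the sketch rather than the exact parameterization used in the BK-model study.

        import numpy as np
        from scipy.optimize import curve_fit

        def zipf_mandelbrot_survivor(t, a, b):
            """Assumed Zipf-Mandelbrot-type survivor function S(t) = (1 + t/b)**(-a)."""
            return (1.0 + t / b) ** (-a)

        # Synthetic interoccurrence times standing in for BK-model output.
        rng = np.random.default_rng(1)
        times = rng.pareto(1.5, size=5000) * 10.0

        # Empirical survivor function S(t) = P(interoccurrence time > t).
        t_sorted = np.sort(times)
        survivor = 1.0 - np.arange(1, len(t_sorted) + 1) / len(t_sorted)

        popt, _ = curve_fit(zipf_mandelbrot_survivor, t_sorted[:-1], survivor[:-1], p0=(1.0, 10.0))
        print("fitted a, b:", popt)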

  17. Relationship between accelerating seismicity and quiescence, two precursors to large earthquakes

    NASA Astrophysics Data System (ADS)

    Mignan, Arnaud; Di Giovambattista, Rita

    2008-08-01

    The Non-Critical Precursory Accelerating Seismicity Theory (PAST) has been proposed recently to explain the formation of accelerating seismicity (an increase of the a-value) observed before large earthquakes. In particular, it predicts that precursory accelerating seismicity should occur in the same spatiotemporal window as quiescence. In this first combined study we start by determining the spatiotemporal extent of the quiescence observed prior to the 1997 Mw = 6 Umbria-Marche earthquake, Italy, using the RTL (Region-Time-Length) algorithm. We then show that background events located in that spatiotemporal window form a clear acceleration, as expected by the Non-Critical PAST. This result is a step forward in the understanding of precursory seismicity, relating two of the principal patterns that can precede large earthquakes.

  18. Basin-scale transport of heat and fluid induced by earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Yuen; Wang, Lee-Ping; Manga, Michael; Wang, Chung-Ho; Chen, Chieh-Hung

    2013-08-01

    Large earthquakes are known to cause widespread changes in groundwater flow, yet their relation to subsurface transport is unknown. Here we report systematic changes in groundwater temperature after the 1999 Mw 7.6 Chi-Chi earthquake in central Taiwan, documented by a dense network of monitoring wells over a large (17,000 km²) alluvial fan near the epicenter. Analysis of the data reveals a hitherto unknown system of earthquake-triggered basin-wide groundwater flow, which scavenges geothermal heat from depth, changing groundwater temperature across the basin. The newly identified earthquake-triggered groundwater flow may have significant implications for postseismic groundwater supply and quality, contaminant transport, underground repository safety, and hydrocarbon production.

  19. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  20. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    USGS Publications Warehouse

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes, which occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75 g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly underestimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes. Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
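
    The 'kinked' magnitude distribution described for the active flank blocks can be made concrete with a small sketch: annual rates follow a Gutenberg-Richter relation with b of about 1.0 below the kink magnitude and about 0.6 above it, joined continuously at the kink. The a-value and kink magnitude below are illustrative placeholders, not values from the Hawaii hazard model.

        import numpy as np

        def kinked_gr_rate(m, a=4.0, m_kink=5.0, b_low=1.0, b_high=0.6):
            """Annual rate of events with magnitude >= m for a kinked Gutenberg-Richter
            distribution (b ~ 1.0 below the kink, ~ 0.6 above), continuous at m_kink."""
            m = np.asarray(m, dtype=float)
            rate_at_kink = 10.0 ** (a - b_low * m_kink)
            below = 10.0 ** (a - b_low * m)
            above = rate_at_kink * 10.0 ** (-b_high * (m - m_kink))
            return np.where(m <= m_kink, below, above)

        for m in (4.0, 5.0, 6.0, 7.0):
            print(f"M >= {m}: {float(kinked_gr_rate(m)):.3f} events/yr")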

  1. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.

  2. Relay chatter and operator response after a large earthquake: An improved PRA methodology with case studies

    SciTech Connect

    Budnitz, R.J.; Lambert, H.E.; Hill, E.E.

    1987-08-01

    The purpose of this project has been to develop and demonstrate improvements in the PRA methodology used for analyzing earthquake-induced accidents at nuclear power reactors. Specifically, the project addresses methodological weaknesses in the PRA systems analysis used for studying post-earthquake relay chatter and for quantifying human response under high stress. An improved PRA methodology for relay-chatter analysis is developed, and its use is demonstrated through analysis of the Zion-1 and LaSalle-2 reactors as case studies. This demonstration analysis is intended to show that the methodology can be applied in actual cases, and the numerical values of core-damage frequency are not realistic. The analysis relies on SSMRP-based methodologies and data bases. For both Zion-1 and LaSalle-2, assuming that loss of offsite power (LOSP) occurs after a large earthquake and that there are no operator recovery actions, the analysis finds very many combinations (Boolean minimal cut sets) involving chatter of three or four relays and/or pressure switch contacts. The analysis finds that the number of min-cut-set combinations is so large that there is a very high likelihood (of the order of unity) that at least one combination will occur after earthquake-caused LOSP. This conclusion depends in detail on the fragility curves and response assumptions used for chatter. Core-damage frequencies are calculated, but they are probably pessimistic because assuming zero credit for operator recovery is pessimistic. The project has also developed an improved PRA methodology for quantifying operator error under high-stress conditions such as after a large earthquake. Single-operator and multiple-operator error rates are developed, and a case study involving an 8-step procedure (establishing feed-and-bleed in a PWR after an earthquake-initiated accident) is used to demonstrate the methodology.
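
    The statement that at least one chatter combination is almost certain to occur follows from the standard union calculation over Boolean minimal cut sets. The sketch below evaluates it for a handful of hypothetical cut sets, assuming independent relay-chatter probabilities; the component names and numbers are invented for illustration and are not taken from the Zion-1 or LaSalle-2 analyses.

        import math

        # Hypothetical per-component chatter probabilities after the earthquake (illustrative only).
        chatter_p = {"R1": 0.3, "R2": 0.25, "R3": 0.4, "PS1": 0.2, "PS2": 0.35}

        # Hypothetical Boolean minimal cut sets: each is a combination of three or four
        # relays/pressure-switch contacts whose simultaneous chatter defeats the system.
        cut_sets = [("R1", "R2", "R3"), ("R1", "PS1", "PS2"), ("R2", "R3", "PS1", "PS2")]

        def cut_set_probability(cut_set):
            """Probability that every component in one minimal cut set chatters (independence assumed)."""
            return math.prod(chatter_p[c] for c in cut_set)

        # Approximate probability that at least one cut set occurs (cut sets treated as
        # independent; a rare-event approximation, adequate for a sketch).
        p_any = 1.0 - math.prod(1.0 - cut_set_probability(cs) for cs in cut_sets)
        print(f"P(at least one cut-set combination) ~ {p_any:.3f}")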

  3. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    NASA Astrophysics Data System (ADS)

    Noda, Shunta; Ellsworth, William L.

    2016-09-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  4. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
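
    A minimal sketch of the Tdp measurement described in the two records above: compare the absolute P-wave displacement with a reference 'similar growth' curve and take the first time at which the trace departs from that reference by more than a tolerance. The reference curve, tolerance, and synthetic trace below are assumptions for illustration, not the definitions or data used by Noda and Ellsworth, and the band-pass filtering step they apply is omitted for brevity.

        import numpy as np

        def departure_time(abs_disp, reference, t, tol=0.5, floor=1e-6):
            """First time at which the absolute P-wave displacement departs from the
            common 'similar growth' reference by more than a fractional tolerance."""
            mismatch = np.abs(abs_disp - reference) > tol * np.maximum(reference, floor)
            idx = np.argmax(mismatch)
            return t[idx] if mismatch.any() else None

        # Synthetic example: two records share t**2 growth until one departs at 3 s.
        t = np.arange(0.0, 10.0, 0.01)
        reference = 1e-4 * t ** 2
        abs_disp = 1e-4 * t ** 2 * (1.0 + 4.0 * np.clip(t - 3.0, 0.0, None))
        print("Tdp ~", departure_time(abs_disp, reference, t), "s")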

  5. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep post seismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.
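
    The modification described above amounts to replacing the BPT model's uniform loading term with a uniform term plus a decaying exponential representing viscoelastic relaxation or deep postseismic slip. The sketch below runs a toy Brownian relaxation oscillator with such a loading function to generate interevent times; the loading rate, transient amplitude, relaxation time, and noise level are illustrative assumptions, not the values used in the study.

        import numpy as np

        def simulate_interevent_times(n_events=500, rate=1.0, amp=0.3, tau=0.1,
                                      sigma=0.2, dt=1e-3, seed=0):
            """Toy Brownian relaxation oscillator: the state accumulates under a loading
            function rate*t + amp*(1 - exp(-t/tau)) (t measured since the last event),
            perturbed by Brownian noise; an 'earthquake' occurs when the state reaches 1."""
            rng = np.random.default_rng(seed)
            intervals = []
            for _ in range(n_events):
                state, t = 0.0, 0.0
                while state < 1.0:
                    load_rate = rate + (amp / tau) * np.exp(-t / tau)   # derivative of the loading function
                    state += load_rate * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                intervals.append(t)
            return np.asarray(intervals)

        intervals = simulate_interevent_times()
        cov = intervals.std() / intervals.mean()
        print(f"mean interval {intervals.mean():.2f}, coefficient of variation {cov:.2f}")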

  6. Complex Nucleation Process of Large North Chile Earthquakes, Implications for Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Meneses, G.; Sobiesiak, M.; Madariaga, R. I.

    2014-12-01

    We studied the nucleation process of Northern Chile events that included the large earthquakes of Tocopilla 2007 Mw 7.8 and Iquique 2014 Mw 8.1, as well as the background seismicity recorded from 2011 to 2013 by the ILN temporary network and the IPOC and CSN permanent networks. We built our catalogue of 393 events starting from the CSN catalogue, which has a completeness of magnitude Mw > 3.0 in Northern Chile. We re-located and computed moment magnitude for each event. We also computed Early Warning (EW) parameters - Pd, Pv, τc and IV2 - for each event including 13 earthquakes of Mw>6.0 that occurred between 2007-2012. We also included part of the seismicity from March-April 2014 period. We find that Pd, Pv and IV2 are good estimators of magnitude for interplate thrust and intraplate intermediate depth events with Mw between 4.0 and 6.0. However, the larger magnitude events show a saturation of the EW parameters. The Tocopilla 2007 and Iquique 2014 earthquake sequences were studied in detail. Almost all events with Mw>6.0 present precursory signals so that the largest amplitudes occur several seconds after the first P wave arrival. The recent Mw 8.1 Iquique 2014 earthquake was preceded by low amplitude P waves for 20 s before the main asperity was broken. The magnitude estimation can improve if we consider longer P wave windows in the estimation of EW parameters. There was, however, a practical limit during the Iquique earthquake because the first S waves arrived before the arrival of the P waves from the main rupture. The 4 s P-wave Pd parameter estimated Mw 7.1 for the Mw 8.1 Iquique 2014 earthquake and Mw 7.5 for the Mw 7.8 Tocopilla 2007 earthquake.
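
    As a rough sketch of how the early-warning parameters named here are computed: Pd and Pv are the peak displacement and peak velocity within the first few seconds after the P arrival, and τc is commonly defined from the ratio of the integrals of squared displacement and squared velocity over that window, τc = 2π sqrt(∫u² dt / ∫u̇² dt). The window length, sampling, and synthetic trace below are assumptions; IV2 and the magnitude regressions used in the study are not reproduced.

        import numpy as np

        def ew_parameters(displacement, dt, window_s=4.0):
            """Pd, Pv and tau_c from the first `window_s` seconds after the P pick.
            displacement: ground-displacement trace starting at the P arrival."""
            n = int(window_s / dt)
            u = displacement[:n]
            v = np.gradient(u, dt)                      # ground velocity
            pd = np.max(np.abs(u))
            pv = np.max(np.abs(v))
            tau_c = 2.0 * np.pi * np.sqrt(np.sum(u ** 2) / np.sum(v ** 2))
            return pd, pv, tau_c

        # Synthetic P-wave displacement (illustrative only): a 1 Hz decaying wavelet.
        dt = 0.01
        t = np.arange(0.0, 6.0, dt)
        u = 1e-3 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-t / 2.0)
        pd, pv, tau_c = ew_parameters(u, dt)
        print(f"Pd={pd:.2e} m, Pv={pv:.2e} m/s, tau_c={tau_c:.2f} s")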

  7. Active structural growth in central Taiwan in relationship to large earthquakes and pore-fluid pressures

    NASA Astrophysics Data System (ADS)

    Yue, Li-Fan

    Central Taiwan, with a population of five million and two disastrous earthquakes in the last century, the 1935 ML=7.1 Tuntzuchiao and 1999 Mw=7.6 Chi-Chi earthquakes, is subject to substantial long-term earthquake risk. Rich data from these earthquakes, combined with substantial surface and subsurface data accumulated from petroleum exploration, form the basis for these studies of the growth of structures in successive large earthquakes and their relationships to pore-fluid pressures. Chapter 1 documents the structural context of the bedding-parallel Chelungpu thrust that slipped in the Chi-Chi earthquake by showing, for this richly instrumented earthquake, the close geometric relationships between the complex 3D fault shape and the heterogeneous coseismic displacements constrained by geodesy and seismology. Chapter 2 examines the accumulation of deformation over successive large earthquakes by studying the deformation of flights of fluvial terraces deposited over the Chelungpu and adjacent Changhua thrusts, showing the deformation on a timescale of tens of thousands of years. Furthermore, these two structures, involving the same stratigraphic sequence, show fundamentally different kinematics of deformation with associated contrasting hanging-wall structural geometries. The heights and shapes of deformed terraces allowed testing of existing theories of fault-related folding. Furthermore, terrace dating constrains a combined shortening rate of 37 mm/yr, which is 45% of the total Taiwan plate-tectonic rate, and indicates a substantial earthquake risk for the Changhua thrust. Chapter 3 addresses the long-standing problem of the mechanics of long, thin thrust sheets, such as the Chelungpu and Changhua thrusts in western Taiwan, by presenting a natural test of the classic Hubbert-Rubey hypothesis, which argues that ambient excess pore-fluid pressure substantially reduces the effective fault friction, allowing the thrusts to move. Pore-fluid pressure data obtained from 76 wells

  8. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked west-dipping subduction interface between the down-going Pacific Plate and the over-riding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations of historical large subduction events worldwide translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan, earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground
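
    Stochastic simulation methods of the kind mentioned here build synthetic motions from an omega-squared source spectrum scaled by seismic moment and stress drop. The sketch below evaluates a Brune-type acceleration source spectrum for an assumed Mw 8.1 interface event; the stress drop, shear-wave speed, and constants are generic textbook values, not those adopted in the It's Our Fault study or implemented in EXSIM.

        import numpy as np

        def brune_acceleration_spectrum(f_hz, mw, stress_drop_bars=50.0, beta_km_s=3.5):
            """Omega-squared (Brune) acceleration source spectrum in relative units.
            M0 in dyne-cm; corner frequency fc = 4.9e6 * beta * (stress_drop / M0)**(1/3)."""
            m0_dyne_cm = 10.0 ** (1.5 * mw + 16.05)
            fc = 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyne_cm) ** (1.0 / 3.0)
            displacement = m0_dyne_cm / (1.0 + (f_hz / fc) ** 2)
            return (2.0 * np.pi * f_hz) ** 2 * displacement, fc

        f = np.logspace(-2, 1, 50)
        spec, fc = brune_acceleration_spectrum(f, mw=8.1)
        print(f"corner frequency ~ {fc:.3f} Hz")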

  9. Particle precipitation prior to large earthquakes of both the Sumatra and Philippine Regions: A statistical analysis

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano

    2015-12-01

    A study of the statistical correlation between low L-shell electrons precipitating into the atmosphere and strong earthquakes is presented. More than 11 years of Medium Energy Proton and Electron Detector data from the NOAA-15 Sun-synchronous polar orbiting satellite were analysed. Electron fluxes were analysed using a set of adiabatic coordinates. From this, significant electron counting rate fluctuations were identified during geomagnetically quiet periods. Electron counting rates were compared to earthquakes by defining a seismic event L-shell, obtained by radially projecting the epicentre geographical positions to a given altitude towards the zenith. Counting rates in each satellite semi-orbit were grouped together with strong seismic events whose L-shell coordinates were close to those of the satellite. NOAA-15 electron data from July 1998 to December 2011 were compared for nearly 1800 earthquakes with magnitudes larger than or equal to 6, occurring worldwide. When considering 30-100 keV precipitating electrons detected by the vertical NOAA-15 telescope and earthquake epicentre projections at altitudes greater than 1300 km, a significant correlation appeared, with electron precipitation detected 2-3 h prior to large events in the Sumatra and Philippine Regions. This is in physical agreement with the different correlation times obtained from past studies that considered particles with greater energies. The discussion of satellite orbits and detectors below is useful for future satellite missions aimed at earthquake mitigation.

  10. Unusually large earthquakes inferred from tsunami deposits along the Kuril trench

    USGS Publications Warehouse

    Nanayama, F.; Satake, K.; Furukawa, R.; Shimokawa, K.; Atwater, B.F.; Shigeno, K.; Yamaki, S.

    2003-01-01

    The Pacific plate converges with northeastern Eurasia at a rate of 8-9 m per century along the Kamchatka, Kuril and Japan trenches. Along the southern Kuril trench, which faces the Japanese island of Hokkaido, this fast subduction has recurrently generated earthquakes with magnitudes of up to ~8 over the past two centuries. These historical events, on rupture segments 100-200 km long, have been considered characteristic of Hokkaido's plate-boundary earthquakes. But here we use deposits of prehistoric tsunamis to infer the infrequent occurrence of larger earthquakes generated from longer ruptures. Many of these tsunami deposits form sheets of sand that extend kilometres inland from the deposits of historical tsunamis. Stratigraphic series of extensive sand sheets, intercalated with dated volcanic-ash layers, show that such unusually large tsunamis occurred about every 500 years on average over the past 2,000-7,000 years, most recently ~350 years ago. Numerical simulations of these tsunamis are best explained by earthquakes that individually rupture multiple segments along the southern Kuril trench. We infer that such multi-segment earthquakes persistently recur among a larger number of single-segment events.

  11. Thrusting of the Hindu Kush over the Southeastern Tadjik Basin, Afghanistan: Evidence from two large earthquakes

    NASA Astrophysics Data System (ADS)

    Abers, Geoffrey; Bryan, Carol; Roecker, Steven; McCaffrey, Robert

    1988-02-01

    We infer from the mechanisms and depths of two large earthquakes that the Hindu Kush is actively thrusting northwest over the Tadjik basin and that the basin is closing rather than being displaced to the west. Teleseismic body waves were used to determine focal mechanisms and depths for the two largest shallow earthquakes on the southern edge of the basin. The two earthquakes, on June 24, 1972 (mb=6.0), and December 16, 1982 (mb=6.2), have seismic moments of 2 × 10¹⁸ N-m and 6 × 10¹⁸ N-m, respectively. Focal mechanisms of both events indicate almost pure thrust faulting with nodal planes striking northeast-southwest. The inferred fault planes dip southeast, at 20° for the first event and 50° for the second. The P axes for both events are oblique to the direction of relative motion between India and Asia, suggesting that the Pamir is overthrusting the basin to the west. Depths for both earthquakes are between 20 and 25 km and place them well below the Tadjik basin sediments. The depths and steep fault planes suggest that these earthquakes represent a downdip extension within the basement of shallow folding and thrusting seen in the sediments northwest of the events. Thus convergence in Afghanistan between India and Eurasia is taken up along southeast dipping thrust faults north of the Hindu Kush as well as by northward subduction under the southern part of the range.

  12. Unusually large earthquakes inferred from tsunami deposits along the Kuril trench.

    PubMed

    Nanayama, Futoshi; Satake, Kenji; Furukawa, Ryuta; Shimokawa, Koichi; Atwater, Brian F; Shigeno, Kiyoyuki; Yamaki, Shigeru

    2003-08-01

    The Pacific plate converges with northeastern Eurasia at a rate of 8-9 m per century along the Kamchatka, Kuril and Japan trenches. Along the southern Kuril trench, which faces the Japanese island of Hokkaido, this fast subduction has recurrently generated earthquakes with magnitudes of up to approximately 8 over the past two centuries. These historical events, on rupture segments 100-200 km long, have been considered characteristic of Hokkaido's plate-boundary earthquakes. But here we use deposits of prehistoric tsunamis to infer the infrequent occurrence of larger earthquakes generated from longer ruptures. Many of these tsunami deposits form sheets of sand that extend kilometres inland from the deposits of historical tsunamis. Stratigraphic series of extensive sand sheets, intercalated with dated volcanic-ash layers, show that such unusually large tsunamis occurred about every 500 years on average over the past 2,000-7,000 years, most recently approximately 350 years ago. Numerical simulations of these tsunamis are best explained by earthquakes that individually rupture multiple segments along the southern Kuril trench. We infer that such multi-segment earthquakes persistently recur among a larger number of single-segment events.

  13. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  14. Large-scale neuromorphic computing systems.

    PubMed

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers. PMID:27529195

  15. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  16. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  17. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high technology agency was managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  18. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  19. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  20. Evidence for earthquake triggering of large landslides in coastal Oregon, USA

    USGS Publications Warehouse

    Schulz, W.H.; Galloway, S.L.; Higgins, J.D.

    2012-01-01

    Landslides are ubiquitous along the Oregon coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement rates, and total movement suggest that many are at least several hundreds of years old. The offshore Cascadia subduction zone produces great earthquakes every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or earthquake-induced ground motion using field observations and modeling of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement rates suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia earthquake was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support earthquake ground motion as their triggering mechanism. Limit-equilibrium slope-stability modeling suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. Modeling suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone earthquake would trigger formation of the rockslides. Displacement modeling following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff
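
    The displacement modeling referred to above follows the Newmark sliding-block approach: ground acceleration in excess of a critical (yield) acceleration is integrated twice to obtain permanent slope displacement. The sketch below is a minimal rigid-block version with a synthetic input motion; the critical acceleration, amplitude, and duration are illustrative assumptions, not values from the Oregon study.

        import numpy as np

        def newmark_displacement(accel_g, dt, ac_g):
            """Rigid-block Newmark sliding displacement (one-way sliding).
            accel_g: ground acceleration time series in g; ac_g: critical acceleration in g.
            The block accelerates relative to the ground whenever accel exceeds ac, and
            sliding continues until the relative velocity returns to zero."""
            g = 9.81
            rel_vel, disp = 0.0, 0.0
            for a in accel_g:
                rel_acc = (a - ac_g) * g if (a > ac_g or rel_vel > 0.0) else 0.0
                rel_vel = max(rel_vel + rel_acc * dt, 0.0)
                disp += rel_vel * dt
            return disp

        # Synthetic strong-motion record (illustrative): 0.4 g, 1 Hz, 20 s with decay.
        dt = 0.01
        t = np.arange(0.0, 20.0, dt)
        accel = 0.4 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-t / 10.0)
        print(f"Newmark displacement ~ {newmark_displacement(accel, dt, ac_g=0.1):.2f} m")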

  1. Magnitudes and Moment-Duration Scaling of Low-Frequency Earthquakes Beneath Southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A.; Rubin, A. M.; Savard, G.; Chuang, L. Y.

    2015-12-01

    We employ 130 low-frequency-earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 300,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P- and S-waves at near epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatio-temporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatio-temporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 hours of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs, and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b-values ≥ 6. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges MW < 1.5 and MW ≥ 2.0. LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in dimension and that moment variation is dominated by slip. This behaviour implies
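
    The joint solve for path corrections and magnitudes described above amounts to a sparse least-squares problem in which each corrected log amplitude is modelled as an event term plus a station/path term. The following is a minimal sketch of that structure with synthetic numbers; the array sizes, noise level and the use of scipy's lsqr are assumptions, and a real inversion would add a reference constraint to remove the trade-off between the two sets of terms.

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import lsqr

        # Each observation: log amplitude = event term + station/path term (+ noise).
        rng = np.random.default_rng(0)
        n_events, n_stations = 200, 12
        true_event = rng.normal(1.0, 0.4, n_events)
        true_station = rng.normal(0.0, 0.2, n_stations)

        G = lil_matrix((n_events * n_stations, n_events + n_stations))
        d = np.empty(n_events * n_stations)
        row = 0
        for i in range(n_events):
            for j in range(n_stations):
                G[row, i] = 1.0                 # event (magnitude) column
                G[row, n_events + j] = 1.0      # station/path-correction column
                d[row] = true_event[i] + true_station[j] + rng.normal(0.0, 0.05)
                row += 1

        m = lsqr(G.tocsr(), d)[0]               # sparse least-squares solution
        print("recovered event terms (first 5):", np.round(m[:5], 2))
        print("recovered station terms:", np.round(m[n_events:], 2))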

  2. Source and Aftershock Analysis of a Large Deep Earthquake in the Tonga Flat Slab

    NASA Astrophysics Data System (ADS)

    Cai, C.; Wiens, D. A.; Warren, L. M.

    2013-12-01

    The 9 November 2009 (Mw 7.3) deep focus earthquake (depth = 591 km) occurred in the Tonga flat slab region, which is characterized by limited seismicity but has been imaged as a flat slab in tomographic imaging studies. In addition, this earthquake occurred immediately beneath the largest of the Fiji Islands and was well recorded by a temporary array of 16 broadband seismographs installed in Fiji and Tonga, providing an excellent opportunity to study the source mechanism of a deep earthquake in a partially aseismic flat slab region. We determine the positions of the main shock hypocenter, its aftershocks and moment-release subevents relative to the background seismicity using a hypocentroidal decomposition relative relocation method. We also investigate the rupture directivity by measuring the variation of rupture durations at different azimuths [e.g., Warren and Silver, 2006]. Arrival times picked from the local seismic stations together with teleseismic arrival times from the International Seismological Centre (ISC) are used for the relocation. Teleseismic waveforms are used for the directivity study. Preliminary results show this entire region is relatively aseismic, with diffuse background seismicity distributed between 550-670 km. The main shock happened in a previously aseismic region, with only one small earthquake within 50 km during 1980-2012. Eleven aftershocks large enough for good locations all occurred within the first 24 hours following the earthquake. The aftershock zone extends about 80 km from NW to SE, covering a much larger area than the mainshock rupture. The aftershock distribution does not correspond to the main shock fault plane, unlike the 1994 March 9 (Mw 7.6) Fiji-Tonga earthquake in the steeply dipping, highly seismic part of the Tonga slab. Mainshock subevent locations suggest a sub-horizontal SE-NW rupture direction. However, the directivity study shows a complicated rupture process that cannot be explained with a simple rupture assumption. We will
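
    One way to picture the duration-versus-azimuth directivity analysis cited above (Warren and Silver, 2006) is a unilateral-rupture model in which apparent duration varies as tau(az) = L/vr - (L/c)*cos(az - az_rup). The grid-search sketch below fits that form to synthetic durations; the rupture length, speeds and noise level are illustrative assumptions, not results from this study.

        import numpy as np

        # Synthetic apparent durations for a unilateral rupture:
        #   tau(az) = L/vr - (L/c) * cos(az - az_rup)
        rng = np.random.default_rng(1)
        az = np.radians(np.arange(0.0, 360.0, 15.0))          # station azimuths
        az_rup_true, L, vr, c = np.radians(120.0), 40e3, 3.0e3, 5.5e3
        tau_obs = L / vr - (L / c) * np.cos(az - az_rup_true) + rng.normal(0.0, 0.3, az.size)

        # Grid-search the rupture azimuth; fit mean duration and cosine amplitude
        # by linear least squares at each trial azimuth.
        best = None
        for az_try in np.radians(np.arange(0.0, 360.0, 2.0)):
            A = np.column_stack([np.ones_like(az), np.cos(az - az_try)])
            coef, *_ = np.linalg.lstsq(A, tau_obs, rcond=None)
            misfit = np.sum((A @ coef - tau_obs) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, az_try, coef)

        _, az_best, coef = best
        if coef[1] > 0.0:                      # cosine coefficient should be -L/c < 0
            az_best = (az_best + np.pi) % (2.0 * np.pi)
        print(f"best rupture azimuth ~ {np.degrees(az_best):.0f} deg, "
              f"mean apparent duration ~ {coef[0]:.1f} s")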

  3. “PLAFKER RULE OF THUMB” RELOADED: EXPERIMENTAL INSIGHTS INTO THE SCALING AND VARIABILITY OF LOCAL TSUNAMIS TRIGGERED BY GREAT SUBDUCTION MEGATHRUST EARTHQUAKES

    NASA Astrophysics Data System (ADS)

    Rosenau, M.; Nerlich, R.; Brune, S.; Oncken, O.

    2009-12-01

    along accretionary margins. Three out of the top-five tsunami hotspots we identify had giant earthquakes in the last decades (Chile 1960, Alaska 1964, Sumatra-Andaman 2004) and one (Sumatra-Mentawai) started in 2005 releasing strain in a possibly moderate mode of sequential large earthquakes. This leaves Cascadia as the major active tsunami hotspot in the focus of tsunami hazard assessment. Visualization of preliminary versions of the experimentally-derived scaling laws for peak nearshore tsunami height (PNTH) as functions of forearc slope, peak earthquake slip (left panel) and moment magnitude (right panel). Note that wave breaking is not considered yet. This renders the extreme peaks > 20 m unrealistic.

  4. Long-period ocean-bottom motions in the source areas of large subduction earthquakes.

    PubMed

    Nakamura, Takeshi; Takenaka, Hiroshi; Okamoto, Taro; Ohori, Michihiro; Tsuboi, Seiji

    2015-01-01

    Long-period ground motions in plain and basin areas on land can cause large-scale, severe damage to structures and buildings and have been widely investigated for disaster prevention and mitigation. However, such motions in ocean-bottom areas are poorly studied because of their relative insignificance in uninhabited areas and the lack of ocean-bottom strong-motion data. Here, we report on evidence for the development of long-period (10-20 s) motions using deep ocean-bottom data. The waveforms and spectrograms demonstrate prolonged and amplified motions that are inconsistent with attenuation patterns of ground motions on land. Simulated waveforms reproducing observed ocean-bottom data demonstrate substantial contributions of thick low-velocity sediment layers to development of these motions. This development, which could affect magnitude estimates and finite-fault slip modelling because its period range is critical to those estimations, may be common in the source areas of subduction earthquakes where thick, low-velocity sediment layers are present.

  5. Long-period ocean-bottom motions in the source areas of large subduction earthquakes

    PubMed Central

    Nakamura, Takeshi; Takenaka, Hiroshi; Okamoto, Taro; Ohori, Michihiro; Tsuboi, Seiji

    2015-01-01

    Long-period ground motions in plain and basin areas on land can cause large-scale, severe damage to structures and buildings and have been widely investigated for disaster prevention and mitigation. However, such motions in ocean-bottom areas are poorly studied because of their relative insignificance in uninhabited areas and the lack of ocean-bottom strong-motion data. Here, we report on evidence for the development of long-period (10–20 s) motions using deep ocean-bottom data. The waveforms and spectrograms demonstrate prolonged and amplified motions that are inconsistent with attenuation patterns of ground motions on land. Simulated waveforms reproducing observed ocean-bottom data demonstrate substantial contributions of thick low-velocity sediment layers to development of these motions. This development, which could affect magnitude estimates and finite-fault slip modelling because its period range is critical to those estimations, may be common in the source areas of subduction earthquakes where thick, low-velocity sediment layers are present. PMID:26617193

  6. Backscatter in Large-Scale Flows

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu

    2009-11-01

    Downgradient mixing of potential vorticity and its variants are commonly employed to model the effects of unresolved geostrophic turbulence on resolved scales. This is motivated by the (inviscid and unforced) particle-wise conservation of potential vorticity and the mean forward or down-scale cascade of potential enstrophy in geostrophic turbulence. By examining the statistical distribution of the transfer of potential enstrophy from mean or filtered motions to eddy or sub-filter motions, we find that the mean forward cascade results from the forward-scatter being only slightly greater than the backscatter. Downgradient mixing ideas do not recognize such equitable mean-eddy or large scale-small scale interactions and consequently model only the mean effect of forward cascade; the importance of capturing the effects of backscatter, the forcing of resolved scales by unresolved scales, is only beginning to be recognized. While recent attempts to model the effects of backscatter on resolved scales have taken a stochastic approach, our analysis suggests that these effects are amenable to being modeled deterministically.

  7. How Large can Mexican Subduction Earthquakes be? Evidence of a Very Large Event in 1787 (M~8.5)

    NASA Astrophysics Data System (ADS)

    Suarez, G.

    2007-05-01

    A sequence of very strong earthquakes occurred from 28 March to 18 April, 1787. The first earthquake, on 28 March, appears to be the largest of the sequence, followed by three strong events on 29 and 30 March, and 3 April; strong aftershocks continued to be reported until 18 April. The event of 28 March was strongly felt and caused damage in Mexico City, where several buildings were reported to have suffered damage. The strongest effects, however, were observed on the southeastern coast of Guerrero and Oaxaca. Intensities greater than 8 (MMI) were observed along the coast over a distance of about 400 km. The towns of Ometepec, Jamiltepec and Tehuantepec reported strong damage to local churches and other apparently well-constructed buildings. In contrast to the low intensities observed during the coastal Oaxaca earthquakes of 1965, 1968 and 1978, Oaxaca City reported damage equivalent to intensity 8 to 9 on 28 March, 1787. An unusual effect of this earthquake on the Mexican subduction zone was the presence of a very large tsunami. Three different sources report that in the area known as the Barra de Alotengo (16.2N, 98.2 W) the sea retreated for a distance of about one Spanish league (4.1 km). A large wave came back and invaded land for approximately 1.5 leagues (6.2 km). Several local ranchers were swept away by the incoming wave. Along the coast near the town of Tehuantepec, about 400 km to the southeast of Alotengo, a tsunami was also reported to have stranded fish and shellfish inland; in this case no description of the distance penetrated by the tsunami is reported. It is also described that in Acapulco, some 200 km to the northwest of Alotengo, a strong wave was observed and that the sea remained agitated for a whole day. Assuming that the subduction zone ruptured from somewhere near Alotengo to the coast near Tehuantepec, the resulting fault length is about 400 to 450 km. This large fault rupture contrasts with the seismic cycle of the Oaxaca coast observed during this century where
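
    The M~8.5 figure in the title can be illustrated with a standard back-of-the-envelope moment calculation from the inferred 400-450 km rupture length: M0 = mu*L*W*D and Mw = (2/3)(log10 M0 - 9.1). The rigidity, downdip width and average slip in the sketch below are assumed values chosen only to show the arithmetic, not parameters from the study.

        import numpy as np

        # Moment and moment magnitude for a long subduction rupture:
        #   M0 = mu * L * W * D,   Mw = (2/3) * (log10(M0) - 9.1)
        mu = 3.0e10                # Pa, assumed crustal rigidity
        W_km = 80.0                # km, assumed downdip width of the seismogenic interface
        D = 8.0                    # m, assumed average slip
        for L_km in (400.0, 450.0):
            M0 = mu * (L_km * 1e3) * (W_km * 1e3) * D
            Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
            print(f"L = {L_km:.0f} km -> M0 = {M0:.2e} N m, Mw ~ {Mw:.1f}")

    With these assumed values the 400 km case gives Mw near 8.5, consistent with the magnitude quoted in the title.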

  8. Scale up of large ALON windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Balasubramanian, Sreeram; Kashalikar, Uday; Foti, Robyn; Sastri, Suri

    2013-06-01

    Aluminum Oxynitride (ALON® Optical Ceramic) combines broadband transparency with excellent mechanical properties. ALON's cubic structure means that it is transparent in its polycrystalline form, allowing it to be manufactured by conventional powder processing techniques. Surmet has established a robust manufacturing process, beginning with synthesis of ALON® powder, continuing through forming/heat treatment of blanks, and ending with optical fabrication of ALON® windows. Surmet has made significant progress in its production capability in recent years. Additional scale-up of Surmet's manufacturing capability, for larger sizes and higher quantities, is currently underway. ALON® transparent armor represents the state of the art in protection against armor piercing threats, offering a factor of two in weight and thickness savings over conventional glass laminates. Tiled and monolithic windows have been successfully produced and tested against a range of threats. Large ALON® windows are also of interest for a range of visible to Mid-Wave Infra-Red (MWIR) sensor applications. These applications often have stressing imaging requirements which in turn require that these large windows have optical characteristics including excellent homogeneity of index of refraction and very low stress birefringence. Surmet is currently scaling up its production facility to be able to make and deliver ALON® monolithic windows as large as ~19x36-in. Additionally, Surmet has plans to scale up to windows ~3 ft x 3 ft in size in the coming years. Recent results from scale-up and characterization of the resulting blanks will be presented.

  9. Discrete Scaling in Earthquake Precursory Phenomena: Evidence in the Kobe Earthquake, Japan

    NASA Astrophysics Data System (ADS)

    Johansen, Anders; Sornette, Didier; Wakita, Hiroshi; Tsunogai, Urumu; Newman, William I.; Saleur, Hubert

    1996-10-01

    We analyze the ion concentration of groundwater issuing from deep wells located near the epicenter of the recent earthquake of magnitude 6.9 near Kobe, Japan, on January 17, 1995. These concentrations are well fitted by log-periodic modulations around a leading power law. The exponent (real and imaginary parts) is very close to those already found for the fits of precursory seismic activity for Loma Prieta and the Aleutian Islands. This brings further support for the general hypothesis that complex critical exponents are a general phenomenon in irreversible self-organizing systems and particularly in rupture and earthquake phenomena.
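
    The "log-periodic modulation around a leading power law" referred to above is commonly written as y(t) = A + B(tc - t)^m [1 + C cos(omega*ln(tc - t) + phi)]. The snippet below fits that functional form to synthetic data with the critical time held fixed; the parameter values, the starting guess and the use of scipy's curve_fit are illustrative assumptions, and fits to real precursory series require far more careful initialization.

        import numpy as np
        from scipy.optimize import curve_fit

        # Log-periodic power-law form with the critical time tc held fixed.
        tc = 100.0
        def lppl(t, A, B, m, C, omega, phi):
            dt = tc - t
            return A + B * dt**m * (1.0 + C * np.cos(omega * np.log(dt) + phi))

        t = np.linspace(0.0, 95.0, 400)
        truth = (1.0, -0.5, 0.4, 0.2, 6.0, 0.3)
        y = lppl(t, *truth) + np.random.default_rng(2).normal(0.0, 0.01, t.size)

        p0 = (1.0, -0.4, 0.5, 0.1, 5.5, 0.0)        # starting guess near the truth
        popt, _ = curve_fit(lppl, t, y, p0=p0, maxfev=20000)
        print("fitted (A, B, m, C, omega, phi):", np.round(popt, 2))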

  10. Galaxy alignment on large and small scales

    NASA Astrophysics Data System (ADS)

    Kang, X.; Lin, W. P.; Dong, X.; Wang, Y. O.; Dutton, A.; Macciò, A.

    2016-10-01

    Galaxies are not randomly distributed across the universe but show different kinds of alignment on different scales. On small scales, satellite galaxies tend to be distributed along the major axis of the central galaxy, with a dependence on galaxy properties: both red satellites and red centrals show stronger alignment than their blue counterparts. On large scales, it is found that the major axes of Luminous Red Galaxies (LRGs) are correlated up to 30 Mpc/h. Using a hydrodynamical simulation with star formation, we investigate the origin of galaxy alignment on different scales. It is found that most red satellite galaxies stay in the inner region of the dark matter halo, inside which the shape of the central galaxy is well aligned with the dark matter distribution. Red centrals have stronger alignment than blue ones as they live in massive haloes, and the central galaxy-halo alignment increases with halo mass. On large scales, the alignment of LRGs also arises from the galaxy-halo shape correlation, but with some degree of misalignment. Massive haloes have stronger alignment than haloes in the filaments which connect massive haloes. This is contrary to the naive expectation that the cosmic filament is the cause of halo alignment.

  11. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  12. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide.

    PubMed

    Pollitz, Fred F; Stein, Ross S; Sevilgen, Volkan; Bürgmann, Roland

    2012-10-11

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified, with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≤ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10(-7) for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.
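
    The dynamic shear strain threshold of 10^-7 quoted above can be put in physical terms with the common surface-wave approximation that strain is roughly peak ground velocity divided by phase velocity. The sketch below applies that approximation for a few hypothetical peak velocities; the 4 km/s Love-wave phase velocity and the PGV values are assumptions for illustration only.

        # Surface-wave dynamic shear strain ~ peak ground velocity / phase velocity,
        # compared with the 10^-7 threshold quoted above. All values are illustrative.
        phase_velocity = 4000.0            # m/s, assumed Love-wave phase velocity
        for pgv_mm_s in (0.1, 0.5, 2.0):   # hypothetical peak ground velocities, mm/s
            strain = (pgv_mm_s / 1000.0) / phase_velocity
            flag = "exceeds" if strain > 1e-7 else "below"
            print(f"PGV = {pgv_mm_s:4.1f} mm/s -> strain ~ {strain:.1e} ({flag} 1e-7)")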

  13. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide.

    PubMed

    Pollitz, Fred F; Stein, Ross S; Sevilgen, Volkan; Bürgmann, Roland

    2012-10-11

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified, with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≤ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10(-7) for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure. PMID:23023131

  14. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide

    USGS Publications Warehouse

    Pollitz, Fred F.; Stein, Ross S.; Sevilgen, Volkan; Burgmann, Roland

    2012-01-01

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days [1-10], but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified [11], with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away [12]. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≥ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.

  15. Evidence for a twelfth large earthquake on the southern Hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
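
    The 30-yr probability quoted above comes from a time-dependent renewal treatment of the mean recurrence interval (161 yr) and its coefficient of variation (0.40). The sketch below illustrates that style of calculation with a lognormal renewal model conditioned on the time elapsed since the 1868 rupture; the lognormal choice and the evaluation date are assumptions, so the number it prints is illustrative rather than a reproduction of the study's 29% value.

        import numpy as np
        from scipy.stats import lognorm

        # Lognormal renewal model with the quoted mean recurrence and coefficient
        # of variation; conditional probability of rupture in the next 30 years
        # given the open interval since 1868.
        mean_ri, cov = 161.0, 0.40
        sigma = np.sqrt(np.log(1.0 + cov**2))       # lognormal shape parameter
        mu = np.log(mean_ri) - 0.5 * sigma**2       # log-mean giving the desired mean
        dist = lognorm(s=sigma, scale=np.exp(mu))

        elapsed = 2010.0 - 1868.0                   # yr since the last rupture (assumed epoch)
        p = (dist.cdf(elapsed + 30.0) - dist.cdf(elapsed)) / dist.sf(elapsed)
        print(f"30-yr conditional probability ~ {100.0 * p:.0f}%")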

  16. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  17. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  18. Scaling and Stress Release in the Darfield-Christchurch, New Zealand Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.; Fry, B.; Doser, D. I.

    2014-12-01

    The Canterbury earthquake sequence began with the M7.1 Darfield earthquake in 2010, and includes the devastating M6.2 Christchurch earthquake in 2011. The high ground accelerations and damage in Christchurch suggested that the larger earthquakes may be high stress drop events. This is consistent with the hypothesis that faults in low-strain rate regions with long inter-event times rupture in higher stress drop earthquakes. The wide magnitude range of this prolific sequence and the high-quality recording enable us to test this. The spatial migration of the sequence, from Darfield through Christchurch and then offshore, enables us to investigate whether we can resolve any spatial or temporal variation in earthquake stress drop. An independent study of 500 aftershocks (Oth & Kaiser, 2014) found no magnitude dependence, and identified spatially varying stress drop. Such patterns can be more confidently interpreted if observed by independent studies using different approaches. We use a direct wave, empirical Green's function (EGF) approach that includes measurement uncertainties, and objective criteria for assessing the quality of each spectral ratio (Abercrombie, 2013). The large number of earthquakes in the sequence enables us to apply the same approach to a wide range of magnitudes (M~2-6) recorded at the same stations, and so minimize the effects of any systematic biases in results. In our preliminary study, we include 2500 earthquakes recorded at a number of strong motion and broadband stations. We use multiple EGFs for each event, and find 300 earthquakes with well-resolved ratios at 5 or more stations. The stress drops are magnitude independent and there is broad correlation with the results of Oth & Kaiser. We apply the same approach to a much larger data set and compare our results to those of Oth & Kaiser, and also to other regions studied using our EGF method.
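
    The EGF spectral-ratio idea used above can be sketched as fitting the ratio of a target event's spectrum to a colocated smaller event's spectrum with the ratio of two omega-squared (Brune) source spectra, which cancels common path and site terms; the fitted corner frequency is then converted to a stress drop. Everything in the snippet below (spectra, corner frequencies, moment, shear-wave speed, the Brune constant k = 0.37) is synthetic or assumed, so it illustrates the mechanics rather than reproducing the study's method in detail.

        import numpy as np
        from scipy.optimize import curve_fit

        # Ratio of two omega-squared source spectra: moment ratio times
        # (1 + (f/fc_egf)^2) / (1 + (f/fc_target)^2).
        def brune_ratio(f, moment_ratio, fc_target, fc_egf):
            return moment_ratio * (1.0 + (f / fc_egf) ** 2) / (1.0 + (f / fc_target) ** 2)

        f = np.logspace(-1, 1.5, 200)                       # Hz
        truth = (300.0, 1.2, 12.0)                          # moment ratio, target fc, EGF fc
        ratio_obs = brune_ratio(f, *truth) * np.exp(np.random.default_rng(3).normal(0.0, 0.05, f.size))

        popt, _ = curve_fit(brune_ratio, f, ratio_obs, p0=(100.0, 1.0, 10.0))
        moment_ratio, fc_target, fc_egf = popt

        beta, k = 3500.0, 0.37                              # assumed shear speed (m/s), Brune constant
        M0_target = 1.0e17                                  # N m, assumed target moment (~Mw 5.3)
        r = k * beta / fc_target                            # source radius, m
        stress_drop = (7.0 / 16.0) * M0_target / r**3       # Pa
        print(f"fc(target) ~ {fc_target:.2f} Hz, stress drop ~ {stress_drop / 1e6:.1f} MPa")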

  19. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈ 10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  20. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

  1. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  2. Scaling of Seismic Moment with Recurrence Interval for Small Repeating Earthquakes Simulated on Rate-and-State Faults

    NASA Astrophysics Data System (ADS)

    Chen, T.; Lapusta, N.

    2006-12-01

    Observations suggest that the recurrence time T and seismic moment M0 of small repeating earthquakes in Parkfield scale as T ∝ M0^0.17 (Nadeau and Johnson, 1998). However, a simple conceptual model of these earthquakes as circular ruptures with stress drop independent of the seismic moment, and slip that is proportional to the recurrence time T, results in T ∝ M0^(1/3). Several explanations for this discrepancy have been proposed. Nadeau and Johnson (1998) suggested that stress drop depends on the seismic moment and is much higher for small events than typical estimates based on seismic spectra. Sammis and Rice (2001) modeled repeating earthquakes at a border between large locked and creeping patches to get T ∝ M0^(1/6) and reasonable stress drops. Beeler et al. (2001) considered a fixed-area patch governed by a conceptual law that incorporated strain-hardening and showed that aseismic slip on the patch can explain the observed scaling relation. In this study, we provide an alternative physical basis, grounded in laboratory-derived rate and state friction laws, for the idea of Beeler et al. (2001) that much of the overall slip at the places of small repeating earthquakes may be accumulated aseismically. We simulate repeating events in a 3D model of a strike-slip fault embedded in an elastic space and governed by rate and state friction laws. The fault has a small circular patch (2-20 m in diameter) with steady-state rate-weakening properties, with the rest of the fault governed by steady-state rate strengthening. The simulated fault segment is 40 m by 40 m, with periodic boundary conditions. We use values of rate and state parameters typical of laboratory experiments, with characteristic slip of the order of several microns. The model incorporates tectonic-like loading equivalent to a plate rate of 23 mm/year and all dynamic effects during unstable sliding. Our simulations use the 3D methodology of Liu and Lapusta (AGU, 2005) and fully resolve all aspects of
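
    The contrast between the observed T ∝ M0^0.17 scaling and the T ∝ M0^(1/3) expected for constant-stress-drop patches that are fully reloaded at the plate rate can be made concrete numerically. In the sketch below the stress drop, rigidity and chosen moments are assumptions used only to expose the two slopes; it is not a result of the rate-and-state simulations described above.

        import numpy as np

        # Constant-stress-drop circular patch: M0 = (16/7) * dsigma * r^3 and
        # average slip d = M0 / (mu * pi * r^2); reloading d at the plate rate
        # gives a recurrence time T proportional to M0^(1/3). The empirical
        # Parkfield relation is instead T proportional to M0^0.17.
        mu, dsigma = 3.0e10, 3.0e6     # Pa (assumed rigidity and stress drop)
        v_plate = 0.023                # m/yr (23 mm/yr, as in the text)

        for M0 in (1e10, 1e12, 1e14):  # N m
            r = ((7.0 / 16.0) * M0 / dsigma) ** (1.0 / 3.0)   # patch radius, m
            d = M0 / (mu * np.pi * r ** 2)                    # average slip, m
            T_13 = d / v_plate                                # yr, scales as M0^(1/3)
            T_017 = T_13 * (M0 / 1e10) ** (0.17 - 1.0 / 3.0)  # same anchor, M0^0.17 slope
            print(f"M0 = {M0:.0e} N m: slip = {1e3 * d:6.1f} mm, "
                  f"T(M0^1/3) = {T_13:6.2f} yr, T(M0^0.17) = {T_017:6.2f} yr")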

  3. Stress drop Scaling and Stress Release in the Darfield-Christchurch, New Zealand Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.; Fry, B.; Gerstenberger, M. C.; Doser, D. I.; Bannister, S. C.

    2012-12-01

    earthquake sequence. The large number of earthquakes in the sequence enables us to apply the same approach to a wide range of magnitudes (M~2-6), recorded at the same stations, and so minimize the effects of any systematic biases in results. There are also sufficient stations to investigate inter-station variation and average out any directivity and focal mechanism dependence. We follow Viegas et al. (2010) and Abercrombie (2012) to calculate stress drop using direct P and S waves for groups of earthquakes (M>2) surrounding each of the M5.5 events. In each subset we investigate whether stress drop shows any dependence on magnitude. The spatial migration of the sequence from Darfield through Christchurch and then offshore enables us also to investigate whether we can resolve any spatial or temporal variation in stress drop. We also compare our results to those from the north-eastern USA using the same approach (Viegas et al., 2010).

  4. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

  5. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the

  6. Rotation change in the orientation of the centre-of-figure frame caused by large earthquakes

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangcun; Sun, Wenke; Jin, Shuanggen; Sun, Heping; Xu, Jianqiao

    2016-08-01

    A method to estimate the rotation change in the orientation of the centre-of-figure (CF) frame caused by earthquakes is proposed for the first time. This method involves using the point dislocation theory based on a spherical, non-rotating, perfectly elastic and isotropic (SNREI) Earth. The rotation change in the orientation is related solely to the toroidal displacements of degree one induced by the vertical dip slip dislocation, and the spheroidal displacements induced by an earthquake have no contribution. The effects of two recent large earthquakes, the 2004 Sumatra and the 2011 Tohoku-Oki, are studied. Results showed that the Sumatra and Tohoku-Oki earthquakes both caused the CF frame to rotate by at least tens of μas (micro-arc-second). Although the visible co-seismic displacements are identified and removed from the coordinate time-series, the rotation change due to the unidentified ones and errors in removal is non-negligible. Therefore, the rotation change in the orientation of the CF frame due to seismic deformation should be taken into account in the future in reference frame and geodesy applications.

  7. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  8. Documenting large earthquakes similar to the 2011 Tohoku-oki earthquake from sediments deposited in the Japan Trench over the past 1500 years

    NASA Astrophysics Data System (ADS)

    Ikehara, Ken; Kanamatsu, Toshiya; Nagahashi, Yoshitaka; Strasser, Michael; Fink, Hiske; Usami, Kazuko; Irino, Tomohisa; Wefer, Gerold

    2016-07-01

    The 2011 Tohoku-oki earthquake and tsunami was the most destructive geohazard in Japanese history. However, little is known of the past recurrence of large earthquakes along the Japan Trench. Deep-sea turbidites are potential candidates for understanding the history of such earthquakes. Core samples were collected from three thick turbidite units on the Japan Trench floor near the epicenter of the 2011 event. The uppermost unit (Unit TT1) consists of amalgamated diatomaceous mud (30-60 cm thick) deposited by turbidity currents triggered by shallow subsurface instability on the lower trench slope associated with strong ground motion during the 2011 Tohoku-oki earthquake. Older thick turbidite units (Units TT2 and TT3) also consist of several amalgamated subunits that contain thick sand layers in their lower parts. Sedimentological characteristics and the tectonic and bathymetric settings of the Japan Trench floor indicate that these turbidites also originated from two older large earthquakes, potentially similar to the 2011 Tohoku-oki earthquake. A thin tephra layer between Units TT2 and TT3 constrains the age of these earthquakes. Geochemical analysis of volcanic glass shards within the tephra layer indicates that it is correlative with the Towada-a tephra (AD 915) from the Towada volcano in northeastern Japan. The stratigraphy of the Japan Trench turbidites resembles that of onshore tsunami deposits on the Sendai and Ishinomaki plains, indicating that the cored uppermost succession of the Japan Trench comprises a 1500-yr-old record that includes the sedimentary fingerprint of the historical Jogan earthquake of AD 869.

  9. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    NASA Astrophysics Data System (ADS)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    rays from the hypocenter around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary of the low- and high-V zone. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated the strain and resulted in the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. Reference Matsubara, M. and K. Obara (2011) The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667. Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004) Recent progress of seismic observation networks in Japan-Hi-net, F-net, K-NET and KiK-net, Research News Earth Planets Space, 56, xv-xxviii.

  10. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Aamir, Ali; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  11. Rupture process of large earthquakes in the northern Mexico subduction zone

    NASA Astrophysics Data System (ADS)

    Ruff, Larry J.; Miller, Angus D.

    1994-03-01

    The Cocos plate subducts beneath North America at the Mexico trench. The northernmost segment of this trench, between the Orozco and Rivera fracture zones, has ruptured in a sequence of five large earthquakes from 1973 to 1985: the Jan. 30, 1973 Colima event (Ms 7.5) at the northern end of the segment near the Rivera fracture zone; the Mar. 14, 1979 Petatlan event (Ms 7.6) at the southern end of the segment on the Orozco fracture zone; the Oct. 25, 1981 Playa Azul event (Ms 7.3) in the middle of the Michoacan “gap”; the Sept. 19, 1985 Michoacan mainshock (Ms 8.1); and the Sept. 21, 1985 Michoacan aftershock (Ms 7.6) that reruptured part of the Petatlan zone. Body wave inversion for the rupture process of these earthquakes finds the best earthquake depth, focal mechanism, overall source time function, and seismic moment for each earthquake. In addition, we have determined spatial concentrations of seismic moment release for the Colima earthquake and the Michoacan mainshock and aftershock. These spatial concentrations of slip are interpreted as asperities, and the resultant asperity distribution for Mexico is compared to other subduction zones. The body wave inversion technique also determines the Moment Tensor Rate Functions, but there is no evidence for statistically significant changes in the moment tensor during rupture for any of the five earthquakes. An appendix describes the Moment Tensor Rate Functions methodology in detail. The systematic bias between global and regional determinations of epicentral locations in Mexico must be resolved to enable plotting of asperities with aftershocks and geographic features. We have spatially “shifted” all of our results to regional determinations of epicenters. The best point source depths for the five earthquakes are all above 30 km, consistent with the idea that the down-dip edge of the seismogenic plate interface in Mexico is shallow compared to other subduction zones. Consideration of uncertainties in

  12. Novel doorways and resonances in large-scale classical systems

    NASA Astrophysics Data System (ADS)

    Franco-Villafañe, J. A.; Flores, J.; Mateos, J. L.; Méndez-Sánchez, R. A.; Novaro, O.; Seligman, T. H.

    2011-05-01

    We show how the concept of doorway states carries beyond its typical applications and usual concepts. The scale on which it may occur is increased to large classical wave systems. Specifically we analyze the seismic response of sedimentary basins covered by water-logged clays, a rather common situation for urban sites. A model is introduced in which the doorway state is a plane wave propagating in the interface between the sediments and the clay. This wave is produced by the coupling of a Rayleigh and an evanescent SP-wave. This in turn leads to a strong resonant response in the soft clays near the surface of the basin. Our model calculations are compared with measurements during Mexico City earthquakes, showing quite good agreement. This not only provides a transparent explanation of catastrophic resonant seismic response in certain basins but at the same time constitutes up to this date the largest-scale example of the doorway state mechanism in wave scattering. Furthermore the doorway state itself has interesting and rather unusual characteristics.

  13. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  14. Precision Measurement of Large Scale Structure

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2001-01-01

    The purpose of this grant was to develop and to start to apply new precision methods for measuring the power spectrum and redshift distortions from the anticipated new generation of large redshift surveys. A highlight of work completed during the award period was the application of the new methods developed by the PI to measure the real space power spectrum and redshift distortions of the IRAS PSCz survey, published in January 2000. New features of the measurement include: (1) measurement of power over an unprecedentedly broad range of scales, 4.5 decades in wavenumber, from 0.01 to 300 h/Mpc; (2) at linear scales, not one but three power spectra are measured, the galaxy-galaxy, galaxy-velocity, and velocity-velocity power spectra; (3) at linear scales each of the three power spectra is decorrelated within itself, and disentangled from the other two power spectra (the situation is analogous to disentangling scalar and tensor modes in the Cosmic Microwave Background); and (4) at nonlinear scales the measurement extracts not only the real space power spectrum, but also the full line-of-sight pairwise velocity distribution in redshift space.

  15. The SCEC-USGS Dynamic Earthquake Rupture Code Comparison Exercise - Simulations of Large Earthquakes and Strong Ground Motions

    NASA Astrophysics Data System (ADS)

    Harris, R.

    2015-12-01

    I summarize the progress by the Southern California Earthquake Center (SCEC) and U.S. Geological Survey (USGS) Dynamic Rupture Code Comparison Group, that examines if the results produced by multiple researchers' earthquake simulation codes agree with each other when computing benchmark scenarios of dynamically propagating earthquake ruptures. These types of computer simulations have no analytical solutions with which to compare, so we use qualitative and quantitative inter-code comparisons to check if they are operating satisfactorily. To date we have tested the codes against benchmark exercises that incorporate a range of features, including single and multiple planar faults, single rough faults, slip-weakening, rate-state, and thermal pressurization friction, elastic and visco-plastic off-fault behavior, complete stress drops that lead to extreme ground motion, heterogeneous initial stresses, and heterogeneous material (rock) structure. Our goal is reproducibility, and we focus on the types of earthquake-simulation assumptions that have been or will be used in basic studies of earthquake physics, or in direct applications to specific earthquake hazard problems. Our group's goals are to make sure that when our earthquake-simulation codes simulate these types of earthquake scenarios along with the resulting simulated strong ground shaking, that the codes are operating as expected. For more introductory information about our group and our work, please see our group's overview papers, Harris et al., Seismological Research Letters, 2009, and Harris et al., Seismological Research Letters, 2011, along with our website, scecdata.usc.edu/cvws.

  16. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

    We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6 while for ML it is 1.0. The quantity (Mw - ML) linearly increases in the analysed magnitude range as ML decreases. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
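
    The chain of conversions described above, from stacked-spectrum potency to moment to Mw via the Hanks & Kanamori relation Mw = (2/3)(log10 M0 - 9.1), is small enough to sketch directly. The potency values and the 30 GPa rigidity below are assumptions; with that rigidity, a potency near 1 m^3 lands close to the Mw ~ 0.9 quoted above for an average ML = 0 event.

        import numpy as np

        # Potency (slip x area, in m^3) -> moment -> moment magnitude,
        # using M0 = mu * P and Mw = (2/3) * (log10(M0) - 9.1).
        mu = 3.0e10                              # Pa, assumed rigidity
        for potency in (1.0, 10.0, 100.0):       # m^3, synthetic values
            M0 = mu * potency                    # N m
            Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
            print(f"potency = {potency:6.1f} m^3 -> M0 = {M0:.1e} N m, Mw ~ {Mw:.2f}")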

  17. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Marriage, Tobias; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Eimer, J.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    Some of the most compelling inflation models predict a background of primordial gravitational waves (PGW) detectable by their imprint of a curl-like "B-mode" pattern in the polarization of the Cosmic Microwave Background (CMB). The Cosmology Large Angular Scale Surveyor (CLASS) is a novel array of telescopes to measure the B-mode signature of the PGW. By targeting the largest angular scales (>2°) with a multifrequency array, novel polarization modulation and detectors optimized for both control of systematics and sensitivity, CLASS sets itself apart in the field of CMB polarization surveys and opens an exciting new discovery space for the PGW and inflation. This poster presents an overview of the CLASS project.

  18. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  19. Potential for Large Transpressional Earthquakes along the Santa Cruz-Catalina Ridge, California Continental Borderland

    NASA Astrophysics Data System (ADS)

    Legg, M.; Kohler, M. D.; Weeraratne, D. S.; Castillo, C. M.

    2015-12-01

    Transpressional fault systems comprise networks of high-angle strike-slip and more gently-dipping oblique-slip faults. Large oblique-slip earthquakes may involve complex ruptures of multiple faults with both strike-slip and dip-slip. Geophysical data including high-resolution multibeam bathymetry maps, multichannel seismic reflection (MCS) profiles, and relocated seismicity catalogs enable detailed mapping of the 3-D structure of seismogenic fault systems offshore in the California Continental Borderland. Seafloor morphology along the San Clemente fault system displays numerous features associated with active strike-slip faulting including scarps, linear ridges and valleys, and offset channels. Detailed maps of the seafloor faulting have been produced along more than 400 km of the fault zone. Interpretation of fault geometry has been extended to shallow crustal depths using 2-D MCS profiles and to seismogenic depths using catalogs of relocated southern California seismicity. We examine the 3-D fault character along the transpressional Santa Cruz-Catalina Ridge (SCCR) section of the fault system to investigate the potential for large earthquakes involving multi-fault ruptures. The 1981 Santa Barbara Island (M6.0) earthquake was a right-slip event on a vertical fault zone along the northeast flank of the SCCR. Aftershock hypocenters define at least three sub-parallel high-angle fault surfaces that lie beneath a hillside valley. Mainshock rupture for this moderate earthquake appears to have been bilateral, initiating at a small discontinuity in the fault geometry (~5-km pressure ridge) near Kidney Bank. The rupture terminated to the southeast at a significant releasing step-over or bend and to the northeast within a small (~10-km) restraining bend. An aftershock cluster occurred beyond the southeast asperity along the East San Clemente fault. Active transpression is manifest by reverse-slip earthquakes located in the region adjacent to the principal displacement zone

  20. Chronology of historical tsunamis in Mexico and its relation to large earthquakes along the subduction zone

    NASA Astrophysics Data System (ADS)

    Suarez, G.; Mortera, C.

    2013-05-01

    The chronology of historical earthquakes along the subduction zone in Mexico spans a time period of approximately 400 years. Although the population density along the coast of Mexico has always been low relative to that of central Mexico, several reports of large subduction earthquakes include references to tsunamis invading the southern coast of Mexico. Here we present a chronology of historical tsunamis affecting the Pacific coast of Mexico and compare it with the historical record of subduction events and with the existing Mexican and worldwide catalogs of tsunamis in the Pacific basin. Due to the geographical orientation of the Pacific coast of Mexico, tsunamis generated on the other subduction zones of the Pacific have not had damaging effects in the country. Among the tsunamis generated by local earthquakes, the largest one by far is that produced by the earthquake of 28 March 1787. The reported tsunami had an inundation area that reached over 6 km inland, and the length of coast along which the tsunami was reported extends for over 450 km. In the last 100 years, two large tsunamis have been reported along the Pacific coast of Mexico. On 22 June 1932 a tsunami with reported wave heights of up to 11 m hit the coast of Jalisco and Colima. The town of Cuyutlan was heavily damaged and approximately 50 people lost their lives due to the impact of the tsunami. This unusual tsunami was generated by an aftershock (M 6.9) of the large 3 June 1932 event (M 8.1); the main shock of 3 June did not produce a perceptible tsunami. It has been proposed that the 22 June event was a tsunami earthquake generated on the shallow part of the subduction zone. On 16 November 1925 an unusual tsunami was reported in the town of Zihuatanejo in the state of Guerrero, Mexico. No earthquake on the Pacific rim occurred at the same time as this tsunami, and the historical record of hurricanes and tropical storms does not list the presence of a meteorological disturbance that

  1. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a link between the LSB and the underlying flare is clearly evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  2. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used to calculate the band structures of the proposed metamaterials.

  3. Large-scale dynamics and global warming

    SciTech Connect

    Held, I.M. )

    1993-02-01

    Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.

  4. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  5. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  6. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  7. Global large deep-focus earthquakes: Source process and cascading failure of shear instability as a unified physical mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wen, Lianxing

    2015-08-01

    We apply a multiple source inversion method to systematically study the source processes of 25 large deep-focus (depth >400 km) earthquakes with Mw > 7.0 from 1994 to 2012, based on waveform modeling of P, pP, SH and sSH wave data. The earthquakes are classified into three categories based on the spatial distributions and focal mechanisms of the inferred sub-events: 1) category one, with non-planar distribution and variable focal mechanisms of sub-events, represented by the 1994 Mw 8.2 Bolivia earthquake and the 2013 Mw 8.3 Okhotsk earthquake; 2) category two, with planar distribution but focal mechanisms inconsistent with the plane, including eighteen earthquakes; and 3) category three, with planar distribution and focal mechanisms consistent with the plane, including six earthquakes. We discuss possible physical mechanisms for earthquakes in each category in the context of plane rupture, transformational faulting and shear thermal instability. We suggest that the inferred source processes of large deep-focus earthquakes are best interpreted by cascading failure of shear thermal instabilities in pre-existing weak zones, with the perturbation of stress generated by one shear instability triggering another, and with the focal mechanisms of the sub-events controlled by the orientations of the pre-existing weak zones. The proposed mechanism can also explain the great variability of focal mechanisms, the presence of large CLVD (Compensated Linear Vector Dipole) components and the super-shear rupture of deep-focus earthquakes observed in previous studies. In addition, our studies suggest the existence of the relationships seismic moment ∼ (source duration)^3 and moment ∼ (source dimension)^3 in large deep-focus earthquakes.
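    The cube scaling quoted in the last sentence can be checked with a simple log-log regression of moment against duration; the numbers below are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical (seismic moment [N*m], source duration [s]) pairs, for illustration.
moments = np.array([1.2e20, 4.0e20, 1.5e21, 6.3e21, 2.0e22])
durations = np.array([9.0, 13.0, 21.0, 33.0, 48.0])

# Fit log10(M0) = p * log10(T) + c; an exponent p near 3 supports M0 ~ T^3.
p, c = np.polyfit(np.log10(durations), np.log10(moments), 1)
print(f"fitted exponent p = {p:.2f}")
```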

  8. Principles for selecting earthquake motions in engineering design of large dams

    USGS Publications Warehouse

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

    This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable to other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions depends on which of two types of engineering analyses is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map, whereas a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and its spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful. For example, peak motions at

  9. Hayward Fault: A 50-km-long Locked Patch Regulates Its Large Earthquake Cycle (Invited)

    NASA Astrophysics Data System (ADS)

    Lienkaemper, J. J.; Simpson, R. W.; Williams, P. L.; McFarland, F. S.; Caskey, S. J.

    2010-12-01

    We have documented a chronology of 11 paleoearthquakes on the southern Hayward fault (HS) preceding the Mw6.8, 1868 earthquake. These large earthquakes were both regular and frequent, as indicated by a 0.40 coefficient of variation and mean recurrence interval (MRI) of 161 ± 65 yr (1σ of recurrence intervals). Furthermore, the Oxcal-modeled probability distribution for the average interval resembles a Gaussian rather than a more irregular Brownian passage time distribution. Our revised 3D-modeling of subsurface creep, using newly updated long-term creep rates, now suggests there is only one ~50-km-long locked patch (instead of two), confined laterally between two large patches of deep creep (≥9 km), with an extent consistent with evidence for the 1868 rupture. This locked patch and the fault’s lowest rates of surface creep are approximately centered on HS’s largest bend and a large gabbro body, particularly where the gabbro forms both east and west faces of the fault. We suggest that this locked patch serves as a mechanical capacitor, limiting earthquake size and frequency. The moment accumulation over 161 yr summed on all locked elements of the model reaches Mw6.79, but if half of the moment stored in the creeping elements were to fail dynamically, Mw could reach 6.91. The paleoearthquake histories for nearby faults of the San Francisco Bay region appear to indicate less regular and frequent earthquakes, possibly because most lack the high proportion (40-60%) of aseismic release found on the Hayward fault. The northernmost Hayward fault and Rodgers Creek fault (RCF) appear to rupture only half as frequently as the HS and are separated from the HS by a creep buffer and 5-km wide releasing bend respectively, both tending to limit through-going ruptures. The paleoseismic record allows multi-segment, Hayward fault-RCF ruptures, but does not require it. The 1868 HS rupture preceded the 1906 multi-segmented San Andreas fault (SAF) rupture, perhaps because the HS
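    The recurrence statistics cited above (mean recurrence interval and coefficient of variation) follow directly from the dated event chronology. A minimal sketch with made-up event dates, not the paper's paleoseismic data:

```python
import numpy as np

# Hypothetical paleoearthquake dates (years CE), for illustration only.
event_years = np.array([300, 380, 560, 650, 870, 1000, 1100, 1315, 1440, 1725, 1868])

intervals = np.diff(event_years)            # recurrence intervals in years
mri = intervals.mean()                      # mean recurrence interval
cov = intervals.std(ddof=1) / mri           # coefficient of variation
print(f"MRI = {mri:.0f} yr, CoV = {cov:.2f}")
```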

  10. Further Investigation into the Seismic Nucleation Phase of Large Earthquakes with a Focus on Strike-Slip Events

    NASA Astrophysics Data System (ADS)

    Burkhart, E.; Ji, C.

    2012-12-01

    The dynamic motion of an earthquake begins abruptly, but is often initiated by a small interval of weak motion called the seismic nucleation phase (SNP), first named by Ellsworth and Beroza (1995). In their study, Ellsworth and Beroza (1995, 1996) concluded that the SNP was detectable in near-source records of all of the 41 M 1 to M 8 earthquakes they investigated, with the SNP accounting for ~0.5% of the total moment and lasting ~1/6 of the total duration. Concentrating on large earthquakes, Ji et al. (2010) investigated the SNP of 19 M ≥ 8.0 earthquakes since 1994 using a new approach applied to teleseismic broadband data. They found that ~50% of the earthquakes had a detectable SNP. Burkhart and Ji (2011) found that, of 68 M 7.5 to M 8.0 earthquakes since 1994, the SNP is clearly detectable in 31 events, with 27 events showing no nucleation phase and 10 having too much noise or not enough stations to tell. After making modifications to the stacking code allowing for more specific station choice, these earthquakes have all been re-examined, and a consistent finding is that strike-slip earthquakes are more likely to exhibit a clear SNP than normal or thrust earthquakes. Continuing to investigate these events, this study finds further conclusive evidence that large shallow, continental, strike-slip earthquakes show a clear SNP. We find that 11 of the 15 strike-slip earthquakes investigated show a clear SNP, with three having none (including the 2002 Mw 7.8 Denali Fault earthquake, which initiated as a thrust subevent), and one with not enough stations to perform stacking.

  11. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  12. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus is effectively a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach a compact state, with gyration radius scaling as N^1/3, suggesting that every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, scaling as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^-γ, where γ = 2 - β is slightly above one. The latter result is consistent with HiC data on real human interphase chromosomes, and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  13. Patterns of Seismicity Characterizing the Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Yoder, M. R.; Holliday, J. R.; Schultz, K.; Wilson, J. M.; Donnellan, A.; Grant Ludwig, L.

    2015-12-01

    A number of methods to calculate probabilities of major earthquakes have recently been proposed. Most of these methods depend upon understanding patterns of small earthquakes preceding the large events. For example, the Natural Time Weibull method for earthquake forecasting (see www.openhazards.com) is based on the assumption that large earthquakes complete the Gutenberg-Richter scaling relation defined by the smallest earthquakes. Here we examine the scaling patterns of small earthquakes occurring between cycles of large earthquakes. For example, in the region of California-Nevada between longitudes -130 to -114 degrees W, and latitudes 32 to 45 degrees North, we find 79 earthquakes with magnitudes M ≥ 6 during the time interval 1933 to the present, culminating with the most recent event, the M6.0 Napa, California earthquake of August 24, 2014. Thus we have 78 complete cycles of large earthquakes in this region. After compiling and stacking the smaller events occurring between the large events, we find a characteristic pattern of scaling for the smaller events. This pattern shows a Gutenberg-Richter scaling relation for the smallest earthquakes, up to about M3, and a relative deficit of events in the range between about M4.5 and M6; b-values of the small-magnitude scaling line are 0.85 for the entire interval 1933 to the present. Extrapolation of the small-magnitude scaling line indicates that the average cycle tends to be completed by a large earthquake having M~6.4. In addition, statistics indicate that departure of the successive earthquake cycles from their average pattern can be characterized by Coefficients of Variability and other measures. We discuss these ideas and apply them not only to California, but also to other seismically active areas in the world.
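    The b-values quoted above can be estimated from a catalog with the standard maximum-likelihood estimator; the sketch below is a generic implementation of that estimator, not necessarily the procedure used in the study.

```python
import numpy as np

def b_value_mle(magnitudes, completeness_mag, bin_width=0.1):
    """Aki (1965) maximum-likelihood b-value with the usual half-bin
    correction for magnitudes reported in bins of width bin_width."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= completeness_mag]
    return np.log10(np.e) / (m.mean() - (completeness_mag - bin_width / 2.0))

# Usage (catalog_mags is a placeholder for the regional catalog magnitudes):
# b = b_value_mle(catalog_mags, completeness_mag=3.0)
```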

  14. Irreversible thermodynamic model for accelerated moment release and atmospheric radon concentration prior to large earthquakes

    NASA Astrophysics Data System (ADS)

    Kawada, Y.; Nagahama, H.; Omori, Y.; Yasuoka, Y.; Shinogi, M.

    2006-12-01

    Accelerated moment release often precedes large earthquakes and is defined by the rate of cumulative Benioff strain following a power-law time-to-failure relation. This temporal seismicity pattern is investigated in terms of an irreversible thermodynamic model. The model is regulated by the Helmholtz free energy defined by the macroscopic stress-strain relation and internal state variables (generalized coordinates). Damage and damage evolution are represented by the internal state variables. In this framework, each of a large number of internal state variables has its own specific relaxation time, while their collective time evolution shows a temporal power-law behavior. The irreversible thermodynamic model reduces to a fiber-bundle model and an experimentally based constitutive law of rocks, and it predicts the form of accelerated moment release. Based on the model, we can also discuss the increase in atmospheric radon concentration prior to the 1995 Kobe earthquake.
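    The power-law time-to-failure relation referred to above is commonly written as Omega(t) = A + B (t_f - t)^m, where Omega is the cumulative Benioff strain and t_f is the failure time. A minimal fitting sketch, with placeholder arrays rather than the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def amr_model(t, A, B, t_f, m):
    """Power-law time-to-failure form of cumulative Benioff strain."""
    return A + B * (t_f - t) ** m

# t: event times (years); benioff: cumulative sqrt(seismic energy) at those times.
# popt, _ = curve_fit(amr_model, t, benioff,
#                     p0=[benioff[-1], -1.0, t[-1] + 1.0, 0.3])
```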

  15. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  16. The Validity and Reliability Work of the Scale That Determines the Level of the Trauma after the Earthquake

    ERIC Educational Resources Information Center

    Tanhan, Fuat; Kayri, Murat

    2013-01-01

    In this study, it was aimed to develop a short, comprehensible, easy, applicable, and appropriate for cultural characteristics scale that can be evaluated in mental traumas concerning earthquake. The universe of the research consisted of all individuals living under the effects of the earthquakes which occurred in Tabanli Village on 23.10.2011 and…

  17. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time function of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  18. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
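    Magnitudes for the newly detected events are described as coming from relative amplitudes; one common way to do this (a generic sketch, not necessarily the authors' exact calibration) scales magnitude by the log of the amplitude ratio to a catalogued reference event:

```python
import math

def relative_magnitude(ref_magnitude, amp_new, amp_ref):
    """Magnitude of a detected event from its amplitude relative to a
    reference (template) event at the same station, assuming one magnitude
    unit per factor-of-ten change in amplitude."""
    return ref_magnitude + math.log10(amp_new / amp_ref)

# Example: a detection with 1/30 the template amplitude is ~1.5 units smaller.
print(round(relative_magnitude(3.0, amp_new=1.0, amp_ref=30.0), 2))
```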

  19. Foreshock patterns preceding large earthquakes in the subduction zone of Chile

    NASA Astrophysics Data System (ADS)

    Minadakis, George; Papadopoulos, Gerassimos A.

    2016-04-01

    Some of the largest earthquakes in the globe occur in the subduction zone of Chile. Therefore, it is of particular interest to investigate foreshock patterns preceding such earthquakes. Foreshocks in Chile were recognized as early as 1960. In fact, the giant (Mw9.5) earthquake of 22 May 1960, which was the largest ever instrumentally recorded, was preceded by 45 foreshocks in a time period of 33 h before the mainshock, while 250 aftershocks were recorded in a 33 h time period after the mainshock. Four foreshocks were bigger than magnitude 7.0, including a magnitude 7.9 on May 21 that caused severe damage in the Concepcion area. More recently, Brodsky and Lay (2014) and Bedford et al. (2015) reported on foreshock activity before the 1 April 2014 large earthquake (Mw8.2). However, 3-D foreshock patterns in space, time and size have not been studied in depth so far. Since such studies require good seismic catalogues to be available, we have investigated 3-D foreshock patterns only before the recent, very large mainshocks occurring on 27 February 2010 (Mw 8.8), 1 April 2014 (Mw8.2) and 16 September 2015 (Mw8.4). Although our analysis does not depend on an a priori definition of short-term foreshocks, our interest focuses on the short-term time frame, that is, the last 5-6 months before the mainshock. The analysis of the 2014 event showed an excellent foreshock sequence consisting of an early, weak foreshock stage lasting for about 1.8 months and a main, strong precursory foreshock stage that evolved in the last 18 days before the mainshock. During the strong foreshock period the seismicity concentrated around the mainshock epicenter in a critical area of about 65 km, mainly along the trench domain to the south of the mainshock epicenter. At the same time, the activity rate increased dramatically, the b-value dropped and the mean magnitude increased significantly, while the level of seismic energy released also increased. In view of these highly significant seismicity

  20. Stress changes, focal mechanisms, and earthquake scaling laws for the 2000 dike at Miyakejima (Japan)

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Cesca, Simone; Aoki, Yosuke

    2015-06-01

    Faulting processes in volcanic areas result from a complex interaction of pressurized fluid-filled cracks and conduits with the host rock and the local and regional tectonic setting. Often, volcanic seismicity is difficult to decipher in terms of the physical processes involved, and there is a need for models relating the mechanics of volcanic sources to observations. Here we use focal mechanism data from the energetic swarm induced by the 2000 dike intrusion at Miyakejima (Izu Archipelago, Japan) to study the relation between the 3-D dike-induced stresses and the characteristics of the seismicity. We perform a clustering analysis on the focal mechanism (FM) solutions and relate them to the dike stress field and to the scaling relationships of the earthquakes. We find that the strike and rake angles of the FMs are strongly correlated and cluster on bands in a strike-rake plot. We suggest that this is consistent with optimally oriented faults according to the expected pattern of Coulomb stress changes. We calculate the frequency-size distribution of the clustered sets, finding that focal mechanisms with a large strike-slip component are consistent with the Gutenberg-Richter relation with a b value of about 1. Conversely, events with large normal faulting components deviate from the Gutenberg-Richter distribution with a marked roll-off on its right-hand tail, suggesting a lack of large-magnitude events (Mw > 5.5). This may result from the interplay of the limited thickness and lower rock strength of the layer of rock above the dike, where normal faulting is expected, and lower stress levels linked to the faulting style and low confining pressure.

  1. Impact of a Large San Andreas Fault Earthquake on Tall Buildings in Southern California

    NASA Astrophysics Data System (ADS)

    Krishnan, S.; Ji, C.; Komatitsch, D.; Tromp, J.

    2004-12-01

    In 1857, an earthquake of magnitude 7.9 occurred on the San Andreas fault, starting at Parkfield and rupturing in a southeasterly direction for more than 300 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. The strong shaking in the basins due to this earthquake would have had a significant long-period content (2-8 s). If such motions were to happen today, they could have a serious impact on tall buildings in Southern California. In order to study the effects of large San Andreas fault earthquakes on tall buildings in Southern California, we use the finite source of the magnitude 7.9 2002 Denali fault earthquake in Alaska and map it onto the San Andreas fault with the rupture originating at Parkfield and proceeding southward over a distance of 290 km. Using the SPECFEM3D spectral element seismic wave propagation code, we simulate a Denali-like earthquake on the San Andreas fault and compute ground motions at sites located on a grid with a 2.5-5.0 km spacing in the greater Southern California region. We subsequently analyze 3D structural models of an existing tall steel building designed in 1984 as well as one designed according to the current building code (Uniform Building Code, 1997) subjected to the computed ground motion. We use a sophisticated nonlinear building analysis program, FRAME3D, that has the ability to simulate damage in buildings due to three-component ground motion. We summarize the performance of these structural models on contour maps of carefully selected structural performance indices. This study could benefit the city in laying out emergency response strategies in the event of an earthquake on the San Andreas fault, in undertaking appropriate retrofit measures for tall buildings, and in formulating zoning regulations for new construction. In addition, the study would provide risk data associated with existing and new construction to insurance companies, real estate developers, and

  2. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  3. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β”-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  4. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  5. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3, proton fractions 0.05

  6. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  7. Large scale structures in transitional pipe flow

    NASA Astrophysics Data System (ADS)

    Hellström, Leo; Ganapathisubramani, Bharathram; Smits, Alexander

    2015-11-01

    We present a dual-plane snapshot POD analysis of transitional pipe flow at a Reynolds number of 3440, based on the pipe diameter. The time-resolved high-speed PIV data were simultaneously acquired in two planes, a cross-stream plane (2D-3C) and a streamwise plane (2D-2C) on the pipe centerline. The two light sheets were orthogonally polarized, allowing particles situated in each plane to be viewed independently. In the snapshot POD analysis, the modal energy is based on the cross-stream plane, while the POD modes are calculated using the dual-plane data. We present results on the emergence and decay of the energetic large scale motions during transition to turbulence, and compare these motions to those observed in fully developed turbulent flow. Supported under ONR Grant N00014-13-1-0174 and ERC Grant No. 277472.

  8. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  9. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  10. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  12. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
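    For readers unfamiliar with the problem class, the small example below applies a dense SQP-type solver (SciPy's SLSQP) to a toy nonlinear program with one equality and one inequality constraint. It illustrates only the kind of problem being solved; the algorithm described in this report targets large sparse problems and is not SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x0 - 1)^2 + (x1 - 2.5)^2 subject to x0 + x1 = 3 and x0 >= 0.5.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = (
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 3.0},   # equality constraint
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},        # inequality constraint
)
result = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x)  # approximately [0.75, 2.25]
```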

  13. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  14. Study of the Seismic Cycle of large Earthquakes in central Peru: Lima Region

    NASA Astrophysics Data System (ADS)

    Norabuena, E. O.; Quiroz, W.; Dixon, T. H.

    2009-12-01

    Since historical times, the Peruvian subduction zone has been the source of large and destructive earthquakes. The most damaging one occurred on 30 May 1970 offshore of Peru's northern city of Chimbote, with a death toll of 70,000 people and several hundred million US dollars in property damage. More recently, three contiguous plate interface segments in southern Peru completed their seismic cycle, generating the 1996 Nazca (Mw 7.1), the 2001 Atico-Arequipa (Mw 8.4) and the 2007 Pisco (Mw 7.9) earthquakes. GPS measurements obtained between 1994 and 2001 by IGP-CIW and University of Miami-RSMAS in the central Andes of Peru and Bolivia were used to estimate their coseismic displacements and the late stage of interseismic strain accumulation. Here, however, we focus on the central Peru-Lima region, which, with its roughly 9,000,000 inhabitants, is located over a locked plate interface that has not broken in magnitude Mw 8 earthquakes since May 1940, September 1966 and October 1974. We use a network of 11 GPS monuments to estimate the interseismic velocity field and to infer spatial variations of interplate coupling and their relation to the background seismicity of the region.

  15. Estimating high frequency energy radiation of large earthquakes by image deconvolution back-projection

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2016-09-01

    High frequency energy radiation of large earthquakes is a key to evaluating shaking damage and is an important source characteristic for understanding rupture dynamics. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP) to retrieve high frequency energy radiation of seismic sources by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a point source. The array response that spreads energy both in space and time is evaluated by using data of a smaller reference earthquake that can be assumed to be a point source. The synthetic test of the method shows that the spatial and temporal resolution of the source is much better than that for the conventional back-projection method. We applied this new method to the 2001 Mw 7.8 Kunlun earthquake using data recorded by Hi-net in Japan. The new method resolves a sharp image of the high frequency energy radiation with a significant portion of supershear rupture.
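    For context, the "observed image" that IDBP deconvolves comes from conventional back-projection, i.e. shifting each station's record by its predicted travel time to a candidate source point and stacking. A bare-bones sketch of that conventional step (a hypothetical helper, not the authors' code):

```python
import numpy as np

def backproject(waveforms, travel_times, dt):
    """Conventional shift-and-stack back-projection.

    waveforms    : (n_stations, n_samples) array of (enveloped) records
    travel_times : (n_stations, n_grid) predicted travel times in seconds
    dt           : sample interval in seconds
    Returns an (n_grid, n_samples) stacked image; np.roll wraps edge samples,
    which is acceptable for this illustration only.
    """
    n_stations, n_samples = waveforms.shape
    n_grid = travel_times.shape[1]
    image = np.zeros((n_grid, n_samples))
    for g in range(n_grid):
        for s in range(n_stations):
            shift = int(round(travel_times[s, g] / dt))
            image[g] += np.roll(waveforms[s], -shift)
        image[g] /= n_stations
    return image
```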

  16. Backprojection of GNSS total-electron content signals for recent large earthquakes

    NASA Astrophysics Data System (ADS)

    Mikesell, T. D.; Rolland, L.; Haney, M. M.; Larmat, C. S.; Lee, R.

    2015-12-01

    It is well known that earthquakes and tsunamis couple energy into the dynamically fluid atmosphere. This energy can propagate up to the ionosphere where we can observe perturbations in the total-electron content (TEC) signals measured by global navigation satellite systems (GNSS). Recent emphasis has been placed on using these new observables to characterize earthquake and tsunami hazards from space, as well as for planetary exploration. Backprojection is an array-based imaging technique used in seismology to characterize the seismic source location, including complex energy release patterns from large earthquakes. Here we present TEC backprojection results from 3 recent earthquakes - 1) 2009 Samoa triggered doublet (Mw 8.1), 2) 2011 Van dip-slip event (Mw 7.1) and 3) 2012 Haida Gwaii strike-slip underthrust event (Mw 7.8). Each of these events presents new obstacles to overcome if backprojection is to be used routinely to monitor hazards from space. We will discuss these obstacles in detail and present approaches to overcome them. For instance, one problem arises from the fact that the observation point is non-stationary in time because the satellites are moving. Another problem stems from the relative geometry of the geomagnetic field and the incoming acoustic wave at the ionosphere. Finally, we present array-based methods to reduce artifacts in the backprojection images, e.g. array deconvolution, and we show that under favorable circumstances, this approach can be used to characterize motion at the Earth surface from space with high temporal and spatial resolution.

  17. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150° C, coming quite close to typical operating conditions up to 125° C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
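    The practical impact of the lower activation energy enters through the Arrhenius term used to extrapolate stress-test lifetimes to operating temperature. A minimal sketch of that temperature acceleration factor (the current-density term of Black's equation is omitted, and the temperatures are example values only):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(ea_ev, t_stress_c, t_use_c):
    """Temperature acceleration factor exp(Ea/k * (1/T_use - 1/T_stress))
    for extrapolating electromigration lifetime from stress to use conditions."""
    t_stress_k = t_stress_c + 273.15
    t_use_k = t_use_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

# A lower activation energy gives a smaller acceleration factor, i.e. less
# lifetime margin when extrapolating from stress (300 C) to operation (125 C):
print(round(arrhenius_acceleration(0.83, 300.0, 125.0)))   # ~1.6e3
print(round(arrhenius_acceleration(0.90, 300.0, 125.0)))   # ~3.0e3
```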

  18. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipoles. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, tau.

  19. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  1. A Record of the in-Lake and Upland Response to Large Earthquakes, Lake Quinault, Washington

    NASA Astrophysics Data System (ADS)

    Leithold, E. L.; Wegmann, K. W.; Bohnenstiehl, D. R.; Smith, S. A.

    2014-12-01

    Lake Quinault, located at the foot of the Olympic Mountains in western Washington, has served as a trap for sediment delivered from the steep, landslide-prone terrain of the Upper Quinault River catchment since its formation between 20,000 and 29,000 years ago. High resolution seismic reflection and sedimentological data reveal a record of both the in-lake and upland response to large earthquakes that have impacted the region during that period. The sedimentary infill of Lake Quinault is dominated by deposition during river floods, which delivered both abundant siliciclastic sediment and plant debris to the lake bottom. Minor episodes of soft-sediment deformation at the lake margins are recorded, and based on a preliminary age model, may be related to known earthquakes, including the well documented 1700 AD Cascadia megathrust event. By far the most dramatic event in the middle-late Holocene record of Lake Quinault, however, is the lateral spreading and degassing of sediments on its gentle western slopes during an event ca. 1300 years ago. Abundant gas chimneys are visible in seismic stratigraphic profiles from this part of the lake. Several of these gas chimneys extend from the limit of seismic penetration at 15-20 m depth in the lake bed upward to the lake bottom where they terminate at mounds with evidence for active venting. Most of the gas chimneys, however, end abruptly around 2.5 m beneath the lake floor and are overlain by parallel, continuous reflectors. Piston cores show soft-sediment deformation at this level, and abrupt shifts in density, magnetic susceptibility, flood layer thickness, particle size, color, and inorganic geochemistry. We interpret these shifts to mark the contact between sediments that experienced shaking and degassing during a strong earthquake event and overlying sediments that have not experienced comparable seismicity. The earthquake evidently strongly affected the Upper Quinault River catchment, causing increased sediment input to

  2. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  3. The California Post-Earthquake Information Clearinghouse: A Plan to Learn From the Next Large California Earthquake

    NASA Astrophysics Data System (ADS)

    Loyd, R.; Walter, S.; Fenton, J.; Tubbesing, S.; Greene, M.

    2008-12-01

    In the rush to remove debris after a damaging earthquake, perishable data related to a wide range of impacts on the physical, built and social environments can be lost. The California Post-Earthquake Information Clearinghouse is intended to prevent this data loss by supporting the earth scientists, engineers, and social and policy researchers who will conduct fieldwork in the affected areas in the hours and days following the earthquake to study these effects. First called for by Governor Ronald Reagan following the destructive M6.5 San Fernando earthquake in 1971, the concept of the Clearinghouse has since been incorporated into the response plans of the National Earthquake Hazard Reduction Program (USGS Circular 1242). This presentation is intended to acquaint scientists with the purpose, functions, and services of the Clearinghouse. Typically, the Clearinghouse is set up in the vicinity of the earthquake within 24 hours of the mainshock and is maintained for several days to several weeks. It provides a location where field researchers can assemble to share and discuss their observations, plan and coordinate subsequent field work, and communicate significant findings directly to the emergency responders and to the public through press conferences. As the immediate response effort winds down, the Clearinghouse will ensure that collected data are archived and made available through "lessons learned" reports and publications that follow significant earthquakes. Participants in the quarterly meetings of the Clearinghouse include representatives from state and federal agencies, universities, NGOs and other private groups. Overall management of the Clearinghouse is delegated to the agencies represented by the authors above.

  4. Vertical stress transfer after large subduction zone earthquakes: 2007 Tocopilla /North Chile case study

    NASA Astrophysics Data System (ADS)

    Eggert, S.; Sobiesiak, M.; Victor, P.

    2011-12-01

    Large interplate subduction zone earthquakes occur on fault planes within the seismogenic interface which, in the case of Northern Chile, usually start to break at the down-dip end of the coupled interface, propagating towards the trench. Although the rupture is a horizontally oriented process, some vertical connectivity between the interface and the upper crust should be expected. We study two clusters of aftershock seismicity from the Mw 7.7, 2007, Tocopilla earthquake in Northern Chile. Both clusters seem to align along vertical profiles in the upper crust above the main shock rupture plane. The first cluster has a rather dissipative character at the up-dip limit of the rupture plane in the off-shore area around the Peninsula of Mejillones. It developed in the early stage of the aftershock sequence. The second cluster lies above the pronounced aftershock sequence of a secondary large Mw 6.9 slab-push event on 16 December 2007. This type of compressional event can occur after large thrust earthquakes. A comparison of the epicentral distribution of the crustal events belonging to the aftershock sequence suggests a possible relation to the Cerro Fortuna Fault in the Coastal Cordillera, which is a subsidiary fault strand of the major Atacama Fault Zone. We compute the Coulomb stress change on the respective faults of both clusters analyzed to see where slip is promoted or inhibited due to the slip on the subduction interface. We then combine these results with the spatial and temporal aftershock distribution, focal mechanism solutions, b-value mappings and geological evidence to understand the process behind the ascending seismicity clusters and their relation to the main shock of the major Tocopilla event.
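
    For reference, the Coulomb failure stress change resolved on a receiver fault is commonly written as dCFS = d_tau + mu_eff * d_sigma_n. The minimal sketch below applies that definition; the effective friction coefficient and the example stress values are chosen purely for illustration, since the study's actual numbers are not given in the abstract.

        # Coulomb failure stress change on a receiver fault (illustrative values)
        def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
            # d_tau: shear stress change in the slip direction (MPa)
            # d_sigma_n: normal stress change, positive = unclamping (MPa)
            # mu_eff: effective friction coefficient (0.4 is a common assumption)
            return d_tau + mu_eff * d_sigma_n

        # Positive values promote slip on the receiver fault, negative inhibit it.
        print(coulomb_stress_change(0.15, -0.05))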

  5. On the generation of large amplitude spiky solitons by ultralow frequency earthquake emission in the Van Allen radiation belt

    SciTech Connect

    Mofiz, U. A.

    2006-08-15

    The parametric coupling between earthquake-emitted circularly polarized electromagnetic radiation and ponderomotively driven ion-acoustic perturbations in the Van Allen radiation belt is considered. A cubic nonlinear Schroedinger equation for the modulated radiation envelope is derived and then solved analytically. For ultralow frequency earthquake emissions, large amplitude spiky supersonic bright solitons or subsonic dark solitons are found to be generated in the Van Allen radiation belt; their detection can be a tool for the prediction of a massive earthquake that may follow later.
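
    For orientation, a cubic nonlinear Schrödinger equation for a slowly varying envelope psi is generically of the form below; the specific dispersion and nonlinearity coefficients P and Q derived in the paper depend on the plasma parameters and are not reproduced here. Bright (spiky) envelope solitons are the standard solutions when PQ > 0 and dark solitons when PQ < 0, consistent with the bright/dark dichotomy mentioned above.

        i\,\frac{\partial \psi}{\partial \tau} + P\,\frac{\partial^{2} \psi}{\partial \xi^{2}} + Q\,\lvert\psi\rvert^{2}\,\psi = 0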

  6. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, a mineral-physics constrained non-linear transient viscous rheology and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time-scale, including a narrow subduction channel with low static friction, for two settings similar to Southern Chile in the region of the great Chile Earthquake of 1960 and Japan in the region of the Tohoku Earthquake of 2011. We next introduce in the same models a classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models start to generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformations of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the integration step from 40 s during the earthquake to minutes-to-5 years during postseismic and interseismic processes. We show that for the case of the Chile earthquake visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process already about 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations for the day-to-4-year time range. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can be best distinguished.
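
    As a point of reference for the friction law mentioned above, the sketch below writes out the standard Dieterich-Ruina rate-and-state formulation with the aging form of the state evolution. The parameter values are generic textbook-style choices (a < b gives the velocity-weakening behaviour needed for stick-slip), not the values used in the models.

        # Dieterich-Ruina rate-and-state friction with the aging law
        import numpy as np

        def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1.0e-6, d_c=0.01):
            # v: slip rate (m/s), theta: state variable (s), d_c: slip distance (m)
            return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)

        def theta_rate(v, theta, d_c=0.01):
            # aging-law state evolution: d(theta)/dt = 1 - v * theta / d_c
            return 1.0 - v * theta / d_c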

  7. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    NASA Astrophysics Data System (ADS)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW values are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress drop and the Wood-Anderson filter. For instance, it is shown that the stress drop controls the saturation of the ML scale, with low stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
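
    A minimal illustration of fitting a polynomial ML-MW scaling relation is sketched below; ordinary least squares stands in for the combined bootstrap and orthogonal L1 procedure described above, and the magnitude pairs are invented for the example.

        # Quadratic ML-to-MW scaling fit on invented magnitude pairs
        import numpy as np

        ml = np.array([1.5, 2.0, 2.6, 3.1, 3.7, 4.2, 4.8, 5.0])
        mw = np.array([1.9, 2.2, 2.7, 3.0, 3.6, 4.1, 4.7, 4.9])

        coeffs = np.polyfit(ml, mw, deg=2)    # MW = c2*ML^2 + c1*ML + c0
        mw_of_ml = np.poly1d(coeffs)
        print(mw_of_ml(3.5))                  # predicted MW for ML = 3.5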

  8. Observations of residual ULF signals from the Parkfield magnetometer surrounding large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bortnik, J.; Cutler, J. W.; Dunson, C.; Bleier, T.

    2005-12-01

    We use long-term (1999-2004) ULF data (<10 Hz) from a triaxial search-coil magnetometer located in Parkfield, California, to construct signal statistical quantities parametrized according to time of day, frequency range, coil orientation, season, and geomagnetic activity (Kp index). For each such parameter bin, we compute statistical quantities such as the mean, variance, median and quartiles of the magnetic signal, and use these quantities as the baseline values from which signals are assumed to deviate. We then examine time periods surrounding those of large, nearby earthquakes, and subtract the average and median signal values from the absolute signal values to obtain signal "residues". Results show that this technique can be effective in reducing large background variations and thereby increasing the signal-to-noise ratio (SNR), allowing much lower amplitude signals of local origin to be detected. To further increase the SNR, we superpose a number of large earthquake periods and discuss the results in light of possible seismogenic ULF signal sources.
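
    The baseline-and-residue idea described above can be sketched in a few lines; the column names and binning variables below are hypothetical stand-ins for the actual parametrization (time of day, frequency range, coil orientation, season, Kp).

        # Median-baseline subtraction sketch with hypothetical column names
        import pandas as pd

        def add_residue(df):
            # df columns assumed: 'amplitude', 'hour', 'kp_bin', 'freq_band'
            baseline = df.groupby(['hour', 'kp_bin', 'freq_band'])['amplitude'] \
                         .transform('median')
            df = df.copy()
            df['residue'] = df['amplitude'] - baseline
            return df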

  9. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  10. Determining fault zone structure and examining earthquake early warning signals using large datasets of seismograms

    NASA Astrophysics Data System (ADS)

    Lewis, Michael Antony

    Seismic signals associated with near-fault waveforms are examined to determine fault zone structure and scaling of earthquake properties with event magnitude. The subsurface structure of faults is explored using fault zone head and/or trapped waves, while various signals from the early parts of seismograms are investigated to find out the extent to which they scale with magnitude. Fault zone trapped waves are observed in three arrays of instruments across segments of the San Jacinto fault. Similarly to previous fault zone trapped wave studies, the low velocity damage zones are found to be 100-200 m wide and extend to a depth of ~3-5 km. Observation and modeling indicate that the damage zone was asymmetric around the fault trace. A similar sense of damage asymmetry was observed using detailed geological mapping by Dor et al. (2006) nearby on the San Jacinto fault at Anza. Travel time analysis and arrival time inversions of fault zone head waves were used to produce high resolution images of the fault structure of the San Andreas fault south of Hollister. The contrast of P wave velocities across the fault was found to be ~50% in the shallow section, lowering to 10-20% below 3 km, with the southwest side having faster velocities. Inversions making use of different subsets of stations suggest that a low velocity damage zone also exists in this area and that it is more prominent on the faster velocity side of the fault. The patterns of damage from these studies of fault zone head waves and trapped waves are consistent (Ben-Zion and Shi, 2005) with the theoretical prediction that earthquake ruptures on these fault sections have statistically-preferred propagation directions. The early parts of P waveforms are examined for signals that have previously been proposed to scale with the final event magnitude. Data from Turkey and a deep South African gold mine show that scaling is present in signals related to the maximum displacement amplitude and frequency content. The high

  11. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes.

    PubMed

    Corral, Alvaro

    2004-03-12

    Analyzing diverse seismic catalogs, we have determined that the probability densities of the earthquake recurrence times for different spatial areas and magnitude ranges can be described by a unique universal distribution if the time is rescaled with the rate of seismic occurrence, which therefore fully governs seismicity. The shape of the distribution shows the existence of clustering beyond the duration of aftershock bursts, and scaling reveals the self-similarity of the clustering structure in the space-time-magnitude domain. This holds from worldwide to local scales, for quite different tectonic environments and for all the magnitude ranges considered.
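
    A minimal sketch of the rescaling step described above: for one region and magnitude threshold, compute the recurrence times and rescale them by the mean rate of seismic occurrence; under the reported universality, the rescaled distributions from different regions and magnitude ranges collapse onto the same curve. The catalog-handling details (declustering, stationarity windows) used in the study are not reproduced here.

        # Rescale recurrence times by the mean rate for one region/threshold
        import numpy as np

        def rescaled_recurrence_times(event_times):
            dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
            rate = 1.0 / dt.mean()            # mean rate of seismic occurrence
            return rate * dt                  # dimensionless recurrence times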

  12. I. Rupture properties of large subduction earthquakes. II. Broadband upper mantle structure of western North America

    NASA Astrophysics Data System (ADS)

    Melbourne, Timothy Ian

    This thesis contains two studies, one of which employs geodetic data bearing on large subduction earthquakes to infer complexity of rupture duration, and the other is a high frequency seismological study of the upper mantle discontinuity structure under western North America and the East Pacific Rise. In the first part, we present Global Positioning System and tide gauge data which record the co-seismic deformation which accompanied the 1995 Mw 8.0 Jalisco event offshore central Mexico, the 1994 Mw 7.5 Sanriku event offshore Northern Honshu, Japan, and the 1995 Mw 8.1 Antofagasta earthquake offshore Northern Chile. In two of the three cases we find that the mainshocks were followed by significant amounts of rapid, post-seismic deformation which is best and most easily explained by continued slip near the co-seismic rupture patch. This is the first documented case of rapid slip migration following a large earthquake, and is pertinent to earthquake prediction based on precursory deformation. As the three GPS data sets represent the best observations of large subduction earthquakes to date and two of them show significant amounts of aseismic energy release, they strongly suggest silent faulting may be common in certain types of subduction zones. This, in turn, bears on estimates of global moment release, seismic coupling, and our understanding of the natural hazards associated with convergent margins. The second part of this dissertation utilizes high frequency body waves to infer the upper mantle structure of western North America and the East Pacific Rise. An uncharacteristically large Mw 5.9 earthquake located in Western Texas provided a vivid topside reflection off the 410 km velocity discontinuity ("410"), which we model to infer the fine details of this structure. We find that, contrary to conventional wisdom, the 410 is not sharp, and our results help reconcile seismic observations of 410 structure with laboratory predictions. By analyzing differences between our

  13. Forecasting earthquake-induced landslides at the territorial scale by means of PBEE approaches

    NASA Astrophysics Data System (ADS)

    Berni, N.; Fanelli, G.; Ponziani, F.; Salciarini, D.; Stelluti, M.; Tamagnini, C.

    2012-04-01

    Models for predicting earthquake-induced landslide susceptibility on a regional scale are the main tools used by the Civil Protection Agencies to issue warning alarms after seismic events and to evaluate possible seismic hazard conditions for different earthquake scenarios. We present a model for susceptibility analysis based on a deterministic approach that subdivides the study area into a finite number of cells, assumes for each cell a simplified infinite slope model and considers the earthquake shaking as the landslide triggering factor. In this case, the stability conditions of the slopes are related both to the slope features (in terms of mechanical properties, geometrical and topographical settings and pore pressure regime) and to the earthquake characteristics (in terms of intensity, duration and frequency). Therefore, for a territorial analysis, the proposed method determines the limit conditions of the slope, given the seismic input, soil strength parameters, slope and depth of slip surface, and groundwater conditions for every cell in the study area. The procedure is ideally suited for implementation on a GIS platform, in which the relevant information is stored for each cell. The seismic response of the slopes is analyzed by means of Newmark's permanent displacement method. In Newmark's approach, seismic slope stability is measured in terms of the ratio of the accumulated permanent displacement during the earthquake to the maximum allowable one, depending - in principle - on the definition of a tolerable damage level. The computed permanent displacement depends critically on the actual slope stability conditions, quantified by the critical acceleration, i.e., the seismic acceleration bringing the slope to a state of (instantaneous) limit equilibrium. This methodology is applied in a study of shallow earthquake-induced landslides in central Italy. The triggering seismic input is defined in terms of synthetic accelerograms, constructed from the response
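
    Newmark's rigid sliding-block method referenced above lends itself to a very short sketch: integrate the relative motion whenever the ground acceleration exceeds the critical acceleration of the slope. The implementation below is a simplified one-directional, explicit-Euler version for illustration only; the critical acceleration and the input accelerogram would come from the slope model and the synthetic ground motions.

        # Simplified one-directional rigid-block Newmark integration
        def newmark_displacement(acc, dt, a_c):
            # acc: ground acceleration samples (m/s^2), dt: time step (s)
            # a_c: critical (yield) acceleration of the slope (m/s^2)
            v = 0.0                           # relative sliding velocity
            d = 0.0                           # accumulated permanent displacement
            for a in acc:
                if v > 0.0 or a > a_c:        # block is sliding, or sliding starts
                    v = max(v + (a - a_c) * dt, 0.0)
                    d += v * dt
            return d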

  14. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since other-than-Weibull interevent time distributions are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] I. Eliazar and J. Klafter, 2006
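
    As a concrete illustration of the Kolmogorov-Smirnov step mentioned above, the sketch below fits a two-parameter Weibull distribution to a set of interevent times and tests it. The file name is a placeholder, and because the parameters are estimated from the same data the reported p-value is only approximate (a parametric bootstrap would be the more careful choice).

        # Fit a two-parameter Weibull and apply the Kolmogorov-Smirnov test
        import numpy as np
        from scipy import stats

        dt = np.loadtxt('interevent_times.txt')      # placeholder file name

        shape, loc, scale = stats.weibull_min.fit(dt, floc=0.0)
        ks_stat, p_value = stats.kstest(dt, 'weibull_min', args=(shape, loc, scale))
        print(ks_stat, p_value)   # approximate p-value: parameters were fitted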

  15. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  16. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  17. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  18. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc h^-1, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  19. Simulations of Large Scale Structures in Cosmology

    NASA Astrophysics Data System (ADS)

    Liao, Shihong

    Large-scale structures are powerful probes for cosmology. Due to the long range and non-linear nature of gravity, the formation of cosmological structures is a very complicated problem. The only known viable solution is cosmological N-body simulations. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and dark matter velocity field. The origin and evolution of angular momenta is an important ingredient for the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum - mass relation for haloes to offer a more complete picture about its origin, dependences on cosmological models and nonlinear evolutions. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.

  20. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.

  1. Large scale molecular simulations of nanotoxicity.

    PubMed

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications in de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.

  2. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention in large area and low cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  3. The Richter scale: its development and use for determining earthquake source parameters

    USGS Publications Warehouse

    Boore, D.M.

    1989-01-01

    The ML scale, introduced by Richter in 1935, is the antecedent of every magnitude scale in use today. The scale is defined such that a magnitude-3 earthquake recorded on a Wood-Anderson torsion seismometer at a distance of 100 km would write a record with a peak excursion of 1 mm. To be useful, some means are needed to correct recordings to the standard distance of 100 km. Richter provides a table of correction values, which he terms -log A0, the latest of which is contained in his 1958 textbook. A new analysis of over 9000 readings from almost 1000 earthquakes in the southern California region was recently completed to redetermine the -log A0 values. Although some systematic differences were found between this analysis and Richter's values (such that using Richter's values would lead to under- and overestimates of ML at distances less than 40 km and greater than 200 km, respectively), the accuracy of his values is remarkable in view of the small number of data used in their determination. Richter's corrections for the distance attenuation of the peak amplitudes on Wood-Anderson seismographs apply only to the southern California region, of course, and should not be used in other areas without first checking to make sure that they are applicable. Often in the past this has not been done, but recently a number of papers have been published determining the corrections for other areas. If there are significant differences in the attenuation within 100 km between regions, then the definition of the magnitude at 100 km could lead to difficulty in comparing the sizes of earthquakes in various parts of the world. To alleviate this, it is proposed that the scale be defined such that a magnitude 3 corresponds to 10 mm of motion at 17 km. This is consistent both with Richter's definition of ML at 100 km and with the newly determined distance corrections in the southern California region. Aside from the obvious (and original) use as a means of cataloguing earthquakes according
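
    To illustrate how a -log A0 distance correction is applied in practice, the short sketch below computes ML from a peak Wood-Anderson amplitude. The table entries are rounded, illustrative values (only the 3.0 anchor at 100 km is fixed by the definition quoted above), and a real implementation would interpolate between tabulated distances.

        # ML from a Wood-Anderson peak amplitude and a -log A0 correction
        import numpy as np

        # Rounded, illustrative -log A0 values keyed by epicentral distance in km
        neg_log_a0 = {10: 1.5, 50: 2.6, 100: 3.0, 200: 3.5}

        def local_magnitude(amp_mm, dist_km):
            # amp_mm: peak Wood-Anderson amplitude in mm; dist_km must be a table key
            return np.log10(amp_mm) + neg_log_a0[dist_km]

        print(local_magnitude(1.0, 100))   # the defining case: ML = 3.0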

  4. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  5. Normalized rupture potential for small and large earthquakes along the Pacific Plate off Japan

    NASA Astrophysics Data System (ADS)

    Tormann, Thessa; Wiemer, Stefan; Enescu, Bogdan; Woessner, Jochen

    2016-07-01

    We combine temporal variability in local seismic activity rates and size distributions to estimate the evolution of a Gutenberg-Richter-based metric, the normalized rupture potential (NRP), comparing differences between smaller and larger earthquakes. For the Pacific Plate off Japan, we study both complex spatial patterns and how they evolve over the last 18 years, and more detailed temporal characteristics in a simplified spatial selection, i.e., inside and outside the high-slip zone of the 2011 M9 Tohoku earthquake. We resolve significant changes, in particular an immediate NRP increase for large events prior to the Tohoku event in the subsequent high-slip patch, a very rapid decrease inside this high-stress-release area coupled with a lasting increase of NRP in the immediate surroundings. Even in the center of the Tohoku rupture, the NRP for large magnitudes has not dropped below the 12 year average and is not significantly different from conditions a decade before the M9 event.

  6. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and, equally important, the work was focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  7. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision and to allow for a connection to various types of observational data, geophysical, geodetical and geological, we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian fem model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle in cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small scale processes associated with localization phenomena requires a high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented Additive Schwarz type ILU based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500,000 degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, resulting in a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime, one minute, even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom for up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented Algebraic Multigrid type methods (AMG) from the ML library [Sala, 2006]. Since multigrid methods are most effective for single parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
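
    The ILU-preconditioned GMRES pattern discussed above can be sketched with standard sparse-solver tooling; the sketch below uses SciPy on a small stand-in matrix purely to show how the preconditioner is wired into the Krylov solver, not the parallel Schwarz/MUMPS/AMG machinery used in the actual code.

        # ILU-preconditioned GMRES on a small stand-in sparse system (SciPy)
        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import gmres, spilu, LinearOperator

        n = 2000
        A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                  shape=(n, n), format='csc')      # 1D Poisson stand-in
        b = np.ones(n)

        ilu = spilu(A, fill_factor=10, drop_tol=1e-5)
        M = LinearOperator(A.shape, ilu.solve)     # preconditioner wrapper

        x, info = gmres(A, b, M=M, maxiter=200)
        print(info)                                # 0 indicates convergence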

  8. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly-complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft and continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how are the interfaces ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?

  9. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  10. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
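
    To make the notion of complete synchronization concrete, the toy sketch below simulates a drive-response pair of two-node Boolean networks and checks that their state trajectories coincide from every initial condition after a transient. The update rules are made-up examples for illustration only; the paper itself works with the semi-tensor product representation and an aggregation algorithm, neither of which is reproduced here.

```python
# Toy illustration of complete synchronization of two coupled Boolean networks:
# a drive network updates freely and a response network receives the drive
# state as an input; synchronization holds once the two state trajectories
# coincide and stay identical. The update rules below are hypothetical.
from itertools import product

def drive(x):
    """Drive network: x = (x1, x2) -> next state."""
    x1, x2 = x
    return (x1 ^ x2, x1 and x2)

def response(y, x):
    """Response network driven by the drive state x (copies the drive dynamics)."""
    x1, x2 = x
    return (x1 ^ x2, x1 and x2)

def synchronizes(steps=20):
    """Check complete synchronization from every joint initial state."""
    for x0, y0 in product(product((0, 1), repeat=2), repeat=2):
        x, y = x0, y0
        for _ in range(steps):
            x, y = drive(x), response(y, x)   # response sees the previous drive state
        if x != y:
            return False
    return True

print(synchronizes())   # True for this trivially coupled pair
```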

  11. Estimation of source parameters and scaling relations for moderate size earthquakes in North-West Himalaya

    NASA Astrophysics Data System (ADS)

    Kumar, Vikas; Kumar, Dinesh; Chopra, Sumer

    2016-10-01

    The scaling relation and self-similarity of the earthquake process have been investigated by estimating the source parameters of 34 moderate size earthquakes (mb 3.4-5.8) that occurred in the NW Himalaya. The spectral analysis of body waves from 217 accelerograms recorded at 48 sites has been carried out in the present analysis. Brune's ω⁻² model has been adopted for this purpose. The average ratio of the P-wave corner frequency, fc(P), to the S-wave corner frequency, fc(S), has been found to be 1.39, with fc(P) > fc(S) for 90% of the events analyzed here. This implies a shift in the corner frequency, in agreement with many other similar studies done for different regions. The static stress drop values for all the events analyzed here lie in the range 10-100 bars, with an average stress drop value of the order of 43 ± 19 bars for the region. This suggests that the likely estimate of the dynamic stress drop, which is 2-3 times the static stress drop, is in the range of about 80-120 bars. This suggests relatively high seismic hazard in the NW Himalaya, as high frequency strong ground motions are governed by the stress drop. The estimated values of stress drop do not show significant variation with seismic moment for the range 5 × 10¹⁴-2 × 10¹⁷ N m. This observation, along with the cube root scaling of corner frequencies, suggests the self similarity of the moderate size earthquakes in the region. The scaling relation between seismic moment and corner frequency, M₀ fc³ = 3.47 × 10¹⁶ N m/s³, estimated in the present study can be utilized to estimate the source dimension given the seismic moment of the earthquake for the hazard assessment. The present study puts constraints on the important parameters, stress drop and source dimension, required for the synthesis of strong ground motion from future expected earthquakes in the region. Therefore, the present study is useful for seismic hazard and risk related studies for the NW Himalaya.
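
    As a numerical illustration of how such a scaling relation can be used, the sketch below combines the M₀ fc³ constant quoted above with the Brune (1970) circular-source relations to estimate corner frequency, source radius and static stress drop. The shear-wave speed is an assumed illustrative value, not a parameter taken from the study.

```python
# Sketch: corner frequency, source radius and stress drop implied by the
# scaling relation M0 * fc^3 = 3.47e16 N·m/s^3 quoted in the abstract,
# combined with the Brune (1970) relation r = 2.34 * beta / (2*pi*fc).
# The shear-wave speed is an assumed value, not taken from the paper.
import math

SCALING_CONST = 3.47e16   # N·m/s^3, M0*fc^3 from the abstract
BETA = 3500.0             # m/s, assumed crustal shear-wave speed

def corner_frequency(m0):
    """Corner frequency (Hz) implied by the M0*fc^3 scaling relation."""
    return (SCALING_CONST / m0) ** (1.0 / 3.0)

def brune_radius(fc, beta=BETA):
    """Brune source radius (m) for an S-wave corner frequency fc."""
    return 2.34 * beta / (2.0 * math.pi * fc)

def static_stress_drop(m0, r):
    """Static stress drop (Pa) for a circular crack of radius r."""
    return 7.0 * m0 / (16.0 * r ** 3)

if __name__ == "__main__":
    for m0 in (5e14, 1e16, 2e17):          # N·m, span of the study
        fc = corner_frequency(m0)
        r = brune_radius(fc)
        print(f"M0={m0:.1e} N·m  fc={fc:.2f} Hz  r={r/1000:.2f} km  "
              f"stress drop={static_stress_drop(m0, r)/1e5:.1f} bar")
```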

  12. Characterizing Mega-Earthquake Related Tsunami on Subduction Zones without Large Historical Events

    NASA Astrophysics Data System (ADS)

    Williams, C. R.; Lee, R.; Astill, S.; Farahani, R.; Wilson, P. S.; Mohammed, F.

    2014-12-01

    Due to recent large tsunami events (e.g., Chile 2010 and Japan 2011), the insurance industry is very aware of the importance of managing its exposure to tsunami risk. There are currently few tools available to help establish policies for managing and pricing tsunami risk globally. As a starting point and to help address this issue, Risk Management Solutions Inc. (RMS) is developing a global suite of tsunami inundation footprints. This dataset will include representations of historical events as well as a series of M9 scenarios on subduction zones that have not historically generated mega earthquakes. The latter set is included to address concerns about the completeness of the historical record for mega earthquakes. This concern stems from the fact that the Tohoku, Japan earthquake was considerably larger than had been observed in the historical record. Characterizing the source and rupture pattern for the subduction zones without historical events is a poorly constrained process. In many cases, the subduction zones can be segmented based on changes in the characteristics of the subducting slab or major ridge systems. For this project, the unit sources from the NOAA propagation database are utilized to leverage the basin wide modeling included in this dataset. The length of the rupture is characterized based on subduction zone segmentation, and the slip per unit source can be determined based on the event magnitude (i.e., M9) and moment balancing. As these events have not occurred historically, there is little to constrain the slip distribution. Sensitivity tests on the potential rupture pattern have been undertaken comparing uniform slip to higher shallow slip and tapered slip models. Subduction zones examined include the Makran Trench, the Lesser Antilles and the Hikurangi Trench. The ultimate goal is to create a series of tsunami footprints to help insurers understand their exposures at risk to tsunami inundation around the world.
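
    The moment-balancing step described above reduces to simple arithmetic once a rigidity and a rupture area are assumed. The sketch below derives the average slip for a hypothetical M9 scenario built from a given number of unit sources; the rigidity and unit-source dimensions are illustrative assumptions, not the NOAA propagation database values.

```python
# Sketch of the moment-balancing arithmetic: given a target magnitude (e.g. Mw 9)
# and a segmented rupture built from unit sources, the average slip follows from
# M0 = mu * A * D. Rigidity and unit-source size are illustrative assumptions.
MU = 4.0e10                 # Pa, assumed rigidity for the subduction interface

def moment_from_mw(mw):
    """Seismic moment (N·m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def average_slip(mw, n_unit_sources, unit_length_km=100.0, unit_width_km=50.0):
    """Average slip (m) over a rupture made of n unit sources."""
    area = n_unit_sources * unit_length_km * unit_width_km * 1e6   # m^2
    return moment_from_mw(mw) / (MU * area)

if __name__ == "__main__":
    # e.g. an Mw 9 scenario spanning 10 unit sources (1000 km x 50 km)
    print(f"average slip ~ {average_slip(9.0, 10):.1f} m")
```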

  13. Three time scales of earthquake clustering inferred from in-situ 36Cl cosmogenic dating on the Velino-Magnola fault (Central Italy)

    NASA Astrophysics Data System (ADS)

    Schlagenhauf, A.; Manighetti, I.; Benedetti, L.; Gaudemer, Y.; Malavieille, J.; Finkel, R. C.; Pou, K.

    2010-12-01

    Using in-situ 36Cl cosmogenic exposure dating, we determine the earthquake slip release pattern over the last ~ 14 kyrs along one of the major active normal fault systems in Central Italy. The ~ 40 km-long Velino-Magnola fault (VMF) is located ~ 20 km SW from the epicenter of the devastating April 2009 l’Aquila earthquake. We sampled the VMF at five well-separated sites along its length, and modeled the 36Cl concentrations measured in the 400 samples (Schlagenhauf et al. 2010). We find that the fault has broken in large earthquakes which clustered at three different time scales -monthly, centennial and millennial. More precisely, the fault sustained phases of intense seismic activity, separated by ~ 3 kyr-long periods of relative quiescence. The phases of strong activity lasted 3-4 kyrs (millennial scale) and included 3-4 ‘rupture events’ that repeated every 0.5-1 kyr (centennial scale). Each of these ‘rupture events’ was likely a sequence of a few large earthquakes cascading in a very short time, a few months at most (monthly scale), to eventually break the entire VMF. Each earthquake apparently broke a section of the fault of 10-20 km and produced maximum surface displacements of 2-3.5 meters. The fault seems to enter a phase of intense activity when the accumulated strain reaches a specific threshold. Based on this observation, the Velino-Magnola fault seems presently in a stage of relative quiescence. Yet, it may soon re-enter a phase of paroxysmal seismic activity. If its forthcoming earthquakes are similar to those we have documented, several may occur in cascade over a short time, each with a magnitude up to 6.5-6.9. Seismic hazard is thus high in the Lazio-Abruzzo region, especially in the Fucino area. References: Schlagenhauf A., Y. Gaudemer, L. Benedetti, I. Manighetti, L. Palumbo, I. Schimmelpfennig, R. Finkel, and K. Pou (2010). Using in-situ Chlorine-36 cosmonuclide to recover past earthquake histories on limestone normal fault scarps: A

  14. Seismic imaging of structural heterogeneity in Earth's mantle: evidence for large-scale mantle flow.

    PubMed

    Ritsema, J; Van Heijst, H J

    2000-01-01

    Systematic analyses of earthquake-generated seismic waves have resulted in models of three-dimensional elastic wavespeed structure in Earth's mantle. This paper describes the development and the dominant characteristics of one of the most recently developed models. This model is based on seismic wave travel times and wave shapes from over 100,000 ground motion recordings of earthquakes that occurred between 1980 and 1998. It shows signatures of plate tectonic processes to a depth of about 1,200 km in the mantle, and it demonstrates the presence of large-scale structure throughout the lower 2,000 km of the mantle. Seismological analyses make it increasingly convincing that geologic processes shaping Earth's surface are intimately linked to physical processes in the deep mantle.

  15. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N²) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N²) for the 2-point correlation, O(N³) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N²). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N²). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N²). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N²). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
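
    As a small illustration of why tree-based methods matter here, the sketch below solves the AllNN subroutine with a kd-tree (SciPy's cKDTree) instead of the naive O(N²) double loop. This is a simple single-tree approach for intuition only, not the multitree machinery developed in the work above.

```python
# Illustration of the AllNN subroutine named above, using a kd-tree instead of
# the naive O(N^2) double loop. The datasets are random placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
references = rng.uniform(size=(100_000, 3))   # reference dataset
queries = rng.uniform(size=(10_000, 3))       # query dataset

tree = cKDTree(references)
# k nearest neighbours of every query point; distances and indices
dist, idx = tree.query(queries, k=5)
print(dist.shape, idx.shape)                  # (10000, 5) (10000, 5)
```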

  16. Modeling Recent Large Earthquakes Using the 3-D Global Wave Field

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, V.; Kanamori, H.; Tromp, J.

    2003-04-01

    We use the spectral-element method (SEM) to accurately compute waveforms at periods of 40 s and longer for three recent large earthquakes using 3D Earth models and finite source models. The Mw 7.6, Jan 26, 2001, Bhuj, India event had a small rupture area and is well modeled at long periods with a point source. We use this event as a calibration event to investigate the effects of 3-D Earth models on the waveforms. The Mw 7.9, Nov 11, 2001, Kunlun, China, event exhibits a large directivity (an asymmetry in the radiation pattern) even at periods longer than 200 s. We used the source time function determined by Kikuchi and Yamanaka (2001) and the overall pattern of slip distribution determined by Lin et al. to guide the waveform modeling. The large directivity is consistent with a long fault, at least 300 km, and an average rupture speed of 3±0.3 km/s. The directivity at long periods is not sensitive to variations in the rupture speed along strike as long as the average rupture speed is constant. Thus, local variations in rupture speed cannot be ruled out. The rupture speed is a key parameter for estimating the fracture energy of earthquakes. The Mw 8.1, March 25, 1998, event near the Balleny Islands on the Antarctic Plate exhibits large directivity in long period surface waves, similar to the Kunlun event. Many slip models have been obtained from body waves for this earthquake (Kuge et al. (1999), Nettles et al. (1999), Antolik et al. (2000), Henry et al. (2000) and Tsuboi et al. (2000)). We used the slip model from Henry et al. to compute SEM waveforms for this event. The synthetic waveforms show a good fit to the data at periods from 40-200 s, but the amplitude and directivity at longer periods are significantly smaller than observed. Henry et al. suggest that this event comprised two subevents with one triggering the other at a distance of 100 km. To explain the observed directivity however, a significant amount of slip is required between the two subevents

  17. The most recent large earthquake on the Rodgers Creek fault, San Francisco bay area

    USGS Publications Warehouse

    Hecker, S.; Pantosti, D.; Schwartz, D.P.; Hamilton, J.C.; Reidy, L.M.; Powers, T.J.

    2005-01-01

    The Rodgers Creek fault (RCF) is a principal component of the San Andreas fault system north of San Francisco. No evidence appears in the historical record of a large earthquake on the RCF, implying that the most recent earthquake (MRE) occurred before 1824, when a Franciscan mission was built near the fault at Sonoma, and probably before 1776, when a mission and presidio were built in San Francisco. The first appearance of nonnative pollen in the stratigraphic record at the Triangle G Ranch study site on the south-central reach of the RCF confirms that the MRE occurred before local settlement and the beginning of livestock grazing. Chronological modeling of earthquake age using radiocarbon-dated charcoal from near the top of a faulted alluvial sequence at the site indicates that the MRE occurred no earlier than A.D. 1690 and most likely occurred after A.D. 1715. With these age constraints, we know that the elapsed time since the MRE on the RCF is more than 181 years and less than 315 years and is probably between 229 and 290 years. This elapsed time is similar to published recurrence-interval estimates of 131 to 370 years (preferred value of 230 years) and 136 to 345 years (mean of 205 years), calculated from geologic data and a regional earthquake model, respectively. Importantly, then, the elapsed time may have reached or exceeded the average recurrence time for the fault. The age of the MRE on the RCF is similar to the age of prehistoric surface rupture on the northern and southern sections of the Hayward fault to the south. This suggests possible rupture scenarios that involve simultaneous rupture of the Rodgers Creek and Hayward faults. A buried channel is offset 2.2 (+ 1.2, - 0.8) m along one side of a pressure ridge at the Triangle G Ranch site. This provides a minimum estimate of right-lateral slip during the MRE at this location. Total slip at the site may be similar to, but is probably greater than, the 2 (+ 0.3, - 0.2) m measured previously at the

  18. Large Historical Tsunamigenic Earthquakes in Italy: The Neglected Tsunami Research Point of View

    NASA Astrophysics Data System (ADS)

    Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.

    2015-12-01

    It is known that tsunamis are rather rare events, especially when compared to earthquakes, and the Italian coasts are no exception. Nonetheless, it is striking that 6 out of the 10 earthquakes that occurred in the last thousand years in Italy with an equivalent moment magnitude equal to or larger than 7 were accompanied by destructive or heavily damaging tsunamis. If we extend the lower limit of the equivalent moment magnitude down to 6.5, the percentage decreases (to around 40%), but it is still significant. Famous events like those that occurred on 30 July 1627 in Gargano, on 11 January 1693 in eastern Sicily, and on 28 December 1908 in the Messina Straits are part of this list: they were all characterized by maximum run-ups of several meters (13 m for the 1908 tsunami), significant maximum inundation distances, and large (although not precisely quantifiable) numbers of victims. Further evidence provided in the last decade by paleo-tsunami deposit analyses helps to better characterize the tsunami impact and confirms that none of the cited events can be reduced to local or secondary effects. Proper analysis and simulation of available tsunami data would then appear as an obvious part of the correct definition of the sources responsible for the largest Italian tsunamigenic earthquakes, in a process in which different datasets analyzed by different disciplines must be reconciled rather than put into contrast with each other. Unfortunately, macroseismic, seismic and geological/geomorphological observations and data are typically assigned much heavier weights, and inland faults are often given larger credit than offshore ones, even when tsunami simulations provide evidence that they are not at all capable of justifying the observed tsunami effects. Tsunami generation is instead imputed a priori to merely supposed, and sometimes even non-existent, submarine landslides. We try to summarize the tsunami research point of view on the largest Italian historical tsunamigenic

  19. Comparative analysis of the tsunami and large earthquake occurrence in the Pacific.

    NASA Astrophysics Data System (ADS)

    Levin, Boris; Sasorova, Elena

    2014-05-01

    The data about tsunami events from 1900 to 2012 with M>=7.5, tsunami intensity I>=1, which have a tectonic origin and a validity level V=4, were extracted from two tsunami databases: the Expert Tsunami Data Base for the Pacific (ETDB/PAC), Novosibirsk, Russia (http://tsun.sscc.ru/htdbpac), and the Tsunami Event and Runup Database at NOAA www.tsunami.noaa.gov/observations_data. The total number of chosen events was 108. The temporal distributions of the tsunamigenic earthquake (TEQ) epicenters and the distributions of the energy released by the TEQ were calculated separately for the entire Pacific region, for the Southern hemisphere (SH), and for the Northern hemisphere (NH), as well as for a number of sub-regions of the Pacific: Japan, Central America, South America, Alaska, the Aleutian arc and the Kuril-Kamchatka arc. Next, we use two subsets of the worldwide NEIC earthquake (EQ) catalog (USGS/NEIC from 1973 up to 2012 and Significant Worldwide Earthquakes (2150 B.C. - 1994 A.D.)). The total number of chosen events was 615. A preliminary standardization of magnitudes was performed. The temporal EQ distributions were calculated separately for the entire Pacific region, for the SH, for the NH, and for eighteen latitudinal belts: 90°-80°N, 80°-70°N, 70°-60°N, 60°-50°N and so on (the size of each belt is equal to 10°). In both cases (for the seismic events and for the TEQ), the entire observation period was divided into several five-year intervals. We also calculated two-dimensional spatio-temporal distributions of the EQ (TEQ) density and the released energy density. A comparative analysis of the obtained distributions (for the large EQ and for the TEQ) was carried out. It was found that the latitudinal distributions of the energy density for the great EQ and for the TEQ are completely different. The analysis showed periodic changes of seismic activity in different time intervals. According to our estimations the periodic

  20. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  1. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low and the magnitudes of recent earthquakes are mostly less than 4.0. However, the historical earthquake record of South Korea reveals that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate potential seismic hazard in the Korean Peninsula, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. A total of 1606 vertical component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used for finding the magnitude relationships. The peak displacement of seismograms recorded on broadband seismometers shows less scatter than the peak velocity. The scatter of the peak displacement and that of the peak velocity of accelerograms are similar to each other. The peak displacement of seismograms differs from that of accelerograms, which means that two different magnitude relationships, one for each type of data, should be developed. The maximum predominant period of the initial P wave is estimated after applying two low-pass filters, 3 Hz and 10 Hz, and the 10 Hz low-pass filter yields a better estimate than the 3 Hz filter. It is found that most of the peak amplitudes and the maximum predominant periods are estimated within 1 sec after triggering.
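
    A rough sketch of the two P-wave parameters named above is given below: the peak displacement in a short window after the P arrival and the maximum predominant period, the latter via the standard recursive estimator of Allen and Kanamori (2003). The window length, smoothing constant and the magnitude-relation coefficients are illustrative assumptions, not the values derived in this study.

```python
# Sketch of early-warning P-wave parameters: peak displacement (Pd) and maximum
# predominant period (tau_p_max). The coefficients a, b, c of the magnitude
# relation are hypothetical placeholders, not the study's regression results.
import numpy as np

def peak_displacement(displacement, dt, window_s=3.0):
    """Absolute peak of the displacement trace in the first window_s seconds."""
    n = int(window_s / dt)
    return np.max(np.abs(displacement[:n]))

def tau_p_max(velocity, dt, alpha=0.99, window_s=3.0):
    """Maximum predominant period (s) of the initial P wave (recursive estimator)."""
    accel = np.gradient(velocity, dt)
    n = int(window_s / dt)
    x = d = 0.0
    taus = []
    for v, a in zip(velocity[:n], accel[:n]):
        x = alpha * x + v * v
        d = alpha * d + a * a
        taus.append(2.0 * np.pi * np.sqrt(x / d) if d > 0 else 0.0)
    return max(taus)

def magnitude_from_pd(pd_cm, dist_km, a=1.3, b=1.4, c=5.0):
    """Hypothetical Pd magnitude relation M = a*log10(Pd) + b*log10(R) + c."""
    return a * np.log10(pd_cm) + b * np.log10(dist_km) + c
```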

  2. Large-Scale Processing of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Finn, John; Sridhar, K. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1998-01-01

    Scale-up difficulties and high energy costs are two of the more important factors that limit the availability of various types of nanotube carbon. While several approaches are known for producing nanotube carbon, the high-powered reactors typically produce nanotubes at rates measured in only grams per hour and operate at temperatures in excess of 1000 C. These scale-up and energy challenges must be overcome before nanotube carbon can become practical for high-consumption structural and mechanical applications. This presentation examines the issues associated with using various nanotube production methods at larger scales, and discusses research being performed at NASA Ames Research Center on carbon nanotube reactor technology.

  3. The Mini-IPIP Scale: psychometric features and relations with PTSD symptoms of Chinese earthquake survivors.

    PubMed

    Li, Zhongquan; Sang, Zhiqin; Wang, Li; Shi, Zhanbiao

    2012-10-01

    The present purpose was to validate the Mini-IPIP scale, a short measure of the five-factor model personality traits, with a sample of Chinese earthquake survivors. A total of 1,563 participants, ages 16 to 85 years, completed the Mini-IPIP scale and a measure of posttraumatic stress disorder (PTSD) symptoms. Confirmatory factor analysis supported the five-factor structure of the Mini-IPIP, with adequate values of various fit indices. The scale also showed adequate internal consistency: Cronbach's alphas ranged from .79 to .84, and McDonald's omega ranged from .73 to .82 for scores on each subscale. Moreover, the five personality traits measured by the Mini-IPIP and those assessed by other Big Five measures had comparable patterns of relations with PTSD symptoms. Findings indicated that the Mini-IPIP is an adequate short form of the Big Five factors of personality, which is applicable with natural disaster survivors.

  4. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  5. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  6. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  7. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  8. Comparison of co-seismic and post-seismic slip of large earthquakes in southern Peru and northern Chile

    NASA Astrophysics Data System (ADS)

    Pritchard, M. E.; Ji, C.; Simons, M.; Klotz, J.

    2003-12-01

    We use InSAR, GPS, and seismic data to constrain the location of co-seismic and post-seismic slip on the subduction interface in southern Peru and northern Chile. We focus on the July 30, 1995, Mw 8.1 and the January 30, 1998, Mw 7.1 northern Chile earthquakes as well as the November 12, 1996, Mw 7.7 and June 23, 2001, Mw 8.4 southern Peru earthquakes. For all four earthquakes, we invert body-wave seismic waveforms and geodetic data (InSAR for all earthquakes and GPS where available) both jointly and separately for co-seismic slip. In northern Chile, we constrain the temporal and spatial evolution of post-seismic after-slip using about 30 interferograms spanning 1995-2000 and GPS data from the German SAGA array (spanning 1995-1997, including vertical displacements). In southern Peru, we use InSAR data, and GPS data from the Arequipa station to constrain post-seismic after-slip. Comparison of these events provides insight into the rupture process of large subduction zone earthquakes and the mechanisms of post-seismic after-slip. The plate tectonic setting for all the earthquakes is similar (convergence rate, plate age, etc.), but the amount of post-seismic after-slip is different. There is significant slip after the 2001 earthquake (equivalent to approximately 20% of the co-seismic moment), but compared to other recent subduction zone earthquakes, there is little slip following the other three events. The different amounts of post-seismic slip are not obviously related to differences in the dynamic ruptures of each event, but might be related to along-strike variations in material properties (like sediment thickness).

  9. Earthquake source scaling and self-similarity estimation from stacking P and S spectra

    NASA Astrophysics Data System (ADS)

    Prieto, GermáN. A.; Shearer, Peter M.; Vernon, Frank L.; Kilb, Debi

    2004-08-01

    We study the scaling relationships of source parameters and the self-similarity of earthquake spectra by analyzing a cluster of over 400 small earthquakes (ML = 0.5 to 3.4) recorded by the Anza seismic network in southern California. We compute P, S, and preevent noise spectra from each seismogram using a multitaper technique and approximate source and receiver terms by iteratively stacking the spectra. To estimate scaling relationships, we average the spectra in size bins based on their relative moment. We correct for attenuation by using the smallest moment bin as an empirical Green's function (EGF) for the stacked spectra in the larger moment bins. The shapes of the log spectra agree within their estimated uncertainties after shifting along the ω⁻³ line expected for self-similarity of the source spectra. We also estimate corner frequencies and radiated energy from the relative source spectra using a simple source model. The ratio between radiated seismic energy and seismic moment (proportional to apparent stress) is nearly constant with increasing moment over the magnitude range of our EGF-corrected data (ML = 1.8 to 3.4). Corner frequencies vary inversely as the cube root of moment, as expected from the observed self-similarity in the spectra. The ratio between P and S corner frequencies is observed to be 1.6 ± 0.2. We obtain values for absolute moment and energy by calibrating our results to local magnitudes for these earthquakes. This yields a S to P energy ratio of 9 ± 1.5 and a value of apparent stress of about 1 MPa.
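
    The self-similar scaling summarized above can be checked with a few lines of arithmetic: if apparent stress (proportional to the energy-to-moment ratio) is constant, the corner frequency must fall off as the inverse cube root of moment. In the sketch below the rigidity and the reference corner frequency are assumed round numbers, not the study's estimates.

```python
# Numerical check of self-similar source scaling: constant apparent stress
# implies constant E_R/M0, and corner frequency scaling as M0^(-1/3).
# Rigidity, apparent stress and the reference corner frequency are assumptions.
MU = 3.0e10            # Pa, assumed crustal rigidity
APPARENT_STRESS = 1e6  # Pa, ~1 MPa as reported above

def radiated_energy(m0, sigma_a=APPARENT_STRESS, mu=MU):
    """Radiated energy (J) implied by a constant apparent stress."""
    return sigma_a * m0 / mu

def corner_frequency(m0, fc_ref=10.0, m0_ref=1e13):
    """Self-similar corner frequency: fc scales as M0^(-1/3)."""
    return fc_ref * (m0 / m0_ref) ** (-1.0 / 3.0)

for m0 in (1e11, 1e12, 1e13, 1e14):
    print(f"M0={m0:.0e}  E_R={radiated_energy(m0):.2e} J  "
          f"fc={corner_frequency(m0):.1f} Hz  E_R/M0={radiated_energy(m0)/m0:.1e}")
```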

  10. Real or virtual large-scale structure?

    PubMed Central

    Evrard, August E.

    1999-01-01

    Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own. PMID:10200243

  11. States of local stresses in the Sea of Marmara through the analysis of large numbers of small earthquakes

    NASA Astrophysics Data System (ADS)

    Korkusuz Öztürk, Yasemin; Meral Özel, Nurcan; Özbakir, Ali Değer

    2015-12-01

    We invert for the present-day states of stress for five apparent earthquake clusters on the northern branch of the North Anatolian Fault in the Sea of Marmara. As the center of the Sea of Marmara is prone to a devastating earthquake within a seismic gap between these selected clusters, careful analyses aimed at understanding the stress and strain characteristics of the region are all-important. We use high quality P and S phases, and P-wave first motion polarities from 398 earthquakes with ML ≥ 1.5, using at least 10 P-wave first motion polarities (FMPs) and a maximum of 1 inconsistent station, obtained from a total of 105 seismic stations, including 5 continuous OBSs. We report here on large numbers of simultaneously determined individual fault plane solutions (FPSs) and orientations of principal stress axes, which previously have not been determined with any confidence for the basins of the Sea of Marmara and the prominent fault branches. We find NE-SW trending transtensional stress structures, predominantly in the earthquake clusters of the Eastern Tekirdağ Basin, Eastern Çınarcık Basin, Yalova and Gemlik areas. We infer that dextral strike-slip deformation exists in the Eastern Ganos Offshore cluster. Furthermore, we analyze the FPSs of four ML ≥ 4.0 earthquakes that occurred in seismically quiet regions after the 1999 Izmit earthquake. The stress tensor solutions we have obtained from clusters of small events correlate with the FPSs of these moderate size events, demonstrating the effectiveness of small earthquakes in the derivation of states of local stresses. Consequently, our analyses of seismicity and large numbers of FPSs using the densest seismic network of Turkey contribute to a better understanding of the present states of stress and the seismotectonics of the Sea of Marmara.

  12. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  13. Numerical Investigation of Earthquake Nucleation on a Laboratory-Scale Heterogeneous Fault with Rate-and-State Friction

    NASA Astrophysics Data System (ADS)

    Higgins, N.; Lapusta, N.

    2014-12-01

    Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have
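
    For readers unfamiliar with the ingredients named above, the sketch below evaluates a rate-and-state friction law in its aging form and one commonly used estimate of the critical nucleation half-length (Rubin and Ampuero, 2005). All parameter values are generic laboratory-scale assumptions and are not taken from the cited simulations.

```python
# Minimal sketch of rate-and-state friction (aging law) and one common estimate
# of the critical nucleation half-length, h* ~ (2/pi)*G*b*Dc / (sigma*(b-a)^2).
# All parameter values are generic assumptions, not those of the simulations above.
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction coefficient at slip rate v (m/s)."""
    return mu0 + (a - b) * math.log(v / v0)

def state_rate(theta, v, dc=1e-6):
    """Aging-law evolution of the state variable: d(theta)/dt = 1 - v*theta/Dc."""
    return 1.0 - v * theta / dc

def nucleation_halflength(g=30e9, b=0.015, a=0.010, dc=1e-6, sigma=5e6):
    """Rubin & Ampuero (2005) estimate of the critical nucleation half-length (m)."""
    return (2.0 / math.pi) * g * b * dc / (sigma * (b - a) ** 2)

if __name__ == "__main__":
    print(f"mu_ss at 1 um/s : {steady_state_friction(1e-6):.3f}")
    print(f"h* ~ {nucleation_halflength():.2f} m")
```

    Patches with higher normal stress (larger sigma) or smaller Dc shrink this estimate, which is the qualitative mechanism invoked above for foreshock-hosting asperities.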

  14. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for the implementations are the CRAY-2S/4-128 and the Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
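
    For a present-day illustration of the same computation, the sketch below extracts a few of the largest singular triplets of a large sparse matrix with SciPy's Lanczos-based svds routine; this stands in for, and is unrelated to, the CRAY and Alliant implementations discussed in the abstract.

```python
# Compute a few of the largest singular triplets of a large sparse matrix.
# The random matrix is a placeholder for a term-document or Jacobian matrix.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(10_000, 2_000, density=1e-3, random_state=0, format="csr")

# six largest singular values and the corresponding left/right singular vectors
u, s, vt = svds(A, k=6)
print(np.sort(s)[::-1])
```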

  15. Light propagation and large-scale inhomogeneities

    SciTech Connect

    Brouzakis, Nikolaos; Tetradis, Nikolaos; Tzavara, Eleftheria E-mail: ntetrad@phys.uoa.gr

    2008-04-15

    We consider the effect on the propagation of light of inhomogeneities with sizes of order 10 Mpc or larger. The Universe is approximated through a variation of the Swiss-cheese model. The spherical inhomogeneities are void-like, with central underdensities surrounded by compensating overdense shells. We study the propagation of light in this background, assuming that the source and the observer occupy random positions, so that each beam travels through several inhomogeneities at random angles. The distribution of luminosity distances for sources with the same redshift is asymmetric, with a peak at a value larger than the average one. The width of the distribution and the location of the maximum increase with increasing redshift and length scale of the inhomogeneities. We compute the induced dispersion and bias of cosmological parameters derived from the supernova data. They are too small to explain the perceived acceleration without dark energy, even when the length scale of the inhomogeneities is comparable to the horizon distance. Moreover, the dispersion and bias induced by gravitational lensing at the scales of galaxies or clusters of galaxies are larger by at least an order of magnitude.

  16. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs, mostly of M- and X-class strength as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems, and to explore their utility in forecasts.

  17. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  18. Acoustic Emission Patterns and the Transition to Ductility in Sub-Micron Scale Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Ghaffari, H.; Xia, K.; Young, R.

    2013-12-01

    We report the observation of a transition from the brittle to the ductile regime in precursor events from different rock materials (Granite, Sandstone, Basalt, and Gypsum) and Polymers (PMMA, PTFE and CR-39). Acoustic emission patterns associated with sub-micron scale laboratory earthquakes are mapped into network parameter spaces (functional damage networks). The sub-classes hold nearly constant timescales, indicating dependency of the sub-phases on the mechanism governing the previous evolutionary phase, i.e., deformation and failure of asperities. Based on our findings, we propose that the signature of the non-linear elastic zone around a crack tip is mapped into the details of the evolutionary phases, supporting the formation of a strongly weak zone in the vicinity of crack tips. Moreover, we recognize sub-micron to micron ruptures with signatures of 'stiffening' in the deformation phase of acoustic waveforms. We propose that the latter rupture fronts carry critical rupture extensions, including possible dislocations faster than the shear wave speed. Using 'template super-shear waveforms' and their network characteristics, we show that the acoustic emission signals are possible super-shear or intersonic events. Ref. [1] Ghaffari, H. O., and R. P. Young. "Acoustic-Friction Networks and the Evolution of Precursor Rupture Fronts in Laboratory Earthquakes." Nature Scientific reports 3 (2013). [2] Xia, Kaiwen, Ares J. Rosakis, and Hiroo Kanamori. "Laboratory earthquakes: The sub-Rayleigh-to-supershear rupture transition." Science 303.5665 (2004): 1859-1861. [3] Mello, M., et al. "Identifying the unique ground motion signatures of supershear earthquakes: Theory and experiments." Tectonophysics 493.3 (2010): 297-326. [4] Gumbsch, Peter, and Huajian Gao. "Dislocations faster than the speed of sound." Science 283.5404 (1999): 965-968. [5] Livne, Ariel, et al. "The near-tip fields of fast cracks." Science 327.5971 (2010): 1359-1363. [6] Rycroft, Chris H., and Eran Bouchbinder

  19. Practical guidelines to select and scale earthquake records for nonlinear response history analysis of structures

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2010-01-01

    Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-'mode' inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-'mode' pushover analysis. Appropriate for first-'mode' dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of the second-'mode' SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
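
    A highly simplified sketch of the scaling idea is given below: bisect on a scale factor until the peak deformation of an elastic-perfectly-plastic SDF oscillator under the scaled record matches a target value. The oscillator properties, target deformation and the synthetic record are placeholders; the actual MPS procedure obtains the SDF properties and target deformation from a first-'mode' pushover analysis of the structure, which is not reproduced here.

```python
# Simplified ground-motion scaling sketch: find the factor that makes the peak
# deformation of an elastoplastic SDF oscillator (unit mass, central-difference
# integration) match a target value. All numerical values are placeholders.
import numpy as np

def peak_deformation(ag, dt, period=1.0, damping=0.05, fy_over_m=2.0):
    """Peak |u| of an elastic-perfectly-plastic SDF system under acceleration ag."""
    wn = 2.0 * np.pi / period
    k, c = wn ** 2, 2.0 * damping * wn
    a1 = 1.0 / dt ** 2 + c / (2.0 * dt)
    u_prev, u_curr, u_plastic, peak = 0.0, 0.0, 0.0, 0.0
    for p in -ag:                                  # effective force for unit mass
        f_trial = k * (u_curr - u_plastic)
        fs = float(np.clip(f_trial, -fy_over_m, fy_over_m))
        if f_trial != fs:                          # yielding: update plastic slip
            u_plastic = u_curr - fs / k
        u_next = (p - fs + 2.0 * u_curr / dt ** 2
                  - (1.0 / dt ** 2 - c / (2.0 * dt)) * u_prev) / a1
        u_prev, u_curr = u_curr, u_next
        peak = max(peak, abs(u_curr))
    return peak

def scale_factor(ag, dt, target, lo=0.1, hi=10.0, tol=1e-3):
    """Bisection for the factor that matches the target peak deformation."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if peak_deformation(mid * ag, dt) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    ag = 0.5 * 9.81 * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.2 * t)  # toy record
    print(f"scale factor ~ {scale_factor(ag, dt, target=0.05):.2f}")
```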

  20. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  1. GPS for large-scale aerotriangulation

    NASA Astrophysics Data System (ADS)

    Rogowksi, Jerzy B.

    The application of GPS (Global Positioning System) measurements to photogrammetry is presented. The technology of establishment of a GPS network for aerotriangulation as a base for mapping at scales from 1:1000 has been worked out at the Institute of Geodesy and Geodetical Astronomy of the Warsaw University of Technology. This method consists of the design, measurement, and adjustment of this special network. The results of several pilot projects confirm the possibility of improving the aerotriangulation accuracy. A few-centimeter accuracy has been achieved.

  2. Large scale properties of the Webgraph

    NASA Astrophysics Data System (ADS)

    Donato, D.; Laura, L.; Leonardi, S.; Millozzi, S.

    2004-03-01

    In this paper we present an experimental study of the properties of web graphs. We study a large crawl from 2001 of 200M pages and about 1.4 billion edges made available by the WebBase project at Stanford. We report our experimental findings on the topological properties of such graphs, such as the number of bipartite cores and the distributions of degree, PageRank values and strongly connected components.

  3. Infrasonic observations of large scale HE events

    SciTech Connect

    Whitaker, R.W.; Mutschlecner, J.P.; Davidson, M.B.; Noel, S.D.

    1990-01-01

    The Los Alamos Infrasound Program has been operating since about mid-1982, making routine measurements of low frequency atmospheric acoustic propagation. Generally, we work between 0.1 Hz and 10 Hz; however, much of our work is concerned with the narrower range of 0.5 to 5.0 Hz. Two permanent stations, St. George, UT, and Los Alamos, NM, have been operational since 1983, collecting data 24 hours a day. This discussion will concentrate on measurements of large, high explosive (HE) events at ranges of 250 km to 5330 km. Because the equipment is well suited for mobile deployments, it can easily establish temporary observing sites for special events. The measurements in this report are from our permanent sites, as well as from various temporary sites. This short report will not give detailed data from all sites for all events, but rather presents a few observations that are typical of the full data set. The Defense Nuclear Agency sponsors these large explosive tests as part of their program to study airblast effects. A wide variety of experiments are fielded near the explosive by numerous Department of Defense (DOD) services and agencies. Our measurement program is independent of that work; these tests are used as energetic, known sources that can be measured at large distances. Ammonium nitrate and fuel oil (ANFO) is the specific explosive used by DNA in these tests. 6 refs., 6 figs.

  4. The large earthquake of 8 August 1303 in Crete: seismic scenario and tsunami in the Mediterranean area

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Comastri, Alberto

    By conducting a historical review of this large seismic event in the Mediterranean, it has been possible to identify both the epicentral area and the area in which its effects were principally felt. Ever since the nineteenth century, the seismological tradition has offered a variety of partial interpretations of the earthquake, depending on whether the main sources used were Arabic, Greek or Latin texts. Our systematic research has involved the analysis not only of Arab, Byzantine and Italian chronicle sources, but also and in particular of a large number of never previously used official and public authority documents, preserved in Venice in the State Archive, in the Marciana National Library and in the Library of the Museo Civico Correr. As a result, it has been possible to establish not only chronological parameters for the earthquake (they were previously uncertain) but also its overall effects (epicentral area in Crete, Imax XI MCS). Sources containing information in 41 affected localities and areas were identified. The earthquake also gave rise to a large tsunami, which scholars have seen as having certain interesting elements in common with that of 21 July 365, whose epicentre was also in Crete. As regards methodology, this research made it clear that knowledge of large historical earthquakes in the Mediterranean is dependent upon developing specialised research and going beyond the territorial limits of current national catalogues.

  5. Seismic hazard assessment based on the Unified Scaling Law for Earthquakes: the Greater Caucasus

    NASA Astrophysics Data System (ADS)

    Nekrasova, A.; Kossobokov, V. G.

    2015-12-01

    Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers and the public, of the three components of Risk, i.e., Hazard, Exposure, and Vulnerability. Contemporary Science is responsible for not coping with the challenging changes of Exposures and their Vulnerability inflicted by a growing population, its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists owe it to Society to remedy this lack of knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A - B•(M-6) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. The parameters A, B, and C of the USLE are used to estimate, first, the expected maximum magnitude in a time interval at a seismically prone cell of a uniform grid that covers the region of interest, and then the corresponding expected ground shaking parameters, including macro-seismic intensity. After rigorous testing against the available seismic evidence from the past (e.g., the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks (e.g., those based on the density of exposed population). The methodology of seismic hazard and risk assessment based on the USLE is illustrated by application to the seismic region of the Greater Caucasus.
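
    The USLE formula quoted above can be read off directly in code. The sketch below evaluates the expected annual number of events and the corresponding mean recurrence time for a grid cell; the coefficient values are illustrative placeholders, not those estimated for the Greater Caucasus.

```python
# Direct numerical reading of the USLE formula above,
# log10 N(M, L) = A - B*(M - 6) + C*log10(L).
# Coefficients a, b, c are illustrative placeholders.
import math

def expected_annual_number(magnitude, length_km, a=0.5, b=0.9, c=1.2):
    """Expected annual number of earthquakes of magnitude M in a cell of size L."""
    return 10 ** (a - b * (magnitude - 6.0) + c * math.log10(length_km))

def return_period_years(magnitude, length_km, **coeffs):
    """Mean recurrence time (years) for events of this magnitude in the cell."""
    return 1.0 / expected_annual_number(magnitude, length_km, **coeffs)

if __name__ == "__main__":
    for m in (5.0, 6.0, 7.0):
        print(f"M{m}: ~{return_period_years(m, 100.0):.1f} yr per 100 km cell")
```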

  6. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily focussing on canonical flows - zero pressure gradient boundary layers, flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, eddies of large scales are the dominant eddies, having significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude and frequency modulate the smaller scales across the entire boundary layer.

  7. Stochastic pattern transitions in large scale swarms

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research

  8. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  9. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on lessons learned from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large volume turbidites. The frequency distribution form of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to the proximity and availability of sediment. While the mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 ya. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment
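
    The constant-hazard-rate statement above is equivalent to exponentially distributed inter-event times (Poissonian event counts), which is straightforward to test. The sketch below fits and tests an exponential model on a synthetic recurrence series; the ages are placeholders, not the Balearic Abyssal Plain data, and fitting the scale from the same sample makes the test only indicative.

```python
# Sketch of the recurrence-statistics check implied above: under a constant
# hazard rate, inter-event times are exponential. The series here is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
inter_event_times_kyr = rng.exponential(scale=2.0, size=80)   # toy record

loc, scale = 0.0, inter_event_times_kyr.mean()
ks = stats.kstest(inter_event_times_kyr, "expon", args=(loc, scale))
print(f"mean recurrence ~ {scale:.2f} kyr, KS p-value = {ks.pvalue:.2f}")
```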

  10. Role of multifractal analysis in understanding the preparation zone for large size earthquake in the North-Western Himalaya region

    NASA Astrophysics Data System (ADS)

    Teotia, S. S.; Kumar, D.

    2011-02-01

Seismicity follows power laws in its space, time and magnitude distributions, expressed by the fractal dimension D, Omori's exponent p and the b-value. The spatio-temporal patterns of epicenters have heterogeneous characteristics. As the crust organises itself into a critical state, the spatio-temporal clustering of epicenters gives rise to the heterogeneous nature of seismicity. To understand the heterogeneous characteristics of seismicity in a region, multifractal studies hold promise for characterising the dynamics of the region. A multifractal study was performed on seismicity data of the North-Western Himalaya region, which mainly comprises the seismogenic region of the great 1905 Kangra earthquake. The seismicity data obtained from the USGS catalogue for the period 1973-2009 have been analysed for the region, which includes the October 2005 Muzaffarabad-Kashmir earthquake (Mw = 7.6). Significant changes have been observed in the generalised dimension Dq, the Dq-q spectra and the b-value. The significant temporal changes in the generalised dimension Dq, b-value and Dq-q spectra prior to the occurrence of the Muzaffarabad-Kashmir earthquake relate to the distribution of epicenters in the region. The decrease in generalised dimension and b-value observed in our study shows a relationship with the clustering of seismicity, as is expected for self-organised criticality in earthquake occurrence. Such studies may become important in understanding the preparation zone of large and great earthquakes in various tectonic regions.
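
    One quantity tracked in the study above is the b-value. As a purely illustrative aside (not the authors' code), the sketch below shows the standard Aki (1965) maximum-likelihood b-value estimate on a synthetic catalogue; the completeness magnitude and the catalogue itself are placeholders.

      # Minimal sketch of the Aki (1965) maximum-likelihood b-value estimator,
      # one ingredient of the temporal analysis described above.  The synthetic
      # catalogue and completeness magnitude Mc are placeholders, not study data.
      import numpy as np

      def b_value_mle(mags, mc, dm=0.0):
          """b = log10(e) / (mean(M) - (Mc - dm/2)); dm is the bin width (0 for continuous magnitudes)."""
          m = np.asarray(mags, dtype=float)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

      rng = np.random.default_rng(1)
      mc0 = 3.0
      synthetic = mc0 + rng.exponential(scale=1.0 / np.log(10), size=2000)   # true b ~ 1
      print(f"estimated b-value: {b_value_mle(synthetic, mc=mc0):.2f}")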

  11. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Rouvreau, Sebastien; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  12. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Cowlard, Adam J.; Rouvreau, Sebastien; Toth, Balazs; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  13. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g., a grout-like material) and disposed of on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous

  14. Evidence for large prehistoric earthquakes in the northern New Madrid Seismic Zone, central United States

    USGS Publications Warehouse

    Li, Y.; Schweig, E.S.; Tuttle, M.P.; Ellis, M.A.

    1998-01-01

We surveyed the area north of New Madrid, Missouri, for prehistoric liquefaction deposits and uncovered two new sites with evidence of pre-1811 earthquakes. At one site, located about 20 km northeast of New Madrid, Missouri, radiocarbon dating indicates that an upper sand blow was probably deposited after A.D. 1510 and a lower sand blow was deposited prior to A.D. 1040. A sand blow at another site about 45 km northeast of New Madrid, Missouri, is dated as likely being deposited between A.D. 55 and A.D. 1620 and represents the northernmost recognized expression of prehistoric liquefaction likely related to the New Madrid seismic zone. This study, taken together with other data, supports the occurrence of at least two earthquakes strong enough to induce liquefaction or faulting before A.D. 1811 and after A.D. 400. One earthquake probably occurred around A.D. 900 and a second earthquake occurred around A.D. 1350. The data are not yet sufficient to estimate the magnitudes of the causative earthquakes for these liquefaction deposits, although we conclude that all of the earthquakes are at least moment magnitude M ~6.8, the size of the 1895 Charleston, Missouri, earthquake. A more rigorous estimate of the number and sizes of prehistoric earthquakes in the New Madrid seismic zone awaits evaluation of additional sites.

  15. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  16. Large-scale mouse knockouts and phenotypes.

    PubMed

    Ramírez-Solis, Ramiro; Ryder, Edward; Houghton, Richard; White, Jacqueline K; Bottomley, Joanna

    2012-01-01

    Standardized phenotypic analysis of mutant forms of every gene in the mouse genome will provide fundamental insights into mammalian gene function and advance human and animal health. The availability of the human and mouse genome sequences, the development of embryonic stem cell mutagenesis technology, the standardization of phenotypic analysis pipelines, and the paradigm-shifting industrialization of these processes have made this a realistic and achievable goal. The size of this enterprise will require global coordination to ensure economies of scale in both the generation and primary phenotypic analysis of the mutant strains, and to minimize unnecessary duplication of effort. To provide more depth to the functional annotation of the genome, effective mechanisms will also need to be developed to disseminate the information and resources produced to the wider community. Better models of disease, potential new drug targets with novel mechanisms of action, and completely unsuspected genotype-phenotype relationships covering broad aspects of biology will become apparent. To reach these goals, solutions to challenges in mouse production and distribution, as well as development of novel, ever more powerful phenotypic analysis modalities will be necessary. It is a challenging and exciting time to work in mouse genetics.

  17. A possible scenario for earlier occurrence of the next Nankai earthquake due to triggering by an earthquake at Hyuga-nada, off southwest Japan

    NASA Astrophysics Data System (ADS)

    Hyodo, Mamoru; Hori, Takane; Kaneda, Yoshiyuki

    2016-01-01

Several recent large-scale earthquakes including the 2011 Tohoku earthquake (Mw 9.0) in northeastern Japan and the 2014 Iquique earthquake (Mw 8.1) in northern Chile were associated with foreshock activities (Mw > 6). The detailed mechanisms between these large earthquakes and the preceding smaller earthquakes are still unknown; however, to plan for disaster mitigation against the anticipated great Nankai Trough earthquakes, in this study, possible scenarios after Mw 7-class earthquakes that frequently occur near the focal region of the Nankai Trough are examined through quasi-dynamic modeling of seismic cycles. By assuming that simulated Nankai Trough earthquakes recur as two alternative earthquakes with variations in magnitudes (Mw 8.7-8.4) and recurrence intervals (178-143 years), we systematically examine the effect of the occurrence timing of the Mw 7 Hyuga-nada earthquake on the western extension of the source region of Nankai Trough earthquakes on the assumed Nankai Trough seismic cycles. We find that in the latter half of a seismic cycle preceding a large Nankai Trough earthquake, an immature Nankai earthquake tends to be triggered within several years after the occurrence of a Hyuga-nada earthquake, then Tokai (Tonankai) earthquakes occur with maximum time lags of several years. The combined magnitudes of the triggered Nankai and subsequent Tokai (Tonankai) earthquakes become gradually larger with later occurrence of the Hyuga-nada earthquake, while the rupture timings between the Nankai and Tokai (Tonankai) earthquakes become smaller. The triggered occurrence of an immature Nankai Trough earthquake could delay the expected larger Nankai Trough earthquake to the next seismic cycle. Our results indicate that triggering can explain the variety and complexity of historical Nankai Trough earthquakes. Moreover, for the next anticipated event, countermeasures should include the possibility of a triggered occurrence of a Nankai Trough earthquake by an M

  18. Large-Scale Pattern Discovery in Music

    NASA Astrophysics Data System (ADS)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
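
    The 2DFTM mentioned above is the magnitude of a two-dimensional Fourier transform taken over patches of a beat-synchronised chromagram, which makes the feature insensitive to circular pitch shifts (transpositions). The sketch below is a minimal, hedged illustration of that computation with numpy; the random chroma matrix and the 75-beat patch length are placeholders, and the thesis' exact normalisation may differ.

      # Minimal sketch of a 2D-Fourier-transform-magnitude (2DFTM) feature computed
      # over patches of a chroma matrix.  Chroma data and patch length are placeholders.
      import numpy as np

      def two_dftm(chroma, patch_len=75):
          """Per-patch 2D FFT magnitudes of a (12 x n_beats) chroma matrix."""
          feats = []
          for start in range(0, chroma.shape[1] - patch_len + 1, patch_len):
              patch = chroma[:, start:start + patch_len]
              mag = np.abs(np.fft.fft2(patch))     # invariant to circular pitch shifts
              mag = np.fft.fftshift(mag)           # centre the low 2D "frequencies"
              feats.append(mag / (np.linalg.norm(mag) + 1e-12))
          return np.array(feats)

      chroma = np.random.default_rng(0).random((12, 300))   # placeholder beat-synchronous chroma
      print(two_dftm(chroma).shape)                         # -> (4, 12, 75)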

  19. Rare, large earthquakes at the laramide deformation front - Colorado (1882) and Wyoming (1984)

    USGS Publications Warehouse

    Spence, W.; Langer, C.J.; Choy, G.L.

    1996-01-01

The largest historical earthquake known in Colorado occurred on 7 November 1882. Knowledge of its size, location, and specific tectonic environment is important for the design of critical structures in the rapidly growing region of the Southern Rocky Mountains. More than one century later, on 18 October 1984, an mb 5.3 earthquake occurred in the Laramie Mountains, Wyoming. By studying the 1984 earthquake, we are able to provide constraints on the location and size of the 1882 earthquake. Analysis of broadband seismic data shows the 1984 mainshock to have nucleated at a depth of 27.5 ± 1.0 km and to have ruptured ~2.7 km updip, with a corresponding average displacement of about 48 cm and average stress drop of about 180 bars. This high stress drop may explain why the earthquake was felt over an area about 3.5 times that expected for a shallow earthquake of the same magnitude in this region. A microearthquake survey shows aftershocks to be just above the mainshock's rupture, mostly in a volume measuring 3 to 4 km across. Focal mechanisms for the mainshock and aftershocks have NE-SW-trending T axes, a feature shared by most earthquakes in western Colorado and by the induced Denver earthquakes of 1967. The only data for the 1882 earthquake were intensity reports from a heterogeneously distributed population. Interpretation of these reports also might be affected by ground-motion amplification from fluvial deposits and possible significant focal depth for the mainshock. The primary aftershock of the 1882 earthquake was felt most strongly in the northern Front Range, leading Kirkham and Rogers (1985) to locate the epicenters of the aftershock and mainshock there. The Front Range is a geomorphic extension of the Laramie Mountains. Both features are part of the eastern deformation front of the Laramide orogeny. Based on knowledge of regional tectonics and using intensity maps for the 1984 and the 1967 Denver earthquakes, we reinterpret prior intensity maps for the 1882

  20. Vulnerability of Eastern Caribbean Islands Economies to Large Earthquakes: The Trinidad and Tobago Case Study

    NASA Astrophysics Data System (ADS)

    Lynch, L.

    2015-12-01

The economies of most of the Anglophone Eastern Caribbean islands have tripled or quadrupled in size since independence from England. There has also been commensurate growth in human and physical development, as indicated by macro-economic indices such as the Human Development Index and Fixed Capital Formation. A significant proportion of the accumulated wealth is invested in buildings and infrastructure, which are highly susceptible to strong ground motion since the region is located along an active plate boundary. In the case of Trinidad and Tobago, Fixed Capital Formation accumulation since 1980 is almost US$200 billion. Recent studies have indicated that this twin island state is at significant risk from several seismic sources, both on land and offshore. To effectively mitigate the risk it is necessary to prescribe long-term measures such as the development and implementation of building codes and standards, structural retrofitting, land use planning, preparedness planning and risk transfer mechanisms. The record has shown that Trinidad and Tobago has been slow in prescribing such measures, which has consequently compounded its vulnerability to large earthquakes. This assessment reveals that a large (magnitude 7+) event on land or an extreme (magnitude 8+) event could result in losses of up to US$28 billion and that current risk transfer measures would cover less than ten percent of such losses.

  1. LDRD LW Project Final Report:Resolving the Earthquake Source Scaling Problem

    SciTech Connect

    Mayeda, K; Felker, S; Gok, R; O'Boyle, J; Walter, W R; Ruppert, S

    2004-02-10

    The scaling behavior of basic earthquake source parameters such as the energy release per unit area of fault slip, quantitatively measured as the apparent stress, is currently in dispute. There are compelling studies that show apparent stress is constant over a wide range of moments (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001, Ide et al. 2003). Other equally compelling studies find the apparent stress increases with moment (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001; Richardson and Jordan, 2002). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities, bandwidth and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. As one part of our LDRD project we convened a one-day workshop on July 24, 2003 in Livermore to review the current state of knowledge on this topic and discuss possible methods of resolution with many of the world's foremost experts.
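
    Apparent stress, the quantity in dispute above, is defined as the rigidity times the radiated energy per unit seismic moment, sigma_a = mu * E_R / M0. The sketch below is only a worked illustration of how the two competing hypotheses (constant versus moment-dependent apparent stress) translate into numbers; the rigidity, energies and moments are placeholders, not values from the cited studies.

      # Illustrative comparison of constant vs. moment-dependent apparent stress,
      # sigma_a = mu * E_R / M0.  All numbers are placeholders, not measurements.
      import numpy as np

      mu = 3.0e10                           # crustal rigidity, Pa
      M0 = np.logspace(13, 20, 8)           # seismic moments, N*m
      Mw = (np.log10(M0) - 9.05) / 1.5
      print("Mw:", np.round(Mw, 1))

      E_const = 3e-5 * M0                                   # scaled energy independent of size
      E_grow = 3e-5 * M0 * (M0 / 1e15) ** 0.15              # scaled energy growing with moment (illustrative)

      for name, E in (("constant", E_const), ("moment-dependent", E_grow)):
          sigma_a = mu * E / M0 / 1e6                       # apparent stress in MPa
          print(f"{name:>16s} apparent stress (MPa):", np.round(sigma_a, 2))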

  2. Millennial-scale record of landslides in the Andes consistent with earthquake trigger

    NASA Astrophysics Data System (ADS)

    McPhillips, Devin; Bierman, Paul R.; Rood, Dylan H.

    2014-12-01

    Geologic records of landslide activity offer rare glimpses into landscapes evolving under the influence of tectonics and climate. Because the deposits of individual landslides are unlikely to be preserved, landslide activity in the geologic past is often reconstructed by extrapolating from historic landslide inventories. Landslide deposits have been interpreted as palaeoclimate proxies relating to changes in precipitation, although earthquakes can also trigger landslides. Here we measure cosmogenic 10Be concentrations in individual cobbles from the modern Quebrada Veladera river channel and an adjacent terrace in Peru and calculate erosion rates. We find, in conjunction with a 10Be production model, that the 10Be concentrations of each cobble population record erosion integrated over thousands of years and are consistent with a landslide origin for the cobbles. The distribution of 10Be concentrations in terrace cobbles produced during the relatively wet climate before about 16,000 years ago is indistinguishable from the distribution in river channel cobbles produced during the drier climate of the past few thousand years. This suggests that the amount of erosion from landslides has not changed in response to climatic changes. Instead, our integrated, millennial-scale record of landslides implies that earthquakes may be the primary landslide trigger in the arid foothills of Peru.

  3. Seismic hazard and risks based on the Unified Scaling Law for Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir; Nekrasova, Anastasia

    2014-05-01

Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers and the public, of the three components of Risk, i.e., Hazard, Exposure, and Vulnerability. Contemporary Science has not coped with the challenging changes in Exposure and Vulnerability inflicted by a growing population and its concentration, which result in a steady increase of Losses from Natural Hazards. Scientists owe Society better knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such knowledge in advance of catastrophic events. Any kind of risk estimate R(g) at location g results from a convolution of the natural hazard H(g) with the exposed object under consideration O(g) and its vulnerability V(O(g)). Note that g could be a point, a line, or a cell on or under the Earth's surface, and that the distribution of hazards, as well as the objects of concern and their vulnerability, could be time-dependent. Many different risk estimates exist even if the same object of risk and the same hazard are involved. This may result from different laws of convolution, as well as from different kinds of vulnerability of an object of risk under specific environments and conditions. Both conceptual issues must be resolved in multidisciplinary, problem-oriented research performed by specialists in the fields of hazard, objects of risk, and object vulnerability, i.e. specialists in earthquake engineering, social sciences and economics. To illustrate this general concept, we first construct seismic hazard assessment maps based on the Unified Scaling Law for Earthquakes (USLE). The parameters A, B, and C of USLE, i.e. log N(M,L) = A - B•(M-6) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within an area of linear size L, are used to estimate the expected maximum
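
    As a worked illustration of the USLE expression quoted above, the sketch below evaluates the expected annual number of earthquakes, N(M, L) = 10**(A - B*(M - 6) + C*log10(L)), for placeholder coefficients; these values are not results of the study.

      # Minimal sketch evaluating the Unified Scaling Law for Earthquakes,
      # log10 N(M, L) = A - B*(M - 6) + C*log10(L), with placeholder coefficients.
      import numpy as np

      def usle_annual_number(M, L, A, B, C):
          """Expected annual number of events of magnitude M (or above) within linear size L."""
          return 10.0 ** (A - B * (M - 6.0) + C * np.log10(L))

      A, B, C = 0.0, 0.9, 1.2               # placeholder coefficients, not study results
      for M in (5.0, 6.0, 7.0):
          n = usle_annual_number(M, L=1.0, A=A, B=B, C=C)
          print(f"M {M:.1f}: {n:.3f} events/yr (mean recurrence ~ {1.0 / n:.1f} yr)")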

  4. The Unified Scaling Law for Earthquakes in the Friuli Venezia Giulia Region

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Peresan, Antonella; Magrin, Andrea; Kossobokov, Vladimir

    2016-04-01

The parameters of the Unified Scaling Law for Earthquakes (USLE) in the North Eastern part of Italy, namely in the Friuli Venezia Giulia Region (FVG) and its surroundings, have been studied. For this purpose, the updated and revised bulletins compiled at the National Institute of Oceanography and Experimental Geophysics, Centre of Seismological Research (OGS catalogue) have been used. In particular, we considered all magnitude 2.0 or larger earthquakes, which occurred in the time span 1994-2013 and within the territory of homogeneous completeness identified for the OGS data. The USLE parameters A, B and C have been evaluated at each of about 300 seismically active cells of 1/16°×1/16° size. The parameter A corresponds to the logarithmic estimate of seismic activity at magnitude 3.5, normalized to the unit area of 1°×1° and to the unit time of one year. The obtained values of the parameter A range between -0.9 and 0.2; these values correspond to an average occurrence rate for magnitude 3.5 earthquakes that varies from one event in 8 years to one event every 7.5 months. The values of the coefficient of magnitude balance, parameter B, concentrate in the interval from just above 0.5 to 1.0. The fractal dimension of the earthquake epicenter locus, parameter C, spreads from 0.6 to 1.3. The obtained values of A, B, and C have been used to characterize the seismic hazard and risk for the territory under investigation, based on estimates of N(M) at each of the analysed cells. In fact, it has been shown that long-term estimates of the USLE coefficients permit the definition of seismic hazard maps in rather traditional terms of maximum expected magnitude, macroseismic intensity or other ground shaking parameters that can be derived from the computed magnitudes. Accordingly, preliminary estimates of the seismic hazard for the FVG region have been computed, at the level of 10% exceedance in 50 years, from the corresponding magnitude assessment based on the USLE. The
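
    The recurrence rates quoted above follow directly from the definition of A as the log10 annual rate of magnitude 3.5 events per unit cell; a quick arithmetic check:

      # Quick check of the recurrence figures quoted for the USLE parameter A.
      for A in (-0.9, 0.2):
          rate = 10.0 ** A                  # magnitude 3.5 events per year
          print(f"A = {A:+.1f}: {rate:.2f} events/yr -> one event every {12.0 / rate:.1f} months")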

  5. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    USGS Publications Warehouse

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.
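
    The "Mw ~ 6.8" attributed above to a ~50-km-long locked patch is consistent with standard seismic-moment bookkeeping, M0 = mu * area * slip and Mw = (2/3) * (log10(M0) - 9.05). The sketch below assumes, purely for illustration, a 10 km locking depth and 1 m of coseismic slip; neither value is taken from the paper.

      # Moment bookkeeping for a locked patch; width and slip are assumed values.
      import math

      mu = 3.0e10          # rigidity, Pa
      length_m = 50e3      # ~50 km locked patch length (from the abstract)
      width_m = 10e3       # assumed down-dip width
      slip_m = 1.0         # assumed average coseismic slip

      M0 = mu * length_m * width_m * slip_m
      Mw = (math.log10(M0) - 9.05) / 1.5
      print(f"M0 = {M0:.2e} N*m  ->  Mw = {Mw:.1f}")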

  6. Source processes at the Chilean subduction region: a comparative analysis of recent large earthquakes seismic sequences in Chile

    NASA Astrophysics Data System (ADS)

    Cesca, Simone; Tolga Sen, Ali; Dahm, Torsten

    2016-04-01

Large interplate megathrust events are common at the western margin of the South American plate and have repeatedly affected the slab segment along Chile, driven by the subduction of the oceanic Nazca plate, with a convergence of almost 7 cm/yr. The size and rate of seismicity, including the 1960 Mw 9.5 Chile earthquake, place Chile among the most highly seismogenic regions worldwide. At the same time, thanks to significant national and international efforts in recent years, Chile is nowadays seismologically well equipped and monitored; the dense seismological network provides a valuable dataset for analysing details of the rupture processes not only of the main events, but also of the weaker seismicity preceding, accompanying and following the largest earthquakes. The seismic sequences accompanying recent large earthquakes show several differences. In some cases, as for the 2014 Iquique earthquake, important precursor activity took place in the months preceding the main shock, with an accelerating pattern in the last days before the main shock. In other cases, as for the recent Illapel earthquake, the main shock occurred with few precursors. The 2010 Maule earthquake showed yet another pattern, with the activation of secondary faults after the main shock. Recent studies were able to resolve significant changes in specific source parameters, such as changes in the distribution of focal mechanisms, potentially revealing a rotation of the stress tensor, or a spatial variation of rupture velocity, supporting a depth dependence of the rupture speed. An advanced inversion of seismic source parameters and their combined interpretation for multiple sequences can help to understand the diversity of rupture processes along the Chilean slab, and in general for subduction environments. We combine here results of different recent studies to investigate similarities and anomalies of rupture parameters for different seismic sequences, and foreshock-aftershock activities

  7. Change in paleo-stress state before and after large earthquake, in the Chelung-pu fault, Taiwan

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Kota, T.; Yeh, E. C.; Lin, W.

    2014-12-01

The stress state close to a seismogenic fault is a key parameter for understanding earthquake mechanics. Changes in stress state after large earthquakes were documented recently for the 1999 Chi-Chi earthquake, Taiwan, and the 2011 Tohoku-Oki earthquake, Northeast Japan. If such temporal changes were also common in the past, changes in paleostress related to large earthquakes should be recoverable from micro-faults preserved in outcrops or drilled cores. In this study, we show a change in paleostress from micro-fault slip data observed around the Chelung-pu fault in the Taiwan Chelung-pu fault Drilling Project (TCDP), which is possibly associated with the stress drop of large earthquakes along the Chelung-pu fault. By combining the obtained stress orientations and stress ratios with stress polygons, the stress magnitude for each stress state and the difference in stress magnitude between the obtained stresses are estimated. For the stress inversion analysis, the multiple inversion method (MIM; Yamaji et al., 2000) was applied. To estimate the centers of clusters automatically, K-means clustering (Otsubo et al., 2006) was conducted on the MIM results. As a result, four stress states were estimated. The stress states are named C1, C2, C3 and C4 in ascending order of stress ratio (Φ), defined as (σ1-σ2)/(σ1-σ3). To constrain the stress magnitudes, stress polygons are employed in combination with the inverted stress states. The principal stress vectors for the four stress states (C1-C4) were projected onto the SHmax, Shmin and vertical stress directions. SHmax is larger than Shmin by definition. The stress ratio was estimated by the inversion method. Combining these conditions, a linear function in SHmax-Shmin space with respect to Sv is obtained from the inverted stress states. We obtained two groups of stress states from the slip data in the TCDP core. One stress state has a WNW-ESE horizontal σ1 and larger stress magnitudes, including a reverse-fault regime. Another stress state
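
    The stress ratio used above is Φ = (σ1-σ2)/(σ1-σ3). Purely as an illustration of the quantities involved (not the TCDP inversion code), the sketch below computes Φ and the normal and shear tractions resolved on a plane for an arbitrary principal-stress tensor; all stress values and the plane orientation are placeholders.

      # Illustrative stress ratio and resolved tractions; values are placeholders.
      import numpy as np

      s1, s2, s3 = 60.0, 40.0, 25.0                    # principal stresses, MPa
      phi = (s1 - s2) / (s1 - s3)                      # stress ratio

      sigma = np.diag([s1, s2, s3])                    # tensor in the principal frame
      ang = np.radians(30.0)
      n = np.array([np.sin(ang), 0.0, np.cos(ang)])    # unit normal, 30 deg from the sigma3 axis
      t = sigma @ n                                    # traction vector on the plane
      sigma_n = t @ n                                  # normal stress
      tau = np.linalg.norm(t - sigma_n * n)            # shear stress magnitude

      print(f"phi = {phi:.2f}, sigma_n = {sigma_n:.1f} MPa, tau = {tau:.1f} MPa")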

  8. Evaluating the role of large earthquakes on aquifer dynamics using data fusion and knowledge discovery techniques

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Cox, Simon; Williams, Charles; Holden, Caroline

    2016-04-01

Artificial adaptive systems are evaluated for their usefulness in modeling earthquake hydrology of the Canterbury region, NZ. For example, an unsupervised machine-learning technique, the self-organizing map, is used to fuse about 200 disparate and sparse data variables (such as well pressure response, ground acceleration, intensity, shaking, stress and strain, and aquifer and well characteristics) associated with the M7.1 Darfield earthquake in 2010 and the M6.3 Christchurch earthquake in 2011. The strength of correlations, determined using cross-component plots, varied between earthquakes, with pressure changes more strongly related to dynamic- than static-stress-related variables during the M7.1 earthquake, and vice versa during the M6.3. The method highlights the importance of data distribution and shows that the driving mechanisms of earthquake-induced pressure change in the aquifers are not straightforward to interpret. In many cases, data mining revealed that confusion and reduction in correlations are associated with multiple trends in the same plot: one for confined and one for unconfined earthquake response. The auto-contractive map and minimum spanning tree techniques are used for grouping variables of similar influence on earthquake hydrology. K-means clustering of neural information identified 5 primary regions influenced by the two earthquakes. The application of genetic doping to a genetic algorithm is used for identifying optimal subsets of variables in formulating predictions of well pressures. Predictions of well pressure changes are compared and contrasted using machine-learning network and symbolic regression models, with prediction uncertainty quantified using a leave-one-out cross-validation strategy. These preliminary results provide impetus for subsequent analysis with information from another 100 earthquakes that occurred across the South Island.
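
    The final grouping step described above (k-means on the fused variables) can be illustrated in a few lines; the sketch below applies scikit-learn's KMeans to standardized placeholder data and omits the self-organizing-map fusion stage, so it illustrates the idea rather than the study's workflow.

      # Minimal sketch of a k-means grouping step on standardized placeholder data
      # (the SOM fusion stage of the actual workflow is omitted).  Requires scikit-learn.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 20))        # placeholder: 500 wells x 20 fused variables

      X_std = StandardScaler().fit_transform(X)
      labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_std)
      print(np.bincount(labels))                # size of each of the 5 groups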

  9. Uplifted marine terraces in Davao Oriental Province, Mindanao Island, Philippines and their implications for large prehistoric offshore earthquakes along the Philippine trench

    NASA Astrophysics Data System (ADS)

    Ramos, Noelynna T.; Tsutsumi, Hiroyuki; Perez, Jeffrey S.; Bermas, Percival P.

    2012-02-01

We conducted systematic mapping of Holocene marine terraces in eastern Mindanao Island, Philippines for the first time. Raised marine platforms along the 80-km-long coastline of eastern Davao Oriental Province are geomorphic evidence of tectonic deformation resulting from the westward subduction of the Philippine Sea plate along the Philippine trench. Holocene coral platforms consist of up to four terrace steps: T1: 1-5 m, T2: 3-6 m, T3: 6-10 m, and T4: 8-12 m amsl, from the lowest to highest, respectively. Terraces are subhorizontal, exposing cemented coral shingle and eroded coral heads, while terrace risers are 1-3 m high. Radiocarbon ages, 8080-4140 cal yr BP, reveal that erosional surfaces were carved onto the Holocene transgressive reef complex, which grew upward until ˜8000 years ago. The maximum uplift rate is ˜1.5 mm/yr based on the highest Holocene terrace at <11.4 m amsl. The staircase topography and meter-scale terrace risers imply that at least four large earthquakes have uplifted the coast in the past ˜8000 years. The deformation pattern of the terraces further suggests that the seismic sources are probably located offshore. However, historical earthquakes as large as MW 7.5 along the Philippine trench were not large enough to produce meter-scale coastal uplift, suggesting that much larger earthquakes occurred in the past. A long-term tectonic uplift rate of ˜1.3 mm/yr was also estimated based on Late Pleistocene terraces.

  10. Scaling relationship between corner frequencies and seismic moments of ultra micro earthquakes estimated with coda-wave spectral ratio -the Mponeng mine in South Africa

    NASA Astrophysics Data System (ADS)

    Wada, N.; Kawakata, H.; Murakami, O.; Doi, I.; Yoshimitsu, N.; Nakatani, M.; Yabe, Y.; Naoi, M. M.; Miyakawa, K.; Miyake, H.; Ide, S.; Igarashi, T.; Morema, G.; Pinder, E.; Ogasawara, H.

    2011-12-01

The scaling relationship between corner frequencies, fc, and seismic moments, Mo, is an important clue for understanding seismic source characteristics. Aki (1967) showed that Mo is proportional to fc^-3 for large earthquakes (the cubic law). Iio (1986) claimed a breakdown of the cubic law between fc and Mo for smaller earthquakes (Mw < 2), and Gibowicz et al. (1991) also showed the breakdown for ultra micro and small earthquakes (Mw < -2). However, it has been reported that the cubic law holds even for micro earthquakes (-1 < Mw < 4) when using high quality data observed in a deep borehole (Abercrombie, 1995; Ogasawara et al., 2001; Hiramatsu et al., 2002; Yamada et al., 2007). In order to clarify the scaling relationship for smaller earthquakes (Mw < -1), we analyzed ultra micro earthquakes using very high sampling rate records (48 kHz) from borehole seismometers installed within hard rock at the Mponeng mine in South Africa. We used four tri-axial accelerometers that have a flat response up to 25 kHz. They were installed 10 to 30 meters apart from each other at a depth of 3,300 meters. During the period from 2008/10/14 to 2008/10/30 (17 days), 8,927 events were recorded. We estimated fc and Mo for 60 events (-3 < Mw < -1) within 200 meters of the seismometers. Assuming the Brune source model, we estimated fc and Mo from spectral ratios. Common practice is to use direct waves from adjacent events. However, there were only 5 event pairs with a separation of less than 20 meters and an Mw difference over one. In addition, the observation array is very small (radius less than 30 m), which means that the effects of directivity and radiation pattern on direct waves are similar at all stations. Hence, we used the spectral ratio of coda waves, since these effects are averaged and effectively reduced (Mayeda et al., 2007; Somei et al., 2010). Coda analysis was attempted only for the 20 relatively large events (which we call "coda events" hereafter) that have coda energy
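
    Under the Brune model assumed above, the spectral ratio of a larger to a smaller co-located event has the form R(f) = (M01/M02) * (1 + (f/fc2)^2) / (1 + (f/fc1)^2), flat at the moment ratio below fc1 and at (M01/M02)*(fc1/fc2)^2 above fc2. The sketch below generates and fits such a ratio with scipy; the moments and corner frequencies are illustrative, not values from the Mponeng data.

      # Illustrative Brune-model spectral-ratio fit; values are placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      def brune_ratio(f, moment_ratio, fc1, fc2):
          return moment_ratio * (1.0 + (f / fc2) ** 2) / (1.0 + (f / fc1) ** 2)

      rng = np.random.default_rng(0)
      f = np.logspace(0, 4, 200)                            # 1 Hz to 10 kHz
      # corner-frequency ratio ~ (moment ratio)**(1/3), i.e. the cubic law
      observed = brune_ratio(f, 30.0, 400.0, 1250.0)
      observed *= rng.lognormal(sigma=0.05, size=f.size)    # multiplicative noise

      popt, _ = curve_fit(brune_ratio, f, observed, p0=[10.0, 100.0, 1000.0])
      print("fit (moment ratio, fc large, fc small):", np.round(popt, 1))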

  11. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

This paper depicts a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures, and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user-controlled. Sensor sensitivity is modeled by either a linear or a nonlinear (spline-based) method. The data generation toolbox is implemented in the open-source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the testing and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
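
    The toolbox itself is written in R; the sketch below reproduces the basic idea (a synthetic multi-sensor response with user-controlled sensitivity, drift and noise) in Python for illustration only, and its array size and parameter values are not toolbox defaults.

      # Minimal Python sketch of synthetic sensor-array data with drift and noise.
      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_samples = 17, 500
      concentration = np.abs(np.sin(np.linspace(0.0, 6.0 * np.pi, n_samples)))   # analyte profile

      sensitivity = rng.uniform(0.5, 2.0, n_sensors)        # per-sensor gain
      drift_rate = rng.normal(0.0, 0.002, n_sensors)        # slow baseline drift per sample
      t = np.arange(n_samples)

      response = (sensitivity[:, None] * concentration[None, :]          # ideal response
                  + drift_rate[:, None] * t[None, :]                     # drift / aging
                  + 0.05 * rng.standard_normal((n_sensors, n_samples)))  # measurement noise
      print(response.shape)                                 # -> (17, 500)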

  12. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  13. Nonlinear large-scale optimization with WORHP

    NASA Astrophysics Data System (ADS)

    Nikolayzik, Tim; Büskens, Christof; Gerdts, Matthias

Nonlinear optimization has grown to a key technology in many areas of the aerospace industry, e.g. satellite control, shape optimization, aerodynamics, trajectory planning, reentry problems, and interplanetary flights. One of the most extensive areas is the optimization of trajectories for aerospace applications. These problems typically are discretized optimal control problems, which lead to large sparse nonlinear optimization problems. In the end, all these different problems from different areas can be described in a general formulation as a nonlinear optimization problem. WORHP is designed to solve nonlinear optimization problems with more than one million variables and one million constraints. WORHP uses many different advanced techniques, e.g. reverse communication, to make the optimization process as efficient and as controllable by the user as possible. The solver has nine different interfaces, e.g. to MATLAB/SIMULINK and AMPL. Tests of WORHP have shown that it is a very robust and promising solver. Several examples from space applications will be presented.

  14. Safeguards instruments for Large-Scale Reprocessing Plants

    SciTech Connect

    Hakkila, E.A.; Case, R.S.; Sonnier, C.

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  15. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable. PMID:25162863

  16. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  17. Demonstration of Mobile Auto-GPS for Large Scale Human Mobility Analysis

    NASA Astrophysics Data System (ADS)

    Horanont, Teerayut; Witayangkurn, Apichon; Shibasaki, Ryosuke

    2013-04-01

The greater affordability of digital devices and the advancement of positioning and tracking capabilities have ushered in today's age of geospatial Big Data. In addition, the emergence of massive mobile location data and the rapid increase in computational capabilities open up new opportunities for modeling large-scale urban dynamics. In this research, we demonstrate a new type of mobile location data called "Auto-GPS" and its potential use cases for urban applications. More than one million Auto-GPS mobile phone users in Japan have been observed nationwide, in a completely anonymous form, for over an entire year from August 2010 to July 2011 for this analysis. A spate of natural disasters and other emergencies during the past few years has prompted new interest in how mobile location data can help enhance our security, especially in urban areas which are highly vulnerable to these impacts. New insights gleaned from mining the Auto-GPS data suggest a number of promising directions for modeling human movement during a large-scale crisis. We question how people react under critical situations and how their movement changes during severe disasters. Our results demonstrate the case of a major earthquake and show how people living in the Tokyo metropolitan area and its vicinity behaved and returned home after the Great East Japan Earthquake on March 11, 2011.

  18. Borehole Water Level Measurements in Kamchatka and Broadband Records of Very Large (M≧7.6) Earthquakes

    NASA Astrophysics Data System (ADS)

    Kasimova, V.; Kopylova, G.

    2010-12-01

The impact of seismic waves from distant very large earthquakes can be accompanied by various changes in the groundwater regime. Such effects are observed at distances up to thousands of kilometers from the epicenter and indicate a change in the stress-strain state of the geological environment. One of the methods of geophysical monitoring of seismically active regions is water level observation in boreholes. Different variations of water level caused by the passage of seismic waves from very large earthquakes are recorded in piezometric boreholes in Kamchatka. Four types of water level variations associated with very large earthquakes have been observed in borehole UZ-5 (Kamchatka, Russia). To quantify the impact of seismic wave characteristics on the state of groundwater, the amplitude and frequency of the maximum phase of ground motion (velocity, displacement and acceleration) can be assessed from broadband seismic records. The purpose of this study is to determine how the different types of water level variations in borehole UZ-5 depend on the amplitude and frequency of seismic signals from very large earthquakes recorded by the IRIS broadband equipment at the Petropavlovsk seismic station (s/s PET). We used records of earthquakes since 1997 with M≧7.6 and 10-minute water level data from borehole UZ-5. Analysis of seismic signals in the time and frequency-time domains, with estimation of the times, amplitudes and periods of the maximum oscillation phases, was carried out using the interactive software DIMAS. The initial ground motion (displacement, acceleration) was restored. The amplitudes and frequency content of the maximum oscillation phases of ground motion were evaluated and compared with the water level variations in borehole UZ-5. Dependences of the amplitude-frequency content of maximum oscillation phases of ground

  19. Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben

    NASA Astrophysics Data System (ADS)

    Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.

    2016-08-01

Earthquake absolute location errors which can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can have an impact on the field development and economic consequences. The approach, using state-of-the-art techniques, covers both the location uncertainty and the location inaccuracy (or bias) problematics. It consists, first, in creating a 3-D synthetic seismic cloud of events in the reservoir and calculating the seismic traveltimes to a monitoring network assuming certain propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by the seismic onset time picking uncertainties and inaccuracies are quantified in 3-D. Effects induced by erroneous assumptions associated with the velocity model are also modelled. In particular, 1-D velocity model uncertainties, a local 3-D perturbation of the velocity and a 3-D geostructural model are considered. The present approach is applied to the site of Rittershoffen (Alsace, France), which is one of the deep geothermal fields existing in the Upper Rhine Graben. This example allows setting realistic scenarios based on the knowledge of the site. In that case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well log data provided a reference 1-D velocity model used for the synthetic earthquake relocation. The 3-D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a
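
    The general recipe described above (forward-model synthetic arrival times, relocate with deliberately erroneous assumptions, compare hypocentres) can be illustrated with a toy example. The sketch below uses a homogeneous "true" velocity, a 5% biased velocity for relocation and a coarse grid search; the geometry and velocities are placeholders, whereas the actual study used 1-D and 3-D models and a full network geometry.

      # Toy synthetic-relocation experiment: the hypocentre shift caused by a
      # biased velocity model is the location inaccuracy.  Values are placeholders.
      import numpy as np

      stations = np.array([[0, 0, 0], [4000, 0, 0], [0, 4000, 0],
                           [4000, 4000, 0], [2000, 2000, 0]], dtype=float)   # metres
      true_hypo = np.array([2100.0, 1800.0, 2500.0])
      v_true, v_wrong = 5000.0, 5250.0          # m/s; relocation uses a 5% faster medium

      t_obs = np.linalg.norm(stations - true_hypo, axis=1) / v_true

      xs = ys = np.arange(0.0, 4001.0, 100.0)
      zs = np.arange(500.0, 4001.0, 100.0)
      best, best_misfit = None, np.inf
      for x in xs:
          for y in ys:
              for z in zs:
                  trial = np.array([x, y, z])
                  t_calc = np.linalg.norm(stations - trial, axis=1) / v_wrong
                  res = (t_obs - t_calc) - np.mean(t_obs - t_calc)   # origin time removed
                  misfit = np.sum(res ** 2)
                  if misfit < best_misfit:
                      best, best_misfit = trial, misfit

      print("location bias (m):", np.round(best - true_hypo, 0))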

  1. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean

  2. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiation, and in particular to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulation (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiation to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean velocities

  3. A recovery of scattering environment in the crust after a large earthquake

    NASA Astrophysics Data System (ADS)

    Sugaya, K.; Hiramatsu, Y.; Furumoto, M.; Katao, H.

    2006-12-01

    A large earthquake generates defects such as small faults and cracks and changes the scattering environment in and around its rupture zone by the static or the dynamic stress change. The defects are expected to recover with time. A time constant of the healing of the defects is a key parameter for the recurrence of a large earthquake. Coda waves consist mainly of scattered S-waves. The attenuation property of coda waves, coda Q-1 or Qc-1, reflects the scattering environment in the crust and is considered to be a good indicator of the stress condition in the crust (Aki, 1985; Hiramatsu et al., 2000). In the Tamba region, northeast of the rupture zone of the 1995 Hyogo-ken Nanbu earthquake (MJMA 7.3), Hiramatsu et al. (2000) reported a coseismic increase in Qc-1 at frequencies of 1.5-4 Hz and a decrease in b-value due to the static stress change caused by the event. In this study, we investigate the temporal variation in Qc-1 and seismicity from 1997 to 2000 in the Tamba region, following the period of Hiramatsu et al. (2000), to check the recovery of Qc-1 at lower frequencies and of the seismicity. We estimate Qc-1 for 10 frequency bands in a range of 1.5-24 Hz based on the single isotropic scattering model (Sato, 1977). We analyze the waveform data of 2812 shallow microearthquakes (M1.5-3) in the region. In order to examine the duration of high Qc-1, we divide the period after the event (1995-2000), including the data reported by Hiramatsu et al. (2000), into two periods of various time windows. The Student's t test confirms a significant decrease in the mean values of Qc-1 at frequencies of 1.5-4 Hz at 2-4 years after the event. This indicates that the values of Qc-1 at the lower frequencies return to those before the event within 2-4 years. The mean values of Qc-1 at 3 and 4 Hz, which showed the largest significant variation (Hiramatsu et al., 2000), return to those before the event within 2 years in particular. There is no tectonic event that causes a stress change at the
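
    The significance test mentioned above is a standard two-sample comparison of mean Qc-1 between time windows. A minimal sketch, using synthetic placeholder values rather than the Tamba-region measurements:

```python
# Two-sample Student's t test on mean coda Q^-1 between two time windows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
qc_inv_early = rng.normal(8.0e-3, 1.5e-3, 200)   # e.g. first years after the event
qc_inv_late  = rng.normal(6.5e-3, 1.5e-3, 200)   # e.g. 2-4 years after the event

t_stat, p_value = stats.ttest_ind(qc_inv_early, qc_inv_late, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")     # small p => significant decrease
```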

  4. Irregularities in Early Seismic Rupture Propagation for Large Events in a Crustal Earthquake Model

    NASA Astrophysics Data System (ADS)

    Lapusta, N.; Rice, J. R.

    2001-12-01

    We study early seismic propagation of model earthquakes in a 2-D model of a vertical strike-slip fault with depth-variable rate and state friction properties. Our model earthquakes are obtained in fully dynamic simulations of sequences of instabilities on a fault subjected to realistically slow tectonic loading (Lapusta et al., JGR, 2000). This work is motivated by results of Ellsworth and Beroza (Science, 1995), who observe that for many earthquakes, far-field velocity seismograms during initial stages of dynamic rupture propagation have irregular fluctuations which constitute a "seismic nucleation phase". In our simulations, we find that such irregularities in velocity seismograms can be caused by two factors: (1) rupture propagation over regions of stress concentrations and (2) partial arrest of rupture in neighboring creeping regions. As rupture approaches a region of stress concentration, it sees increasing background stress and its moment acceleration (to which far-field velocity seismograms are proportional) increases. After the peak in stress concentration, the rupture sees decreasing background stress and moment acceleration decreases. Hence a fluctuation in moment acceleration is created. If rupture starts sufficiently far from a creeping region, then partial arrest of rupture in the creeping region causes a decrease in moment acceleration. As the other parts of rupture continue to develop, moment acceleration then starts to grow again, and a fluctuation again results. Other factors may cause the irregularities in moment acceleration, e.g., phenomena such as branching and/or intermittent rupture propagation (Poliakov et al., submitted to JGR, 2001) which we have not studied here. Regions of stress concentration are created in our model by arrest of previous smaller events as well as by interactions with creeping regions. One such region is deep in the fault zone, and is caused by the temperature-induced transition from seismogenic to creeping

  5. On the problem of earthquake correlation in space and time over large distances

    NASA Astrophysics Data System (ADS)

    Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.

    2012-04-01

    A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency of these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996] "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological natural hazards, which a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, based on an explicit segmentation of the data by both their temporal and spatial stamps, following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event. In other words, the major seismic event should be treated differently from pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms
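
    The central suggestion above, including time as an extra coordinate during clustering, can be sketched as follows. The toy catalogue, the scaling of days relative to kilometres and the DBSCAN parameters are illustrative assumptions, not the algorithms of the cited studies.

```python
# Space-time clustering of epicentres: time enters as a third, rescaled coordinate.
import numpy as np
from sklearn.cluster import DBSCAN

# columns: x (km), y (km), origin time (days); toy catalogue
events = np.array([
    [10.0, 12.0,   0.0], [11.0, 12.5,   2.0], [ 9.5, 11.8,   5.0],
    [80.0, 40.0, 100.0], [81.5, 41.0, 103.0], [79.0, 39.5, 108.0],
    [50.0, 70.0, 400.0],
])

# Bring space and time to comparable units (here: 2 days ~ 1 km of separation)
scaled = events / np.array([1.0, 1.0, 2.0])
labels = DBSCAN(eps=10.0, min_samples=2).fit_predict(scaled)
print(labels)   # two compact space-time clusters plus one outlier (-1)
```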

  6. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    PubMed Central

    Passarelli, L.; Rivalta, E.; Shuler, A.

    2014-01-01

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process. PMID:24469260
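
    For reference, the two earthquake scaling relations the abstract appeals to, written in their standard seismological form (the dike-intrusion analogues replace magnitude and event rate with intrusion dimensions and geodetic moment release):

```latex
% Gutenberg-Richter frequency-magnitude relation and modified Omori law.
\begin{align}
  \log_{10} N(\geq M) &= a - bM, \\
  n(t) &= \frac{K}{(t + c)^{p}}
\end{align}
% N: number of events with magnitude >= M; n(t): event (or moment-release) rate at
% time t after the main shock / first intrusion; a, b, K, c, p: empirical constants.
```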

  7. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes.

    PubMed

    Passarelli, L; Rivalta, E; Shuler, A

    2014-01-01

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process. PMID:24469260

  8. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes.

    PubMed

    Passarelli, L; Rivalta, E; Shuler, A

    2014-01-28

    As continental rifts evolve towards mid-ocean ridges, strain is accommodated by repeated episodes of faulting and magmatism. Discrete rifting episodes have been observed along two subaerial divergent plate boundaries, the Krafla segment of the Northern Volcanic Rift Zone in Iceland and the Manda-Hararo segment of the Red Sea Rift in Ethiopia. In both cases, the initial and largest dike intrusion was followed by a series of smaller intrusions. By performing a statistical analysis of these rifting episodes, we demonstrate that dike intrusions obey scaling relationships similar to earthquakes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, and the long-term release of geodetic moment is governed by a relationship consistent with the Omori law. Due to the effects of magma supply, the timing of secondary dike intrusions differs from that of the aftershocks. This work provides evidence of self-similarity in the rifting process.

  9. Project Medishare's Historic Haitian Earthquake Response.

    PubMed

    Greig, Elizabeth; Cornely, Cheryl Clark; Green, Barth A

    2015-06-01

    This article describes the immediate large-scale medical and surgical response of Project Medishare to the 2010 Haitian earthquake. It summarizes the rapid evolution of critical care and trauma capacity in a developing nation after the earthquake and discusses the transition from acute trauma treatment to interdisciplinary health care sector building.

  10. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Only a few scientific publications concerning large-scale landslides in Nepal exist. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution probability is also derived. The equation is validated by applying it to another area, where the area under the receiver operating characteristic curve is 0.699 and the distribution probability explains > 65% of the existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
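
    A minimal sketch of the susceptibility workflow described above (logistic regression on geomorphological predictors, validated by the area under the ROC curve). The predictor names and synthetic data are placeholders, not the central Nepal dataset.

```python
# Logistic-regression landslide susceptibility with ROC-AUC validation (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 500
slope = rng.uniform(5, 45, n)          # degrees
relief = rng.uniform(100, 2000, n)     # m
X = np.column_stack([slope, relief])

# Synthetic "landslide present" labels that loosely increase with slope and relief
p = 1.0 / (1.0 + np.exp(-(0.08 * slope + 0.002 * relief - 4.0)))
y = rng.random(n) < p

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, prob))  # the study reports 0.699 on an independent area
```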

  11. Dynamic scaling and large scale effects in turbulence in compressible stratified fluid

    NASA Astrophysics Data System (ADS)

    Pharasi, Hirdesh K.; Bhattacharjee, Jayanta K.

    2016-01-01

    We consider the propagation of sound in a turbulent fluid which is confined between two horizontal parallel plates, maintained at different temperatures. In the homogeneous fluid, Staroselsky et al. had predicted a divergent sound speed at large length scales. Here we find a divergent sound speed and a vanishing expansion coefficient at large length scales. Dispersion relation and the question of scale invariance at large distance scales lead to these results.

  12. What caused a large number of fatalities in the Tohoku earthquake?

    NASA Astrophysics Data System (ADS)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which was a "tsunami earthquake" that resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced in their lives. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect. Expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings. The first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  13. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain a set of 48 and 20 modified Mercalli intensity values of the two aftershocks, respectively. For the dawn aftershock, we infer a Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer a Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.
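
    The magnitude estimate rests on an intensity-versus-distance calibration. The sketch below shows the general idea of inverting Mercalli intensities for Mw; the attenuation relation and its coefficients are hypothetical placeholders, not the Bakun et al. (2002) calibration for central and eastern North America.

```python
# Magnitude from intensity: invert an assumed MMI attenuation relation per site.
import numpy as np

c0, c1, c2, c3 = 1.4, 1.5, 2.2, 0.003      # hypothetical attenuation coefficients

def mw_from_intensity(mmi, r_km):
    """Invert MMI = c0 + c1*Mw - c2*log10(r) - c3*r for Mw."""
    return (mmi - c0 + c2 * np.log10(r_km) + c3 * r_km) / c1

mmi_obs = np.array([7.0, 6.0, 5.0, 4.0])   # modified Mercalli intensities
dist_km = np.array([50.0, 150.0, 300.0, 600.0])

mw_each = mw_from_intensity(mmi_obs, dist_km)
print("per-site Mw:", np.round(mw_each, 2), " mean:", round(mw_each.mean(), 2))
```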

  14. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  15. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied in terms of parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.

  16. Large mid-Holocene and late Pleistocene earthquakes on the Oquirrh fault zone, Utah

    USGS Publications Warehouse

    Olig, S.S.; Lund, W.R.; Black, B.D.

    1994-01-01

    The Oquirrh fault zone is a range-front normal fault that bounds the east side of Tooele Valley and it has long been recognized as a potential source for large earthquakes that pose a significant hazard to population centers along the Wasatch Front in central Utah. Scarps of the Oquirrh fault zone offset the Provo shoreline of Lake Bonneville and previous studies of scarp morphology suggested that the most recent surface-faulting earthquake occurred between 9000 and 13,500 years ago. Based on a potential rupture length of 12 to 21 km from previous mapping, moment magnitude (Mw) estimates for this event range from 6.3 to 6.6. In contrast, our results from detailed mapping and trench excavations at two sites indicate that the most-recent event actually occurred between 4300 and 6900 yr B.P. (4800 and 7900 cal B.P.) and net vertical displacements were 2.2 to 2.7 m, much larger than expected considering estimated rupture lengths for this event. Empirical relations between magnitude and displacement yield Mw 7.0 to 7.2. A few short, discontinuous fault scarps as far south as Stockton, Utah have been identified in a recent mapping investigation and our results suggest that they may be part of the Oquirrh fault zone, increasing the total fault length to 32 km. These results emphasize the importance of integrating stratigraphic and geomorphic information in fault investigations for earthquake hazard evaluations. At both the Big Canyon and Pole Canyon sites, trenches exposed faulted Lake Bonneville sediments and thick wedges of fault-scarp derived colluvium associated with the most-recent event. Bulk sediment samples from a faulted debris-flow deposit at the Big Canyon site yield radiocarbon ages of 7650 ± 90 yr B.P. and 6840 ± 100 yr B.P. (all lab errors are ±1σ). A bulk sediment sample from unfaulted fluvial deposits that bury the fault scarp yields a radiocarbon age estimate of 4340 ± 60 yr B.P. Stratigraphic evidence for a pre-Bonneville lake cycle penultimate

  17. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  18. Apparent break in earthquake scaling due to path and site effects on deep borehole recordings

    USGS Publications Warehouse

    Ide, S.; Beroza, G.C.; Prejean, S.G.; Ellsworth, W.L.

    2003-01-01

    We reexamine the scaling of stress drop and apparent stress (rigidity times the ratio of seismically radiated energy to seismic moment) with earthquake size for a set of microearthquakes recorded in a deep borehole in Long Valley, California. In the first set of calculations, we assume a constant Q and solve for the corner frequency and seismic moment. In the second set of calculations, we model the spectral ratio of nearby events to determine the same quantities. We find that the spectral ratio technique, which can account for path and site effects or nonconstant Q, yields higher stress drops, particularly for the smaller events in the data set. The measurements determined from spectral ratios indicate no departure from constant stress drop scaling down to the smallest events in our data set (Mw 0.8). Our results indicate that propagation effects can contaminate measurements of source parameters even in the relatively clean recording environment of a deep borehole, just as they do at the Earth's surface. The scaling of source properties of microearthquakes made from deep borehole recordings may need to be reevaluated.
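
    The spectral-ratio technique works because path and site terms common to two nearby events cancel in the ratio, leaving only the source spectra. A minimal sketch with synthetic omega-square (Brune) spectra, not the Long Valley borehole data:

```python
# Fit the ratio of two Brune source spectra; the shared attenuation term cancels.
import numpy as np
from scipy.optimize import curve_fit

f = np.logspace(-0.5, 2, 200)                       # frequency (Hz)

def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)           # omega-square source model

# Synthetic "observed" spectra sharing a common path/site attenuation term
path = np.exp(-np.pi * f * 0.02)                    # arbitrary t* attenuation
big = brune(f, 1.0e3, 3.0) * path
small = brune(f, 2.0e1, 15.0) * path

def ratio_model(f, r0, fc1, fc2):
    return r0 * (1.0 + (f / fc2) ** 2) / (1.0 + (f / fc1) ** 2)

popt, _ = curve_fit(ratio_model, f, big / small, p0=[10.0, 1.0, 10.0])
print("moment ratio, fc_large, fc_small:", np.round(popt, 2))   # ~[50, 3, 15]
```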

  19. The Mini-IPIP Scale: psychometric features and relations with PTSD symptoms of Chinese earthquake survivors.

    PubMed

    Li, Zhongquan; Sang, Zhiqin; Wang, Li; Shi, Zhanbiao

    2012-10-01

    The present purpose was to validate the Mini-IPIP scale, a short measure of the five-factor model personality traits, with a sample of Chinese earthquake survivors. A total of 1,563 participants, ages 16 to 85 years, completed the Mini-IPIP scale and a measure of posttraumatic stress disorder (PTSD) symptoms. Confirmatory factor analysis supported the five-factor structure of the Mini-IPIP with adequate values of various fit indices. The scale also showed adequate internal consistency: Cronbach's alphas ranged from .79 to .84, and McDonald's omegas ranged from .73 to .82 for scores on each subscale. Moreover, the five personality traits measured by the Mini-IPIP and those assessed by other big five measures had comparable patterns of relations with PTSD symptoms. Findings indicated that the Mini-IPIP is an adequate short form of the Big Five factors of personality that is applicable to natural disaster survivors. PMID:23234106
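
    The internal-consistency statistic quoted above can be computed directly from an item-score matrix. A minimal sketch of Cronbach's alpha on randomly generated placeholder responses (not the survivor data):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))                      # common trait
scores = latent + rng.normal(scale=0.8, size=(300, 4))  # four correlated items
print(round(cronbach_alpha(scores), 2))
```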

  20. Scaling earthquake ground motions for performance-based assessment of buildings

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.

    2011-01-01

    The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
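
    Method (3) above reduces to a single amplitude factor per record. A minimal sketch, using a fake response spectrum in place of a real record:

```python
# First-mode-period scaling: one factor makes Sa(T1) of the record match the target.
import numpy as np

periods = np.linspace(0.05, 4.0, 80)                 # s
record_sa = 0.8 * np.exp(-periods / 1.5)             # g, fake response spectrum
target_sa_at_t1 = 0.45                               # g, from the design spectrum
t1 = 1.2                                             # s, first-mode period

sa_t1 = np.interp(t1, periods, record_sa)
scale_factor = target_sa_at_t1 / sa_t1
scaled_record_sa = scale_factor * record_sa          # same factor scales the accelerogram
print(f"Sa(T1) = {sa_t1:.3f} g, scale factor = {scale_factor:.2f}")
```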

  1. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  2. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  3. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
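
    The harmonisation step amounts to fitting an empirical conversion on events common to two bulletins and applying it elsewhere. A minimal sketch with a simple least-squares line and synthetic paired magnitudes (the released tools support more general regression forms):

```python
# Derive and apply an empirical mb -> Mw conversion from events common to two bulletins.
import numpy as np

rng = np.random.default_rng(4)
mb_common = rng.uniform(4.0, 6.5, 300)                          # events in both bulletins
mw_common = 0.85 * mb_common + 1.03 + rng.normal(0, 0.15, 300)  # fake paired Mw values

slope, intercept = np.polyfit(mb_common, mw_common, 1)          # empirical model

mb_only = np.array([4.2, 5.0, 5.7])                             # events lacking Mw
mw_proxy = slope * mb_only + intercept                          # homogenised magnitudes
print(np.round([slope, intercept], 3), np.round(mw_proxy, 2))
```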

  4. Exploring Earthquake Databases for the Creation of Magnitude-Homogeneous Catalogues: Tools for Application on a Regional and Global Scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-06-01

    The creation of a magnitude-homogenised catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenising multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins, and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilise this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonise magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonised into moment-magnitude to form a catalogue of more than 562,840 events. This extended catalogue, whilst not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.

  5. Analog earthquakes

    SciTech Connect

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  6. Preparation phase and consequences of a large earthquake: insights from foreshocks and aftershocks of the 2014 Mw 8.1 Iquique earthquake, Chile

    NASA Astrophysics Data System (ADS)

    Cesca, Simone; Grigoli, Francesco; Heimann, Sebastian; Dahm, Torsten

    2015-04-01

    The April 1, 2014, Mw 8.1 Iquique earthquake in Northern Chile was preceded by an anomalous, extensive preparation phase. Precursor seismicity at the ruptured slab segment was observed sporadically several months before the main shock, with a significant increase in seismicity rates and observed magnitudes in the last three weeks before the main shock. The large dataset of regional recordings allowed us to investigate the role of this precursor activity, comparing foreshock and aftershock seismicity to test models of rupture preparation and models of strain and stress rotation during an earthquake. We used full-waveform techniques to locate events, map the seismicity rate, derive source parameters, and assess spatiotemporal stress changes. Results indicate that the spatial distribution of foreshocks delineated the shallower part of the rupture areas of the main shock and its largest aftershock, and matches well the spatial extent of the aftershocks. During the foreshock sequence, seismicity was mainly localized in two spatial clusters, separated by a region of high locking. The ruptures of the main shock and largest aftershock nucleated within these clusters and propagated into the locked region; the aftershocks are again localized in correspondence with the original spatial clusters, and the central region is locked again. More than 300 moment tensor inversions were performed, down to Mw 4.0, most of them corresponding to almost pure double-couple thrust mechanisms with a geometry consistent with the slab orientation. No significant differences are observed among thrust mechanisms in different areas, nor between thrust foreshocks and aftershocks. However, a new family of normal-faulting mechanisms appears after the main shock, likely affecting the shallow wedge structure as a consequence of the increased extensional stress in this region. We infer a stress rotation after the main shock, as proposed for recent large thrust earthquakes, which suggests that the April

  7. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among pre- and post-seismic estimates of the recurrence interval of large earthquakes based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably has experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
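
    As a rough check of the quoted interval, the moment-balance arithmetic can be written out explicitly. The coseismic moment below follows from the standard moment-magnitude relation and is an approximation, not necessarily the geodetically derived value used in the study.

```latex
% Moment balance: accumulate moment at the geodetic rate until one
% characteristic-event moment is stored.
\begin{align}
  M_0 &= 10^{1.5 M_w + 9.1}\ \text{N m} \approx 10^{1.5(7.9) + 9.1} \approx 9 \times 10^{20}\ \text{N m}, \\
  T_r &= \frac{M_0}{\dot{M}_0} \approx \frac{\sim 10^{21}\ \text{N m}}{2.7 \times 10^{17}\ \text{N m/yr}} \approx 3000\text{--}4000\ \text{yr},
\end{align}
% consistent in order of magnitude with the 3900 +/- 400 yr interval quoted above;
% the exact figure depends on the coseismic moment adopted for the 2008 event.
```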

  8. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among pre- and post-seismic estimates of the recurrence interval of large earthquakes based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably has experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  9. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among pre- and post-seismic estimates of the recurrence interval of large earthquakes based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably has experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  10. Large Scale Experiments on Lightweight Thrust Restraint for Buried Bend under Internal Pressure

    NASA Astrophysics Data System (ADS)

    Kawabata, Toshinori; Sawada, Yutaka; Mohri, Yoshiyuki

    At pipe bends, an unbalanced force called the thrust force is generated. Generally, a concrete block is installed at the bend to provide lateral resistance. However, a heavy concrete block is a weak point during earthquakes. In our previous study, a lightweight thrust restraint using geogrid was proposed and its effectiveness was demonstrated by small laboratory tests. In the present study, large-scale tests of the new method were carried out in a large pit (8.4 m × 5.4 m × 4 m), using a pipe bend with a diameter of 300 mm. As a result, the lateral displacement of the bend was reduced by the proposed method. In addition, it was revealed that the effect depended on the stiffness, length, and installation of the geogrid.

  11. Rupture Dynamics and Scaling Behavior of Hydraulically Stimulated Micro-Earthquakes in a Shale Reservoir

    NASA Astrophysics Data System (ADS)

    Viegas, G. F.; Urbancic, T.; Baig, A. M.

    2014-12-01

    In hydraulic fracturing completion programs fluids are injected under pressure into fractured rock formations to open escape pathways for trapped hydrocarbons along pre-existing and newly generated fractures. To characterize the failure process, we estimate static and dynamic source and rupture parameters, such as dynamic and static stress drop, radiated energy, seismic efficiency, failure modes, failure plane orientations and dimensions, and rupture velocity to investigate the rupture dynamics and scaling relations of micro-earthquakes induced during a hydraulic fracturing shale completion program in NE British Columbia, Canada. The relationships between the different parameters combined with the in-situ stress field and rock properties provide valuable information on the rupture process giving insights into the generation and development of the fracture network. Approximately 30,000 micro-earthquakes were recorded using three multi-sensor arrays of high-frequency geophones temporarily placed close to the treatment area at reservoir depth (~2 km). On average the events have low radiated energy, low dynamic stress and low seismic efficiency, consistent with the obtained slow rupture velocities. Events fail in overshoot mode (slip weakening failure model), with fluids lubricating faults and decreasing friction resistance. Events occurring in deeper formations tend to have faster rupture velocities and are more efficient in radiating energy. Variations in rupture velocity tend to correlate with variation in depth, fault azimuth and elapsed time, reflecting a dominance of the local stress field over other factors. Several regions with different characteristic failure modes are identifiable based on coherent stress drop, seismic efficiency, rupture velocities and fracture orientations. Variations of source parameters with rock rheology and hydro-fracture fluids are also observed. Our results suggest that the spatial and temporal distribution of events with similar
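
    The static and dynamic source parameters listed above follow from a few standard relations once moment, corner frequency and radiated energy are measured. A minimal sketch with illustrative micro-earthquake numbers (Brune-type circular source; the assumed wave speed, rigidity, moment and energy are not values from this study):

```python
# Static stress drop and apparent stress from moment, corner frequency and energy.
import numpy as np

beta = 2600.0          # S-wave speed (m/s), assumed
mu = 2.0e10            # shear modulus (Pa), assumed
m0 = 1.0e10            # seismic moment (N m), roughly Mw ~ 0.6
fc = 60.0              # corner frequency (Hz)
e_r = 1.0e4            # radiated energy (J)

r = 0.37 * beta / fc                       # Brune source radius (m)
stress_drop = 7.0 * m0 / (16.0 * r ** 3)   # static stress drop (Pa)
apparent_stress = mu * e_r / m0            # rigidity x (energy / moment), in Pa

print(f"radius = {r:.1f} m, stress drop = {stress_drop/1e6:.2f} MPa, "
      f"apparent stress = {apparent_stress/1e6:.3f} MPa")
```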

  12. Nonlinear Generation of shear flows and large scale magnetic fields by small scale

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233: Nonlinear generation of shear flows and large-scale magnetic fields by small-scale turbulence in the ionosphere, by G. Aburjania (conference abstract; no further text available).

  13. Response of human populations to large-scale emergencies

    NASA Astrophysics Data System (ADS)

    Bagrow, James; Wang, Dashun; Barabási, Albert-László

    2010-03-01

    Until recently, little quantitative data regarding collective human behavior during dangerous events such as bombings and riots have been available, despite its importance for emergency management, safety and urban planning. Understanding how populations react to danger is critical for prediction, detection and intervention strategies. Using a large telecommunications dataset, we study for the first time the spatiotemporal, social and demographic response properties of people during several disasters, including a bombing, a city-wide power outage, and an earthquake. Call activity rapidly increases after an event and we find that, when faced with a truly life-threatening emergency, information rapidly propagates through a population's social network. Other events, such as sports games, do not exhibit this propagation.

  14. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986

  15. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.

  16. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number, Re_M. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the Re_M-dependent quenching of B̄/B_rms. These results are obtained for 1024³ DNS with magnetic Prandtl numbers of Pr_M = 0.1 and 10. For Pr_M = 0.1, B̄/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For Pr_M = 10, B̄/B_rms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  17. Large-scale convective instability in an electroconducting medium with small-scale helicity

    SciTech Connect

    Kopp, M. I.; Tur, A. V.; Yanovsky, V. V.

    2015-04-15

    A large-scale instability occurring in a stratified conducting medium with small-scale helicity of the velocity field and magnetic fields is detected using an asymptotic many-scale method. Such a helicity is sustained by small external sources for small Reynolds numbers. Two regimes of instability with zero and nonzero frequencies are detected. The criteria for the occurrence of large-scale instability in such a medium are formulated.

  18. A Cloud Computing Platform for Large-Scale Forensic Computing

    NASA Astrophysics Data System (ADS)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.

  19. Large-scale microwave anisotropy from gravitating seeds

    NASA Technical Reports Server (NTRS)

    Veeraraghavan, Shoba; Stebbins, Albert

    1992-01-01

    Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.

  20. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  1. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  2. When and where the aftershock activity was depressed: Contrasting decay patterns of the proximate large earthquakes in southern California

    USGS Publications Warehouse

    Ogata, Y.; Jones, L.M.; Toda, S.

    2003-01-01

    Seismic quiescence has attracted attention as a possible precursor to a large earthquake. However, sensitive detection of quiescence requires accurate modeling of normal aftershock activity. We apply the epidemic-type aftershock sequence (ETAS) model, a natural extension of the modified Omori formula for aftershock decay that allows further clusters (secondary aftershocks) within an aftershock sequence. The Hector Mine aftershock activity has been normal, relative to the decay predicted by the ETAS model, during the 14 months of available data. In contrast, although the aftershock sequence of the 1992 Landers earthquake (M = 7.3), including the 1992 Big Bear earthquake (M = 6.4) and its aftershocks, fits very well to the ETAS up until about 6 months after the main shock, the activity then showed a clear lowering relative to the modeled rate (relative quiescence) that lasted nearly 7 years, leading up to the Hector Mine earthquake (M = 7.1) in 1999. Specifically, the relative quiescence occurred only in the shallow aftershock activity, down to depths of 5-6 km. The sequence of deeper events showed clear, normal aftershock activity well fitted to the ETAS throughout the whole period. We consider several physical explanations for these results. Among them, we strongly suspect aseismic slip within the Hector Mine rupture source, which could inhibit the crustal relaxation process within "shadow zones" of the Coulomb failure stress change. Furthermore, the aftershock activity of the 1992 Joshua Tree earthquake (M = 6.1) dropped sharply on the same day as the main shock, which can be explained by a similar scenario.
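
    For reference, the ETAS model used as the baseline for "normal" aftershock activity has the standard conditional intensity below (the notation may differ slightly from the paper's):

```latex
% Conditional intensity of the epidemic-type aftershock sequence (ETAS) model:
% background rate plus a modified-Omori contribution from every prior event.
\begin{equation}
  \lambda(t) = \mu + \sum_{t_i < t} K \, e^{\alpha (M_i - M_c)} \, (t - t_i + c)^{-p}
\end{equation}
% mu: background rate; K, alpha, c, p: ETAS parameters; M_i: magnitude of event i;
% M_c: reference (cutoff) magnitude. "Relative quiescence" is activity falling
% below the rate this model predicts.
```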

  3. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They demonstrate the safety of hybrid rockets to audiences. These small show motors and small laboratory scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions always asked when hybrids are mentioned for large scale applications are: how do they scale, and has this been shown in a large motor? To answer those questions, large scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small scale hybrid data to that of larger scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  4. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or are a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
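
    For reference, the two properties listed above can be stated compactly; the block below restates the standard definition of magnetic helicity and the spectral realizability bound, which are textbook relations rather than equations quoted from this primer.

```latex
% Magnetic helicity over a volume V and the spectral realizability bound:
\[
  H_M \;=\; \int_V \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V ,
  \qquad \mathbf{B} \;=\; \nabla\times\mathbf{A} ,
\]
\[
  E_M(k) \;\ge\; \tfrac{1}{2}\, k \,\lvert H_M(k)\rvert ,
\]
% so a fixed amount of helicity is stored with the least magnetic energy when
% it resides at the smallest available wavenumber, i.e. the largest scale.
```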

  5. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  6. Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-10-23

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects. PMID:26551120

  7. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    SciTech Connect

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  8. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGESBeta

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  9. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  10. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  11. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  12. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  13. Contribution of the infrasound technology to characterize large scale atmospheric disturbances and impact on infrasound monitoring

    NASA Astrophysics Data System (ADS)

    Blanc, Elisabeth; Le Pichon, Alexis; Ceranna, Lars; Pilger, Christoph; Charlton Perez, Andrew; Smets, Pieter

    2016-04-01

    The International Monitoring System (IMS) developed for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) provides a unique global description of atmospheric disturbances generating infrasound, such as extreme events (e.g. meteors, volcanoes, earthquakes, and severe weather) or human activity (e.g. explosions and supersonic airplanes). The analysis of the detected signals, recorded at global scales and over nearly 15 years at some stations, demonstrates that large-scale atmospheric disturbances strongly affect infrasound propagation. Their time scales vary from several tens of minutes to hours and days. Their effects are on average well resolved by current model predictions; however, an accurate spatial and temporal description is lacking in both weather and climate models. This study reviews recent results using the infrasound technology to characterize these large-scale disturbances, including (i) wind fluctuations induced by gravity waves, generating infrasound partial reflections and modifications of the infrasound waveguide; (ii) convection from thunderstorms and mountain waves, generating gravity waves; (iii) stratospheric warming events, which yield wind inversions in the stratosphere; and (iv) planetary waves, which control the global atmospheric circulation. Improved knowledge of these disturbances and their assimilation in future models is an important objective of the ARISE (Atmospheric dynamics Research InfraStructure in Europe) project. This is essential in the context of the future verification of the CTBT, as enhanced atmospheric models are necessary to assess the IMS network performance at higher resolution, reduce source location errors, and improve characterization methods.

  14. Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation

    USGS Publications Warehouse

    Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.

    2000-01-01

    We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years, and we test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
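
    The mechanics of turning an earthquake rate into a time-window probability can be sketched as below; this is a plain constant-rate (Poisson) conversion, not the interaction-based, time-dependent calculation of the study, and it is included only to show how the 30-year and 10-year figures relate.

```python
import math

def prob_at_least_one(rate_per_yr, years):
    """Probability of at least one event in a window, assuming a Poisson process."""
    return 1.0 - math.exp(-rate_per_yr * years)

def rate_from_prob(prob, years):
    """Equivalent constant annual rate implied by an exceedance probability."""
    return -math.log(1.0 - prob) / years

# Treat the reported 30-year probability as if it came from a constant rate.
lam = rate_from_prob(0.62, 30.0)
print(f"equivalent rate: {lam:.4f} per year")
print(f"implied 10-year probability: {prob_at_least_one(lam, 10.0):.2f}")
```

    The constant-rate conversion implies roughly a 28% 10-year probability, lower than the reported 32 ± 12%, which is consistent with the study's point that the Izmit stress transfer concentrates probability in the near term.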

  15. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, and solid-Earth dynamic change. Toward establishing a Moon-based Earth observation platform, we plan to study five aspects: mechanisms and models for Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  16. New insights into Kilauea's volcano dynamics brought by large-scale relative relocation of microearthquakes

    USGS Publications Warehouse

    Got, J.-L.; Okubo, P.

    2003-01-01

    We investigated the microseismicity recorded in an active volcano to infer information concerning the volcano structure and long-term dynamics, using relative relocations and focal mechanisms of microearthquakes. Some 32,000 earthquakes of the Mauna Loa and Kilauea volcanoes were recorded by more than eight stations of the Hawaiian Volcano Observatory seismic network between 1988 and 1999. We studied 17,000 of these events and relocated more than 70%, with an accuracy ranging from 10 to 500 m. About 75% of these relocated events are located in the vicinity of subhorizontal decollement planes, at a depth of 8-11 km. However, the striking features revealed by these relocation results are steep southeast-dipping fault planes working as reverse faults, clearly located below the decollement plane and intersecting it. If this decollement plane coincides with the pre-Mauna Loa seafloor, as hypothesized by numerous authors, such reverse faults rupture the pre-Mauna Loa oceanic crust. The weight of the volcano and pressure in the magma storage system are possible causes of these ruptures, fully compatible with the local stress tensor computed by Gillard et al. [1996]. Reverse faults are suspected of producing scarps revealed by kilometer-long horizontal slip-perpendicular lineations along the decollement surface, and therefore large-scale roughness, asperities, and normal stress variations. These are capable of generating stick-slip, large-magnitude earthquakes, the spatial microseismic pattern observed in the south flank of Kilauea volcano, and Hilina-type instabilities. Rupture intersecting the decollement surface, causing its large-scale roughness, may be an important parameter controlling the growth of Hawaiian volcanoes.

  17. Earthquake Response Modeling for a Parked and Operating Megawatt-Scale Wind Turbine

    SciTech Connect

    Prowell, I.; Elgamal, A.; Romanowitz, H.; Duggan, J. E.; Jonkman, J.

    2010-10-01

    Demand parameters for turbines, such as tower moment demand, are primarily driven by wind excitation and dynamics associated with operation. For that purpose, computational simulation platforms have been developed, such as FAST, maintained by the National Renewable Energy Laboratory (NREL). For seismically active regions, building codes also require the consideration of earthquake loading. Historically, it has been common to use simple building code approaches to estimate the structural demand from earthquake shaking as an independent loading scenario. Currently, International Electrotechnical Commission (IEC) design requirements include the consideration of earthquake shaking while the turbine is operating. Numerical and analytical tools used to consider earthquake loads for buildings and other static civil structures are not well suited for modeling simultaneous wind and earthquake excitation in conjunction with operational dynamics. Through the addition of seismic loading capabilities to FAST, it is possible to simulate earthquake shaking in the time domain, which allows consideration of nonlinear effects such as structural nonlinearities, aerodynamic hysteresis, control system influence, and transients. This paper presents a FAST model of a modern 900-kW wind turbine, which is calibrated based on field vibration measurements. With this calibrated model, both coupled and uncoupled simulations are conducted to examine the structural demand for the turbine tower. Response is compared under the conditions of normal operation and potential emergency shutdown due to the earthquake-induced vibrations. The results highlight the availability of a numerical tool for conducting such studies, and provide insights into the combined wind-earthquake loading mechanism.

  18. Foreshocks and short-term hazard assessment of large earthquakes using complex networks: the case of the 2009 L'Aquila earthquake

    NASA Astrophysics Data System (ADS)

    Daskalaki, Eleni; Spiliotis, Konstantinos; Siettos, Constantinos; Minadakis, Georgios; Papadopoulos, Gerassimos A.

    2016-08-01

    The monitoring of statistical network properties could be useful for the short-term hazard assessment of the occurrence of mainshocks in the presence of foreshocks. Using successive connections between events acquired from the earthquake catalog of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) for the case of the L'Aquila (Italy) mainshock (Mw = 6.3) of 6 April 2009, we provide evidence that network measures, both global (average clustering coefficient, small-world index) and local (betweenness centrality), could potentially be exploited for forecasting purposes in both time and space. Our results reveal statistically significant increases in the topological measures and a nucleation of the betweenness centrality around the location of the epicenter about 2 months before the mainshock. The results of the analysis are robust even when considering space windows that are large or off-center with respect to the main event.
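
    A minimal sketch of the kind of network measures mentioned above is given below, using networkx on a toy sequence of epicentral grid cells; building edges from "successive connections between events" is a plausible reading of the abstract, not the authors' exact procedure.

```python
import networkx as nx

def catalog_to_network(cells):
    """Link the grid cells hosting successive catalog events into a graph.

    A construction in the spirit of the "successive connections between
    events" mentioned above; the study's exact rules are not reproduced.
    """
    g = nx.Graph()
    for a, b in zip(cells[:-1], cells[1:]):
        if a != b:              # skip self-loops in this toy example
            g.add_edge(a, b)
    return g

# Toy sequence of epicentral cells visited by successive events.
cells = ["A", "B", "A", "C", "B", "D", "C", "A", "C"]
g = catalog_to_network(cells)
print("average clustering:", nx.average_clustering(g))
print("betweenness centrality:", nx.betweenness_centrality(g))
```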

  19. Continuous, Large-Scale Processing of Seismic Archives for High-Resolution Monitoring of Seismic Activity and Seismogenic Properties

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2012-12-01

    Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades, helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year-long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, forms the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA-based detector employed by the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. On a global scale, we are currently building
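
    The cross-correlation detector idea mentioned above can be sketched in a few lines of numpy by sliding a normalized template along a continuous trace; this is a schematic stand-in, not the scaled cross-correlation function or the DDRT implementation described in the abstract.

```python
import numpy as np

def correlation_detector(trace, template, threshold=0.7):
    """Return (offset, cc) pairs where the normalized cross-correlation of a
    waveform template against a continuous trace exceeds a threshold."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(trace) - n + 1):
        win = trace[i:i + n]
        std = win.std()
        if std == 0.0:
            continue
        cc = np.dot(tpl, (win - win.mean()) / std)
        if cc >= threshold:
            detections.append((i, cc))
    return detections

# Toy continuous trace containing two noisy copies of the template.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, 6.0 * np.pi, 200))
trace = 0.2 * rng.standard_normal(3000)
trace[500:700] += template
trace[2100:2300] += 0.8 * template
print(correlation_detector(trace, template)[:3])
```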

  20. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  1. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
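
    To illustrate the kind of prototype-based low-rank kernel approximation described above, the sketch below uses the generic Nystrom construction with randomly chosen prototype points; the PVM's own prototype-selection criteria and regularizer approximation differ in detail and are not reproduced here.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel matrix between two point sets."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approximation(x, prototypes, gamma=0.5):
    """Low-rank kernel approximation K ~= K_np @ K_pp^+ @ K_np.T built from a
    small set of prototype points (generic Nystrom construction)."""
    k_np = rbf_kernel(x, prototypes, gamma)
    k_pp = rbf_kernel(prototypes, prototypes, gamma)
    return k_np @ np.linalg.pinv(k_pp) @ k_np.T

rng = np.random.default_rng(1)
x = rng.standard_normal((500, 5))
prototypes = x[rng.choice(500, size=20, replace=False)]
k_full = rbf_kernel(x, x)
k_low = nystrom_approximation(x, prototypes)
print("relative error:", np.linalg.norm(k_full - k_low) / np.linalg.norm(k_full))
```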

  2. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  3. Tsunami Scenarios from Large Earthquakes in the NE Atlantic: the Gloria Fault and the Southwest Iberia Margin case studies

    NASA Astrophysics Data System (ADS)

    Baptista, Maria Ana; Omira, Rachid; Miranda, Jorge Miguel; Batllo, Josep; Lourenço, Nuno

    2013-04-01

    In the North East Atlantic (NEA) basin, the threat of tsunamis of tectonic origin comes from regional sources located in the South West Iberian Margin (SWIM), far-field sources on the Gloria fault, and transoceanic tsunamis from the Caribbean region. The SWIM and Gloria source areas were responsible for tsunamigenic earthquakes that affected the coasts of the NEA basin. The 1755.11.01 and the 1941.11.26 events remain the best-known (historical and instrumental) tsunamis in these areas. The SWIM area is the most seismically active area in the NEA basin. It was the site of several events in historical times, namely the 60 B.C. tsunami, which reportedly flooded the coasts of Portugal and Galicia, and the 382 AD tsunami, which impacted Portugal and the Atlantic coasts of Morocco and Spain. Recently, the 1969.02.28 earthquake triggered a small tsunami recorded in the tide-gauge network of the area. Among the historical events of the SWIM region, the 1755.11.01 tsunami is probably the most destructive in the history of Europe. The Gloria fault is a segment of the Eurasia-Nubia plate boundary. This is a large strike-slip fault, located between 24W and 19W, with scarce seismic activity. Nonetheless, it is the location of several large earthquakes that caused tsunamis, namely the 1941.11.26 earthquake with a magnitude of 8.3 and the magnitude 7.9 earthquake of 1975.05.26. In 1941, the sea overtopped some beaches on the north coast of Portugal; during the 1975 event, eyewitness observations report the rapid withdrawal of the sea and the subsequent influx over the highest water mark. In this paper, we compute far-field and regional tsunami impact in the NEA basin based on hydrodynamic simulations of two case studies representing the worst-case scenarios for SWIM and Gloria. Both scenarios correspond to the largest earthquakes expected to occur in these areas. These scenarios are consistent with the two past events of 1755.11.01 and 1941.11.26. We assess

  4. Large and great earthquakes in the Shillong plateau-Assam valley area of Northeast India Region: Pop-up and transverse tectonics

    NASA Astrophysics Data System (ADS)

    Kayal, J. R.; Arefiev, S. S.; Baruah, Saurabh; Hazarika, D.; Gogoi, N.; Gautam, J. L.; Baruah, Santanu; Dorbath, C.; Tatevossian, R.

    2012-04-01

    The tectonic model of the Shillong plateau and Assam valley in the northeast India region, the source area for the 1897 great earthquake (Ms ~ 8.7) and for the four (1869, 1923, 1930 and 1943) large earthquakes (M ≥ 7.0), is examined using the high-precision data of a 20-station broadband seismic network. About 300 selected earthquakes of M ≥ 3.0 recorded during 2001-2009 are analysed to study the seismicity and fault plane solutions. The dominant thrust/reverse-faulting earthquakes in the western plateau may be explained by the proposed pop-up tectonics between two active boundary faults, the Oldham-Brahmaputra fault to the north and the Dapsi-Dauki thrust to the south, though the northern boundary fault is debated. The more intense normal- and strike-slip-faulting earthquakes in the eastern plateau (Mikir massif) and in the Assam valley, on the other hand, are well explained by transverse tectonics at the long and deep-rooted Kopili fault that cuts across the Himalaya and caused the 2009 Bhutan earthquake (Mw 6.3). It is conjectured that the complex tectonics of the Shillong plateau and transverse tectonics at the Kopili fault make the region vulnerable to impending large earthquake(s).

  5. Scaling Transition in Earthquake Sources: A Possible Link Between Seismic and Laboratory Measurements

    NASA Astrophysics Data System (ADS)

    Malagnini, Luca; Mayeda, Kevin; Nielsen, Stefan; Yoo, Seung-Hoon; Munafo', Irene; Rawles, Christopher; Boschi, Enzo

    2014-10-01

    We estimate the corner frequencies of 20 crustal seismic events from mainshock-aftershock sequences in different tectonic environments (mainshocks 5.7 < MW < 7.6) using the well-established seismic coda ratio technique (Mayeda et al. in Geophys Res Lett 34:L11303, 2007; Mayeda and Malagnini in Geophys Res Lett, 2010), which provides optimal stability and does not require path or site corrections. For each sequence, we assumed the Brune source model and estimated all the events' corner frequencies and associated apparent stresses following the MDAC spectral formulation of Walter and Taylor (A revised magnitude and distance amplitude correction (MDAC2) procedure for regional seismic discriminants, 2001), which allows for the possibility of non-self-similar source scaling. Within each sequence, we observe a systematic deviation from the self-similar line, all data being rather compatible with M0 ∝ fc^-(3+ε), where ε > 0 (Kanamori and Rivera in Bull Seismol Soc Am 94:314-319, 2004). The deviation from strictly self-similar behavior within each earthquake sequence of our collection is indicated by a systematic increase in the estimated average static stress drop and apparent stress with increasing seismic moment (moment magnitude). Our favored physical interpretation for the increased apparent stress with earthquake size is a progressive frictional weakening with increasing seismic slip, in agreement with recent results obtained in laboratory experiments performed on state-of-the-art apparatuses at slip rates of the order of 1 m/s or larger. At smaller magnitudes (MW < 5.5), the overall data set is characterized by a variability in apparent stress of almost three orders of magnitude, mostly from the scatter observed in strike-slip sequences. Larger events (MW > 5.5) show much less variability: about one order of magnitude. It appears that the apparent stress (and static stress drop) does not grow indefinitely at larger magnitudes: for example, in the case of the Chi

  6. Dynamic triggering potential of large earthquakes recorded by the EarthScope U.S. Transportable Array using a frequency domain detection method

    NASA Astrophysics Data System (ADS)

    Linville, L. M.; Pankow, K. L.; Kilb, D. L.; Velasco, A. A.; Hayward, C.

    2013-12-01

    Because of the abundance of data from the EarthScope U.S. Transportable Array (TA), data paucity and station sampling bias in the US are no longer significant obstacles to understanding some of the physical parameters driving dynamic triggering. Initial efforts to determine locations of dynamic triggering in the US following large earthquakes (M ≥ 8.0) during the TA deployment relied on a time-domain detection algorithm which used an optimized short-term average to long-term average (STA/LTA) filter and resulted in an unmanageably large number of false positive detections. Specific site sensitivities and characteristic noise, when coupled with changes in detection rates, often resulted in misleading output. To navigate this problem, we develop a frequency-domain detection algorithm that first pre-whitens each seismogram and then computes a broadband frequency stack of the data using a three-hour time window beginning at the origin time of the mainshock. This method is successful because of the broadband nature of earthquake signals compared with the more band-limited high-frequency picks that clutter results from time-domain picking algorithms. Preferential band filtering of the frequency stack for individual events can further increase the accuracy and drive the detection threshold to below magnitude one, but at a general cost to detection levels across large-scale data sets. Of the 15 mainshocks studied, 12 show evidence of discrete spatial clusters of local earthquake activity occurring within the array during the mainshock coda. Most of this activity is in the Western US, with notable sequences in Northwest Wyoming, Western Texas, Southern New Mexico and Western Montana. Repeat stations (associated with 2 or more mainshocks) are generally rare, but when they do occur it is exclusively in California and Nevada. Notably, two of the most prolific regions of seismicity following a single mainshock occur following the 2009 magnitude 8.1 Samoa (Sep 29, 2009, 17:48:10) event, in areas with few
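
    The pre-whitening plus broadband frequency-stack idea can be sketched as below with a whitened spectrogram whose bins are averaged across frequency; the window lengths, whitening scheme, and detection statistic here are illustrative assumptions rather than the algorithm used in the study.

```python
import numpy as np

def broadband_frequency_stack(trace, fs, win=256, step=64):
    """Whitened spectrogram stack used as a broadband detection statistic.

    Each frequency bin is divided by its median amplitude over time, then the
    whitened bins are averaged across frequency for every window, so that a
    broadband transient (an earthquake) stands out over narrow-band noise.
    """
    starts = np.arange(0, len(trace) - win + 1, step)
    taper = np.hanning(win)
    spec = np.array([np.abs(np.fft.rfft(trace[s:s + win] * taper))
                     for s in starts])                # (n_windows, n_freqs)
    whitened = spec / (np.median(spec, axis=0) + 1e-12)
    return starts / fs, whitened.mean(axis=1)         # window times, statistic

# Toy trace with a broadband burst buried in background noise.
rng = np.random.default_rng(0)
x = 0.3 * rng.standard_normal(20000)
x[12000:12400] += rng.standard_normal(400)
t, stat = broadband_frequency_stack(x, fs=100.0)
print(t[stat.argmax()])    # prints a time near 120 s, inside the burst
```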

  7. Dynamic Source Inversion of an Intraslab Earthquake: a Slow and Inefficient Rupture with Large Stress Drop and Radiated Energy

    NASA Astrophysics Data System (ADS)

    Cruz-Atienza, V. M.; Diaz-Mojica, J.; Madariaga, R. I.; Singh, S. K.; Tago Pacheco, J.; Iglesias, A.

    2014-12-01

    We introduce a method for imaging the earthquake source dynamics through the inversion of ground motion records based on a parallel genetic algorithm. The source model follows an elliptical patch approach and uses the staggered-grid split-node method to model the earthquake dynamics. A statistical analysis is used to estimate uncertainties in both inverted and derived source parameters. Synthetic inversion tests reveal that the rupture speed (Vr), the rupture area and the stress drop (Δτ) are determined within an error of ~30%, ~12% and ~10%, respectively. In contrast, derived parameters such as the radiated energy (Er), the radiation efficiency (η) and the fracture energy (G) have larger uncertainties, around ~70%, ~40% and ~25%, respectively. We applied the method to the Mw 6.5 intermediate-depth (62 km) normal-faulting earthquake of December 11, 2011 in Guerrero, Mexico (Diaz-Mojica et al., JGR, 2014). Inferred values of Δτ = 29.2±6.2 MPa and η = 0.26±0.1 are significantly higher and lower, respectively, than those of typical subduction thrust events. Fracture energy is large, so that more than 73% of the available potential energy for the dynamic process of faulting was deposited in the focal region (i.e., G = (14.4±3.5)×10^14 J), producing a slow rupture process (Vr/Vs = 0.47±0.09) despite the relatively high energy radiation (Er = (0.54±0.31)×10^15 J) and energy-moment ratio (Er/M0 = 5.7×10^-5). It is interesting to point out that such a slow and inefficient rupture, along with the large stress drop in a small focal region, are features also observed in the 1994 deep Bolivian earthquake.
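
    As a consistency check on the numbers quoted above, the standard definition of radiation efficiency (not stated explicitly in the abstract) ties the radiated and fracture energies to the reported value of η:

```latex
% Radiation efficiency in its standard definition, evaluated with the
% energies quoted above (a consistency check, not a quote from the abstract):
\[
  \eta \;=\; \frac{E_r}{E_r + G}
       \;=\; \frac{0.54\times 10^{15}\,\mathrm{J}}
                  {0.54\times 10^{15}\,\mathrm{J} + 1.44\times 10^{15}\,\mathrm{J}}
       \;\approx\; 0.27 ,
\]
% in agreement with the reported value of 0.26 +/- 0.1.
```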

  8. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.

  9. Fast Moment Tensor Inversion for Large Earthquakes using the Mexican Accelerographic Network

    NASA Astrophysics Data System (ADS)

    Juarez, A.; Ramirez-Guzman, L.

    2015-12-01

    The moment tensor calculation that is performed immediately after the occurrence of a major earthquake is limited by the number of unsaturated records at stations near the epicenter and the number of stations that transmit their data in real time. Accelerographic records, however, are not commonly saturated after major earthquakes. Taking advantage of the wide coverage of the Mexican Accelerographic Network, we use accelerograms observed in real time to compute moment tensor solutions after the occurrence of an earthquake. In our study, we formulate the double-couple moment tensor inversion as a least-squares problem, minimizing the misfit between synthetic and observed waveforms in three components. Synthetic Receiver Green's Tensors for each station of the network were previously calculated using a three-dimensional model of south-central Mexico. The database has a horizontal spatial resolution of 20 km and a depth resolution of 5 km. Our procedure fits windows containing the P and S waves to compute a fast first solution. A revised solution is then calculated by fitting the full record. A first solution can be obtained seconds after the P wave is recorded at the station closest to the epicenter. Our results show that it is possible to obtain the moment tensor solution quickly and accurately. Furthermore, we show the resolution and range of uncertainty of the moment tensor solutions compared with those reported by specialized agencies for 30 selected strong earthquakes in Mexico from 2010 to 2014.
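
    The linear core of such a least-squares moment tensor inversion can be sketched as below; the Green's functions here are random stand-ins, and the double-couple constraint, P/S windowing, and Receiver Green's Tensor database of the study are not reproduced.

```python
import numpy as np

def invert_moment_tensor(greens, data):
    """Solve min ||greens @ m - data||^2 for the moment tensor components.

    greens : (n_samples, 6) Green's-function seismograms, one column per
             independent moment-tensor component, all stations concatenated
    data   : (n_samples,) observed waveform samples, same ordering
    """
    m, *_ = np.linalg.lstsq(greens, data, rcond=None)
    return m

# Toy usage with random Green's functions and a known synthetic source.
rng = np.random.default_rng(2)
G = rng.standard_normal((1200, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])
d = G @ m_true + 0.01 * rng.standard_normal(1200)
print(np.round(invert_moment_tensor(G, d), 2))
```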

  10. Determination of Δσ and κ0 from response spectra of large earthquakes in Greece

    USGS Publications Warehouse

    Margaris, B.N.; Boore, D.M.

    1998-01-01

    We fit an ω^-2 model to response spectra from eight recent Greek earthquakes ranging in size from M = 5.8 to M = 6.9. The diminution parameter κ0 was determined for each site, with a value near 0.06 for a typical soil site. The stress parameter (Δσ) showed little variation from earthquake to earthquake and had a mean value of 56 bars over all earthquakes. Predictions of peak velocity, peak acceleration, rupture duration, and fault length using the derived stress parameters are consistent with observations. Frequency-dependent site amplifications were included in all estimates; the combined effect of amplification and attenuation had a maximum value close to a factor of 2.5 for a typical soil site, relative to the motions at the surface of a perfectly elastic uniform half-space composed of materials near the source. The results form the foundation for predictions of strong motions in Greece for distances and magnitudes other than those for which data are available.
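
    The spectral model referred to above can be written down schematically as follows, combining a Brune ω^-2 source spectrum (corner frequency from moment and stress parameter) with the exp(-πκ0f) diminution factor; scaling constants, distance terms, and site amplification are omitted, so only the shape is meaningful and the numerical values are illustrative.

```python
import numpy as np

def omega_squared_spectrum(f, m0_dyne_cm, stress_bars, kappa0, beta_km_s=3.5):
    """Shape of an omega-squared acceleration spectrum with kappa decay.

    Uses the standard Brune corner frequency
    fc = 4.9e6 * beta * (dsigma / M0)**(1/3)
    (beta in km/s, dsigma in bars, M0 in dyne-cm) and the high-frequency
    diminution factor exp(-pi * kappa0 * f).  Constant scaling factors,
    distance terms, and site amplification are omitted.
    """
    fc = 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)
    source = (2.0 * np.pi * f) ** 2 * m0_dyne_cm / (1.0 + (f / fc) ** 2)
    return source * np.exp(-np.pi * kappa0 * f)

# Illustrative spectral shape for a moderate event at a typical soil site.
f = np.logspace(-1, 1.5, 50)
spec = omega_squared_spectrum(f, m0_dyne_cm=1e25, stress_bars=56.0, kappa0=0.06)
```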