75 FR 65385 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-22
... Earthquake Engineering Simulation (NEES). SUMMARY: In compliance with the requirement of section 3506(c)(2)(A... of the Network for Earthquake Engineering Simulation. Type of Information Collection Request: New... inform decision making regarding the future of NSF support for earthquake engineering research...
Engineering uses of physics-based ground motion simulations
Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.
2014-01-01
This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, and testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations and to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.
NASA Astrophysics Data System (ADS)
Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei
2017-03-01
Earthquake engineering parameters are very important in the engineering field, especially in anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two directions. The simulation covers a broadband frequency range of 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of earthquake engineering parameters. The simulated waveforms show high similarity with the observed waveforms. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with a strip-like radiation pattern in the fault-parallel component and a circular pattern in the fault-normal component; PGV shows good agreement between observed and synthetic records but different distribution characteristics in the two components. (2) Rupture direction and terrain have a large influence on the 90 % significant duration. (3) Arias intensity attenuates with increasing epicentral distance, and observed values agree well with synthetic values. (4) The predominant period of the fault-normal component varies considerably across parts of Kyushu; it is strongly affected by site conditions. (5) Most parameters are reliable where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis can be performed and earthquake disaster analysis can be conducted for future urban planning.
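For readers who want to reproduce parameters like these, the sketch below computes PGA, PGV, Arias intensity, and the 5-95 % significant duration from a single accelerogram. It is a minimal illustration, not the authors' code: the rectangular integration, the variable names, and the assumption of a uniformly sampled record in m/s² are ours.

```python
import numpy as np

def engineering_parameters(acc, dt, g=9.81):
    """Scalar parameters from one accelerogram (acc in m/s^2, dt in s)."""
    pga = np.max(np.abs(acc))
    vel = np.cumsum(acc) * dt                 # crude integration to velocity
    pgv = np.max(np.abs(vel))
    # Arias intensity: Ia = (pi / 2g) * integral of a(t)^2 dt
    ia_t = np.cumsum(acc**2) * dt * np.pi / (2 * g)
    arias = ia_t[-1]
    # 5-95 % significant duration from the normalized Husid curve
    husid = ia_t / arias
    t5 = np.searchsorted(husid, 0.05) * dt
    t95 = np.searchsorted(husid, 0.95) * dt
    return {"PGA": pga, "PGV": pgv, "Arias": arias, "D5_95": t95 - t5}
```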
NASA Astrophysics Data System (ADS)
Karimzadeh, Shaghayegh; Askan, Aysegul; Yakut, Ahmet
2017-09-01
Simulated ground motions can be used in structural and earthquake engineering practice as an alternative to or to augment real ground motion data sets. Common engineering applications of simulated motions are linear and nonlinear time history analyses of building structures, where full acceleration records are necessary. Before using simulated ground motions in such applications, it is important to assess them in terms of their frequency and amplitude content as well as their match with the corresponding real records. In this study, a framework is outlined for the assessment of simulated ground motions in terms of their use in structural engineering. Misfit criteria are determined for both ground motion parameters and structural response by comparing the simulated values against the corresponding real values. For this purpose, as a case study, the 12 November 1999 Duzce earthquake is simulated using a stochastic finite-fault methodology. Simulated records are employed for time history analyses of frame models of typical residential buildings. Next, the relationships between ground motion misfits and structural response misfits are studied. Results show that the seismological misfits around the fundamental period of the selected buildings determine the accuracy of the simulated responses in terms of their agreement with the observed responses.
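The abstract does not spell out the misfit definition, so the following sketch uses one common convention, the natural-log ratio of simulated to observed parameter values (zero for perfect agreement, roughly ±0.7 for a factor of two). The parameter names and numbers are placeholders, not values from the study.

```python
import numpy as np

def log_misfit(simulated, observed):
    """Natural-log misfit ln(sim/obs) for each parameter present in
    both dictionaries; 0 means perfect agreement."""
    return {k: float(np.log(simulated[k] / observed[k]))
            for k in simulated.keys() & observed.keys()}

# e.g. misfits in PGA, PGV, and spectral acceleration at the
# fundamental period (placeholder values)
m = log_misfit({"PGA": 2.1, "PGV": 0.18, "SA_T1": 3.0},
               {"PGA": 1.8, "PGV": 0.22, "SA_T1": 2.6})
```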
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Goulet, C.; Silva, F.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2015-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, seismogram ground motion amplitude calculations, and goodness-of-fit measurements. These modules are integrated into a software system that provides user-defined, repeatable calculation of ground motion seismograms using multiple alternative ground motion simulation methods, together with software utilities that can generate plots, charts, and maps. The BBP has been developed over the last five years in a collaborative scientific, engineering, and software development project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The SCEC BBP software released in 2015 can be compiled and run on recent Linux systems with GNU compilers. It includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, updated ground motion simulation methods, and a simplified command line user interface.
NASA Astrophysics Data System (ADS)
Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai
2018-01-01
Casualty prediction for a building during earthquakes supports economic loss estimation within the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties, few current studies consider occupant movements within the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The occupant evacuation simulation is verified against a recorded evacuation from a school classroom during the real-life 2013 Ya'an earthquake in China. Occupant casualties in a building under earthquakes are evaluated by coupling a finite-element simulation of the building collapse process, the occupant evacuation simulation, and casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake demonstrates the effect of occupant movements on the predicted casualties.
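As a rough illustration of the evacuation component, here is a minimal floor-field cellular automaton: occupants step toward the exit along a breadth-first-search distance field, with at most one occupant per cell. This is far simpler than the refined model in the paper (no panic behavior, speed variation, or collapse coupling), and every name in it is illustrative.

```python
import numpy as np
from collections import deque

MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def distance_field(free, exit_cell):
    """Breadth-first-search distance from every free cell to the exit."""
    dist = np.full(free.shape, np.inf)
    dist[exit_cell] = 0.0
    queue = deque([exit_cell])
    while queue:
        r, c = queue.popleft()
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if (0 <= nr < free.shape[0] and 0 <= nc < free.shape[1]
                    and free[nr, nc] and np.isinf(dist[nr, nc])):
                dist[nr, nc] = dist[r, c] + 1.0
                queue.append((nr, nc))
    return dist

def evacuate(free, exit_cell, occupants, max_steps=500):
    """Step occupants down the distance field, one occupant per cell.
    Returns the number of steps until the room is empty."""
    dist = distance_field(free, exit_cell)
    occupied = set(occupants)
    for step in range(max_steps):
        moved = set()
        # update nearest-to-exit first so followers can fill vacated cells
        for r, c in sorted(occupied, key=lambda p: dist[p]):
            if (r, c) == exit_cell:
                continue                      # occupant exits the room
            options = [(r + dr, c + dc) for dr, dc in MOVES
                       if 0 <= r + dr < free.shape[0]
                       and 0 <= c + dc < free.shape[1]
                       and free[r + dr, c + dc]
                       and (r + dr, c + dc) not in moved]
            best = min(options, key=lambda p: dist[p], default=(r, c))
            moved.add(best if dist[best] < dist[(r, c)] else (r, c))
        occupied = moved
        if not occupied:
            return step + 1
    return max_steps

room = np.ones((10, 10), dtype=bool)          # open 10 x 10 classroom
print(evacuate(room, (0, 5), [(9, 2), (9, 7), (5, 5)]))
```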
De La Flor, Grace; Ojaghi, Mobin; Martínez, Ignacio Lamata; Jirotka, Marina; Williams, Martin S; Blakeborough, Anthony
2010-09-13
When transitioning local laboratory practices into distributed environments, the interdependent relationship between experimental procedure and the technologies used to execute experiments becomes highly visible and a focal point for system requirements. We present an analysis of ways in which this reciprocal relationship is reconfiguring laboratory practices in earthquake engineering as a new computing infrastructure is embedded within three laboratories in order to facilitate the execution of shared experiments across geographically distributed sites. The system has been developed as part of the UK Network for Earthquake Engineering Simulation e-Research project, which links together three earthquake engineering laboratories at the universities of Bristol, Cambridge and Oxford. We consider the ways in which researchers have successfully adapted their local laboratory practices through the modification of experimental procedure so that they may meet the challenges of coordinating distributed earthquake experiments.
Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin
2015-01-01
Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
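A hedged sketch of what such time-dependent metrics might look like: evolving intensity can be tracked with the cumulative Arias intensity (the Husid curve), and a two-component error can be split into an average offset and a mean-removed shape residual. The exact definitions in the paper differ; this is only one plausible decomposition, assuming two metric curves of equal length.

```python
import numpy as np

def husid(acc, dt, g=9.81):
    """Evolving intensity: cumulative Arias intensity versus time."""
    return np.cumsum(acc**2) * dt * np.pi / (2 * g)

def error_vector(metric_sim, metric_rec):
    """Split the difference between two equal-length metric curves into
    an average offset and a shape (mean-removed residual) component,
    both normalized by the mean level of the recorded metric."""
    resid = metric_sim - metric_rec
    scale = np.mean(np.abs(metric_rec))      # assumes a nonzero record
    e_avg = np.mean(resid) / scale
    e_shape = np.sqrt(np.mean((resid - np.mean(resid))**2)) / scale
    return e_avg, e_shape
```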
The Electronic Encyclopedia of Earthquakes
NASA Astrophysics Data System (ADS)
Benthien, M.; Marquis, J.; Jordan, T.
2003-12-01
The Electronic Encyclopedia of Earthquakes is a collaborative project of the Southern California Earthquake Center (SCEC), the Consortia of Universities for Research in Earthquake Engineering (CUREE) and the Incorporated Research Institutions for Seismology (IRIS). This digital library organizes earthquake information online as a partner with the NSF-funded National Science, Technology, Engineering and Mathematics (STEM) Digital Library (NSDL) and the Digital Library for Earth System Education (DLESE). When complete, information and resources for over 500 Earth science and engineering topics will be included, with connections to curricular materials useful for teaching Earth Science, engineering, physics and mathematics. Although conceived primarily as an educational resource, the Encyclopedia is also a valuable portal to anyone seeking up-to-date earthquake information and authoritative technical sources. "E3" is a unique collaboration among earthquake scientists and engineers to articulate and document a common knowledge base with a shared terminology and conceptual framework. It is a platform for cross-training scientists and engineers in these complementary fields and will provide a basis for sustained communication and resource-building between major education and outreach activities. For example, the E3 collaborating organizations have leadership roles in the two largest earthquake engineering and earth science projects ever sponsored by NSF: the George E. Brown Network for Earthquake Engineering Simulation (CUREE) and the EarthScope Project (IRIS and SCEC). The E3 vocabulary and definitions are also being connected to a formal ontology under development by the SCEC/ITR project for knowledge management within the SCEC Collaboratory. The E3 development system is now fully operational, 165 entries are in the pipeline, and the development teams are capable of producing 20 new, fully reviewed encyclopedia entries each month. Over the next two years teams will complete 450 entries, which will populate the E3 collection to a level that fully spans earthquake science and engineering. Scientists, engineers, and educators who have suggestions for content to be included in the Encyclopedia can visit www.earthquake.info now to complete the "Suggest a Web Page" form.
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.
2014-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
NASA Technical Reports Server (NTRS)
Scholl, R. E. (Editor)
1979-01-01
Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2016-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground-motion seismograms, using multiple alternative ground motion simulation methods, and software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (aka GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.
NASA Astrophysics Data System (ADS)
Mourhatch, Ramses
This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations for a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an artificial correction-free approach to broadband ground motion simulation. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis. As a first step, kinematic source inversions of past earthquakes in the magnitude range of 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and a rational method is presented to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes; the inner workings of the method are illustrated through an example involving earthquakes on the San Andreas fault in southern California. Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from kinematic source models using the spectral element method. Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
The Australian Computational Earth Systems Simulator
NASA Astrophysics Data System (ADS)
Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.
2001-12-01
Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the research infrastructure required by the Australian earth systems science community for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems-1 (CTS-1), at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth order time domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
76 FR 42750 - National Science Board: Sunshine Act Meetings; Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
...) Update NSB Information Item: Network for Earthquake Engineering Simulation (NEES) Update NSB Information... Teleconference Discussion on the Timeline, Process and Procedures for Evaluating Nominees Update on Committee...
Adjoint-tomography for a Local Surface Structure: Methodology and a Blind Test
NASA Astrophysics Data System (ADS)
Kubina, Filip; Michlik, Filip; Moczo, Peter; Kristek, Jozef; Stripajova, Svetlana
2017-04-01
We have developed a multiscale full-waveform adjoint-tomography method for local surface sedimentary structures with complicated interference wavefields. Local surface sedimentary basins and valleys are often responsible for anomalous earthquake ground motions and the corresponding damage in earthquakes. In many cases only a relatively small number of records from a few local earthquakes is available for a site of interest. Consequently, prediction of earthquake ground motion at the site has to include numerical modeling for a realistic model of the local structure. Though limited, the information about the local structure encoded in the records is important and irreplaceable. It is therefore reasonable to have a method capable of using this limited information to improve a model of the local structure. A local surface structure and its interference wavefield require a specific multiscale approach. In order to verify our inversion method, we performed a blind test. We obtained synthetic seismograms at 8 receivers for 2 local sources, a complete description of the sources, the positions of the receivers, and the material parameters of the bedrock. We considered the simplest possible starting model - a homogeneous halfspace made of the bedrock material. Using our inversion method we obtained an inverted model. Given the starting model, synthetic seismograms simulated for the inverted model are surprisingly close to those simulated for the true structure in the target frequency range up to 4.5 Hz. We quantify the level of agreement between the true and inverted seismograms using the L2 and time-frequency misfits and, more importantly for earthquake-engineering applications, also using goodness-of-fit criteria based on the earthquake-engineering characteristics of earthquake ground motion. We also verified the inverted model for other source-receiver configurations not used in the inversion.
Important Earthquake Engineering Resources
A PEER (Pacific Earthquake Engineering Research Center) web page listing important earthquake engineering resources, including the American Concrete Institute, the Consortium of Organizations for Strong-Motion Observation Systems (COSMOS), and the Consortium of Universities for Research in Earthquake Engineering.
NASA Astrophysics Data System (ADS)
Tanaka, Y.; Hirayama, Y.; Kuroda, S.; Yoshida, M.
2015-12-01
People without severe disaster experience inevitably forget even extraordinary events like 3.11 as time passes. Therefore, to build a resilient society, an ingenious attempt to keep people's memory of disaster from fading away is necessary. Since 2011, we have been carrying out earthquake disaster drills for residents of high-rise apartments, for schoolchildren, for citizens of coastal areas, etc. Using a portable earthquake simulator (1), each drill consists of three parts: first, a short lecture explaining the characteristic quakes Japanese people can expect in the future; second, a reliving experience of major earthquakes that have hit Japan since 1995; and third, a short lecture on preparations that can be made at home and/or in an office. For the quake experience, although the motion is only two-dimensional, real earthquake observation records are used to control the simulator so that people can relive different kinds of earthquakes, including the long-period motion of skyscrapers. Feedback on the drills is consistently positive because participants understand that reliving the quake experience, together with proper lectures, is one of the best methods to communicate past disasters to their families and to pass them on to the next generation. There are several kinds of disaster archives serving as inheritance, such as pictures, movies, documents, interviews, and so on. In addition to these, we propose to construct 'the archive of the quake experience', which compiles observed data ready to be relived with the simulator. We would like to show some movies of our quake drills in the presentation. Reference: (1) Kuroda, S. et al. (2012), "Development of portable earthquake simulator for enlightenment of disaster preparedness", 15th World Conference on Earthquake Engineering 2012, Vol. 12, 9412-9420.
NASA Astrophysics Data System (ADS)
Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.
2016-12-01
Simulation of earthquake ground motions is becoming more widely used due to improvements in numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth-order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation; and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including the Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge generated waves, and of large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure, as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, part of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of Linux clusters.
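SW4 itself is a large 3D anelastic code, but the core idea of a time-domain finite difference scheme can be shown in one dimension. The sketch below advances u_tt = c² u_xx with a second-order time step and a fourth-order spatial stencil, driven by a Ricker-wavelet point source; all parameter values are illustrative, the boundaries are simply held fixed, and nothing here reproduces SW4's summation-by-parts machinery.

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet source time function, delayed by 1/f0."""
    a = (np.pi * f0 * (t - 1.0 / f0))**2
    return (1.0 - 2.0 * a) * np.exp(-a)

def fd1d(nx=2001, dx=10.0, c=3000.0, nt=2000, f0=2.0):
    """2nd-order-in-time, 4th-order-in-space scheme for u_tt = c^2 u_xx."""
    dt = 0.5 * dx / c                  # well inside the ~0.87 CFL limit
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    src = nx // 2                      # source in the middle of the domain
    for it in range(nt):
        lap = np.zeros(nx)
        # 4th-order central difference approximation of u_xx
        lap[2:-2] = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
                     + 16 * u[3:-1] - u[4:]) / (12 * dx**2)
        u_next = 2 * u - u_prev + dt**2 * c**2 * lap
        u_next[src] += dt**2 * ricker(it * dt, f0)   # add point source
        u_prev, u = u, u_next
    return u

wavefield = fd1d()
```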
Toward Exascale Earthquake Ground Motion Simulations for Near-Fault Engineering Analysis
Johansen, Hans; Rodgers, Arthur; Petersson, N. Anders; ...
2017-09-01
Modernizing SW4 for massively parallel time-domain simulations of earthquake ground motions in 3D earth models increases resolution and provides ground motion estimates for critical infrastructure risk evaluations. Simulations of ground motions from large (M ≥ 7.0) earthquakes require domains on the order of 100 to 500 km and spatial granularity on the order of 1 to 5 m, resulting in hundreds of billions of grid points. Surface-focused structured mesh refinement (SMR) allows for more nearly constant grid-points-per-wavelength scaling in typical Earth models, where wavespeeds increase with depth. In fact, SMR allows simulations to double the frequency content relative to a fixed-grid calculation on a given resource. The authors report improvements to the SW4 algorithm developed while porting the code to the Cori Phase 2 (Intel Xeon Phi) systems at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. As a result, investigations of the performance of the innermost loop of the calculations found that reorganizing the order of operations can improve performance for massive problems.
Open System for Earthquake Engineering Simulation - Home Page
10 CFR Appendix S to Part 50 - Earthquake Engineering Criteria for Nuclear Power Plants
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 1 2012-01-01 2012-01-01 false Earthquake Engineering Criteria for Nuclear Power Plants S... FACILITIES Pt. 50, App. S Appendix S to Part 50—Earthquake Engineering Criteria for Nuclear Power Plants... applicant or holder whose construction permit was issued before January 10, 1997, the earthquake engineering...
10 CFR Appendix S to Part 50 - Earthquake Engineering Criteria for Nuclear Power Plants
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 1 2013-01-01 2013-01-01 false Earthquake Engineering Criteria for Nuclear Power Plants S... FACILITIES Pt. 50, App. S Appendix S to Part 50—Earthquake Engineering Criteria for Nuclear Power Plants... applicant or holder whose construction permit was issued before January 10, 1997, the earthquake engineering...
10 CFR Appendix S to Part 50 - Earthquake Engineering Criteria for Nuclear Power Plants
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 1 2011-01-01 2011-01-01 false Earthquake Engineering Criteria for Nuclear Power Plants S... FACILITIES Pt. 50, App. S Appendix S to Part 50—Earthquake Engineering Criteria for Nuclear Power Plants... applicant or holder whose construction permit was issued before January 10, 1997, the earthquake engineering...
10 CFR Appendix S to Part 50 - Earthquake Engineering Criteria for Nuclear Power Plants
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 1 2014-01-01 2014-01-01 false Earthquake Engineering Criteria for Nuclear Power Plants S... FACILITIES Pt. 50, App. S Appendix S to Part 50—Earthquake Engineering Criteria for Nuclear Power Plants... applicant or holder whose construction permit was issued before January 10, 1997, the earthquake engineering...
MCEER, from Earthquake Engineering to Extreme Events | Home Page
NASA Astrophysics Data System (ADS)
Heidari, Reza
2016-04-01
In this study, the 11 August 2012 Mw 6.4 Ahar earthquake is investigated using ground motion simulation based on the stochastic finite-fault model. The earthquake occurred in northwestern Iran and caused extensive damage in the city of Ahar and surrounding areas. A network consisting of 58 acceleration stations recorded the earthquake within 8-217 km of the epicenter. Strong ground motion records from six significant well-recorded stations close to the epicenter have been simulated. These stations are installed in areas that experienced significant structural damage and loss of life during the earthquake. The simulation is carried out using the dynamic corner frequency model of rupture propagation with the extended fault simulation program (EXSIM). For this purpose, the propagation features of shear waves, including the Qs value, the kappa value k0, and the soil amplification coefficients at each site, are required. The kappa values are obtained from the slope of the smoothed Fourier amplitude spectrum of acceleration at higher frequencies. The determined kappa values for the vertical and horizontal components are 0.02 and 0.05 s, respectively. Furthermore, an anelastic attenuation parameter is derived from the energy decay of the seismic wave by using the continuous wavelet transform (CWT) for each station. The average frequency-dependent relation estimated for the region is Q = (122 ± 38) f^(1.40 ± 0.16). Moreover, the horizontal-to-vertical spectral ratio (H/V) is applied to estimate the site effects at the stations. Spectral analysis of the data indicates that the best match between the observed and simulated spectra occurs for an average stress drop of 70 bars. Finally, the simulated and observed results are compared through pseudo-acceleration spectra and peak ground motions. The comparison of time series and spectra shows good agreement between the observed and the simulated waveforms at frequencies of engineering interest.
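The kappa estimation step described here follows the standard high-frequency spectral decay model ln A(f) = ln A0 − π κ f (Anderson and Hough, 1984), so κ is minus the fitted slope divided by π. A minimal sketch, with an assumed fitting band of 10-20 Hz and a pre-smoothed spectrum as input:

```python
import numpy as np

def estimate_kappa(freq, amp, f1=10.0, f2=20.0):
    """Fit ln A(f) = ln A0 - pi*kappa*f over [f1, f2] (Hz) of a smoothed
    Fourier acceleration amplitude spectrum; returns kappa in seconds."""
    band = (freq >= f1) & (freq <= f2)
    slope, _ = np.polyfit(freq[band], np.log(amp[band]), 1)
    return -slope / np.pi
```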
Boore, David M.
2000-01-01
A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
Simulation of ground motion using the stochastic method
Boore, D.M.
2003-01-01
A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
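A minimal sketch of this recipe, assuming an ω² (Brune) source spectral shape with a kappa high-cut: window Gaussian noise, carry its random phase to the frequency domain, impose the target amplitude shape, and transform back. Replacing the noise amplitude frequency-by-frequency is a simplification; Boore's SMSIM normalizes by the average noise spectrum instead, preserving natural amplitude variability. Absolute scaling, path, and site terms are omitted, and all parameter values are illustrative.

```python
import numpy as np

def brune_shape(f, fc, kappa=0.03):
    """omega-squared (Brune) source spectral shape with a kappa high-cut.
    Amplitude scaling, path, and site terms are omitted in this sketch."""
    return f**2 / (1.0 + (f / fc)**2) * np.exp(-np.pi * kappa * f)

def stochastic_motion(duration=20.0, dt=0.01, fc=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    # 1. windowed Gaussian noise (a simple exponential window stands in
    #    for the Saragoni-Hart shape used in SMSIM)
    noise = rng.standard_normal(n) * t * np.exp(-t / (0.2 * duration))
    # 2. transform; the random phase of the noise is kept
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, dt)
    # 3. impose the target amplitude shape (see lead-in for the caveat)
    amp = np.abs(spec)
    amp[amp == 0.0] = 1.0
    shaped = spec / amp * brune_shape(f, fc)
    # 4. back to the time domain
    return t, np.fft.irfft(shaped, n)

t, acc = stochastic_motion()
```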
NASA Astrophysics Data System (ADS)
Baytiyeh, Hoda; Naja, Mohamad K.
2014-09-01
Due to the high market demand for professional engineers in the Arab oil-producing countries, the appetite of Middle Eastern students for high-paying jobs and challenging careers in engineering has sharply increased. As a result, engineering programmes are providing opportunities for more students to enrol on engineering courses through lenient admission policies that do not compromise academic standards. This strategy has generated an influx of students who must be carefully educated to enhance their professional knowledge and social capital to assist in future earthquake-disaster risk-reduction efforts. However, the majority of Middle Eastern engineering students are unaware of the value of the engineering skills and knowledge they acquire for building the resilience of their communities to earthquake disasters. As the majority of the countries in the Middle East are exposed to seismic hazards and are vulnerable to destructive earthquakes, engineers have become indispensable assets and the first line of defence against earthquake threats. This article highlights the contributions of some of the engineering innovations in advancing technologies and techniques for effective disaster mitigation, and it calls for the incorporation of earthquake-disaster-mitigation education into academic engineering programmes in the Eastern Mediterranean region.
Site correction of stochastic simulation in southwestern Taiwan
NASA Astrophysics Data System (ADS)
Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan
2014-05-01
Peak ground acceleration (PGA) of a disastrous earthquake is of concern in both civil engineering and seismology. Presently, ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong motion prediction. For example, in 1985, Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification by local alluvial layers (Anderson et al., 1986). Past studies have applied the stochastic method and shown good performance in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted using empirical transfer functions applied to the rock-site response from stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA values are calculated. We further compared the estimated PGA to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between an alluvial site and a rock site (Borcherdt, 1970). Due to the lack of a reference rock-site station in this area, the rock-site ground motion was generated with a stochastic point-source model instead. Several target events were then chosen for stochastic point-source simulation of the halfspace response. Then, the empirical transfer function for each station was multiplied with the simulated halfspace response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Because a large event may involve a complex rupture mechanism, the asperity and the delay time of each sub-fault must be considered. Both the stochastic point-source and the finite-fault models were used to check the result of our correction.
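The spectral-ratio step is straightforward to sketch. Assuming two equal-length acceleration records with the same sampling interval, the empirical transfer function is the smoothed Fourier amplitude spectrum of the soil-site record divided by that of the rock-site (or simulated rock) record; the smoothing width and the divide-by-zero guard below are our choices, not the authors'.

```python
import numpy as np

def smooth_fas(acc, dt, width=11):
    """Fourier amplitude spectrum smoothed with a moving average."""
    amp = np.abs(np.fft.rfft(acc))
    kernel = np.ones(width) / width
    return np.fft.rfftfreq(len(acc), dt), np.convolve(amp, kernel, "same")

def empirical_transfer_function(acc_soil, acc_rock, dt):
    """Soil-to-rock spectral ratio; records must share length and dt."""
    f, soil = smooth_fas(acc_soil, dt)
    _, rock = smooth_fas(acc_rock, dt)
    return f, soil / np.maximum(rock, 1e-12)  # guard against zero division
```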
Nonlinear Site Response Validation Studies Using KIK-net Strong Motion Data
NASA Astrophysics Data System (ADS)
Asimaki, D.; Shi, J.
2014-12-01
Earthquake simulations are nowadays producing realistic ground motion time-series in the range of engineering design applications. Of particular significance to engineers are simulations of near-field motions and large magnitude events, for which observations are scarce. With the engineering community slowly adopting the use of simulated ground motions, site response models need to be re-evaluated in terms of their capabilities and limitations to 'translate' the simulated time-series from rock surface output to structural analyses input. In this talk, we evaluate three one-dimensional site response models: linear viscoelastic, equivalent linear and nonlinear. We evaluate the performance of the models by comparing predictions to observations at 30 downhole stations of the Japanese network KIK-net that have recorded several strong events, including the 2011 Tohoku earthquake. Velocity profiles are used as the only input to all models, while additional parameters such as quality factor, density and nonlinear dynamic soil properties are estimated from empirical correlations. We quantify the differences of ground surface predictions and observations in terms of both seismological and engineering intensity measures, including bias ratios of peak ground response and visual comparisons of elastic spectra, and inelastic to elastic deformation ratio for multiple ductility ratios. We observe that PGV/Vs,30, as a measure of strain, is a better predictor of site nonlinearity than PGA, and that incremental nonlinear analyses are necessary to produce reliable estimates of high-frequency ground motion components at soft sites. We finally discuss the implications of our findings on the parameterization of nonlinear amplification factors in GMPEs, and on the extensive use of equivalent linear analyses in probabilistic seismic hazard procedures.
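Bias ratios of the kind mentioned here are commonly computed as the exponential of the mean log residual across stations; a one-line sketch, using our convention that obs/pred > 1 indicates model under-prediction:

```python
import numpy as np

def bias_ratio(observed, predicted):
    """exp(mean(ln(obs/pred))) across stations: 1.0 is unbiased,
    values above 1 indicate the model under-predicts on average."""
    obs, pred = np.asarray(observed), np.asarray(predicted)
    return float(np.exp(np.mean(np.log(obs / pred))))
```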
A PEER (Pacific Earthquake Engineering Research Center) FAQ page: What is the Pacific Earthquake Engineering Research Center? The Pacific Earthquake Engineering Research Center (PEER) is a multidisciplinary research and...
Rapid earthquake hazard and loss assessment for Euro-Mediterranean region
NASA Astrophysics Data System (ADS)
Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru
2010-10-01
The almost-real time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infra-structures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships.
NASA Astrophysics Data System (ADS)
Kaneda, Y.; Takahashi, N.; Hori, T.; Kawaguchi, K.; Isouchi, C.; Fujisawa, K.
2017-12-01
Destructive natural disasters such as earthquakes and tsunamis have occurred frequently in the world. For instance, the 2004 Sumatra earthquake in Indonesia, the 2008 Wenchuan earthquake in China, the 2010 Chile earthquake, and the 2011 Tohoku earthquake in Japan all caused very severe damage. For the reduction and mitigation of damage from destructive natural disasters, early detection and speedy, proper evacuations are indispensable, and hardware and software developments/preparations for disaster reduction and mitigation are quite important. In Japan, DONET, a real-time monitoring system on the ocean floor, has been developed and deployed around the Nankai trough seismogenic zone in southwestern Japan. Early detection of earthquakes and tsunamis around the Nankai trough seismogenic zone is therefore expected from DONET. The integration of real-time data with advanced simulation research will help reduce damage; however, a resilient society also requires methods for recovery, namely restoration and revival after disasters. We therefore propose a natural disaster mitigation science encompassing early detection, evacuation, and restoration in the face of destructive natural disasters; this is what a resilient society means. Natural disaster mitigation science spans many research fields, such as natural science, engineering, medical treatment, social science, and literature/art. Natural science, engineering, and medical treatment are fundamental research fields for natural disaster mitigation, while social sciences such as sociology, geography, and psychology are very important research fields for restoration after natural disasters. Finally, to realize and advance disaster mitigation science, human resource cultivation is indispensable. We have already carried out disaster mitigation science under the 'new disaster mitigation research project on Mega thrust earthquakes around Nankai/Ryukyu subduction zone' and the 'SATREPS project of earthquake and tsunami disaster mitigation in the Marmara region and disaster education in Turkey'. Furthermore, we must advance natural disaster mitigation science against destructive natural disasters in the near future.
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
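The core matching step can be pictured as a nearest-neighbor search over normalized waveforms. The sketch below does the exact linear scan on synthetic data; the paper's engine replaces this scan with an indexed fast search, which is what buys the several-thousand-fold speedup reported in the abstract. All sizes and names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(w):
    w = w - w.mean()
    return w / (np.linalg.norm(w) + 1e-12)

# Synthetic "database": each row stands in for a precomputed long-period
# waveform whose source parameters (location, magnitude, focal mechanism)
# are known and stored alongside it.
n_db, n_samp = 20000, 256
database = np.apply_along_axis(normalize, 1, rng.standard_normal((n_db, n_samp)))

# Query: a noisy copy of entry 12345; the best match identifies the source.
query = normalize(database[12345] + 0.1 * rng.standard_normal(n_samp))

# Exact search: maximize the normalized dot product (correlation).
scores = database @ query
best = int(np.argmax(scores))
print(best, round(float(scores[best]), 3))  # -> 12345, score close to 1.0
```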
The quest for better quality-of-life - learning from large-scale shaking table tests
NASA Astrophysics Data System (ADS)
Nakashima, M.; Sato, E.; Nagae, T.; Kunio, F.; Takahito, I.
2010-12-01
Earthquake engineering has its origins in the practice of “learning from actual earthquakes and earthquake damages.” That is, we recognize serious problems by witnessing the actual damage to our structures, and then we develop and apply engineering solutions to solve these problems. This tradition in earthquake engineering, i.e., “learning from actual damage,” was an obvious engineering response to earthquakes and arose naturally as a practice in a civil and building engineering discipline that traditionally places more emphasis on experience than do other engineering disciplines. But with the rapid progress of urbanization, as society becomes denser, and as the many components that form our society interact with increasing complexity, the potential damage with which earthquakes threaten society also increases. In such an era, the approach of “learning from actual earthquake damages” becomes unacceptably dangerous and expensive. Among the practical alternatives to the old practice is to “learn from quasi-actual earthquake damages.” One tool for experiencing earthquake damages without attendant catastrophe is the large shaking table. E-Defense, the largest such table in existence, was developed in Japan after the 1995 Hyogoken-Nanbu (Kobe) earthquake. Since its inauguration in 2005, E-Defense has conducted over forty full-scale or large-scale shaking table tests, applied to a variety of structural systems. The tests supply detailed data on the actual behavior and collapse of the tested structures, offering the earthquake engineering community opportunities to experience and assess the actual seismic performance of the structures, and to help society prepare for earthquakes. Notably, the data were obtained without having to wait for the aftermaths of actual earthquakes. Earthquake engineering has always been about life safety, but in recent years maintaining the quality of life has also become a critical issue. Quality-of-life concerns include nonstructural damage, business continuity, public health, quickness of damage assessment, infrastructure, data and communication networks, and other issues, and not enough useful empirical data have emerged about these issues from the experiences of actual earthquakes. To provide quantitative data that can be used to reduce earthquake risk to our quality of life, E-Defense recently has been implementing two comprehensive research projects in which a base-isolated hospital and a steel high-rise building were tested using the E-Defense shaking table and their seismic performance was examined particularly in terms of nonstructural damage, damage to building contents and furniture, and operability, functionality, and business-continuity capability. The paper presents an overview of the two projects, together with the major findings obtained from them.
NASA Astrophysics Data System (ADS)
Liu, Bo-Yan; Shi, Bao-Ping; Zhang, Jian
2007-05-01
In this study, a composite source model was used to calculate realistic strong ground motions in the Beijing area caused by the 1679 MS 8.0 Sanhe-Pinggu earthquake. The results provide useful physical parameters for future seismic hazard analysis in this area. Considering the regional geological/geophysical background, we simulated the scenario earthquake and the associated ground motions in the area from 39.3°N to 41.1°N and from 115.35°E to 117.55°E. Some of the key factors that influence the characteristics of strong ground motion are discussed, and maps of the resulting peak ground acceleration (PGA) and peak ground velocity (PGV) distributions around the Beijing area are presented. The simulated results are compared with those derived from an attenuation relation, and the advantages and disadvantages of the composite source model are discussed. The numerical results, such as the PGA, PGV, peak ground displacement (PGD), and three-component time histories developed for the Beijing area, have potential applications in the earthquake engineering field and building code design, especially for the evaluation of critical facilities, government decision making, and seismic hazard assessment by financial/insurance companies.
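The headline parameters are straightforward to extract from a synthetic time history. A minimal sketch on a toy record follows; real processing would also apply baseline correction and filtering before integration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def peak_motions(acc, dt):
    # PGA, PGV, PGD from an acceleration history (m/s^2) by
    # simple trapezoidal integration of the record.
    vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)
    disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)
    return np.abs(acc).max(), np.abs(vel).max(), np.abs(disp).max()

dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)  # toy record
print(peak_motions(acc, dt))
```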
NASA Astrophysics Data System (ADS)
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To support appropriate use of nonlinear ground response analysis in engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. A standard mesh for the three-dimensional soil column was proposed to satisfy the specified maximum frequency. Where soil properties differed significantly, the layered soil column was divided into multiple sub-layers, each with its own viscous damping matrix according to its shear-wave velocity. Combination with other one-dimensional or three-dimensional nonlinear seismic ground analysis programs was necessary to confirm the applicability of the nonlinear seismic ground motion response analysis procedures in soft soil or under strong earthquakes. The accuracy of the three-dimensional soil-column finite element method was verified against dynamic centrifuge model tests under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study, and the accuracy and efficiency of the three-dimensional seismic ground response analysis can meet the requirements of engineering practice.
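As a much-reduced stand-in for the OpenSees model, the sketch below time-steps a linear, lumped-mass shear column under base excitation with the Newmark average-acceleration scheme. Layer properties, the damping coefficients, and the input motion are all invented; a real nonlinear site-response analysis would need hysteretic soil models.

```python
import numpy as np

def newmark_mdof(M, C, K, ag, dt, beta=0.25, gamma=0.5):
    # Newmark average-acceleration response of a linear lumped-mass shear
    # column to base acceleration ag (m/s^2); returns relative displacements.
    n, nt = M.shape[0], len(ag)
    u = np.zeros((nt, n)); v = np.zeros(n); iota = np.ones(n)
    a = np.linalg.solve(M, -M @ iota * ag[0])   # u0 = v0 = 0
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(1, nt):
        p = (-M @ iota * ag[i]
             + M @ (u[i-1] / (beta * dt**2) + v / (beta * dt) + (0.5/beta - 1.0) * a)
             + C @ (gamma / (beta * dt) * u[i-1] + (gamma/beta - 1.0) * v
                    + dt * (0.5 * gamma / beta - 1.0) * a))
        u[i] = np.linalg.solve(Keff, p)
        a_new = (u[i] - u[i-1]) / (beta * dt**2) - v / (beta * dt) - (0.5/beta - 1.0) * a
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        a = a_new
    return u

# Three-layer column, unit plan area: k_j = G_j / h_j, masses lumped at nodes
G = np.array([60e6, 30e6, 15e6]); h = np.array([10.0, 8.0, 5.0]); rho = 1900.0
k = G / h
m = rho * np.r_[(h[0] + h[1]) / 2, (h[1] + h[2]) / 2, h[2] / 2]
K = np.array([[k[0]+k[1], -k[1], 0.0], [-k[1], k[1]+k[2], -k[2]], [0.0, -k[2], k[2]]])
M = np.diag(m)
C = 0.05 * K + 0.5 * M   # crude Rayleigh damping placeholders
dt = 0.005; t = np.arange(0.0, 10.0, dt)
ag = 2.0 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
u = newmark_mdof(M, C, K, ag, dt)
print("peak surface displacement:", np.abs(u[:, -1]).max())
```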
Spatial Evaluation and Verification of Earthquake Simulators
NASA Astrophysics Data System (ADS)
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification efforts, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
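A sketch of the second method under stated assumptions: each simulated event's weight is spread over all grid cells with a power-law distance kernel (the kernel form and parameters d0, q are invented, not the paper's calibration), and a ROC-style curve is traced by sweeping a rate threshold.

```python
import numpy as np

def power_law_rate_map(grid_xy, events_xy, weights, d0=5.0, q=1.5):
    # Spread each simulated event's weight over all cells at a rate that
    # decays as a power law of epicentral distance (ETAS-like kernel).
    rates = np.zeros(len(grid_xy))
    for (ex, ey), w in zip(events_xy, weights):
        r = np.hypot(grid_xy[:, 0] - ex, grid_xy[:, 1] - ey)
        rates += w * (1.0 + r / d0) ** (-q)
    return rates / rates.sum()

def roc_curve(rates, observed_cells):
    # Hit rate vs fraction of area alerted, sweeping a rate threshold.
    order = np.argsort(rates)[::-1]
    hits = np.isin(order, observed_cells)
    tpr = np.cumsum(hits) / max(len(observed_cells), 1)
    area_frac = np.arange(1, len(rates) + 1) / len(rates)
    return area_frac, tpr
```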
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification problem) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all modal parameters under strong ground motion. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
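The filtering ingredient is standard enough to sketch with SciPy; the filter order, stopband attenuation, and band edges below are illustrative choices, not the rFDD implementation's values.

```python
import numpy as np
from scipy import signal

fs = 100.0  # sampling rate, Hz
# Chebyshev Type II band-pass: monotone passband, 40 dB stopband attenuation.
sos = signal.cheby2(N=8, rs=40, Wn=[0.5, 5.0], btype='bandpass', fs=fs, output='sos')

t = np.arange(0.0, 60.0, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 15.0 * t)  # toy response
y = signal.sosfiltfilt(sos, x)  # zero-phase filtering preserves modal phase info
```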
ERIC Educational Resources Information Center
English, Lyn D.; King, Donna; Smeed, Joanna
2017-01-01
As part of a 3-year longitudinal study, 136 sixth-grade students completed an engineering-based problem on earthquakes involving integrated STEM learning. Students employed engineering design processes and STEM disciplinary knowledge to plan, sketch, then construct a building designed to withstand earthquake damage, taking into account a number of…
Introduction: seismology and earthquake engineering in Mexico and Central and South America.
Espinosa, A.F.
1982-01-01
The results from seismological studies that are used by the engineering community are just one of the benefits obtained from research aimed at mitigating the earthquake hazard. In this issue of the Earthquake Information Bulletin, current programs in seismology and earthquake engineering, seismic networks, future plans and some of the cooperative programs with different international organizations are described by Latin-American seismologists. The article describes the development of seismology in Latin America and the seismological interests of the OAS. -P.N.Chroston
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saragoni, G. Rodolfo
The recent commemoration of the centennial of the 1906 San Francisco and Valparaiso earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus the 1908 Messina-Reggio Calabria earthquake, had a strong impact on the birth and development of earthquake engineering. Studying the seismic performance of buildings that still exist today and survived these centennial earthquakes is a challenge that can better expose the limitations of the earthquake design methods now in use. Of the three centennial earthquakes considered, only Valparaiso 1906 has since been repeated, as the 1985 Central Chile Ms 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is made for the neighborhood of the Valparaiso harbor. The study identified only three centennial three-story buildings that survived both earthquakes almost undamaged. Since accelerograms were recorded for the 1985 earthquake both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings considers instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. It is therefore recommended that more suitable instrumental parameters, such as the destructiveness potential factor, be used in the future to describe earthquake demand.
Post-Earthquake Assessment of Nevada Bridges Using ShakeMap/ShakeCast
DOT National Transportation Integrated Search
2016-01-01
Post-earthquake capacity of Nevada highway bridges is examined through a combination of engineering study and scenario earthquake evaluation. The study was undertaken by the University of Nevada Reno Department of Civil and Environmental Engineering ...
Validation of Broadband Ground Motion Simulations for Japanese Crustal Earthquakes by the Recipe
NASA Astrophysics Data System (ADS)
Iwaki, A.; Maeda, T.; Morikawa, N.; Miyake, H.; Fujiwara, H.
2015-12-01
The Headquarters for Earthquake Research Promotion (HERP) of Japan has organized its broadband ground motion simulation method into a standard procedure called the "recipe" (HERP, 2009). In the recipe, the source rupture is represented by the characterized source model (Irikura and Miyake, 2011). The broadband ground motion time histories are computed by a hybrid approach: the 3-D finite-difference method (Aoi et al. 2004) for the long-period (> 1 s) components and the stochastic Green's function method (Dan and Sato, 1998; Dan et al. 2000) for the short-period (< 1 s) components, using the 3-D velocity structure model. As the engineering significance of scenario earthquake ground motion prediction increases, thorough verification and validation of the simulation methods are required. This study presents a self-validation of the recipe for two MW 6.6 crustal events in Japan, the 2000 Tottori and 2004 Chuetsu (Niigata) earthquakes. We first compare the simulated velocity time series with the observations. The main features of the velocity waveforms, such as the near-fault pulses and the large later phases at deep sediment sites, are well reproduced by the simulations. We then evaluate 5% damped pseudo-acceleration spectra (PSA) in the framework of the SCEC Broadband Platform (BBP) validation (Dreger et al. 2015). The validation results are generally acceptable in the period range 0.1-10 s, whereas those in the shortest period range (0.01-0.1 s) are less satisfactory. We also evaluate simulations using the 1-D velocity structure models from the SCEC BBP validation exercise. Although the goodness-of-fit parameters for PSA do not differ significantly from those for the 3-D velocity structure model, noticeable differences in the velocity waveforms are observed. Our results suggest the importance of (1) a well-constrained 3-D velocity structure model for broadband ground motion simulations and (2) evaluation of ground motion time series as well as response spectra.
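The hybrid combination step can be pictured as complementary filtering around the 1 s crossover. A minimal sketch, assuming simple Butterworth matched filters; the recipe's actual filter design may differ.

```python
import numpy as np
from scipy import signal

def hybrid_combine(lf, hf, fs, fc=1.0, order=4):
    # Combine a long-period deterministic trace (lf) and a short-period
    # stochastic trace (hf) with complementary filters at fc (Hz).
    sos_lo = signal.butter(order, fc, btype='lowpass', fs=fs, output='sos')
    sos_hi = signal.butter(order, fc, btype='highpass', fs=fs, output='sos')
    return signal.sosfiltfilt(sos_lo, lf) + signal.sosfiltfilt(sos_hi, hf)
```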
NASA Astrophysics Data System (ADS)
Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.
2006-12-01
Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: first the quasi-static loading phase, which gradually increases stress in the system (~100 years), and second the dynamic rupture process, which rapidly redistributes stress in the system (~100 s). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
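escript assembles and solves the elastodynamic PDE with finite elements in 2D/3D; the stripped-down sketch below (1-D finite differences, not escript and not FEM) only illustrates the explicit time stepping of the dynamic phase's wave equation under a CFL-stable step. All values are invented.

```python
import numpy as np

# 1-D elastic wave u_tt = c^2 u_xx with an explicit scheme.
nx, nt = 500, 1500
dx, c = 10.0, 3000.0          # grid spacing (m), shear-wave speed (m/s)
dt = 0.5 * dx / c             # satisfies the CFL stability condition
u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0              # initial displacement pulse at the center
r2 = (c * dt / dx) ** 2
for _ in range(nt):
    u_next = np.zeros(nx)     # fixed (zero-displacement) boundaries
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2*u[1:-1] + u[:-2])
    u_prev, u = u, u_next
```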
Pollitz, Fred
2012-01-01
Synthetic seismicity simulations have been explored by the Southern California Earthquake Center (SCEC) Earthquake Simulators Group in order to guide long-term forecasting efforts related to the Unified California Earthquake Rupture Forecast (Tullis et al., 2012a). In this study I describe the viscoelastic earthquake simulator (ViscoSim) of Pollitz (2009). Recapitulating to a large extent material previously presented by Pollitz (2009, 2011), I describe its implementation of synthetic ruptures and how it differs from other simulators used by the group.
Stochastic ground motion simulation
Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan
2014-01-01
Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems, where the validity of the predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures, such as spectral accelerations, that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.
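The classic stochastic construction modulates filtered white noise with a time envelope. A toy sketch follows; the envelope shape, filter band, and all parameter values are invented for illustration and are not a specific published model.

```python
import numpy as np
from scipy import signal

def stochastic_record(duration=20.0, fs=100.0, seed=0):
    # White noise shaped by a build-up/decay envelope and a band-limiting
    # filter: a stand-in for modulated filtered-noise ground motion models.
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1 / fs)
    noise = rng.standard_normal(t.size)
    env = (t / 2.0) ** 2 * np.exp(-t)        # peaks early, then decays
    env /= env.max()
    sos = signal.butter(4, [0.2, 20.0], btype='bandpass', fs=fs, output='sos')
    return t, env * signal.sosfilt(sos, noise)

t, ag = stochastic_record()
print("peak of synthetic record:", np.abs(ag).max())
```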
Effect of water content on stability of landslides triggered by earthquakes
NASA Astrophysics Data System (ADS)
Beyabanaki, S.; Bagtzoglou, A. C.; Anagnostou, E. N.
2013-12-01
Earthquake-triggered landslides are among the most important natural hazards and often cause serious structural damage and loss of life. They have been widely studied, but comparatively little attention has been paid to soil water content: although its effect has been studied extensively for rainfall-triggered landslides [1], it has received much less attention in the stability analysis of earthquake-triggered landslides. We developed a combined hydrology and stability model to investigate the effect of soil water content on earthquake-triggered landslides. Bishop's method, one of the most widely used methods for analyzing slope stability [2], is used for the stability analysis, and Richards' equation is employed to model infiltration. An earthquake acceleration coefficient (EAC) is included in the model to represent the effect of the earthquake on slope stability, and the model automatically determines the geometry of the potential landslide. In this study, slopes with different initial water contents are simulated. First, the simulation considers earthquakes only, with different EACs and water contents. As shown in Fig. 1, initial water content has a significant effect on the factor of safety (FS): higher initial water contents lead to lower FS, the effect is more pronounced when the EAC is small, and when the initial water content is high, landslides can occur even under small earthquake accelerations. The effect of water content on landslide geometry is also investigated by simulating landslides triggered by earthquakes alone and by combined rainfall and earthquakes for different initial water contents; the results show that water content affects the geometry of rainfall-triggered landslides more strongly than that of earthquake-triggered ones. Finally, the effect of water content on landslides triggered by earthquakes during rainfall is investigated: after different durations of rainfall an earthquake is applied to the model, and the elapsed time at which the FS drops below one is obtained by trial and error. The results for different initial water contents and earthquake acceleration coefficients show that landslides can occur after shorter rainfall durations when the water content is greater; if the water content is high enough, the landslide occurs even without rainfall. References [1] Ray RL, Jacobs JM, de Alba P. Impact of unsaturated zone soil moisture and groundwater table on slope instability. J. Geotech. Geoenviron. Eng., 2010, 136(10):1448-1458. [2] Das B. Principles of Foundation Engineering. Stanford, Cengage Learning, 2011. Fig. 1. Effect of initial water content on FS for different EACs.
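The paper couples Richards'-equation infiltration with Bishop's method; reproducing that is beyond a snippet, but the pseudo-static idea, with pore pressure and a horizontal earthquake coefficient both eroding the factor of safety, can be sketched with the simpler ordinary method of slices. All slice values below are invented, and the treatment of kh in the driving term is the common textbook approximation, not the paper's formulation.

```python
import numpy as np

def pseudo_static_fs(W, alpha, c, phi, b, u, kh):
    # Ordinary method of slices with a horizontal seismic coefficient kh.
    # Per-slice arrays: W weight (kN), alpha base inclination (rad),
    # c cohesion (kPa), phi friction angle (rad), b base length (m),
    # u pore pressure (kPa).
    resisting = np.sum(c * b + (W * np.cos(alpha) - kh * W * np.sin(alpha)
                                - u * b) * np.tan(phi))
    driving = np.sum(W * np.sin(alpha) + kh * W * np.cos(alpha))
    return resisting / driving

# Wetter soil -> higher pore pressure u -> lower FS; larger kh -> lower FS
alpha = np.deg2rad(np.linspace(5, 40, 8))
W = np.full(8, 120.0); b = np.full(8, 2.2)
c = np.full(8, 10.0); phi = np.full(8, np.deg2rad(28.0))
for u_kpa in (0.0, 20.0, 40.0):
    fs = pseudo_static_fs(W, alpha, c, phi, b, np.full(8, u_kpa), kh=0.15)
    print(f"u = {u_kpa:4.0f} kPa -> FS = {fs:.2f}")
```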
NASA Astrophysics Data System (ADS)
Mualchin, Lalliana
2011-03-01
Modern earthquake ground motion hazard mapping in California began after the 1971 San Fernando earthquake in the Los Angeles metropolitan area of southern California. Earthquake hazard assessment followed a traditional approach, later called Deterministic Seismic Hazard Analysis (DSHA) to distinguish it from the newer Probabilistic Seismic Hazard Analysis (PSHA). In DSHA, the seismic hazard in the event of the Maximum Credible Earthquake (MCE) magnitude on each of the known seismogenic faults within and near the state is assessed. The likely occurrence of the MCE has been assumed qualitatively from late Quaternary and younger faults presumed to be seismogenic, without specifying when or within what time interval the MCE may occur. The MCE is the largest or upper-bound potential earthquake in moment magnitude, and it supersedes and automatically considers all other possible earthquakes on that fault. That moment magnitude is used for estimating ground motions by applying it to empirical attenuation relationships, and for calculating ground motions as in neo-DSHA (Zuccolo et al., 2008). The first deterministic California earthquake hazard map was published in 1974 by the California Division of Mines and Geology (CDMG), known since 2002 as the California Geological Survey (CGS), using the best available fault information and ground motion attenuation relationships at that time. The California Department of Transportation (Caltrans) later assumed responsibility for printing the refined and updated peak acceleration contour maps, which were heavily used by geologists, seismologists, and engineers for many years. Some engineers involved in siting large critical projects, for example dams and nuclear power plants, continued to challenge the maps. The second edition map was completed in 1985, incorporating more faults, improving the MCE estimation method, and using new ground motion attenuation relationships from the latest published results at that time. CDMG eventually published the second edition map in 1992, following the Governor's Board of Inquiry on the 1989 Loma Prieta earthquake and at the demand of Caltrans. The third edition map was published by Caltrans in 1996, using GIS technology to manage the data, including a simplified three-dimensional fault geometry, and to facilitate efficient corrections and revisions of the data and the map. The spatial relationship of fault hazards to highways, bridges or any other attribute can now be efficiently managed and analyzed in GIS at Caltrans. There has been great confidence in using DSHA in bridge engineering and other applications in California, and it can be confidently applied in any other earthquake-prone region. Earthquake hazards defined by DSHA are: (1) transparent and stable, with robust MCE moment magnitudes; (2) flexible in their application to design considerations; (3) able to easily incorporate advances in ground motion simulation; and (4) economical. DSHA and neo-DSHA share the same approach and applicability. The accuracy of DSHA has proven quite reasonable for practical applications in engineering design, always exercised with professional judgment. In the final analysis, DSHA is a reality check for public safety and for PSHA results. Although PSHA has been acclaimed as a better approach to seismic hazard assessment, it is DSHA, not PSHA, that has actually been used in seismic hazard assessment for building and bridge engineering, particularly in California.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
...: Survey of Principal Investigators on Earthquake Engineering Research Awards Made by the National Science... survey of Principal Investigators on NSF earthquake engineering research awards, including but not... NATIONAL SCIENCE FOUNDATION Submission for OMB Review; Comment Request Survey of Principal...
Cohen, Rebecca; Weinisch, Kevin
2015-01-01
United States regulations require nuclear power plants (NPPs) to estimate the time needed to evacuate the emergency planning zone (EPZ, a circle with an approximate 10-mile radius centered at the NPP). These evacuation time estimate (ETE) studies are to be used by emergency personnel in the event of a radiological emergency. ETE studies are typically done using traffic simulation and evacuation models, based on traffic engineering algorithms that reflect congestion and delay. ETE studies are typically conducted assuming all evacuation routes are traversable. As witnessed in the Great East Japan Earthquake in March 2011, an earthquake and the ensuing tsunami can cause an incident at a NPP that requires an evacuation of the public. The earthquake and tsunami can also damage many of the available bridges and roadways and, therefore, impede evacuation and put people at risk of radiation exposure. This article presents a procedure, using traffic simulation and evacuation models, to estimate the impact on ETE due to bridge and roadway damage caused by a major earthquake or similar hazardous event. The results of this analysis are used by emergency personnel to make protective action decisions that minimize the public's exposure to radiation. Additionally, the results allow emergency planners to ensure proper equipment and personnel are available for these types of events. Emergency plans are revised to ensure prompt response and recovery action during critical times.
Seismic design and engineering research at the U.S. Geological Survey
1988-01-01
The Engineering Seismology Element of the USGS Earthquake Hazards Reduction Program is responsible for the coordination and operation of the National Strong Motion Network to collect, process, and disseminate earthquake strong-motion data, and for the development of improved methodologies to estimate and predict earthquake ground motion. Instrumental observations of strong ground shaking induced by damaging earthquakes and the corresponding response of man-made structures provide the basis for estimating the severity of shaking from future earthquakes, for earthquake-resistant design, and for understanding the physics of seismologic failure in the Earth's crust.
NASA Astrophysics Data System (ADS)
Bostenaru Dan, M.
2009-04-01
This special issue includes selected papers on the topic of earthquake impact from the sessions held in 2004 in Nice, France, and in 2005 in Vienna, Austria, at the first and second European Geosciences Union General Assemblies. Since the assembly series started in 1999 in The Hague, Netherlands, the hazard of earthquakes has been the most popular topic of the session. The 2004 call read: Nature's forces, including earthquakes, floods, landslides, high winds and volcanic eruptions, can inflict losses on urban settlements and man-made structures such as infrastructure. In Europe, recent years have seen significant losses from earthquakes in south and south-eastern Europe, floods in central Europe, and wind storms in western Europe. Meanwhile, significant progress has been made in understanding disasters. Several scientific fields contribute to a holistic approach to evaluating capacities, vulnerabilities and hazards, the main factors in mitigating urban disasters due to natural hazards. An important part of the session is devoted to the assessment of earthquake shaking and loss scenarios, including both physical damage and human casualties. Early warning and rapid damage evaluation are of utmost importance for the safety of many essential facilities, for the emergency management of events and for disaster response. When an earthquake occurs, strong motion networks, data processing and interpretation lead to preliminary estimates (scenarios) of the geographical distribution of damage. Factual information on inflicted damage, such as that obtained from shaking maps or aerial imagery, permits comparison with simulated damage maps in order to build a more accurate picture of the overall losses. The most recent developments towards quantitative and qualitative simulation of natural hazard impacts on urban areas, which provide decision-making support for urban disaster management, and success stories of and lessons learned from disaster mitigation will be presented. The session includes contributions showing methodological and modelling approaches from scientists in geophysics/seismology, hydrology, remote sensing, civil engineering, insurance and urbanism, among other fields, as well as presentations from practitioners working on specific case studies, covering the analysis of recent events and their impact on cities as well as the re-evaluation of past events from the point of view of long-term recovery. The 2005 call read: Most strategies for both preparedness and emergency management in disaster mitigation are related to urban planning. While the natural, engineering and social sciences contribute to evaluating the impact on urban areas of earthquakes and their secondary events (including tsunamis, earthquake-triggered landslides and fire), floods, landslides, high winds and volcanic eruptions, it is the instruments of urban planning that are to be employed both for visualisation and for the development and implementation of strategy concepts for pre- and post-disaster intervention. The evolution of natural systems towards extreme conditions is considered insofar as it concerns the damaging impact on urban areas and infrastructure, and the impact on the natural environment of interventions intended to reduce such damage.
77 FR 64314 - Advisory Committee on Earthquake Hazards Reduction Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-19
... is to discuss engineering needs for existing buildings, to review the National Earthquake Hazards... Committee business. The final agenda will be posted on the NEHRP Web site at http://nehrp.gov/ . DATES: The... assesses: Trends and developments in the science and engineering of earthquake hazards reduction; The...
The Lice, Turkey, earthquake of September 6, 1975; a preliminary engineering investigation
Yanev, P. I.
1976-01-01
The Fifth European Conference on Earthquake Engineering was held on September 22 through 25 in Istanbul, Turkey. The opening speech by the Honorable H. E. Nurettin Ok, Minister of Reconstruction and Resettlement of Turkey, introduced the several hundred delegates to the realities of earthquake hazards in Turkey.
NASA Astrophysics Data System (ADS)
Yang, Jian; Sun, Shuaishuai; Tian, Tongfei; Li, Weihua; Du, Haiping; Alici, Gursel; Nakano, Masami
2016-03-01
Protecting civil engineering structures from uncontrollable events such as earthquakes while maintaining their structural integrity and serviceability is very important; this paper describes the performance of a stiffness-softening magnetorheological elastomer (MRE) isolator in a scaled three-storey building. In order to construct a closed-loop system, a scaled three-storey building was designed and built according to the scaling laws, and four MRE isolator prototypes were fabricated and utilised to isolate the building from the motion induced by a scaled El Centro earthquake. Fuzzy logic was used to output the current signals to the isolators based on the real-time responses of the building floors, and a simulation was used to evaluate the feasibility of this closed-loop control system before carrying out an experimental test. The simulation and experimental results showed that the stiffness-softening MRE isolator controlled by fuzzy logic could suppress structural vibration well.
NASA Astrophysics Data System (ADS)
Haase, J. S.; Bock, Y.; Saunders, J. K.; Goldberg, D.; Restrepo, J. I.
2016-12-01
As part of an effort to promote the use of NASA-sponsored Earth science information for disaster risk reduction, real-time high-rate seismogeodetic data are being incorporated into early warning and structural monitoring systems. Seismogeodesy combines seismic acceleration and GPS displacement measurements using a tightly-coupled Kalman filter to provide absolute estimates of seismic acceleration, velocity and displacement. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and tide gauges and deep-ocean buoys for direct measurement of tsunami waves. Real-time seismogeodetic observations at subduction zones allow for more robust and rapid magnitude and slip estimation that increase warning time in the near-source region. A NASA-funded effort to utilize GPS and seismogeodesy in NOAA's Tsunami Warning Centers in Alaska and Hawaii integrates new modules for picking, locating, and estimating magnitudes and moment tensors for earthquakes into the USGS earthworm environment at the TWCs. In a related project, NASA supports the transition of this research to seismogeodetic tools for disaster preparedness, specifically by implementing GPS and low-cost MEMS accelerometers for structural monitoring in partnership with earthquake engineers. Real-time high-rate seismogeodetic structural monitoring has been implemented on two structures. The first is a parking garage at the Autonomous University of Baja California Faculty of Medicine in Mexicali, not far from the rupture of the 2010 Mw 7.2 El Mayor-Cucapah earthquake, enabled through a UCMexus collaboration. The second is the 8-story Geisel Library at the University of California, San Diego (UCSD). The system has also been installed for several proof-of-concept experiments at the UCSD Network for Earthquake Engineering Simulation (NEES) Large High Performance Outdoor Shake Table. We present MEMS-based seismogeodetic observations of strong ground motions from the 10 June 2016 Mw 5.2 Borrego Springs earthquake in the near field of the San Jacinto fault, as well as observations that show the response of the three-story parking garage. The occurrence of this recent earthquake provided a useful demonstration of structural monitoring applications with seismogeodesy.
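The operational seismogeodetic filter has more structure (bias states, variable GPS latency) than fits in a snippet; the minimal kinematic filter below only illustrates the core idea of propagating the state with accelerometer samples and updating with lower-rate GPS displacements. The noise levels q and r are invented placeholders.

```python
import numpy as np

def seismogeodetic_kf(acc, gps_disp, dt, q=1e-4, r=1e-4):
    # State x = [displacement, velocity]; the accelerometer drives the
    # prediction, GPS displacement (NaN when unavailable) drives the update.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2); R = np.array([[r]])
    x = np.zeros(2); P = np.eye(2)
    out = np.zeros((len(acc), 2))
    for i, (a, d) in enumerate(zip(acc, gps_disp)):
        x = F @ x + B * a                     # predict with measured acceleration
        P = F @ P @ F.T + Q
        if not np.isnan(d):                   # GPS sample available (lower rate)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([d]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        out[i] = x
    return out
```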
NASA Astrophysics Data System (ADS)
Haddad, David Elias
Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the interpretation of paleoseismically derived slip rates that are used to form earthquake forecasts. The final part of my research evaluates a set of Earth science-themed lesson plans that I designed for elementary-level learning-disabled students. My findings show that a combination of concept delivery techniques is most effective for learning-disabled students and should incorporate interactive slide presentations, tactile manipulatives, teacher-assisted concept sketches, and student-led teaching to help learning-disabled students grasp Earth science concepts.
NASA Astrophysics Data System (ADS)
Dandoulaki, M.; Kourou, A.; Panoutsopoulou, M.
2009-04-01
It is widely accepted that earthquake education is the way to earthquake protection. Nonetheless, experience demonstrates that knowing what to do does not necessarily result in better behaviour during a real earthquake. A research project titled "Seismopolis" - "Pilot Integrated System for Public Familiarization with Earthquakes and Information on Earthquake Protection" aimed at improving people's behaviour through an appropriate amalgamation of knowledge transfer and virtually experiencing an earthquake situation. Seismopolis combines well-established education means such as books and leaflets with new technologies like earthquake simulation and virtual reality. It comprises a series of 5 main spaces that the visitor passes through one by one. Space 1. Reception and introductory information. Visitors are given fundamental information on earthquakes and earthquake protection, as well as on the appropriate behaviour in case of an earthquake. Space 2. Earthquake simulation room. Visitors experience an earthquake in a room: a typical kitchen is set on a shake table area (3m x 6m planar triaxial shake table) and is shaken in both horizontal and vertical directions by inputting seismograms of real or virtual earthquakes. Space 3. Virtual reality room. Visitors may virtually move around in the building or in the city after an earthquake disaster and take action as in a real-life situation, wearing stereoscopic glasses and using navigation tools. Space 4. Information and resources library. Visitors are offered the opportunity to learn more about earthquake protection through a series of resources, some developed especially for Seismopolis (3 books, 2 CDs, a website and an interactive table game). Space 5. De-briefing area. Visitors may undergo a pedagogical and psychological evaluation at the end of their visit and be offered support if needed. For the evaluation of the "Seismopolis" Centre, a pilot application of the complete complex took place with the participation of different groups (schoolchildren, university students, adults, elderly persons, emigrants and persons with special needs). This test period recorded positive impressions and reactions from the visitors and indicated the pedagogical and psychological appropriateness of the system. Seismopolis is the outcome of collaboration of public, academic and private partners and of a range of disciplines, namely seismologists, geologists, structural engineers, geographers, sociologists and psychologists. It is currently hosted by the Municipality of Rendis in Athens. More information on Seismopolis can be found at www.seismopolis.org.
U.S. Geological Survey National Strong-Motion Project strategic plan, 2017–22
Aagaard, Brad T.; Celebi, Mehmet; Gee, Lind; Graves, Robert; Jaiswal, Kishor; Kalkan, Erol; Knudsen, Keith L.; Luco, Nicolas; Smith, James; Steidl, Jamison; Stephens, Christopher D.
2017-12-11
The mission of the National Strong-Motion Project is to provide measurements of how the ground and built environment behave during earthquake shaking to the earthquake engineering community, the scientific community, emergency managers, public agencies, industry, media, and other users for the following purposes: improving engineering evaluations and design methods for facilities and systems; providing timely information for earthquake early warning, damage assessment, and emergency response actions; and contributing to a greater understanding of the mechanics of earthquake rupture, ground-motion characteristics, and earthquake effects.
Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations
Frankel, Arthur; Stephenson, William; Carver, David
2009-01-01
Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S to P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake. The simulations generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.
Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA
NASA Astrophysics Data System (ADS)
Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.
2017-12-01
Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimates, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches against observations and GMPEs is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in the 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and the resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates a realistic ω-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with the local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture reproduces well the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time histories for simulation-based PSHA.
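Dynamically consistent kinematic modeling typically uses a regularized Yoffe source-time function (Tinti et al., 2005); the sketch below implements only the classic, non-regularized Yoffe shape, to show its defining features: an integrable singularity at rupture onset and smooth healing at the rise time. The parameter values are illustrative.

```python
import numpy as np

def yoffe_slip_rate(t, tau_r, slip=1.0):
    # Classic Yoffe slip-rate: s(t) = (2/(pi*tau_r)) * sqrt((tau_r - t)/t)
    # for 0 < t < tau_r; its integral over [0, tau_r] equals `slip`.
    s = np.zeros_like(t)
    inside = (t > 0) & (t < tau_r)
    s[inside] = (2.0 / (np.pi * tau_r)) * np.sqrt((tau_r - t[inside]) / t[inside])
    return slip * s

t = np.linspace(0.0, 1.2, 1000)
v = yoffe_slip_rate(t, tau_r=0.8)
print(np.trapz(v, t))  # ~1.0, i.e. the prescribed total slip
```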
Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou
2009-01-01
This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering.
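The testing logic can be made concrete: score a set of alarm windows against the observed catalog, then against many synthetic catalogs drawn from the null model. The sketch below uses a Poisson (uniform-in-time) null; swapping in a clustered simulator would widen the null distribution and, as the study finds, weaken the apparent significance. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def success_rate(event_times, alarm_windows):
    # Fraction of events falling inside any (start, end) alarm window.
    hits = [any(s <= t < e for s, e in alarm_windows) for t in event_times]
    return float(np.mean(hits))

T, n_events = 3650.0, 60                       # catalog span (days), event count
alarms = [(d, d + 10.0) for d in rng.uniform(0, T - 10, 20)]
observed = rng.uniform(0, T, n_events)         # stand-in for a real catalog
obs_score = success_rate(observed, alarms)

# Significance under the Poisson null: how often do random catalogs do as well?
sims = [success_rate(rng.uniform(0, T, n_events), alarms) for _ in range(2000)]
p_value = float(np.mean(np.array(sims) >= obs_score))
print(obs_score, p_value)
```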
Living on an Active Earth: Perspectives on Earthquake Science
NASA Astrophysics Data System (ADS)
Lay, Thorne
2004-02-01
The annualized long-term loss due to earthquakes in the United States is now estimated at $4.4 billion per year. A repeat of the 1923 Kanto earthquake, near Tokyo, could cause direct losses of $2-3 trillion. With such grim numbers, which are guaranteed to make you take its work seriously, the NRC Committee on the Science of Earthquakes begins its overview of the emerging multidisciplinary field of earthquake science. An up-to-date and forward-looking survey of scientific investigation of earthquake phenomena and engineering response to associated hazards is presented at a suitable level for a general educated audience. Perspectives from the fields of seismology, geodesy, neo-tectonics, paleo-seismology, rock mechanics, earthquake engineering, and computer modeling of complex dynamic systems are integrated into a balanced definition of earthquake science that has never before been adequately articulated.
Engineering models for catastrophe risk and their application to insurance
NASA Astrophysics Data System (ADS)
Dong, Weimin
2002-06-01
Internationally, earthquake insurance, like all other insurance (fire, auto), adopted an actuarial approach in the past; that is, insurance rates were determined from historical loss experience. Because an earthquake is a rare event with severe consequences, irrational premium rates and a poor understanding of the scale of potential loss left many insurance companies insolvent after the 1994 Northridge earthquake. Along with recent advances in earth science, computer science and engineering, computerized loss estimation methodologies based on first principles have been developed to the point that losses from destructive earthquakes can be quantified with reasonable accuracy using scientific modeling techniques. This paper introduces how engineering models can help quantify earthquake risk and how the insurance industry can use this information to manage its risk in the United States and abroad.
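A minimal sketch of how an engineering event-set model feeds insurance pricing, assuming a made-up four-event stochastic set: the expected annual loss gives the pure premium, and the exceedance-probability curve sizes the rare, severe years.

```python
import numpy as np

# Each stochastic scenario has an annual occurrence rate and a modeled
# portfolio loss; all numbers are invented for illustration.
rates = np.array([0.02, 0.01, 0.004, 0.001])   # events / year
losses = np.array([5e6, 2e7, 8e7, 3e8])        # loss if the event occurs

eal = np.sum(rates * losses)                   # expected annual loss -> pure premium

# Exceedance-probability curve (Poisson assumption): probability of at least
# one event with loss >= L in a year.
order = np.argsort(losses)[::-1]
exceed_rate = np.cumsum(rates[order])
ep = 1.0 - np.exp(-exceed_rate)
for L, p in zip(losses[order], ep):
    print(f"loss >= {L:.0e}: annual exceedance prob {p:.4f}")
print("EAL:", eal)
```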
NASA Astrophysics Data System (ADS)
2011-12-01
Jacobo Bielak, university professor of civil and environmental engineering at Carnegie Mellon University, in Pittsburgh, Pa., has been recognized as a distinguished member of the American Society of Civil Engineers, the highest recognition the organization confers. Bielak was noted as “an internationally-known researcher in the area of structural responses to earthquakes, developing sophisticated numerical simulations to pinpoint earthquake effects.” Alan Strahler, professor of geography and environment at Boston University, Boston, Mass., received a 2011 William T. Pecora Award for his achievements in Earth remote sensing. The award, presented by NASA and the U.S. Department of the Interior on 15 November, recognized Strahler for “his contributions to remote-sensing science, leadership and education, which have improved the fundamental understanding of the remote-sensing process and its applications for observing land surface properties.” The Pecora award is named for the former director of the U.S. Geological Survey and undersecretary of the Interior department, who was influential in the establishment of the Landsat satellite program.
Search and rescue in collapsed structures: engineering and social science aspects.
El-Tawil, Sherif; Aguirre, Benigno
2010-10-01
This paper discusses the social science and engineering dimensions of search and rescue (SAR) in collapsed buildings. First, existing information is presented on factors that influence the behaviour of trapped victims, particularly human, physical, socioeconomic and circumstantial factors. Trapped victims are most often discussed in the context of structural collapse and injuries sustained. Most studies in this area focus on earthquakes as the type of disaster that produces the most extensive structural damage. Second, information is set out on the engineering aspects of urban search and rescue (USAR) in the United States, including the role of structural engineers in USAR operations, training and certification of structural specialists, and safety and general procedures. The use of computational simulation to link the engineering and social science aspects of USAR is discussed. This could supplement the training of local SAR groups and USAR teams, allowing them to better understand the collapse process and how voids form in a rubble pile. A preliminary simulation tool developed for this purpose is described.
NASA Astrophysics Data System (ADS)
Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea
2015-12-01
In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA and northern Baja California Norte, Mexico.
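A sketch of an alert-based forecast metric of the kind described, assuming a toy precursory-rate series in which large events deliberately follow high rates; nothing here comes from Virtual Quake itself. Random forecasts lie on the diagonal missed = 1 - tau, so skill can be scored as the distance (or area) below that diagonal.

```python
import numpy as np

def molchan_points(rate, event_steps, thresholds):
    # For each alarm threshold: alert whenever the simulated seismicity rate
    # exceeds it, then record (fraction of time on alert, fraction of large
    # events missed).
    pts = []
    for th in thresholds:
        alert = rate >= th
        tau = alert.mean()                      # fraction of time on alert
        missed = 1.0 - alert[event_steps].mean()  # fraction of events missed
        pts.append((tau, missed))
    return np.array(pts)

rng = np.random.default_rng(1)
rate = rng.gamma(2.0, 1.0, 10000)                   # toy precursory-rate series
event_steps = np.nonzero(rate > np.quantile(rate, 0.97))[0]  # toy m>7 times
print(molchan_points(rate, event_steps, np.quantile(rate, [0.5, 0.75, 0.9])))
```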
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model covering the whole of Taiwan is developed in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.
NASA Astrophysics Data System (ADS)
Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.
2015-12-01
We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better-known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California Norte, Mexico.
2000 report on the value pricing pilot program
DOT National Transportation Integrated Search
1997-05-01
This document has been written to provide information on how to apply principles of geotechnical earthquake engineering to planning, design, and retrofit of highway facilities. Geotechnical earthquake engineering topics discussed in this document inc...
DOT National Transportation Integrated Search
1998-12-01
This manual was written to provide training on how to apply principles of geotechnical earthquake engineering to planning, design, and retrofit of highway facilities. Reproduced here are two chapters, 4 and 8. These cha...
Seismic Scenario in the Acambay Graben and Possible Affectations in the Miguel Hidalgo Refinery
NASA Astrophysics Data System (ADS)
Valderrama Membrillo, S.; Aguirre, J.
2015-12-01
In this paper we present synthetic acceleration records at the Miguel Hidalgo refinery, Hidalgo, for a seismic scenario originating in the Acambay graben, such as the earthquake that occurred in 1912 (about 70 km from the refinery). That earthquake had a magnitude of 6.9 and caused extensive damage; according to reports it caused 164 deaths and the collapse of numerous houses. To simulate an event of M = 6.9 we used the empirical Green's function method proposed by Irikura (1986). Because of the low seismic activity there is no recording of a small earthquake to serve as an "element earthquake", so we generated a synthetic seismogram of M = 4.1 to be used as the empirical Green's function. The seismogram was constructed in two parts: for low frequencies it was built from cross-correlations of seismic noise, while for high frequencies we used stochastic simulation. Subsequently, we applied a "matched filter" to join the two frequency bands of the synthetic earthquake. For the construction of the seismic scenario the method of Irikura (1986) was used. We consider a square fault 47.75 km long, radial rupture propagation, a rupture velocity of 3.06 km/s, and the following focal mechanism: strike of 280°, dip of 66° and rake of -138°. With these parameters we obtained the synthetic seismograms. Since there was no observed earthquake with which to validate the model, the 1912 event was simulated, and from intensity relationships (Wald et al., 2005; Sandoval et al., 2013; Arias, 1969) we estimated the Modified Mercalli Intensity (MMI) for the refinery. We compared our result with the isoseismal map obtained by Suter et al. (1996) for the 1912 earthquake. In agreement with Suter, our results show an MMI of V-VI for the Miguel Hidalgo refinery. With this qualitative validation we searched for the seismic scenario with the highest accelerations, and from its synthetic seismogram we obtained parameters of engineering interest for estimating the possible effects on the Miguel Hidalgo refinery, such as PGA, PGV, response spectra, dominant period, significant duration, and estimated MMI.
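The Irikura (1986) empirical Green's function method referenced above builds the large-event record by summing delayed, scaled copies of the small-event record over the subfaults of the target rupture. Below is a deliberately simplified sketch of that superposition; the subfault geometry, scaling constant C and rupture velocity are illustrative, and the full method's slip-rate correction filter and moment-ratio scaling are omitted:

```python
import numpy as np

def irikura_sum(egf, dt, N=4, C=1.0, vr=3.0, dx=2.0):
    """Very simplified Irikura-style EGF superposition.

    The target event is built by summing N x N delayed copies of the
    small-event record `egf`, each delayed by the rupture propagation
    time from the hypocentre to its subfault (distance / vr, in km and
    km/s).  The slip-rate correction filter of the full method is omitted.
    """
    n = len(egf)
    out = np.zeros(n + int(2 * N * dx / (vr * dt)) + 1)
    for i in range(N):
        for j in range(N):
            r = np.hypot(i * dx, j * dx)     # distance from hypocentre (km)
            k = int(round(r / vr / dt))      # rupture-propagation delay (samples)
            out[k:k + n] += C * egf
    return out[:n]

rng = np.random.default_rng(1)
small = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 400.0)
big = irikura_sum(small, dt=0.01)
```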
Adding fling effects to processed ground‐motion time histories
Kamai, Ronnie; Abrahamson, Norman A.; Graves, Robert
2014-01-01
Fling is the engineering term for the effects, in recorded ground motions near the fault, of the permanent tectonic offset caused by a rupturing fault. It is expressed by a one-sided pulse in ground velocity and a nonzero final displacement at the end of shaking. Standard processing of earthquake time histories removes some of the fling effects that may be required for engineering applications. A method to parameterize the fling-step time history and to superimpose it onto traditionally processed time histories was developed by Abrahamson (2002). In this paper, we first present an update to the Abrahamson (2002) fling-step models, in which the fling step is parameterized as a single cycle of a sine wave. Parametric models are presented for the sine-wave amplitude (Dsite) and period (Tf). The expressions for Dsite and Tf are derived from an extensive set of finite-fault simulations conducted on the Southern California Earthquake Center broadband platform (see Data and Resources). The simulations were run with the Graves and Pitarka (2010) hybrid simulation method and included strike-slip and reverse scenarios for magnitudes of 6.0-8.2 and dips of 30° through 90°. Next, an improved approach for developing design ground motions with fling effects is presented, which deals with the problem of double-counting intermediate-period components that were not removed by the standard ground-motion processing. Finally, the results are validated against a set of 84 empirical recordings containing fling.
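As a worked illustration of the single-cycle sine parameterization: if the sine cycle is taken in acceleration (an assumption for this sketch; the paper defines the exact functional form), integrating a(t) = A sin(2πt/Tf) once gives a one-sided velocity pulse, and integrating twice gives a permanent offset of A·Tf²/(2π), so A = 2π·Dsite/Tf²:

```python
import numpy as np

def fling_acceleration(d_site, t_f, dt):
    """One full sine cycle of acceleration whose double integral is a
    one-sided velocity pulse ending at a permanent offset d_site.

    Amplitude follows from integrating a(t) = A sin(2*pi*t/Tf) twice:
    final displacement = A * Tf**2 / (2*pi)  =>  A = 2*pi*d_site / Tf**2.
    """
    t = np.arange(0.0, t_f, dt)
    a = (2.0 * np.pi * d_site / t_f**2) * np.sin(2.0 * np.pi * t / t_f)
    return t, a

t, a = fling_acceleration(d_site=0.8, t_f=4.0, dt=0.01)
v = np.cumsum(a) * 0.01   # one-sided velocity pulse (stays >= 0)
d = np.cumsum(v) * 0.01   # ramps to a ~0.8 m permanent offset
print(round(d[-1], 2))
```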
NASA Astrophysics Data System (ADS)
Bostenaru, M.
2009-04-01
The research discussed in this contribution has two aspects: on one side the economic efficiency of seismic retrofit measures, and on the other their applicability. The research was limited to housing buildings and took Bucharest, the capital of Romania, as its object. Strong earthquakes affect Bucharest about three times a century; the damaging earthquakes of the 20th century occurred in 1940 and 1977, and other strong earthquakes occurred in 1986 and 1990. Since this is a broad topic, the building type to serve the further research was determined first. For this purpose, the building types of the 20th century that are common in Bucharest were investigated. For each building type, reports were written covering the earthquake-resilient features, the seismic deficiencies, the damage patterns and the retrofit measures, with each feature listed per building element. A first result of the research was an integrated system that brings these aspects into the earliest planning steps: already during the building survey, attention must be paid to how a building is subdivided, so that the economic efficiency of the planned action can be determined. On this basis, 'retrofit elements' were defined. In a first step, the characteristics were defined through which these retrofit elements (for example, a column, or the wall part between two windows) can be recognised in the building survey; in a further step, the retrofit measures that can be connected to them were identified. Diagrams were drawn to visualise these findings. For each retrofit element and the corresponding measure the costs were calculated. These retrofit elements and the measures connected to them were also modelled for simulation with the structural software, so that the benefit of the measures could be determined. In the part concerning economic efficiency, the benefits and costs of retrofit measures had to be compared: the improvement in the rigidity, ductility and/or strength of the structure achieved by different retrofit measures against their costs. To investigate the improvement in the seismic characteristics, numerous simulations of earthquake impact on reinforced concrete frame buildings were conducted, considering conventional strengthening measures with reinforced concrete and steel. The modelled reinforced concrete frame buildings were interwar buildings from Bucharest, as these proved to be the most vulnerable in the initial investigation. For the investigation of economic efficiency, the damage caused by earthquakes was also simulated. Using a feature of the software, so-called performance points could be set, so that at the end of a simulation the degree of damage to the steel and to the concrete in each reinforced concrete element could be seen, allowing a classification of damage severity across the different retrofit elements. These simulations were done for the 1977, 1986 and 1990 earthquakes, for which strong-motion records were available digitally. For two simple models, alternative retrofit actions and their locations were fully simulated, while for real building models customised retrofit strategies considering several retrofit elements within one strategy were employed. The benefits include not only the improvement of structural behaviour, as often assumed in earthquake engineering circles, but also aesthetic and sociological aspects.
To give these aspects their due, a decision tree was developed in which the actors are the engineer, the architect, the investor and the user. The retrofit measures were evaluated with two different decision systems; this was the part concerning applicability. Further research would examine how the developed method can be used for strategic planning, in which not single buildings but whole urban areas form the object. The research was funded by the Research Training Network 450 "Natural Disasters" supported by the DFG (German Research Foundation), 2000-2004, while the results were published with support from a subsequent research project of the author, CA'REDIVIVUS, which continued the research with support from the European Commission in 2006.
Study on Earthquake Emergency Evacuation Drill Trainer Development
NASA Astrophysics Data System (ADS)
ChangJiang, L.
2016-12-01
With the progress of China's urbanization, ensuring that people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we built simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and placing people according to the actual conditions of the buildings. Based on the simulation results, a drill can then be run in the same building. RFID technology can be used for drill data collection, reading personal information and sending it to the evacuation simulation software via Wi-Fi. The simulation software then compares the simulated data with the actual evacuation process, including evacuation time, evacuation paths, congestion nodes and so on, and produces a comparative analysis report with an assessment and optimization proposals. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable and scientific, and thereby increase urban capacity to cope with hazards.
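The shortest-path ingredient of such a drill model can be sketched with a breadth-first flood fill from the exits, which yields each cell's step count to the nearest exit on a walkable grid. The grid encoding and function name below are illustrative, not the paper's software:

```python
from collections import deque

def evacuation_times(grid, exits):
    """Breadth-first flood fill from all exits over a walkable grid.

    grid  -- 2D list, 0 = walkable cell, 1 = wall
    exits -- list of (row, col) exit cells
    Returns a grid of step counts to the nearest exit (None = unreachable).
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r, c in exits:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(evacuation_times(room, exits=[(0, 0)]))
```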
Road Damage Following Earthquake
NASA Technical Reports Server (NTRS)
1989-01-01
Ground shaking triggered liquefaction in a subsurface layer of water-saturated sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt, which moved from right to left towards the Pajaro River. This mode of ground failure, termed lateral spreading, was a principal cause of liquefaction-related damage in the Oct. 17, 1989, Loma Prieta earthquake. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: S.D. Ellen, U.S. Geological Survey
Toward real-time regional earthquake simulation of Taiwan earthquakes
NASA Astrophysics Data System (ADS)
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Material contrast does not predict earthquake rupture propagation direction
Harris, R.A.; Day, S.M.
2005-01-01
Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.
A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta
NASA Astrophysics Data System (ADS)
Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.
2015-12-01
Estimation of the ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of the seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M > 4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated against observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
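The trilinear geometrical-spreading form mentioned above can be written as a piecewise log-linear function with continuity enforced at two hinge distances. The sketch below uses placeholder hinge distances and decay rates, not the study's calibrated values:

```python
import numpy as np

def trilinear_spreading(r, r1=50.0, r2=140.0, b1=-1.3, b2=0.1, b3=-0.5):
    """Trilinear geometrical-spreading term, log10 g(R).

    Decay at rate b1 out to r1 km, rate b2 between r1 and r2 (a flattening
    that can mimic a Moho-bounce plateau), and rate b3 beyond r2, with
    continuity enforced at the hinges.  All values here are placeholders.
    """
    r = np.asarray(r, dtype=float)
    g = np.where(r <= r1, b1 * np.log10(r), 0.0)
    g = np.where((r > r1) & (r <= r2),
                 b1 * np.log10(r1) + b2 * np.log10(r / r1), g)
    g = np.where(r > r2,
                 b1 * np.log10(r1) + b2 * np.log10(r2 / r1)
                 + b3 * np.log10(r / r2), g)
    return g

print(trilinear_spreading([10.0, 60.0, 200.0]))
```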
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.
2017-12-01
Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
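The core idea of a standard query interface over heterogeneous velocity models can be sketched as follows. The class and method names are hypothetical stand-ins, not UCVM's actual API (which is implemented in C with accompanying command-line tools):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class MaterialProperties:
    vp: float       # P-wave velocity (m/s)
    vs: float       # S-wave velocity (m/s)
    density: float  # kg/m^3

class VelocityModel(Protocol):
    def query(self, lon: float, lat: float, depth_m: float) -> MaterialProperties: ...

class Background1D:
    """Trivial 1D fallback model: properties depend on depth only."""
    def query(self, lon, lat, depth_m):
        vs = 1000.0 + 0.5 * depth_m
        return MaterialProperties(vp=1.8 * vs, vs=vs, density=2500.0)

def tiled_query(models, lon, lat, depth_m):
    """Return the first model covering the point, mirroring the idea of
    layering a regional 3D model over a 1D background model."""
    for m in models:
        try:
            return m.query(lon, lat, depth_m)
        except LookupError:   # this model does not cover the point
            continue
    raise LookupError("no model covers the requested point")

print(tiled_query([Background1D()], -118.2, 34.05, 1000.0))
```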
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential seismic source zone contributing most to the site hazard is identified on the basis of the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are calculated from the main faults and the historical earthquakes of that source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The scenario-earthquake response spectrum is lower than the probability-consistent response spectrum obtained by the PSHA method. The analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods; it is easy to accept and provides a basis for the seismic design of hydraulic engineering projects.
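Schematically, the final step evaluates an attenuation relation at the scenario magnitude and distance for each spectral period. The sketch below uses a generic NGA-style functional form with invented coefficients, purely for illustration:

```python
import numpy as np

def scenario_spectrum(periods, mag, r_km, coeffs):
    """Evaluate a generic attenuation form, ln SA = c1 + c2*M + c3*ln(R + c4),
    per spectral period.  The form and coefficients are illustrative only."""
    return {T: np.exp(c[0] + c[1] * mag + c[2] * np.log(r_km + c[3]))
            for T, c in zip(periods, coeffs)}

# hypothetical coefficients for three spectral periods (s)
coeffs = [(-1.0, 0.9, -1.1, 10.0),
          (-1.5, 1.0, -1.0, 10.0),
          (-2.5, 1.1, -0.9, 10.0)]
print(scenario_spectrum([0.2, 1.0, 3.0], mag=6.8, r_km=25.0, coeffs=coeffs))
```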
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.
2013-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low- and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness-of-fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness-of-fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
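The wrapper-around-each-module design can be sketched as below; the module layout, file formats and pipeline order are hypothetical stand-ins for the Platform's actual components:

```python
import pathlib
import subprocess
import tempfile

class ModuleWrapper:
    """Illustrative wrapper in the spirit of the platform's design: each
    scientific code keeps its native I/O, and the wrapper converts to and
    from a common interchange representation so modules can be chained."""

    def __init__(self, executable, to_native, from_native):
        self.executable = executable    # path to the scientific code (hypothetical)
        self.to_native = to_native      # common dict -> native input text
        self.from_native = from_native  # native output text -> common dict

    def run(self, inputs: dict) -> dict:
        with tempfile.TemporaryDirectory() as tmp:
            infile = pathlib.Path(tmp, "input.native")
            outfile = pathlib.Path(tmp, "output.native")
            infile.write_text(self.to_native(inputs))
            subprocess.run([self.executable, str(infile), str(outfile)], check=True)
            return self.from_native(outfile.read_text())

def pipeline(modules, inputs):
    """Chain modules, e.g. rupture generation -> seismogram synthesis -> site response."""
    for m in modules:
        inputs = m.run(inputs)
    return inputs
```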
NASA Astrophysics Data System (ADS)
Sadeghi, H.
2015-12-01
Bridges are major elements of infrastructure in all societies. Their safety and continued serviceability guarantee transportation and emergency access in urban and rural areas. However, these important structures are subject to earthquake-induced damage in the structure and foundations. The basic approaches to the proper support of foundations are (a) distribution of the imposed loads to the foundation such that it can resist those loads without excessive settlement or failure; (b) modification of the foundation ground with various available methods; and (c) a combination of (a) and (b). Engineers face the task of designing foundations that meet all safety and serviceability criteria, but when there are numerous environmental and financial constraints the use of some traditional methods becomes inevitable. This paper explains the application of timber piles to improve ground resistance to liquefaction and to secure the abutments of short- to medium-length bridges in an earthquake- and liquefaction-prone area on Bohol Island, Philippines. The limitations of the common ground improvement methods (i.e., injection, dynamic compaction), owing to environmental or financial concerns, along with the abundance of timber in the area, led the engineers to use a network of timber piles behind the backwalls of the bridge abutments. The suggested timber pile network is simulated by numerical methods and its safety is examined. The results show that the compaction caused by driving the piles and the bearing capacity provided by the timbers reduce the settlement and lateral movements due to service and earthquake-induced loads.
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near-fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes' results can be directly compared. One approach for checking whether dynamic rupture computer codes are working satisfactorily is to compare each code's results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
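One simple example of such a quantitative metric (illustrative, not necessarily the paper's) is an RMS difference between two codes' rupture-time fields sampled on a common on-fault grid, normalized so that benchmarks of different durations remain comparable:

```python
import numpy as np

def rupture_time_misfit(t_a, t_b):
    """RMS difference between two codes' rupture-time fields on a common
    on-fault grid, normalized by the benchmark's rupture duration."""
    t_a, t_b = np.asarray(t_a), np.asarray(t_b)
    rms = np.sqrt(np.mean((t_a - t_b) ** 2))
    return rms / (t_a.max() - t_a.min())

# rupture times (s) on a tiny 2 x 3 fault grid from two hypothetical codes
code_a = np.array([[0.0, 0.8, 1.7], [0.6, 1.2, 2.1]])
code_b = np.array([[0.0, 0.9, 1.6], [0.7, 1.2, 2.2]])
print(f"normalized misfit: {rupture_time_misfit(code_a, code_b):.3f}")
```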
GeoMO 2008--geotechnical earthquake engineering : site response.
DOT National Transportation Integrated Search
2008-10-01
The theme of GeoMO2008 has recently become of more interest to the Midwest civil engineering community due to the perceived earthquake risks and new code requirements. The constant seismic reminder for the New Madrid Seismic Zone and new USGS hazard ...
NASA Astrophysics Data System (ADS)
Khalil, Amin E.; Abdel Hafiez, H. E.; Girgis, Milad; Taha, M. A.
2017-06-01
Strong ground shaking during earthquakes can greatly affect ancient monuments and thereby destroy human heritage. On October 12th, 1992, a moderate earthquake (Ms = 5.8) shook the greater Cairo area, causing widespread damage. Unfortunately, the focus of that earthquake was located about 14 km south of the Zoser pyramid. After the earthquake, the Egyptian Supreme Council of Antiquities issued an alert that the Zoser pyramid had partially collapsed, and international and national efforts have been exerted to restore this important piece of human heritage, built about 4000 years ago. Engineering and geophysical work is thus needed for the restoration process. The definition of the strong-motion parameters is one of the required studies, since a seismically active zone lies in the near vicinity. The present study adopted the stochastic method to determine the peak ground motion (acceleration, velocity and displacement) for the three largest earthquakes in Egypt's seismological history: the Shedwan earthquake (Ms = 6.9), the Aqaba earthquake (Mw = 7.2) and the Cairo (Dahshour) earthquake (Ms = 5.8). The former two major earthquakes took place a few hundred kilometres away. It is logical to expect the predominant effects to come from the epicentral location of the Cairo earthquake; however, the authors also wanted to test the long-period effects expected from the other two, more distant earthquakes. In addition, the dynamic site response was studied using the horizontal-to-vertical spectral ratio (HVSR) technique. HVSR can successfully provide information about the fundamental frequency, although its amplification estimates are less well accepted. The results, expressed both as peak ground motion parameters and as response spectra, indicate that the effects from the Cairo earthquake epicentre are the largest for all periods considered in the present study. The level of strong motion, as indicated by peak ground acceleration, reaches 250 gal, which is considerably high. Finally, it is worth mentioning that the information resulting from the present work may be useful for the planned restoration of the Zoser pyramid site.
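A minimal HVSR computation, as assumed here, divides smoothed horizontal Fourier amplitude spectra by the vertical spectrum and reads the fundamental frequency off the peak; production processing adds window selection and averaging over many windows:

```python
import numpy as np

def hvsr(ns, ew, ud, dt, smooth=11):
    """Horizontal-to-vertical spectral ratio from a three-component record.

    Uses tapered FFT amplitudes, the geometric mean of the two horizontal
    spectra, and a crude boxcar smoothing; returns frequencies, the ratio,
    and the frequency of its peak (a proxy for the fundamental frequency).
    """
    f = np.fft.rfftfreq(len(ud), dt)
    amp = lambda x: np.abs(np.fft.rfft(x * np.hanning(len(x))))
    h = np.sqrt(amp(ns) * amp(ew))       # geometric mean of horizontals
    v = amp(ud)
    k = np.ones(smooth) / smooth         # boxcar spectral smoothing
    ratio = np.convolve(h, k, "same") / np.convolve(v, k, "same")
    f0 = f[1:][np.argmax(ratio[1:])]     # skip the DC bin
    return f, ratio, f0

rng = np.random.default_rng(2)
ns, ew, ud = rng.standard_normal((3, 4096))   # stand-in noise records
f, ratio, f0 = hvsr(ns, ew, ud, dt=0.01)
print(f"fundamental-frequency estimate: {f0:.2f} Hz")
```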
Loss Estimations due to Earthquakes and Secondary Technological Hazards
NASA Astrophysics Data System (ADS)
Frolova, N.; Larionov, V.; Bonnin, J.
2009-04-01
Expected loss and damage assessment due to natural and technological disasters is of primary importance for emergency management just after a disaster, as well as for the development and implementation of preventive-measure plans. The paper addresses the procedures and simulation models for loss estimation due to strong earthquakes and secondary technological accidents. The mathematical models for shaking-intensity distribution, damage to buildings and structures, debris volume, and the numbers of fatalities and injuries due to earthquakes and to technological accidents at fire- and chemical-hazardous facilities are considered; these are used in geographical information systems designed for these purposes. The criteria for the occurrence of technological accidents are developed on the basis of engineering analysis of the consequences of past events. The paper provides the results of estimating the consequences of scenario earthquakes and of individual seismic risk assessment, taking into account secondary technological hazards at regional and urban levels. Individual risk is understood as the probability of death (or injury) due to a possible hazardous event within one year in a given territory. It is determined through the mathematical expectation of social losses, taking into account the number of inhabitants in the considered settlement and the probability of natural and/or technological disaster.
Housing Damage Following Earthquake
NASA Technical Reports Server (NTRS)
1989-01-01
An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.
Towards Coupling of Macroseismic Intensity with Structural Damage Indicators
NASA Astrophysics Data System (ADS)
Kouteva, Mihaela; Boshnakov, Krasimir
2016-04-01
Knowledge of ground-motion acceleration time histories during earthquakes is essential to understanding the earthquake-resistant behaviour of structures. Peak and integral ground-motion parameters, such as peak ground motion values (acceleration, velocity and displacement), measures of the frequency content of ground motion, duration of strong shaking and various intensity measures, play important roles in the seismic evaluation of existing facilities and the design of new systems. Macroseismic intensity is an earthquake measure related to seismic hazard and seismic risk description. A detailed picture of the correlations between earthquake damage potential and macroseismic intensity is therefore an important issue in engineering seismology and earthquake engineering. Reliable earthquake hazard estimation is the major prerequisite for successful disaster risk management. The use of advanced earthquake engineering approaches to structural response modelling is essential for reliable evaluation of the damage accumulated in existing buildings and structures through the seismic actions that occurred during their lifetime. Fully nonlinear analysis, taking into account a single event or a series of earthquakes, together with the large set of damage indices elaborated in the literature, are suitable contemporary tools for this demanding task. This paper presents some results on the correlation between observed damage states, ground-motion parameters and selected analytical damage indices. The damage indices are computed on the basis of nonlinear time-history analysis of a test reinforced-concrete structure, characteristic of the building stock of the Mediterranean region designed according to the earthquake-resistant requirements of the mid-20th century.
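The abstract does not name its specific damage indices; one widely used analytical index of this kind is the Park-Ang index, which combines peak displacement demand with cumulative hysteretic energy, shown here as a minimal sketch with illustrative numbers:

```python
def park_ang_index(d_max, d_ult, e_hyst, f_y, beta=0.1):
    """Park-Ang damage index: DI = d_max/d_ult + beta * E_h / (F_y * d_ult).

    d_max  -- peak displacement demand (m)
    d_ult  -- ultimate displacement capacity under monotonic loading (m)
    e_hyst -- cumulative hysteretic energy dissipated (J)
    f_y    -- yield force (N); beta is a calibration parameter
              (roughly 0.05-0.15 for reinforced-concrete members)
    DI below ~0.4 is commonly read as repairable damage, DI >= 1.0 as collapse.
    """
    return d_max / d_ult + beta * e_hyst / (f_y * d_ult)

# illustrative demand/capacity values for one structural member
print(park_ang_index(d_max=0.06, d_ult=0.10, e_hyst=1.8e3, f_y=2.5e5))
```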
A suite of exercises for verifying dynamic earthquake rupture codes
Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis
2018-01-01
We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.
Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis
2015-01-01
The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, for the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
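Features (1)-(4) can be caricatured in a few lines: segments load at constant rates, fail at a strength threshold, and transfer part of the stress drop to their neighbours. Real simulators use elastic dislocation theory and calibrated slip rates; everything below is illustrative:

```python
import numpy as np

def synthetic_catalog(n_segments=10, years=10_000, rng=None):
    """Toy stress-accumulation simulator producing a synthetic catalog.

    Each segment accumulates stress at a constant loading rate, fails
    when stress exceeds its strength, and passes 10% of its stress drop
    to each neighbour (a crude stand-in for source interaction)."""
    rng = rng or np.random.default_rng(0)
    load = rng.uniform(0.8, 1.2, n_segments)     # loading rate per year
    strength = rng.uniform(80.0, 120.0, n_segments)
    stress = rng.uniform(0.0, 80.0, n_segments)
    events = []
    for year in range(years):
        stress += load
        for i in np.flatnonzero(stress >= strength):
            drop = stress[i]
            stress[i] = 0.0
            for j in (i - 1, i + 1):             # neighbour interaction
                if 0 <= j < n_segments:
                    stress[j] += 0.1 * drop
            events.append((year, i, drop))
    return events

catalog = synthetic_catalog()
print(len(catalog), catalog[:3])
```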
Organizational changes at Earthquakes & Volcanoes
Gordon, David W.
1992-01-01
Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).
The SCEC/USGS dynamic earthquake rupture code verification exercise
Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.
2009-01-01
Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elastodynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
A Viscoelastic earthquake simulator with application to the San Francisco Bay region
Pollitz, Fred F.
2009-01-01
Earthquake simulation on synthetic fault networks carries great potential for characterizing the statistical patterns of earthquake occurrence. I present an earthquake simulator based on elastic dislocation theory. It accounts for the effects of interseismic tectonic loading, static stress steps at the time of earthquakes, and postearthquake stress readjustment through viscoelastic relaxation of the lower crust and mantle. Earthquake rupture initiation and termination are determined with a Coulomb failure stress criterion and the static cascade model. The simulator is applied to interacting multifault systems: one, a synthetic two-fault network, and the other, a fault network representative of the San Francisco Bay region. The faults are discretized both along strike and along dip and can accommodate both strike slip and dip slip. Stress and seismicity functions are evaluated over 30,000 yr trial time periods, resulting in a detailed statistical characterization of the fault systems. Seismicity functions such as the coefficient of variation and a- and b-values exhibit systematic patterns with respect to simple model parameters. This suggests that reliable estimation of the controlling parameters of an earthquake simulator is a prerequisite to the interpretation of its output in terms of seismic hazard.
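The Coulomb failure stress criterion used above for rupture initiation and termination is, in its standard form, CFS = τ − μ(σn − p); a fault patch fails when CFS reaches its strength, and a positive stress step from a neighbouring rupture moves it closer to failure. A minimal sketch with illustrative stress values:

```python
def coulomb_failure_stress(shear, normal, pore_pressure, mu=0.6):
    """Coulomb failure stress: CFS = tau - mu * (sigma_n - p).

    shear         -- shear stress resolved on the fault patch (Pa)
    normal        -- fault-normal stress, compression positive (Pa)
    pore_pressure -- pore-fluid pressure (Pa); mu is the friction coefficient
    A patch is allowed to initiate (or keep propagating, in a static
    cascade model) once CFS reaches its strength.
    """
    return shear - mu * (normal - pore_pressure)

# a patch loaded tectonically, then nudged by a neighbouring rupture's stress step
cfs_before = coulomb_failure_stress(shear=2.4e6, normal=5.0e6, pore_pressure=1.0e6)
cfs_after = coulomb_failure_stress(shear=2.6e6, normal=5.0e6, pore_pressure=1.0e6)
print(cfs_after - cfs_before)   # positive step brings the patch closer to failure
```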
Mechanics of Granular Materials (MGM) Investigators
NASA Technical Reports Server (NTRS)
2000-01-01
Key personnel in the Mechanics of Granular Materials (MGM) experiment at the University of Colorado at Boulder include Tawnya Ferbiak (software engineer), Susan Batiste (research assistant), and Christina Winkler (graduate research assistant). Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder).
Seismic shaking scenarios in realistic 3D crustal model of Northern Italy
NASA Astrophysics Data System (ADS)
Molinari, I.; Morelli, A.; Basini, P.; Berbellini, A.
2013-12-01
Simulation of seismic wave propagation in realistic crustal structures is a fundamental tool to evaluate earthquake-generated ground shaking and assess seismic hazard. Current-generation numerical codes and modern HPC infrastructures allow for realistic simulations in complex 3D geologic structures. We apply this methodology to the Po Plain in Northern Italy -- a region with relatively rare earthquakes but large property and industrial exposure, as became clear during the two M~6 events of May 20-29, 2012. Historical seismicity is well known in this region, with maximum magnitude estimates reaching M~7, and wave-field amplitudes may be significantly amplified by the presence of the very thick sedimentary basin. Our goal is to produce estimates of expected ground shaking in Northern Italy through detailed deterministic simulations of ground motion due to expected earthquakes. We defined a three-dimensional model of the Earth's crust using geo-statistical tools to merge the abundant information existing in the form of borehole data and seismic reflection profiles shot in the '70s and the '80s for hydrocarbon exploration. This information, which geologists have used to infer the deep structural setup, had never before been merged into a 3D model for seismological simulations. We implement the model in SPECFEM3D_Cartesian with a hexahedral mesh with elements of ~2 km, which allows us to simulate waves with a minimum period of ~2 seconds. The model has then been optimized through comparison between simulated and recorded seismograms for the ~20 moderate-magnitude events (Mw > 4.5) that have been instrumentally recorded in the last 15 years. Realistic simulations in the frequency band of most common engineering relevance -- say, ~1 Hz -- at such a large scale would require an extremely detailed structural model, currently not available, and prohibitive computational resources. However, interest is growing in longer-period ground motion -- which affects the seismic response of taller structures (Cauzzi and Faccioli, 2008) -- and it is not unusual to consider the wave field up to 20 s. In this period range, our Po Plain structural model has been shown to reproduce well the basin resonance and amplification effects at stations bordering the sedimentary plain. We then simulate seismic shaking scenarios for possible sources tied to devastating historical earthquakes known to have occurred in the region, such as the M~6 event that hit Modena in 1501 and the M~6.7 Verona earthquake of 1117, which caused well-documented strong effects over an unusually wide area with a radius of hundreds of kilometers. We explore different source geometries and rupture histories for each earthquake. We mainly focus our attention on the synthesis of the prominent surface waves that are highly amplified in deep sedimentary basin structures (e.g., Smerzini et al., 2011; Koketsu and Miyake, 2008). Such simulations hold high relevance because of the large local property exposure, due to extensive industrial and touristic infrastructure. We show that deterministic ground motion calculations can indeed provide information to be actively used to mitigate the effects of destructive earthquakes on critical infrastructures.
NASA Astrophysics Data System (ADS)
Mert, A.
2016-12-01
The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground-motion time histories. We include the effects of all earthquakes of considerable magnitude. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGF). For frequencies below 0.5 Hz the simulations are obtained using Synthetic Green's Functions (SGF), which are synthetic seismograms calculated by an explicit 2D/3D elastic finite-difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or series of point sources to represent earthquakes, whereas this approach utilizes full ruptures of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for earthquakes of all magnitudes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazard levels for all studied sites in the Marmara region.
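The low-frequency SGF and high-frequency EGF parts of such a broadband simulation are typically joined by complementary filtering at the crossover frequency (0.5 Hz here). A minimal sketch, assuming simple Butterworth low-pass/high-pass filters rather than the study's exact matched-filter implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def broadband_merge(low_band, high_band, dt, fc=0.5, order=4):
    """Merge a deterministic low-frequency trace with an EGF-based
    high-frequency trace: low-pass one, high-pass the other at a common
    crossover fc (Hz), then sum."""
    nyq = 0.5 / dt
    b_lo, a_lo = butter(order, fc / nyq, btype="low")
    b_hi, a_hi = butter(order, fc / nyq, btype="high")
    return filtfilt(b_lo, a_lo, low_band) + filtfilt(b_hi, a_hi, high_band)

rng = np.random.default_rng(3)
dt, n = 0.01, 8192
sgf = rng.standard_normal(n)   # stands in for the finite-difference trace
egf = rng.standard_normal(n)   # stands in for the EGF-based trace
broadband = broadband_merge(sgf, egf, dt)
```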
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
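An ARMAX model of the kind used here relates an input record to an output record through a linear difference equation with a noise term. The sketch below simulates such a relationship with illustrative coefficients; fitting the coefficients to data (the paper's actual task) would be done with an optimal or recursive estimator:

```python
import numpy as np

def simulate_armax(u, a, b, noise_std=0.0, rng=None):
    """Simulate y from input u under an ARMAX-style difference equation:

        y[t] = -sum_i a[i] * y[t-1-i] + sum_j b[j] * u[t-j] + e[t]

    For application (b) above, u would be the rock-site record and y the
    soil-site record, with the fitted coefficients characterizing the
    site amplification.  Coefficients here are illustrative (and stable).
    """
    rng = rng or np.random.default_rng(0)
    y = np.zeros(len(u))
    for t in range(len(u)):
        ar = sum(-a[i] * y[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0)
        ex = sum(b[j] * u[t - j] for j in range(len(b)) if t - j >= 0)
        y[t] = ar + ex + noise_std * rng.standard_normal()
    return y

u = np.random.default_rng(4).standard_normal(1000)   # stand-in input record
y = simulate_armax(u, a=[-1.6, 0.8], b=[0.5, 0.2], noise_std=0.05)
```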
NASA Astrophysics Data System (ADS)
Li, Linlin; Switzer, Adam D.; Wang, Yu; Chan, Chung-Han; Qiu, Qiang; Weiss, Robert
2017-04-01
Current tsunami inundation maps are commonly generated using deterministic scenarios, either for real-time forecasting or based on hypothetical "worst-case" events. Such maps are mainly used for emergency response and evacuation planning and do not include return-period information. In practice, however, probabilistic tsunami inundation maps are required in a wide variety of applications, such as land-use planning, engineering design and insurance purposes. In this study, we present a method to develop probabilistic tsunami inundation maps using a stochastic earthquake source model. To demonstrate the methodology, we take Macau, a coastal city on the South China Sea, as an example. Two major advances of this method are that it incorporates the most up-to-date information on seismic tsunamigenic sources along the Manila megathrust, and that it integrates a stochastic source model into a Monte Carlo-type simulation in which a broad range of slip-distribution patterns is generated for large numbers of synthetic earthquake events. Aggregating the large number of inundation simulation results, we analyze the uncertainties associated with the variability of earthquake rupture location and slip distribution. We also explore how the tsunami hazard in Macau evolves in the context of sea-level rise. Our results suggest Macau faces moderate tsunami risk due to its low-lying elevation, extensive land reclamation, high coastal population and major infrastructure density. Macau consists of four districts: the Macau Peninsula, Taipa Island, Coloane Island and the Cotai Strip. Of these, the Macau Peninsula is the most vulnerable to tsunami because of its low elevation and its exposure to direct waves and refracted waves from the offshore region and to reflected waves from the mainland. Earthquakes with magnitude larger than Mw 8.0 in the northern Manila trench would likely cause hazardous inundation in Macau. Using a stochastic source model, we are able to derive a spread of potential tsunami impacts for earthquakes of the same magnitude; the diversity is caused by both random rupture locations and heterogeneous slip distributions. Adding the sea-level-rise component, the inundation depth caused by 1 m of sea-level rise is equivalent to that caused by the 90th percentile of an ensemble of Mw 8.4 earthquakes.
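The Monte Carlo aggregation step can be sketched as follows; the "response function" mapping a synthetic rupture to an inundation depth is invented for illustration, standing in for the full hydrodynamic simulation the study runs per event:

```python
import numpy as np

def inundation_percentile(n_events=1000, magnitude=8.4, percentile=90, rng=None):
    """Monte Carlo sketch: draw synthetic ruptures of one magnitude with
    random location and slip heterogeneity, map each to a site inundation
    depth through a placeholder response function, report a percentile."""
    rng = rng or np.random.default_rng(0)
    # random along-strike rupture location and a lognormal slip-peak factor
    location = rng.uniform(0.0, 1.0, n_events)
    slip_factor = rng.lognormal(mean=0.0, sigma=0.4, size=n_events)
    # placeholder response: sites opposite the rupture centre see larger waves
    depth = slip_factor * np.exp(-((location - 0.5) ** 2) / 0.05) * (magnitude - 7.0)
    return np.percentile(depth, percentile)

print(f"90th-percentile inundation-depth proxy: {inundation_percentile():.2f} m")
```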
NASA Astrophysics Data System (ADS)
Cruz, H.; Furumura, T.; Chavez-Garcia, F. J.
2002-12-01
The estimation of scenarios of the strong ground motions caused by future great earthquakes is an important problem in strong motion seismology. This was underscored by the great 1985 Michoacan earthquake, which caused great damage in Mexico City, 300 km away from the epicenter. Since the seismic wavefield is shaped by source, path and site effects, the pattern of strong motion damage from different types of earthquakes should differ significantly. In this study, the scenarios for intermediate-depth normal-faulting, shallow interplate thrust-faulting, and crustal earthquakes have been estimated using a hybrid simulation technique. The character of the seismic wavefield propagating from the source to Mexico City for each earthquake was first calculated using the pseudospectral method for 2D SH waves. The site amplifications in the shallow structure of Mexico City are then calculated using multiple SH wave reverberation theory. The scenarios of maximum ground motion for both inslab and interplate earthquakes obtained by the simulation show good agreement with observations. This indicates the effectiveness of the hybrid simulation approach for investigating strong motion damage from future earthquakes.
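The site-amplification step rests on SH-wave reverberation in shallow soil. A minimal single-layer version of that transfer function is sketched below; the thickness, velocities, densities, and damping are chosen purely for illustration (Mexico City's real velocity structure is multi-layered).

```python
# Single-layer SH-wave (Haskell-type) transfer function sketch.
import numpy as np

def sh_amplification(f, h=50.0, vs1=200.0, rho1=1800.0,
                     vs2=1500.0, rho2=2400.0, damping=0.05):
    """|Transfer function| of a damped soil layer (thickness h, velocity vs1)
    over an elastic halfspace (vs2) for vertically incident SH waves."""
    vs1_c = vs1 * (1 + 1j * damping)          # complex velocity adds damping
    alpha = (rho1 * vs1_c) / (rho2 * vs2)     # impedance contrast
    k_h = 2 * np.pi * f * h / vs1_c           # layer phase angle
    return np.abs(1.0 / (np.cos(k_h) + 1j * alpha * np.sin(k_h)))

f = np.linspace(0.1, 10, 500)
amp = sh_amplification(f)
print("fundamental resonance near Vs/4h =", 200.0 / (4 * 50.0), "Hz;",
      "peak amplification =", amp.max().round(2))
```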
Earthquake alarm; operating the seismograph station at the University of California, Berkeley.
Stump, B.
1980-01-01
At the University of California seismographic stations, the task of locating and determining magnitudes for both local and distant earthquakes is a continuous one. Teleseisms must be located rapidly so that events that occur in the Pacific can be identified and the Pacific Tsunami Warning System alerted. For great earthquakes anywhere, there is a responsibility to notify public agencies such as the California Office of Emergency Services, the Federal Disaster Assistance Administration, the Earthquake Engineering Research Institute, the California Seismic Safety Commission, and the American Red Cross. In the case of damaging local earthquakes, it is also necessary to alert the California Department of Water Resources, California Division of Mines and Geology, U.S. Army Corps of Engineers, Federal Bureau of Reclamation, and the Bay Area Rapid Transit. These days, any earthquakes that are felt in northern California cause immediate inquiries from the news media and an interested public. The series of earthquakes that jolted the Livermore area from January 24 to 26, 1980, is a good case in point.
EU H2020 SERA: Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe
NASA Astrophysics Data System (ADS)
Giardini, Domenico; Saleh, Kauzar; SERA Consortium, the
2017-04-01
SERA - Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe - is a new infrastructure project awarded in the last Horizon 2020 call for Integrating Activities for Advanced Communities (INFRAIA-01-2016-2017). Building on precursor projects like NERA, SHARE, NERIES, SERIES, etc., SERA is expected to contribute significantly to the access of data, services and research infrastructures, and to develop innovative solutions in seismology and earthquake engineering, with the overall objective of reducing the exposure to risks associated with natural and anthropogenic earthquakes. For instance, SERA will revise the European Seismic Hazard reference model for input into the current revision of Eurocode 8 on Seismic Design of Buildings; we also plan to develop the first comprehensive framework for seismic risk modeling at the European scale, and to develop new standards for future experimental observations and instruments for earthquake engineering and seismology. To that aim, SERA engages 31 institutions across Europe with leading expertise in the operation of research facilities, monitoring infrastructures, data repositories and experimental facilities in the fields of seismology, anthropogenic hazards and earthquake engineering. SERA comprises 26 activities, including 5 Networking Activities (NA) to improve the availability and access of data through enhanced community coordination and pooling of resources, 6 Joint Research Activities (JRA) aimed at creating new European standards for the optimal use of the data collected by the European infrastructures, Virtual Access (VA) to the 5 main European services for seismology and engineering seismology, and Trans-national Access (TA) to 10 high-class experimental facilities for earthquake engineering and seismology in Europe. In fact, around 50% of the SERA resources will be dedicated to virtual and transnational access. SERA and EPOS (European Plate Observing System, a European Research Infrastructure Consortium for solid Earth services in Europe) will be developed in parallel, giving SERA the capacity to develop building blocks for EPOS in the areas of seismology, anthropogenic hazards and seismic engineering, such as new virtual access, new anthropogenic hazards products, expanded access to waveform data, etc. In addition, services developed and validated in SERA will be produced in a way that is compatible for integration into EPOS. This communication is aimed at informing the scientific community about the objectives and workplan of SERA, starting in spring 2017 for a duration of 3 years.
Sizing up earthquake damage: Differing points of view
Hough, S.; Bolen, A.
2007-01-01
When a catastrophic event strikes an urban area, many different professionals hit the ground running. Emergency responders respond, reporters report, and scientists and engineers collect and analyze data. Journalists and scientists may share interest in these events, but they have very different missions. To a journalist, earthquake damage is news. To a scientist or engineer, earthquake damage represents a valuable source of data that can help us understand how strongly the ground shook as well as how particular structures responded to the shaking.
NGA West 2 | Pacific Earthquake Engineering Research Center
A multi-year research program to improve Next Generation Attenuation (NGA) models for active tectonic regions; topics include modeling of directivity and directionality for earthquake engineering, verification of NGA-West models, epistemic uncertainty, and evaluation of soil amplification factors in NGA models versus NEHRP site factors.
Holzer, Thomas L.
1998-01-01
This chapter contains two papers that summarize the performance of engineered earth structures, dams and stabilized excavations in soil, and two papers that characterize for engineering purposes the attenuation of ground motion with distance during the Loma Prieta earthquake. Documenting the field performance of engineered structures and confirming empirically based predictions of ground motion are critical for safe and cost-effective seismic design of future structures as well as the retrofitting of existing ones.
NASA Astrophysics Data System (ADS)
Aochi, Hideo
2014-05-01
The Marmara region (Turkey) along the North Anatolian fault is known to have a high potential for large earthquakes in the coming decades. For the purpose of seismic hazard/risk evaluation, kinematic and dynamic source models have been proposed (e.g. Oglesby and Mai, GJI, 2012). In general, simulated earthquake scenarios depend on the underlying hypotheses and cannot be verified before the expected earthquake. We therefore introduce a probabilistic perspective on the initial/boundary conditions in order to statistically analyze the simulated scenarios. We prepare different fault geometry models, tectonic loading conditions and hypocenter locations. We keep the same simulation framework as for the dynamic rupture process of the adjacent 1999 Izmit earthquake (Aochi and Madariaga, BSSA, 2003), as the previous models were able to reproduce the seismological/geodetic aspects of that event. Irregularities in fault geometry play a significant role in controlling rupture progress, and a relatively large change in geometry may act as a barrier. The variety of the simulated earthquake scenarios should be useful for estimating the variability of the expected ground motion.
Estimating the Maximum Magnitude of Induced Earthquakes With Dynamic Rupture Simulations
NASA Astrophysics Data System (ADS)
Gilmour, E.; Daub, E. G.
2017-12-01
Seismicity in Oklahoma has been sharply increasing as a result of wastewater injection. The earthquakes, thought to be induced by changes in pore pressure due to fluid injection, nucleate along existing faults. Induced earthquakes currently dominate central and eastern United States seismicity (Keranen et al. 2016). Induced earthquakes have only been occurring in the central US for a short time; therefore, too few induced earthquakes have been observed in this region to know their maximum magnitude. The lack of knowledge regarding the maximum magnitude of induced earthquakes means that large uncertainties exist in the seismic hazard for the central United States. While induced earthquakes follow the Gutenberg-Richter relation (van der Elst et al. 2016), it is unclear if there are limits to their magnitudes. An estimate of the maximum magnitude of induced earthquakes is crucial for understanding their impact on seismic hazard. While other estimates of the maximum magnitude exist, those estimates are observational or statistical and cannot take into account the possibility of larger events that have not yet been observed. Here, we take a physical approach to studying the maximum magnitude based on dynamic rupture simulations. We run a suite of two-dimensional rupture simulations to physically determine how ruptures propagate. The simulations use the known parameters of principal stress orientation and rupture location. We vary the other, unknown parameters of the rupture simulations to obtain a large number of simulation results reflecting different possible sets of parameters, and use these results to train a neural network to emulate the rupture simulations. Then, using a Markov Chain Monte Carlo method to explore different combinations of parameters, the trained neural network is used to create synthetic magnitude-frequency distributions for comparison with the real earthquake catalog. This method allows us to find sets of parameters that are consistent with earthquakes observed in Oklahoma and to determine which parameters affect rupture propagation. Our results show that the stress orientation and magnitude, pore pressure, and friction properties combine to determine the final magnitude of a simulated event.
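A schematic Metropolis-Hastings loop over rupture parameters, standing in for the neural-network-plus-MCMC workflow described above, is sketched below. The surrogate and misfit functions are hypothetical placeholders; in the study the surrogate is a network trained on 2-D dynamic rupture simulations.

```python
# Random-walk Metropolis over simulation parameters using a cheap surrogate.
import numpy as np

rng = np.random.default_rng(1)

def surrogate_misfit(theta):
    # Placeholder: misfit between the synthetic magnitude-frequency
    # distribution implied by theta and an observed catalog.
    return np.sum((theta - np.array([0.6, 30.0])) ** 2)

theta = np.array([0.5, 25.0])          # e.g. friction level, stress (MPa)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, [0.02, 1.0])      # random-walk proposal
    # Accept with the usual Metropolis ratio (misfit as a -log posterior)
    if np.log(rng.random()) < surrogate_misfit(theta) - surrogate_misfit(prop):
        theta = prop
    samples.append(theta.copy())
print("posterior mean:", np.mean(samples, axis=0))
```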
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take an opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first one is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault, and the last is the fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced into the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced into the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as the long-term trend and synchronization among nearby coupled faults.
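For reference, the fault-interaction ingredient named above is the change in the Coulomb Failure Function; a standard form (our notation and sign convention, not necessarily the paper's exact definition) is:

```latex
% Change in the Coulomb Failure Function on a receiver fault:
% \Delta\tau      : shear-stress change (positive in the slip direction)
% \Delta\sigma_n  : normal-stress change (positive when unclamping the fault)
% \mu'            : effective friction coefficient
\Delta \mathrm{CFF} = \Delta\tau + \mu' \, \Delta\sigma_n
```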
Bizzarri, A.; Dunham, Eric M.; Spudich, P.
2010-01-01
We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω−1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation of 5%-damped absolute response spectral accelerations (SA) in the period band 0.05–0.4 s observed at stations that presumably experienced Mach pulses during the 1979 Imperial Valley, 1999 Kocaeli, and 2002 Denali Fault earthquakes compared to SA observed at non-Mach pulse stations in the same earthquakes. A 20% amplification of short period SA is seen only at a few of the Imperial Valley stations closest to the fault. This lack of elevated SA suggests that either Mach pulses in real earthquakes are even more incoherent than in our simulations or that Mach pulses are vulnerable to attenuation through nonlinear soil response. In any case, this result might imply that current engineering models of high frequency earthquake ground motions do not need to be modified by more than 20% close to the fault to account for Mach pulses, provided that the existing data are adequately representative of ground motions from supershear earthquakes.
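A minimal sketch of the FAS diagnostic used above: compute the Fourier amplitude spectrum of a placeholder ground-motion trace and read off the high-frequency falloff exponent from a log-log fit. The band limits and the synthetic trace are our assumptions.

```python
# Fourier amplitude spectrum and falloff-exponent fit for a motion trace.
import numpy as np

dt = 0.005
vel = np.random.randn(8192)              # placeholder ground-motion trace
fas = np.abs(np.fft.rfft(vel)) * dt      # Fourier amplitude spectrum
f = np.fft.rfftfreq(vel.size, d=dt)

band = (f > 1.0) & (f < 50.0)            # fit the falloff in a chosen band
slope = np.polyfit(np.log(f[band]), np.log(fas[band]), 1)[0]
print("high-frequency falloff exponent ~", round(slope, 2))
```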
A Coupled Earthquake-Tsunami Simulation Framework Applied to the Sumatra 2004 Event
NASA Astrophysics Data System (ADS)
Vater, Stefan; Bader, Michael; Behrens, Jörn; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Wollherr, Stephanie; van Zelst, Iris
2017-04-01
Large earthquakes along subduction zone interfaces have generated destructive tsunamis near Chile in 1960, Sumatra in 2004, and northeast Japan in 2011. In order to better understand these extreme events, we have developed tools for physics-based, coupled earthquake-tsunami simulations. This simulation framework is applied to the 2004 Indian Ocean M 9.1-9.3 earthquake and tsunami, a devastating event that resulted in the loss of more than 230,000 lives. The earthquake rupture simulation is performed using an ADER discontinuous Galerkin discretization on an unstructured tetrahedral mesh with the software SeisSol. Advantages of this approach include accurate representation of complex fault and sea floor geometries and a parallelized and efficient workflow in high-performance computing environments. Accurate and efficient representation of the tsunami evolution and inundation at the coast is achieved with an adaptive mesh discretizing the shallow water equations with a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme. With the application of the framework to this historic event, we aim to better understand the involved mechanisms between the dynamic earthquake within the earth's crust, the resulting tsunami wave within the ocean, and the final coastal inundation process. Earthquake model results are constrained by GPS surface displacements and tsunami model results are compared with buoy and inundation data. This research is part of the ASCETE Project, "Advanced Simulation of Coupled Earthquake and Tsunami Events", funded by the Volkswagen Foundation.
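For context, the equations the RKDG tsunami component discretizes are the 2-D shallow water equations; in conservative form with standard notation (h water depth, u and v depth-averaged velocities, b bathymetry, g gravity) they read:

```latex
\begin{aligned}
\partial_t h + \partial_x(hu) + \partial_y(hv) &= 0,\\
\partial_t(hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}gh^2\right) + \partial_y(huv) &= -g h\,\partial_x b,\\
\partial_t(hv) + \partial_x(huv) + \partial_y\!\left(hv^2 + \tfrac{1}{2}gh^2\right) &= -g h\,\partial_y b.
\end{aligned}
```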
1980-01-01
standard procedure for analysis of all types of civil engineering structures. Early in its development, it became apparent that this method had...unique potentialities in the evaluation of stress in dams, and many of its earliest civil engineering applications concerned special problems associated...with such structures [3,4]. The earliest dynamic finite element analyses of civil engineering structures involved the earthquake response analysis of
NASA Astrophysics Data System (ADS)
Dahm, Torsten; Heimann, Sebastian; Funke, Sigward; Wendt, Siegfried; Rappsilber, Ivo; Bindi, Dino; Plenefisch, Thomas; Cotton, Fabrice
2018-05-01
On April 29, 2017 at 0:56 UTC (2:56 local time), an MW = 2.8 earthquake struck the metropolitan area between Leipzig and Halle, Germany, near the small town of Markranstädt. The earthquake was felt within 50 km of the epicenter and reached a local intensity of I0 = IV. Already in 2015, and only 15 km northwest of the epicenter, an MW = 3.2 earthquake had struck the area with a similarly large felt radius and I0 = IV. More than 1.1 million people live in the region, and the unusual occurrence of the two earthquakes attracted public attention, because the tectonic activity is unclear and induced earthquakes have occurred in neighboring regions. Historical earthquakes south of Leipzig had estimated magnitudes up to MW ≈ 5 and coincide with NW-SE striking crustal basement faults. We use different seismological methods to analyze the two recent earthquakes and discuss them in the context of the known tectonic structures and historical seismicity. Novel stochastic full waveform simulation and inversion approaches are adapted for application to weak, local earthquakes, to analyze mechanisms and ground motions and their relation to observed intensities. We find NW-SE striking normal faulting mechanisms for both earthquakes and centroid depths of 26 and 29 km. The earthquakes are located where faults with large vertical offsets of several hundred meters and Hercynian strike have developed since the Mesozoic. We use a stochastic full waveform simulation to explain the local peak ground velocities and calibrate the method to simulate intensities. Since the area is densely populated and has sensitive infrastructure, we simulate scenarios assuming that a 12-km-long fault segment between the two recent earthquakes ruptures, and study the impact of rupture parameters on ground motions and expected damage.
NASA Astrophysics Data System (ADS)
Vater, Stefan; Behrens, Jörn
2017-04-01
Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.
Transportations Systems Modeling and Applications in Earthquake Engineering
2010-07-01
...Figure 6: PGA map of a M7.7 earthquake on all three New Madrid fault segments (g)...Memphis, Tennessee. The NMSZ was responsible for the devastating 1811-1812 New Madrid earthquakes, the largest earthquakes ever recorded in the...Table 1: Fragility parameters for MSC steel bridge (Padgett 2007)...
Introduction: seismology and earthquake engineering in Central and South America.
Espinosa, A.F.
1983-01-01
Reports the state-of-the-art in seismology and earthquake engineering that is being advanced in Central and South America. Provides basic information on seismological station locations in Latin America and some of the programmes in strong-motion seismology, as well as some of the organizations involved in these activities.-from Author
Learning from physics-based earthquake simulators: a minimal approach
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2017-04-01
Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two components are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
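A toy version of such a minimal simulator is sketched below: constant tectonic loading, failure at a stress threshold, and static stress transfer through a Coulomb-style interaction matrix, with cascades allowed within a time step. All parameter values are illustrative and not taken from the paper.

```python
# Minimal fault-network earthquake simulator: load, fail, transfer stress.
import numpy as np

rng = np.random.default_rng(0)
n_faults = 10
strength = np.ones(n_faults)                      # failure threshold
load_rate = rng.uniform(0.008, 0.012, n_faults)   # tectonic loading per step
stress = rng.uniform(0, 1, n_faults)
coupling = 0.05 * rng.random((n_faults, n_faults))  # Coulomb-style interaction
np.fill_diagonal(coupling, 0.0)

events = []
for t in range(100_000):
    stress += load_rate
    failed = np.flatnonzero(stress >= strength)
    while failed.size:                            # allow cascading failures
        for i in failed:
            drop = stress[i]
            stress[i] = 0.0                       # full stress drop on failure
            stress += coupling[:, i] * drop       # static stress transfer
            events.append((t, i))
        failed = np.flatnonzero(stress >= strength)
print(len(events), "events; first few:", events[:3])
```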
Initiatives to Reduce Earthquake Risk of Developing Countries
NASA Astrophysics Data System (ADS)
Tucker, B. E.
2008-12-01
The seventeen-year-and-counting history of the Palo Alto-based nonprofit organization GeoHazards International (GHI) is the story of many initiatives within a larger initiative to increase the societal impact of geophysics and civil engineering. GHI's mission is to reduce death and suffering due to earthquakes and other natural hazards in the world's most vulnerable communities through preparedness, mitigation and advocacy. GHI works by raising awareness in these communities about their risk and about affordable methods to manage it, identifying and strengthening institutions in these communities to manage their risk, and advocating improvement in natural disaster management. Some of GHI's successful initiatives include: (1) creating an earthquake scenario for Quito, Ecuador that describes in lay terms the consequences for that city of a probable earthquake; (2) improving the curricula of Pakistani university courses about seismic retrofitting; (3) training employees of the Public Works Department of Delhi, India on assessing the seismic vulnerability of critical facilities such as a school, a hospital, a police headquarters, and city hall; (4) assessing the vulnerability of the Library of Tibetan Works and Archives in Dharamsala, India; (5) developing a seismic hazard reduction plan for a nonprofit organization in Kathmandu, Nepal that works to manage Nepal's seismic risk; and (6) assisting in the formulation of a resolution by the Council of the Organization for Economic Cooperation and Development (OECD) to promote school earthquake safety among OECD member countries. GHI's most important resource, in addition to its staff and Board of Trustees, is its members and volunteer advisors, who include some of the world's leading earth scientists, earthquake engineers, urban planners and architects, from the academic, public, private and nonprofit sectors. GHI is planning several exciting initiatives in the near future. One would oversee the design and construction of an earthquake- and tsunami-resistant structure in Sumatra to house a tsunami museum, a community training center, and offices of a local NGO that is preparing Padang for the next tsunami. This facility would be designed and built by a team of US and Indonesian academics, architects, engineers and students. Another initiative would launch a collaborative research program on school earthquake safety with the scientists and engineers from the US and the ten Islamic countries that comprise the Economic Cooperation Organization. Finally, GHI hopes to develop internet and satellite communication techniques that will allow earthquake risk managers in the US to interact with masons, government officials, engineers and architects in remote communities of vulnerable developing countries, closing the science and engineering divide.
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI), conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from the workshop as well as several new contributions. A total of 17 papers have been selected, on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's function approaches) to the engineering application of simulated ground motion for the analysis of the seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice for assessing seismic hazard in terms of nuclear safety in low-seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models for evaluating and understanding the physical causes of observed and empirical data, as well as for predicting ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.
Recent Earthquake and Tsunami Preparedness training activities in DPEU KOERI
NASA Astrophysics Data System (ADS)
Puskulcu, Seyhun; Tanırcan, Gulum
2017-04-01
The Disaster Preparedness Education Unit (DPEU) at Bogazici University's Kandilli Observatory and Earthquake Research Institute (KOERI) was established after the 1999 Kocaeli earthquake and has continued to develop high-quality curricula and training materials for community-focused disaster preparedness education countrywide. The unit works to build bridges between scientists, academics and technical experts in this field and the people who need access to knowledge to reduce their risk from disasters; it develops disaster preparedness training materials, organizes and conducts teacher trainings, and participates in research activities on these topics. DPEU also accommodates the Earthquake Park, where training courses are supported with an earthquake simulator. It hosts more than 4,000 students every year for training in how to behave before, during and after an earthquake occurs. In addition to theoretical knowledge, models of base-isolated and fixed-base 10-storey buildings were created at the Earthquake Park to raise students' structural awareness. The unit is also involved in many national and international projects. DPEU is very actively involved in the recent international MarDIM (Earthquake and Tsunami Disaster Mitigation in the Marmara Region and Disaster Education in Turkey) Project, which is carried out by many Turkish and Japanese institutions and has produced a tsunami education booklet, a video and a cartoon movie, and has supported many trainings at the Earthquake Park. DPEU also has a Mobile Earthquake Simulation Training Truck, developed in 2007, aiming to build community awareness of earthquake preparedness and to change common misperceptions and ignorance about the natural phenomenon of earthquakes. Five hundred thousand people have been trained by the simulation truck all over Turkey within 5 years. DPEU has just started to train housewives in the Marmara region on earthquake and tsunami preparedness in collaboration with several municipalities in Istanbul.
Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke
2016-05-10
We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.
Engineering aspects of seismological studies in Peru
Ocola, L.
1982-01-01
In retrospect, the Peruvian national long-range earthquake-study program began after the catastrophic earthquake of May 31, 1970. This earthquake triggered a large snow avalanche from Huascaran mountain, killing over 60,000 people and burying small cities and tens of villages in mud in the Andean valley of Callejon de Huaylas, Huaraz. Since then, great efforts have been made to learn about the natural seismic environment and its engineering and social aspects. The Organization of American States (OAS) has been one of the most important agencies in the development of the program.
Earthquake: Game-based learning for 21st century STEM education
NASA Astrophysics Data System (ADS)
Perkins, Abigail Christine
To play is to learn. A lack of empirical research within game-based learning literature, however, has hindered educational stakeholders from making informed decisions about game-based learning for 21st century STEM education. In this study, I modified a research and development (R&D) process to create a collaborative-competitive educational board game illuminating elements of earthquake engineering. I oriented instruction- and game-design principles around 21st century science education to adapt the R&D process to develop the educational game, Earthquake. As part of the R&D, I evaluated Earthquake for empirical evidence to support the claim that game-play results in student gains in critical thinking, scientific argumentation, metacognitive abilities, and earthquake engineering content knowledge. I developed Earthquake with the aid of eight focus groups with varying levels of expertise in science education research, teaching, administration, and game-design. After developing a functional prototype, I pilot-tested Earthquake with teacher-participants (n=14) who engaged in semi-structured interviews after their game-play. I analyzed teacher interviews with constant comparison methodology. I used teachers' comments and feedback from content knowledge experts to integrate game modifications, implementing results to improve Earthquake. I added player roles, simplified phrasing on cards, and produced an introductory video. I then administered the modified Earthquake game to two groups of high school student-participants (n = 6), who played twice. To seek evidence documenting support for my knowledge claim, I analyzed videotapes of students' game-play using a game-based learning checklist. My assessment of learning gains revealed increases in all categories of students' performance: critical thinking, metacognition, scientific argumentation, and earthquake engineering content knowledge acquisition. Players in both student-groups improved mostly in critical thinking, having doubled the number of exhibited instances of critical thinking between games. Players in the first group exhibited about a third more instances of metacognition between games, while players in the second group doubled such instances. Between games, players in both groups more than doubled the number of exhibited instances of using earthquake engineering content knowledge. The student-players expanded use of scientific argumentation for all game-based learning checklist categories. With empirical evidence, I conclude that play and learning can connect for successful 21st century STEM education.
NASA Astrophysics Data System (ADS)
Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali
2016-08-01
The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, and this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for 2, 10, and 50% hazards for all sites studied in the Marmara region.
Sand Volcano Following Earthquake
NASA Technical Reports Server (NTRS)
1989-01-01
Sand boil or sand volcano measuring 2 m (6.6 ft.) in length erupted in the median of Interstate Highway 80 west of the Bay Bridge toll plaza when ground shaking transformed a loose, water-saturated deposit of subsurface sand into a sand-water slurry (liquefaction) in the October 17, 1989, Loma Prieta earthquake. Vented sand contains marine-shell fragments. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: J.C. Tinsley, U.S. Geological Survey)
1989-10-17
Ground shaking triggered liquefaction in a subsurface layer of water-saturated sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt, which moved from right to left towards the Pajaro River. This mode of ground failure, termed lateral spreading, is a principal cause of liquefaction-related earthquake damage caused by the Oct. 17, 1989, Loma Prieta earthquake. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: S.D. Ellen, U.S. Geological Survey
Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak
2012-01-01
In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100‐year instrumental, several 100‐year historical, and few 1000‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators’ realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault model, allcal2. With the exception of ViscoSim, which ran for 10,000 years, all the simulators ran for 30,000 years. Presentations containing content similar to this paper can be found at http://scec.usc.edu/research/eqsims/.
National Earthquake Hazards Reduction Program; time to expand
Steinbrugge, K.V.
1990-01-01
All of us in earthquake engineering, seismology, and many related disciplines have been directly or indirectly affected by the National Earthquake Hazards Reduction Program (NEHRP). This program was the result of the Earthquake Hazards Reduction Act of 1977 (Public Law 95-124). With well over a decade of experience, should this expression of public policy now take a different or expanded role?
The Road to Total Earthquake Safety
NASA Astrophysics Data System (ADS)
Frohlich, Cliff
Cinna Lomnitz is possibly the most distinguished earthquake seismologist in all of Central and South America. Among many other credentials, Lomnitz has personally experienced the shaking and devastation that accompanied no fewer than five major earthquakes—Chile, 1939; Kern County, California, 1952; Chile, 1960; Caracas, Venezuela, 1967; and Mexico City, 1985. Thus he clearly has much to teach someone like myself, who has never even actually felt a real earthquake. What is this slim book? The Road to Total Earthquake Safety summarizes Lomnitz's May 1999 presentation at the Seventh Mallet-Milne Lecture, sponsored by the Society for Earthquake and Civil Engineering Dynamics. His arguments are motivated by the damage that occurred in three earthquakes—Mexico City, 1985; Loma Prieta, California, 1989; and Kobe, Japan, 1995. All three quakes occurred in regions where earthquakes are common. Yet in all three some of the worst damage occurred in structures located a significant distance from the epicenter and engineered specifically to resist earthquakes. Some of the damage also indicated that the structures failed because they had experienced considerable rotational or twisting motion. Clearly, Lomnitz argues, there must be fundamental flaws in the usually accepted models explaining how earthquakes generate strong motions, and how we should design resistant structures.
Procedures for Computing Site Seismicity
1994-02-01
Fourth World Conference on Earthquake Engineering, Santiago, Chile, 1969. Schnabel, P.B., J. Lysmer, and H.B. Seed (1972). SHAKE, a computer program for...This fault system is composed of the Elsinore and Whittier fault zones, Agua Caliente fault, and Earthquake Valley fault. Five recent earthquakes of
10 CFR 100.23 - Geologic and seismic siting criteria.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Earthquake Ground Motion, and to permit adequate engineering solutions to actual or potential geologic and..., earthquake recurrence rates, fault geometry and slip rates, site foundation material, and seismically induced... Earthquake Ground Motion for the site, the potential for surface tectonic and nontectonic deformations, the...
10 CFR 100.23 - Geologic and seismic siting criteria.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Earthquake Ground Motion, and to permit adequate engineering solutions to actual or potential geologic and..., earthquake recurrence rates, fault geometry and slip rates, site foundation material, and seismically induced... Earthquake Ground Motion for the site, the potential for surface tectonic and nontectonic deformations, the...
10 CFR 100.23 - Geologic and seismic siting criteria.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Earthquake Ground Motion, and to permit adequate engineering solutions to actual or potential geologic and..., earthquake recurrence rates, fault geometry and slip rates, site foundation material, and seismically induced... Earthquake Ground Motion for the site, the potential for surface tectonic and nontectonic deformations, the...
Akkar, Sinan; Aldemir, A.; Askan, A.; Bakir, S.; Canbay, E.; Demirel, I.O.; Erberik, M.A.; Gulerce, Z.; Gulkan, Polat; Kalkan, Erol; Prakash, S.; Sandikkaya, M.A.; Sevilgen, V.; Ugurhan, B.; Yenier, E.
2011-01-01
An earthquake of MW = 6.1 occurred in the Elazığ region of eastern Turkey on 8 March 2010 at 02:32:34 UTC. The United States Geological Survey (USGS) reported the epicenter of the earthquake as 38.873°N-39.981°E with a focal depth of 12 km. Forty-two people lost their lives and 137 were injured during the event. The earthquake was reported to be on the left-lateral strike-slip east Anatolian fault (EAF), which is one of the two major active fault systems in Turkey. Teams from the Earthquake Engineering Research Center of the Middle East Technical University (EERC-METU) visited the earthquake area in the aftermath of the mainshock. Their reconnaissance observations were combined with interpretations of recorded ground motions for completeness. This article summarizes observations on building and ground damage in the area and provides a discussion of the recorded motions. No significant observations in terms of geotechnical engineering were made.
PEER - National Information Service for Earthquake Engineering - NISEE
The NISEE/PEER Library is an affiliated library of the University of California, Berkeley, located at the Richmond Field Station, five miles from the main Berkeley campus. Hours: Monday - Friday, 9:00 am - 1:00 pm; open to the public.
Long-period building response to earthquakes in the San Francisco Bay Area
Olsen, A.H.; Aagaard, Brad T.; Heaton, T.H.
2008-01-01
This article reports a study of modeled, long-period building responses to ground-motion simulations of earthquakes in the San Francisco Bay Area. The earthquakes include the 1989 magnitude 6.9 Loma Prieta earthquake, a magnitude 7.8 simulation of the 1906 San Francisco earthquake, and two hypothetical magnitude 7.8 northern San Andreas fault earthquakes with hypocenters north and south of San Francisco. We use the simulated ground motions to excite nonlinear models of 20-story, steel, welded moment-resisting frame (MRF) buildings. We consider MRF buildings designed with two different strengths and modeled with either ductile or brittle welds. Using peak interstory drift ratio (IDR) as a performance measure, the stiffer, higher strength building models outperform the equivalent more flexible, lower strength designs. The hypothetical magnitude 7.8 earthquake with hypocenter north of San Francisco produces the most severe ground motions. In this simulation, the responses of the more flexible, lower strength building model with brittle welds exceed an IDR of 2.5% (that is, threaten life safety) on 54% of the urban area, compared to 4.6% of the urban area for the stiffer, higher strength building with ductile welds. We also use the simulated ground motions to predict the maximum isolator displacement of base-isolated buildings with linear, single-degree-of-freedom (SDOF) models. For two existing 3-sec isolator systems near San Francisco, the design maximum displacement is 0.5 m, and our simulations predict isolator displacements for this type of system in excess of 0.5 m in many urban areas. This article demonstrates that a large, 1906-like earthquake could cause significant damage to long-period buildings in the San Francisco Bay Area.
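The isolator check described above is, at root, the peak response of a linear single-degree-of-freedom oscillator. Below is a minimal stand-in (our sketch, not the study's code): a 3 s period oscillator with an assumed 5% damping, driven by a synthetic pulse-like acceleration record and integrated with scipy's LTI solver.

```python
import numpy as np
from scipy.signal import lti, lsim

period, zeta = 3.0, 0.05                 # isolator period; assumed damping
wn = 2 * np.pi / period
# Relative displacement u of a unit-mass oscillator under base acceleration ag:
# u'' + 2*zeta*wn*u' + wn^2*u = -ag, i.e. transfer function -1/(s^2+2ζωn·s+ωn^2)
sdof = lti([-1.0], [1.0, 2 * zeta * wn, wn ** 2])

dt = 0.02
t = np.arange(0, 60, dt)
ag = 2.0 * np.exp(-((t - 15) / 8) ** 2) * np.sin(2 * np.pi * 0.4 * t)  # m/s^2
_, u, _ = lsim(sdof, ag, t)
print("peak isolator displacement ~", round(float(np.max(np.abs(u))), 3), "m")
```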
Dynamic 3D simulations of earthquakes on en echelon faults
Harris, R.A.; Day, S.M.
1999-01-01
One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.
Pseudo-dynamic source characterization accounting for rough-fault effects
NASA Astrophysics Data System (ADS)
Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin
2016-04-01
Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.
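Rough-fault studies of this kind start from a self-affine fault profile; the snippet below generates one by shaping the amplitude spectrum in wavenumber space. The Hurst exponent, spacing, and roughness amplitude are illustrative assumptions, not the paper's values.

```python
# Generate a 1-D self-affine (fractal) rough-fault profile via the FFT.
import numpy as np

rng = np.random.default_rng(3)
n, dx = 4096, 25.0                          # samples along strike, spacing (m)
k = np.fft.rfftfreq(n, d=dx)                # wavenumbers (cycles/m)
hurst = 1.0                                 # self-similar roughness
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -(hurst + 0.5)           # 1-D amplitude spectrum ~ k^-(H+1/2)
phase = rng.uniform(0, 2 * np.pi, k.size)   # random phases
profile = np.fft.irfft(amp * np.exp(1j * phase), n=n)
profile *= (1e-3 * n * dx) / profile.std()  # rms deviation ~0.1% of fault length
print("fault length:", n * dx, "m; rms roughness:", round(profile.std(), 1), "m")
```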
Turkish Compulsory Earthquake Insurance (TCIP)
NASA Astrophysics Data System (ADS)
Erdik, M.; Durukal, E.; Sesetyan, K.
2009-04-01
Through a World Bank project, the government-sponsored Turkish Catastrophe Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (mostly small; 226 earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeast Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about US$90 for a 100 square meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake engineering related shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation and the sum insured) needs to be overhauled. Especially for large cities, models can be developed where the expected earthquake performance of a unit (and consequently the insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP can in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.
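As a quick consistency check on the quoted figures (our arithmetic, not the authors'): at the stated average rate of 0.13%, a US$90 annual premium implies a sum insured of roughly US$69,000, comfortably below the 90,000 USD coverage cap.

```python
# Implied sum insured from the premium and rate quoted above.
rate, premium = 0.0013, 90.0
print(round(premium / rate))   # ~69231 USD
```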
2000-07-01
What appear to be boulders fresh from a tumble down a mountain are really grains of Ottawa sand, a standard material used in civil engineering tests and also used in the Mechanics of Granular Materials (MGM) experiment. The craggy surface shows how sand grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM uses the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. These images are from an Electron Spectroscopy for Chemical Analysis (ESCA) study conducted by Dr. Binayak Panda of IITRI for Marshall Space Flight Center (MSFC). (Credit: NASA/MSFC)
Mechanics of Granular Materials (MGM) Flight Hardware in Bench Test
NASA Technical Reports Server (NTRS)
2000-01-01
Engineering bench system hardware for the Mechanics of Granular Materials (MGM) experiment is tested on a lab bench at the University of Colorado in Boulder. This is done in a horizontal arrangement to reduce pressure differences so the tests more closely resemble behavior in the microgravity of space. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder).
NASA Astrophysics Data System (ADS)
Tsuda, K.; Dorjapalam, S.; Dan, K.; Ogawa, S.; Watanabe, T.; Uratani, H.; Iwase, S.
2012-12-01
The 2011 Tohoku-Oki earthquake (M9.0) produced some distinct features, such as huge slips on the order of several tens of meters around the shallow part of the fault and different areas radiating seismic waves at different periods (e.g., Lay et al., 2012). These features, also reported for past mega-thrust earthquakes in subduction zones such as the 2004 Sumatra earthquake (M9.2) and the 2010 Chile earthquake (M8.8), have attracted attention as signatures of ruptures that reach the shallow part of the fault plane. Although various observations of the seismic behavior (rupture process, ground motion characteristics, etc.) of the shallow part of the fault plane during mega-thrust earthquakes have been reported, the number of analytical or numerical studies based on dynamic simulation is still limited. Wendt et al. (2009), for example, revealed that different distributions of initial stress produce huge differences in the seismic behavior and the vertical displacements at the surface. In this study, we carried out dynamic simulations to better understand the seismic behavior of the shallow part of the fault plane during mega-thrust earthquakes. We used the spectral element method (Ampuero, 2009), which can incorporate complex fault geometry into the simulation while saving computational resources. The simulation utilizes the slip-weakening law (Ida, 1972). We varied parameters controlling the seismic behavior of dynamic faulting, such as the critical slip distance (Dc), initial stress conditions and friction coefficients, and also placed an asperity on the fault plane. These insights are useful for ground motion prediction for future mega-thrust earthquakes, such as those along the Nankai Trough.
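The slip-weakening law used above prescribes how fault strength decays with accumulated slip. A minimal sketch of the linear form (Ida, 1972); the stress values and Dc below are illustrative, not those of the study:

import numpy as np

def slip_weakening_strength(slip, tau_s, tau_d, Dc):
    # Linear slip weakening: strength falls from static (tau_s) to
    # dynamic (tau_d) over the critical slip distance Dc, then stays there.
    return np.where(slip < Dc, tau_s - (tau_s - tau_d) * slip / Dc, tau_d)

# illustrative values: 10 MPa static, 6 MPa dynamic strength, Dc = 0.4 m
slip = np.linspace(0.0, 1.0, 6)
print(slip_weakening_strength(slip, 10e6, 6e6, 0.4) / 1e6)   # strengths in MPa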
Designing an Earthquake-Proof Art Museum: An Arts- and Engineering-Integrated Science Lesson
ERIC Educational Resources Information Center
Carignan, Anastasia; Hussain, Mahjabeen
2016-01-01
In this practical arts-integrated science and engineering lesson, an inquiry-based approach was adopted to teach a class of fourth graders in a Midwest elementary school about the scientific concepts of plate tectonics and earthquakes. Lessons were prepared following the 5 E instructional model. Next Generation Science Standards (4-ESS3-2) and the…
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events, and the fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and lower variations represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparing simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
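One common way to realize such spatially correlated Dc distributions is spectral filtering of white noise, with the field's standard deviation encoding fault maturity. A minimal sketch (the correlation length and the 10% versus 50% variability are illustrative assumptions, not values from Hillers et al.):

import numpy as np

def correlated_field(nx, nz, dx, corr_len, seed=0):
    # 2-D Gaussian random field with an isotropic correlation, built by
    # low-pass filtering white noise in the wavenumber domain.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((nz, nx))
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
    KX, KZ = np.meshgrid(kx, kz)
    k = np.sqrt(KX**2 + KZ**2)
    spec = np.fft.fft2(noise) / (1.0 + (k * corr_len)**2)   # smooth beyond corr_len
    field = np.real(np.fft.ifft2(spec))
    return (field - field.mean()) / field.std()             # zero mean, unit variance

# mature fault: small Dc variability; immature fault: large variability
base = correlated_field(256, 128, dx=100.0, corr_len=2000.0)
Dc_mature = 0.4 * (1.0 + 0.1 * base)     # metres, ~10% variation
Dc_immature = 0.4 * (1.0 + 0.5 * base)   # metres, ~50% variation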
Numerical simulation analysis on Wenchuan seismic strong motion in Hanyuan region
NASA Astrophysics Data System (ADS)
Chen, X.; Gao, M.; Guo, J.; Li, Z.; Li, T.
2015-12-01
The Wenchuan Ms 8.0 earthquake of May 12, 2008, in Sichuan Province caused 69,227 deaths, 374,643 injuries, 17,923 missing persons, direct economic losses of 845.1 billion yuan, and the collapse of a large number of houses. Reproducing the characteristics of its strong ground motion and predicting its intensity distribution therefore play an important role in mitigating disasters from similar giant earthquakes in the future. Taking the Yunnan-Sichuan region, Wenchuan town, Chengdu city, the Chengdu basin and their vicinity as the research area, and building on the available three-dimensional velocity structure model and new topography data from ChinaArray of the Institute of Geophysics, China Earthquake Administration, two types of complex source rupture process models with global and local source parameters were established. We simulated the seismic wave propagation of the Wenchuan Ms 8.0 earthquake throughout the whole three-dimensional region using the GMS discrete-grid finite-difference technique with Cerjan absorbing boundary conditions, and obtained the seismic intensity distribution in this region by analyzing data from a 50 × 50 grid of simulated ground-motion output stations. The simulated results indicate the following. (1) The simulated Wenchuan earthquake ground motion (PGA) and the main characteristics of the response spectrum are very similar to those of the real Wenchuan earthquake records. (2) The ground motion (PGA) and response spectra of the plain are much greater than those of the adjacent mountain area because of the low velocity of the shallow surface media and the basin effect of the Chengdu basin structure. (3) When the source rupture process inverted from far-field P-wave, GPS and InSAR data, together with the Longmenshan Front Fault, is taken into consideration in the GMS numerical simulation, significantly different waveforms and frequency content of the ground motion are obtained, and the strong-motion waveform is distinctly asymmetric, which should be closer to reality. This indicates that the Longmenshan Front Fault may also have been involved in seismic activity during the long (several minutes) Wenchuan earthquake process. (4) Simulated earthquake records in the Hanyuan region are indeed very strong, suggesting that the source mechanism is one reason for the Hanyuan intensity anomaly.
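Cerjan absorbing boundaries, as used above, damp the wavefield multiplicatively in a strip of grid nodes along the model edges. A minimal 1-D sketch (the 20-node strip and 0.015 decay constant follow the values commonly used with Cerjan-type sponges; the toy scheme itself is illustrative):

import numpy as np

def cerjan_profile(n, nb=20, alpha=0.015):
    # Damping profile: 1 in the interior, decaying as
    # exp(-(alpha*(nb-i))^2) over nb nodes at each edge.
    g = np.ones(n)
    taper = np.exp(-(alpha * np.arange(nb, 0, -1))**2)
    g[:nb] *= taper
    g[-nb:] *= taper[::-1]
    return g

# toy 1-D acoustic FD loop: damp the field every step inside the sponge strips
n, dx, dt, c = 400, 10.0, 1e-3, 3000.0
g = cerjan_profile(n)
u_prev, u = np.zeros(n), np.zeros(n)
u[n // 2] = 1.0                                   # initial pulse
r2 = (c * dt / dx)**2                             # Courant number squared (0.09)
for _ in range(1000):
    u_next = np.zeros(n)
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_prev, u = u * g, u_next * g                 # apply Cerjan damping
print(abs(u).max())   # edge energy decays instead of reflecting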
Broadband Ground Motion Simulation Recipe for Scenario Hazard Assessment in Japan
NASA Astrophysics Data System (ADS)
Koketsu, K.; Fujiwara, H.; Irikura, K.
2014-12-01
The National Seismic Hazard Maps for Japan, which consist of probabilistic seismic hazard maps (PSHMs) and scenario earthquake shaking maps (SESMs), have been published every year since 2005 by the Earthquake Research Committee (ERC) in the Headquarters for Earthquake Research Promotion, which was established within the Japanese government after the 1995 Kobe earthquake. The publication was interrupted due to problems in the PSHMs revealed by the 2011 Tohoku earthquake, and the Subcommittee for Evaluations of Strong Ground Motions ('Subcommittee') has been examining the problems for two and a half years (ERC, 2013; Fujiwara, 2014). However, the SESMs and the broadband ground motion simulation recipe used in them are still valid, at least for crustal earthquakes. Here, we outline this recipe and show the results of validation tests for it. Irikura and Miyake (2001) and Irikura (2004) developed a recipe for simulating strong ground motions from future crustal earthquakes based on a characterization of their source models (the Irikura recipe). The result of the characterization is called a characterized source model, in which a rectangular fault includes a few rectangular asperities. Each asperity, and the background area surrounding the asperities, has its own uniform stress drop. The Irikura recipe defines the parameters of the fault and asperities, and how to simulate broadband ground motions from the characterized source model. The recipe for the SESMs was constructed following the Irikura recipe (ERC, 2005). The National Research Institute for Earth Science and Disaster Prevention (NIED) then implemented simulation codes following this recipe to generate the SESMs (Fujiwara et al., 2006; Morikawa et al., 2011). The Subcommittee in 2002 validated a preliminary version of the SESM recipe by comparing simulated and observed ground motions for the 2000 Tottori earthquake. In 2007 and 2008, the Subcommittee carried out detailed validations of the current version of the SESM recipe and the NIED codes using ground motions from the 2005 Fukuoka earthquake. Irikura and Miyake (2011) summarized the latter validations, concluding that the ground motions were successfully simulated. This indicates that the recipe has enough potential to generate broadband ground motions for scenario hazard assessment in Japan.
Research in seismology and earthquake engineering in Venezuela
Urbina, L.; Grases, J.
1983-01-01
After the July 29, 1967, damaging earthquake (with a moderate magnitude of 6.3) caused widespread damage to the northern coastal area of Venezuela and to the Caracas Valley, the Venezuelan Government decided to establish a Presidential Earthquake Commission. This commission undertook the task of coordinating the efforts to study the after-effects of the earthquake. The July 1967 earthquake claimed numerous lives and caused extensive damage to the capital of Venezuela. In 1968, the U.S. Geological Survey conducted a seismological field study in the northern coastal area and in the Caracas Valley of Venezuela. The objective was to study the areas that sustained severe, moderate, and no damage to structures. A report entitled Ground Amplification Studies in Earthquake Damage Areas: The Caracas Earthquake of 1967 documented, for the first time, short-period seismic wave ground-motion amplifications in the Caracas Valley. Figure 1 shows the area of severe damage in the Los Palos Grandes suburb and its correlation with the depth of alluvium; the Arabic numerals denote the ground amplification factor at each site in the area. The Venezuelan Government initiated many programs to study in detail the damage sustained and to investigate the ongoing construction practices. These actions motivated professionals in the academic, private, and Government sectors to develop further capabilities and self-sufficiency in the fields of engineering and seismology. Allocation of funds was made to assist in training professionals and technicians and in developing new seismological stations and new programs at the national level in earthquake engineering and seismology. A brief description of the ongoing programs in Venezuela is given below. These programs are being performed by FUNVISIS and by other national organizations listed at the end of this article.
Graphs of Soil Mechanics Tests in Orbit
NASA Technical Reports Server (NTRS)
1998-01-01
On STS-89, three Mechanics of Granular Materials (MGM) test cells were subjected to five cycles of compression and relief (left) and three were subjected to shorter displacement cycles that simulate motion during an earthquake (right). In the compression/relief tests, the sand particles rearranged themselves and slightly re-expanded the column during relief. In the short displacement tests, the specimen's resistance to compression decreases, even though the displacement remains the same. The specimens were cycled up to 100 times or until the resistive force was less than 1% that of the previous cycle. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: NASA/Marshall Space Flight Center (MSFC)
ERIC Educational Resources Information Center
Chang, Pei-Fen; Wang, Dau-Chung
2011-01-01
In May 2008, the worst earthquake in more than three decades struck southwest China, killing more than 80,000 people. The complexity of this earthquake makes it an ideal case study to clarify the intertwined issues of ethics in engineering and to help cultivate critical thinking skills. This paper first explores the need to encourage engineering…
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie
2017-04-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g., based on the 2004 Sumatra-Andaman earthquake, the 1994 Northridge earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, exerts a first-order influence on source dynamics and the emanated seismic wavefield. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
On simulating large earthquakes by Green's-function addition of smaller earthquakes
NASA Astrophysics Data System (ADS)
Joyner, William B.; Boore, David M.
Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model, η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity, the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ, the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω-squared model with similarity (that is, with constant Mo·fo^3 scaling, where fo is the corner frequency), the required values are η = (Mo/Moe)^(4/3) and κ = (Mo/Moe)^(-1/3), where Mo is the moment of the simulated event and Moe is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω-squared model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to the ω-squared model with similarity, but not at high frequency. Interestingly, the high-frequency scaling implied by this latter choice of η and κ corresponds to an ω-squared model with constant Mo·fo^4, a scaling law proposed by Nuttli, although questioned recently by Haar and others. Simple scaling with κ equal to unity and η equal to the moment ratio would work if the high-frequency spectral decay were ω^-1.5 instead of ω^-2. Exactly the required decay is exhibited by the stochastic source model recently proposed by Joyner, if the dislocation-time function is deconvolved out of the spectrum. Simulated motions derived from such source models could be used as subevents, rather than the recorded motions usually employed. This strategy is a promising approach to the simulation of ground motion from an extended rupture.
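The random-summation model above maps directly to code. A minimal sketch (the subevent waveform is a toy pulse; η and κ follow the ω-squared similarity scaling quoted in the abstract):

import numpy as np

def simulate_large_event(subevent, dt, moment_ratio, duration):
    # Sum eta copies of a subevent waveform, start times uniform on [0, T],
    # each scaled by kappa (omega-squared model with similarity).
    eta = int(round(moment_ratio ** (4.0 / 3.0)))   # number of subevents
    kappa = moment_ratio ** (-1.0 / 3.0)            # amplitude scale factor
    rng = np.random.default_rng(1)
    out = np.zeros(int(duration / dt) + len(subevent))
    for t0 in rng.uniform(0.0, duration, size=eta):
        i0 = int(t0 / dt)
        out[i0:i0 + len(subevent)] += kappa * subevent
    return out

# toy subevent: one-cycle sine pulse; target event has 100x the subevent moment
dt = 0.01
pulse = np.sin(2 * np.pi * np.arange(0, 1, dt))
big = simulate_large_event(pulse, dt, moment_ratio=100.0, duration=20.0)
print(len(big), big.max())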
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated on one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model to which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm reproduces typical features of the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation.
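The coefficient of variation quoted above is the standard deviation of the inter-event times divided by their mean (Cv = 0 for perfectly periodic recurrence, about 1 for a Poisson process). A minimal sketch on hypothetical event times:

import numpy as np

def recurrence_cv(event_times):
    # Coefficient of variation of inter-event times for one fault.
    intervals = np.diff(np.sort(event_times))
    return intervals.std() / intervals.mean()

# hypothetical M >= 6.0 occurrence times on one fault (years)
times = np.array([120.0, 410.0, 650.0, 980.0, 1210.0, 1540.0])
print(recurrence_cv(times))   # pseudo-periodic catalogs give Cv well below 1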
Development of optimization-based probabilistic earthquake scenarios for the city of Tehran
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Peyghaleh, E.
2016-01-01
This paper presents the methodology and a practical example of applying an optimization process to select earthquake scenarios that best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach yields a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between the hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate 10,000 years' worth of events, consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for Tehran city into account as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes, yet requires far less computation power. The authors have used this approach for risk assessment aimed at identifying the effectiveness and profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios were chosen for this purpose.
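The scenario-reduction step can be posed as a compact mixed-integer linear program: select at most K scenarios and re-weight their rates so the reduced hazard curve tracks the full one in an L1 sense. A minimal sketch on synthetic data (the formulation below, using scipy.optimize.milp, is an illustrative reconstruction of the kind of model described, not the authors' exact formulation):

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n, m, K = 40, 12, 6            # candidate scenarios, hazard-curve levels, budget
P = rng.uniform(0, 1, (m, n))  # P[j, i]: P(exceed level j | scenario i)
H = P @ rng.uniform(0, 0.01, n)   # "full-catalog" hazard curve to be matched

# variables z = [w (n rates), x (n binaries), ep (m), em (m)]
nv = 2 * n + 2 * m
c = np.concatenate([np.zeros(2 * n), np.ones(2 * m)])   # minimize sum(ep + em)
Wmax = 1.0
A_eq = np.hstack([P, np.zeros((m, n)), -np.eye(m), np.eye(m)])   # P w - ep + em = H
A_link = np.hstack([np.eye(n), -Wmax * np.eye(n), np.zeros((n, 2 * m))])  # w <= Wmax x
A_card = np.concatenate([np.zeros(n), np.ones(n), np.zeros(2 * m)])[None, :]

res = milp(
    c=c,
    constraints=[LinearConstraint(A_eq, H, H),
                 LinearConstraint(A_link, -np.inf, 0.0),
                 LinearConstraint(A_card, -np.inf, K)],          # at most K picked
    integrality=np.concatenate([np.zeros(n), np.ones(n), np.zeros(2 * m)]),
    bounds=Bounds(np.zeros(nv),
                  np.concatenate([np.full(n, Wmax), np.ones(n),
                                  np.full(2 * m, np.inf)])),
)
w = res.x[:n]
print("selected scenarios:", np.flatnonzero(w > 1e-9))
print("hazard-curve error:", np.abs(P @ w - H).sum())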
Navigating Earthquake Physics with High-Resolution Array Back-Projection
NASA Astrophysics Data System (ADS)
Meng, Lingsen
Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher-frequency content of earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection has provided key observations of previous large earthquakes, standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between seismic observations and earthquake simulations. MUSIC is a high-resolution method that takes advantage of higher-order signal statistics. The method had not been widely used in seismology because of the nonstationary and incoherent nature of seismic signals. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors that relate to material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distances. The high-frequency subevents are located at the edges of the geodetic slip regions and correlate with stopping phases associated with rupture-speed reduction as the earthquake arrests.
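For reference, the core of MUSIC is an eigendecomposition of the array covariance matrix: steering vectors nearly orthogonal to the noise subspace produce peaks in the pseudospectrum. A minimal narrowband uniform-linear-array sketch (plane-wave geometry and all parameter values are illustrative; the thesis instead builds multitaper cross-spectra of transient seismic signals):

import numpy as np

def music_spectrum(X, n_sources, d, wavelength, thetas):
    # MUSIC pseudospectrum for a uniform linear array.
    # X: (n_sensors, n_snapshots) complex narrowband data.
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    w, V = np.linalg.eigh(R)                        # eigenvalues ascending
    En = V[:, : n_sensors - n_sources]              # noise subspace
    k = 2 * np.pi / wavelength
    idx = np.arange(n_sensors)
    p = []
    for th in thetas:
        a = np.exp(1j * k * d * idx * np.sin(th))   # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

# two plane waves at -20 and 30 degrees on an 8-sensor half-wavelength array
rng = np.random.default_rng(0)
n_sensors, n_snap, d, lam = 8, 200, 0.5, 1.0
angles = np.deg2rad([-20.0, 30.0])
A = np.exp(1j * 2 * np.pi / lam * d *
           np.outer(np.arange(n_sensors), np.sin(angles)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                   + 1j * rng.standard_normal((n_sensors, n_snap)))
grid = np.deg2rad(np.linspace(-90, 90, 361))
spec = music_spectrum(X, 2, d, lam, grid)
loc = np.flatnonzero((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])) + 1
top = loc[np.argsort(spec[loc])[-2:]]               # two strongest local maxima
print(np.sort(np.rad2deg(grid[top])))               # ~ [-20, 30]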
How to build and teach with QuakeCaster: an earthquake demonstration and exploration tool
Linton, Kelsey; Stein, Ross S.
2015-01-01
QuakeCaster is an interactive, hands-on teaching model that simulates earthquakes and their interactions along a plate-boundary fault. QuakeCaster contains the minimum number of physical processes needed to demonstrate most observable earthquake features. A winch to steadily reel in a line simulates the steady plate tectonic motions far from the plate boundaries. A granite slider in frictional contact with a nonskid rock-like surface simulates a fault at a plate boundary. A rubber band connecting the line to the slider simulates the elastic character of the Earth’s crust. By stacking and unstacking sliders and cranking in the winch, one can see the results of changing the shear stress and the clamping stress on a fault. By placing sliders in series with rubber bands between them, one can simulate the interaction of earthquakes along a fault, such as cascading or toggling shocks. By inserting a load scale into the line, one can measure the stress acting on the fault throughout the earthquake cycle. As observed for real earthquakes, QuakeCaster events are not periodic, time-predictable, or slip-predictable. QuakeCaster produces rare but unreliable “foreshocks.” When fault gouge builds up, the friction goes to zero and fault creep is seen without large quakes. QuakeCaster events produce very small amounts of fault gouge that strongly alter its behavior, resulting in smaller, more frequent shocks as the gouge accumulates. QuakeCaster is designed so that students or audience members can operate it and record its output. With a stopwatch and ruler one can measure and plot the timing, slip distance, and force results of simulated earthquakes. People of all ages can use the QuakeCaster model to explore hypotheses about earthquake occurrence. QuakeCaster takes several days and about $500.00 in materials to build.
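The winch-spring-slider chain is the classic spring-block analogue of the earthquake cycle, and its irregular event behavior is easy to reproduce numerically. A minimal stick-slip sketch (instantaneous slip events and a randomly varying static strength, standing in for gouge effects, are illustrative assumptions, not a calibration of QuakeCaster):

import numpy as np

rng = np.random.default_rng(2)
k = 5.0            # rubber-band (spring) stiffness, N/m
v_load = 0.01      # winch reel-in rate, m/s
F_kin = 3.0        # kinetic friction force on the slider, N
dt = 0.1
stretch, t, events = 0.0, 0.0, []
F_static = F_kin * (1.2 + 0.6 * rng.random())       # randomized sticking strength

for _ in range(200_000):
    t += dt
    stretch += v_load * dt                  # winch steadily stretches the band
    if k * stretch >= F_static:             # spring force beats static friction
        slip = stretch - F_kin / k          # block jumps to kinetic force balance
        events.append((t, slip))
        stretch -= slip
        F_static = F_kin * (1.2 + 0.6 * rng.random())   # new "gouge" state

times, slips = np.array(events).T
print(f"{len(events)} events; interevent CV = "
      f"{np.diff(times).std() / np.diff(times).mean():.2f}")   # not periodic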
Linking interseismic deformation with coseismic slip using dynamic rupture simulations
NASA Astrophysics Data System (ADS)
Yang, H.; He, B.; Weng, H.
2017-12-01
The largest earthquakes on Earth occur at subduction zones, sometimes accompanied by devastating tsunamis. Reducing losses from megathrust earthquakes and tsunamis demands accurate estimates of rupture scenarios for future earthquakes. The interseismic locking distribution derived from geodetic observations is often used to qualitatively evaluate future earthquake potential. However, quantitatively estimating coseismic slip from the locking distribution remains challenging. Here we derive the coseismic rupture process of the 2012 Mw 7.6 Nicoya, Costa Rica, earthquake from the interseismic locking distribution using spontaneous rupture simulation. We construct a three-dimensional elastic medium with a curved fault governed by the linear slip-weakening law. The initial stress on the fault is set based on the built-up stress inferred from locking and the dynamic friction coefficient from high-velocity sliding experiments. Our numerical results for the coseismic slip distribution, moment rate function and final earthquake moment are consistent with those derived from seismic and geodetic observations. Furthermore, we find that epicentral location affects the rupture scenario and may lead to earthquakes of various sizes given the heterogeneous stress distribution. In the Nicoya region, fewer than half of the rupture initiation regions where the locking degree is greater than 0.6 can develop into large earthquakes (Mw > 7.2). These location-dependent earthquake magnitudes underscore the necessity of conducting a large number of simulations to quantitatively evaluate seismic hazard from interseismic locking models.
PRISM software—Processing and review interface for strong-motion data
Jones, Jeanne M.; Kalkan, Erol; Stephens, Christopher D.; Ng, Peter
2017-11-28
Rapidly available and accurate ground-motion acceleration time series (seismic recordings) and derived data products are essential to quickly providing scientific and engineering analysis and advice after an earthquake. To meet this need, the U.S. Geological Survey National Strong Motion Project has developed a software package called PRISM (Processing and Review Interface for Strong-Motion data). PRISM automatically processes strong-motion acceleration records, producing compatible acceleration, velocity, and displacement time series; acceleration, velocity, and displacement response spectra; Fourier amplitude spectra; and standard earthquake-intensity measures. PRISM is intended to be used by strong-motion seismic networks, as well as by earthquake engineers and seismologists.
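One of the derived products named above, the 5%-damped response spectrum, can be computed by stepping a single-degree-of-freedom oscillator through the record at each period. A minimal sketch using the textbook Newmark average-acceleration method (a generic implementation for illustration, not PRISM's actual algorithm):

import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    # Pseudo-spectral acceleration of unit-mass SDOF oscillators,
    # Newmark average-acceleration integration (gamma=1/2, beta=1/4).
    psa = []
    for T in periods:
        wn = 2 * np.pi / T
        k, c = wn**2, 2 * zeta * wn
        beta, gamma = 0.25, 0.5
        a1 = 1 / (beta * dt**2) + gamma * c / (beta * dt)
        a2 = 1 / (beta * dt) + (gamma / beta - 1) * c
        a3 = (1 / (2 * beta) - 1) + dt * (gamma / (2 * beta) - 1) * c
        kh = k + a1
        u, v = 0.0, 0.0
        a = -acc[0]                      # initial oscillator acceleration
        umax = 0.0
        for ag in acc[1:]:
            p = -ag + a1 * u + a2 * v + a3 * a
            u_new = p / kh
            v_new = (gamma / (beta * dt)) * (u_new - u) + (1 - gamma / beta) * v \
                    + dt * (1 - gamma / (2 * beta)) * a
            a = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            u, v = u_new, v_new
            umax = max(umax, abs(u))
        psa.append(wn**2 * umax)         # PSA = wn^2 * peak relative displacement
    return np.array(psa)

# toy record: 10 s of smoothed noise sampled at 100 Hz
rng = np.random.default_rng(3)
dt = 0.01
acc = np.convolve(rng.standard_normal(1000), np.ones(5) / 5, mode="same")
print(response_spectrum(acc, dt, np.array([0.1, 0.2, 0.5, 1.0, 2.0])))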
NASA Astrophysics Data System (ADS)
Harbindu, Ashish; Sharma, Mukat Lal; Kamal
2012-04-01
The earthquakes in Uttarkashi (October 20, 1991, Mw 6.8) and Chamoli (March 8, 1999, Mw 6.4) are among the recent well-documented earthquakes that occurred in the Garhwal region of India and caused extensive damage as well as loss of life. Using strong-motion data of these two earthquakes, we estimate their source, path, and site parameters. The quality factor (Qβ) as a function of frequency is derived as Qβ(f) = 140 f^1.018. The site amplification functions are evaluated using the horizontal-to-vertical spectral ratio technique. The ground motions of the Uttarkashi and Chamoli earthquakes are simulated using the stochastic method of Boore (Bull Seismol Soc Am 73:1865-1894, 1983). The estimated source, path, and site parameters are used as input for the simulation. The simulated time histories are generated for a few stations and compared with the observed data. The simulated response spectra at 5% damping are in fair agreement with the observed response spectra for most of the stations over a wide range of frequencies. Residual trends show close agreement between the observed and simulated response spectra. The synthetic data are in rough agreement with the ground-motion attenuation equation available for the Himalayas (Sharma, Bull Seismol Soc Am 98:1063-1069, 1998).
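The stochastic method cited above shapes windowed Gaussian noise to a target Fourier amplitude spectrum and transforms back to the time domain. A minimal point-source sketch (the ω-squared source with fc = 0.5 Hz and the distance and shear-velocity values are illustrative assumptions; Q(f) = 140 f^1.018 is the relation derived in the study):

import numpy as np

def stochastic_record(n, dt, fc=0.5, R=50.0, beta_s=3.5, seed=4):
    # Boore (1983)-style simulation: window Gaussian noise, impose a target
    # amplitude spectrum (omega-squared source x anelastic attenuation), invert.
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    window = t * np.exp(-t)                        # simple envelope shape
    noise = rng.standard_normal(n) * window
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, dt)
    f[0] = f[1]                                    # avoid division by zero at DC
    Q = 140.0 * f**1.018                           # Q(f) from the study above
    target = (2 * np.pi * f)**2 / (1 + (f / fc)**2)   # omega-squared acceleration
    target *= np.exp(-np.pi * f * R / (Q * beta_s)) / R   # path attenuation
    spec *= target / np.abs(spec).mean()           # shape noise to target spectrum
    return np.fft.irfft(spec, n)

acc = stochastic_record(n=4096, dt=0.01)
print(acc.max())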
Modeling of earthquake ground motion in the frequency domain
NASA Astrophysics Data System (ADS)
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation model is assessed using data from the SMART-1 array in Taiwan. The interpolation model provides an effective method to estimate ground motion at a site using recordings from stations located up to several kilometers away. Reliable estimates of differential ground motion are restricted to relatively limited ranges of frequencies and inter-station spacings.
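The core idea of topic (i), simulating Fourier phase differences and accumulating them into phase angles before an inverse FFT, can be sketched in a few lines. A simplified illustration (here the phase differences are drawn from a single normal distribution; the thesis conditions their distribution on the Fourier amplitude):

import numpy as np

def simulate_accelerogram(amplitude, mean_dphi, std_dphi, seed=5):
    # Build a time series from a one-sided Fourier amplitude spectrum and
    # simulated phase differences (inverse discrete Fourier transform).
    rng = np.random.default_rng(seed)
    nf = len(amplitude)
    dphi = rng.normal(mean_dphi, std_dphi, nf - 1)    # phase differences
    phase = np.concatenate([[0.0], np.cumsum(dphi)])  # phase angles
    spec = amplitude * np.exp(1j * phase)
    return np.fft.irfft(spec)                         # real-valued time series

# toy one-sided amplitude spectrum peaked near 2 Hz (n = 2048, dt = 0.01 s)
n, dt = 2048, 0.01
f = np.fft.rfftfreq(n, dt)
amp = f * np.exp(-f / 2.0)
acc = simulate_accelerogram(amp, mean_dphi=-np.pi * 0.5, std_dphi=0.5)
print(len(acc), acc.std())   # the mean phase difference sets the envelope delay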
Extraction of crustal deformations and oceanic fluctuations from ocean bottom pressures
NASA Astrophysics Data System (ADS)
Ariyoshi, Keisuke; Nagano, Akira; Hasegawa, Takuya; Matsumoto, Hiroyuki; Kido, Motoyuki; Igarashi, Toshihiro; Uchida, Naoki; Iinuma, Takeshi; Yamashita, Yusuke
2017-04-01
It is well known that megathrust earthquakes such as the 2004 Sumatra-Andaman Earthquake (Mw 9.1) and the 2011 off the Pacific Coast of Tohoku Earthquake (Mw 9.0) devastated coastal areas in western Indonesia and north-eastern Japan, respectively. To mitigate the disasters of forthcoming megathrust earthquakes along the Nankai Trough, the Japanese government has established seafloor networks of cable-linked observatories around Japan: DONET (Dense Oceanfloor Network system for Earthquakes and Tsunamis along the Nankai Trough) and S-net (Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench). The advantage of a cable-linked network is the ability to monitor the propagation of tsunami and seismic waves, as well as seismic activity, in real time. DONET includes pressure gauges as well as seismometers, which are expected to detect crustal deformation driven by the progressive unlocking of subduction plate coupling. From our simulation results, leveling changes differ in sense among the DONET points, even within the same science node. On the other hand, oceanic fluctuations, such as those from ice masses melting under global warming, have scales so large that they change ocean bottom pressure coherently across all DONET points, especially within the same node. This difference suggests the possibility of extracting the crustal deformation component from ocean bottom pressure data by differencing the stacked data. However, this operation cannot be applied to local-scale fluctuations related to ocean mesoscale eddies and current fluctuations, which affect ocean bottom pressure through water density changes in the water column (from the sea surface to the bottom). Therefore, we need an integrated analysis combining seismology, physical oceanography and tsunami engineering to decompose the records into crustal deformation, oceanic fluctuations and instrumental drift, which will yield data precise enough to reveal geophysical phenomena. In this study, we propose a new interpretation of seismic plate coupling around the Tonankai region along the Nankai Trough, and discuss how to detect it effectively using the DONET data. In the future, we will have to extract the crustal deformation component by separating out other components such as instrumental drift and oceanic changes in an integrated study spanning seismology, geodesy, physical oceanography, and mechanical engineering.
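The stacking-and-differencing idea above (large-scale oceanic signals are coherent across the array, while crustal deformation is site-dependent) amounts to removing the array common mode. A minimal sketch on synthetic pressure records (all signal amplitudes and the step-like transient are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(6)
n_sta, n_t = 5, 2000
t = np.arange(n_t)                                    # time steps (e.g. hours)
ocean = 0.5 * np.sin(2 * np.pi * t / 350.0)           # coherent oceanic signal [kPa]
records = np.tile(ocean, (n_sta, 1)) + 0.05 * rng.standard_normal((n_sta, n_t))
records[0] += 0.3 * np.clip((t - 1000) / 500.0, 0, 1)  # slow uplift-like transient
                                                       # at one station only

common_mode = records.mean(axis=0)                    # stack: oceanic component
residual = records - common_mode                      # differencing isolates local
print(residual[0, -1] - residual[0, 0])               # ~0.24 kPa: 4/5 of the 0.3 kPa
                                                      # step (1/5 leaks into the stack)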
NASA Astrophysics Data System (ADS)
Witter, Robert C.; Zhang, Yinglong; Wang, Kelin; Goldfinger, Chris; Priest, George R.; Allan, Jonathan C.
2012-10-01
We test hypothetical tsunami scenarios against a 4,600-year record of sandy deposits in a southern Oregon coastal lake that offer minimum inundation limits for prehistoric Cascadia tsunamis. Tsunami simulations constrain coseismic slip estimates for the southern Cascadia megathrust and contrast with slip deficits implied by earthquake recurrence intervals from turbidite paleoseismology. We model the tsunamigenic seafloor deformation using a three-dimensional elastic dislocation model and test three Cascadia earthquake rupture scenarios: slip partitioned to a splay fault; slip distributed symmetrically on the megathrust; and slip skewed seaward. Numerical tsunami simulations use the hydrodynamic finite element model, SELFE, that solves nonlinear shallow-water wave equations on unstructured grids. Our simulations of the 1700 Cascadia tsunami require >12-13 m of peak slip on the southern Cascadia megathrust offshore southern Oregon. The simulations account for tidal and shoreline variability and must crest the ~6-m-high lake outlet to satisfy geological evidence of inundation. Accumulating this slip deficit requires ≥360-400 years at the plate convergence rate, exceeding the 330-year span of two earthquake cycles preceding 1700. Predecessors of the 1700 earthquake likely involved >8-9 m of coseismic slip accrued over >260 years. Simple slip budgets constrained by tsunami simulations allow an average of 5.2 m of slip per event for 11 additional earthquakes inferred from the southern Cascadia turbidite record. By comparison, slip deficits inferred from time intervals separating earthquake-triggered turbidites are poor predictors of coseismic slip because they meet geological constraints for only 4 out of 12 (~33%) Cascadia tsunamis.
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP, and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
Future WGCEP Models and the Need for Earthquake Simulators
NASA Astrophysics Data System (ADS)
Field, E. H.
2008-12-01
The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).
Development of regional liquefaction-induced deformation hazard maps
Rosinski, A.; Knudsen, K.-L.; Wu, J.; Seed, R.B.; Real, C.R.; ,
2004-01-01
This paper describes part of a project to assess the feasibility of producing regional (1:24,000-scale) liquefaction hazard maps based on potential liquefaction-induced deformation. The study area is the central Santa Clara Valley, at the south end of San Francisco Bay in central California. The information collected and used includes: a) detailed Quaternary geological mapping, b) over 650 geotechnical borings, c) probabilistic earthquake shaking information, and d) ground-water levels. Predictions of strain can be made using either empirical formulations or numerical simulations. In this project, lateral spread displacements are estimated, and new empirical relations are used to estimate future volumetric and shear strains. Geotechnical boring data are used to: (a) develop isopach maps showing the thickness of sediment that is likely to liquefy and deform under earthquake shaking; and (b) assess the variability in engineering properties within and between geologic map units. Preliminary results reveal that late Holocene deposits are likely to experience the greatest liquefaction-induced strains, while Holocene and late Pleistocene deposits are likely to experience significantly less horizontal and vertical strain in future earthquakes. Development of maps based on these analyses is feasible.
1989-10-17
An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.
Experimental and Analytical Seismic Studies of a Four-Span Bridge System with Innovative Materials
NASA Astrophysics Data System (ADS)
Cruz Noguez, Carlos Alonso
As part of a multi-university project utilizing the NSF Network for Earthquake Engineering Simulation (NEES), a quarter-scale model of a four-span bridge incorporating plastic hinges with different advanced materials was tested to failure on the three-shake-table system at the University of Nevada, Reno (UNR). The bridge was the second test model in a series of three four-span bridges, the first model being a conventional reinforced-concrete (RC) structure. The purpose of incorporating advanced materials was to improve the seismic performance of the bridge with respect to two damage indicators: (1) column damage and (2) permanent deformations. The goals of the study presented in this document were to (1) evaluate the seismic performance of a four-span bridge system incorporating SMA/ECC and built-in rubber pad plastic hinges as well as post-tensioned piers, (2) quantify the relative merit of these advanced materials and details compared to each other and to conventional reinforced-concrete plastic hinges, (3) determine the influence of abutment-superstructure interaction on the response, (4) examine the ability of available elaborate analytical modeling techniques to model the performance of advanced materials and details, and (5) conduct an extensive parametric study of different variations of the bridge model to study several important issues in bridge earthquake engineering. The bridge model included six columns, each pair of which utilized a different advanced detail at the bottom plastic hinges: shape memory alloys (SMA), special engineered cementitious composites (ECC), elastomeric pads embedded into columns, and post-tensioning tendons. The design of the columns, location of the bents, and selection of the loading protocol were based on pre-test analyses conducted using the computer program OpenSees. The bridge model was subjected to two horizontal components of simulated earthquake records of the 1994 Northridge earthquake. Over 340 channels of data were collected. The test results showed the effectiveness of the advanced materials in reducing damage and permanent displacements. The damage was minimal in plastic hinges with SMA/ECC and those with built-in elastomeric pads. Conventional RC plastic hinges were severely damaged due to spalling of concrete and rupture of the longitudinal and transverse reinforcement. Extensive post-test analytical studies were conducted, and it was determined that a computational model of the bridge that included bridge-abutment interaction using OpenSees was able to provide satisfactory estimates of key structural parameters such as superstructure displacements and base shears. The analytical model was also used to conduct parametric studies of single-column and bridge-system response under near-fault ground motions. The effects of vertical excitations and of transverse shear keys at the bridge abutments on the superstructure displacement and column drifts were also explored.
NASA Astrophysics Data System (ADS)
Jalali, Mohammad; Ramazi, Hamidreza
2018-04-01
This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data that improve incomplete earthquake catalogues. The synthetic data were then added to the traditional information to study the homogeneity of seismicity and to classify areas according to tectonic and seismic properties in order to update the seismotectonic provinces. In this paper, (i) different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was then carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study the spatial correlation structure (e.g., the west-southwestern area showed variogram ranges from 0.4 to 1.4 decimal degrees, with the maximum range identified at an azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied to generate simulated events, yielding 68,800 synthetic events according to the spatial correlation found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages, and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes: very high, high, moderate, and low seismic potential provinces. The seismotectonic properties of the very high seismic potential provinces are also presented.
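Variography of the kind described in step (iii) starts from the experimental semivariogram. A minimal omnidirectional sketch on hypothetical event data (coordinates and values are synthetic; directional variograms and the turning-bands simulation itself are beyond this sketch):

import numpy as np

def experimental_variogram(coords, values, lags, tol):
    # Omnidirectional semivariogram: gamma(h) = half the mean squared value
    # difference over point pairs whose separation falls in each lag bin.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dv2 = (values[:, None] - values[None, :])**2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    d, dv2 = d[iu], dv2[iu]
    gamma = [0.5 * dv2[(d > h - tol) & (d <= h + tol)].mean() for h in lags]
    return np.array(gamma)

# hypothetical epicentres (decimal degrees) with a smooth "magnitude" field
rng = np.random.default_rng(7)
coords = rng.uniform(0, 5, (300, 2))
values = np.sin(coords[:, 0]) + 0.3 * rng.standard_normal(300)
lags = np.arange(0.25, 3.0, 0.5)
print(experimental_variogram(coords, values, lags, tol=0.25))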
Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.
2017-12-01
To explain earthquake generation processes, simulation methods for earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law on the fault plane and the boundary integral method based on Green's functions in an elastic half space is widely used (e.g., Hori 2009; Barbot et al. 2012). In this approach, the stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), designed for supercomputers, to solve the problem within a realistic time. As in the previous approach, we solve the governing equations incorporating the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response functions as in the previous approach. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a benchmark three-dimensional problem, in which a circular velocity-weakening area is set within a square fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at RIKEN (proposal number hp160221).
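For readers unfamiliar with the governing equations referred to here, the sketch below integrates the zero-dimensional analogue of this problem: a quasi-dynamic spring-slider obeying rate- and state-dependent friction with the aging law and a radiation damping term. All parameter values are textbook-style illustrations, not those used by the authors.

```python
# Quasi-dynamic spring-slider with rate-and-state friction (aging law):
#   tau = k*(Vpl*t - delta)              spring loading
#   tau = sigma*mu(V, theta) + eta*V     friction + radiation damping
#   mu  = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  dtheta/dt = 1 - V*theta/Dc
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

mu0, a, b = 0.6, 0.010, 0.015        # friction params, velocity-weakening (a < b)
V0, Dc = 1.0e-6, 1.0e-4              # reference slip rate (m/s), state distance (m)
sigma, k = 50.0e6, 1.0e7             # normal stress (Pa), loading stiffness (Pa/m)
eta = 30.0e9 / (2.0 * 3000.0)        # radiation damping mu_shear/(2*c_s)
Vpl = 1.0e-9                         # plate loading rate (m/s)

def slip_rate(t, delta, theta):
    """Solve k*(Vpl*t - delta) = sigma*mu(V, theta) + eta*V for V."""
    bal = lambda V: (k * (Vpl * t - delta)
                     - sigma * (mu0 + a * np.log(V / V0)
                                + b * np.log(V0 * theta / Dc))
                     - eta * V)
    return brentq(bal, 1e-25, 100.0)

def rhs(t, y):
    delta, theta = y
    V = slip_rate(t, delta, theta)
    return [V, 1.0 - V * theta / Dc]       # slip rate; aging-law state evolution

# Start near (but slightly off) steady state so stick-slip cycles develop
theta0 = 0.9 * Dc / Vpl
tau0 = sigma * (mu0 + (a - b) * np.log(Vpl / V0)) + eta * Vpl
delta0 = -tau0 / k                          # spring pre-stressed to carry tau0
sol = solve_ivp(rhs, [0.0, 3.0e9], [delta0, theta0], method='LSODA',
                rtol=1e-8, atol=[1e-6, 1e-3], max_step=1e7)
V = [slip_rate(t, d, th) for t, (d, th) in zip(sol.t, sol.y.T)]
print('peak slip rate (m/s):', max(V))      # spikes mark simulated events
```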
Numerical simulations for active tectonic processes: increasing interoperability and performance
NASA Technical Reports Server (NTRS)
Donnellan, A.; Fox, G.; Rundle, J.; McLeod, D.; Tullis, T.; Grant, L.
2002-01-01
The objective of this project is to produce a system to fully model earthquake-related data. This task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling.
The California Integrated Seismic Network
NASA Astrophysics Data System (ADS)
Hellweg, M.; Given, D.; Hauksson, E.; Neuhauser, D.; Oppenheimer, D.; Shakal, A.
2007-05-01
The mission of the California Integrated Seismic Network (CISN) is to operate a reliable, modern system to monitor earthquakes throughout the state; to generate and distribute information in real time for emergency response, for the benefit of public safety, and for loss mitigation; and to collect and archive data for seismological and earthquake engineering research. To meet these needs, the CISN operates data processing and archiving centers, as well as more than 3000 seismic stations. Furthermore, the CISN is actively developing and enhancing its infrastructure, including its automated processing and archival systems. The CISN integrates seismic and strong motion networks operated by the University of California Berkeley (UCB), the California Institute of Technology (Caltech), and the United States Geological Survey (USGS) offices in Menlo Park and Pasadena, as well as the USGS National Strong Motion Program (NSMP) and the California Geological Survey (CGS). The CISN operates two earthquake management centers (the NCEMC and SCEMC), where statewide, real-time earthquake monitoring takes place, and an engineering data center (EDC) for processing strong motion data and making it available in near real time to the engineering community. These centers employ redundant hardware to minimize disruptions to the earthquake detection and processing systems. At the same time, dual feeds of data from a subset of broadband and strong motion stations are telemetered in real time directly to both the NCEMC and the SCEMC to ensure the availability of statewide data in the event of a catastrophic failure at one of these two centers. The CISN uses a backbone T1 ring (with automatic backup over the internet) to interconnect the centers and the California Office of Emergency Services. The T1 ring enables real-time exchange of selected waveforms, derived ground motion data, phase arrivals, earthquake parameters, and ShakeMaps. With the goal of operating similar and redundant statewide earthquake processing systems at both real-time EMCs, the CISN is currently adopting and enhancing the database-centric earthquake processing and analysis software originally developed for the Caltech/USGS Pasadena TriNet project. Earthquake data and waveforms are made available to researchers and to the public in near real time through the CISN's Northern and Southern California Earthquake Data Centers (NCEDC and SCEDC) and through the USGS Earthquake Notification System (ENS). The CISN partners have developed procedures to automatically exchange strong motion data, both waveforms and peak parameters, for use in ShakeMap and in the rapid engineering reports that are available in near real time through the strong motion EDC.
Simulation of Earthquake-Generated Sea-Surface Deformation
NASA Astrophysics Data System (ADS)
Vogl, Chris; Leveque, Randy
2016-11-01
Earthquake-generated tsunamis can carry with them a powerful, destructive force. One of the most well-known recent examples is the tsunami generated by the Tohoku earthquake, which was responsible for the nuclear disaster in Fukushima. Tsunami simulation and forecasting, a necessary element of emergency procedure planning and execution, is typically done using the shallow-water equations. A typical initial condition uses the Okada solution for a homogeneous, elastic half-space. This work focuses on simulating earthquake-generated sea-surface deformations that are more true to the physics of the materials involved. In particular, a water layer is added on top of the half-space that models the seabed. Sea-surface deformations are then simulated using the Clawpack hyperbolic PDE package. Results from considering the water layer both as linearly elastic and as "nearly incompressible" are compared to those of the Okada solution.
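The Okada baseline that the elastic and nearly incompressible runs are compared against can be produced with Clawpack's GeoClaw utilities. The sketch below sets up one rectangular subfault and evaluates its static seafloor deformation; the source parameters are illustrative, and the dtopotools calls follow the API as documented for recent Clawpack releases (treat the exact signatures as an assumption).

```python
# Hedged sketch: static Okada seafloor deformation with GeoClaw's dtopotools
# (single rectangular subfault; all source parameters are illustrative).
import numpy as np
from clawpack.geoclaw import dtopotools

sf = dtopotools.SubFault()
sf.strike = 198.0          # degrees
sf.dip = 10.0              # degrees
sf.rake = 90.0             # pure thrust
sf.depth = 20e3            # m, depth to the fault plane
sf.length = 100e3          # m along strike
sf.width = 50e3            # m down dip
sf.slip = 5.0              # m
sf.longitude, sf.latitude = 142.0, 38.0
sf.coordinate_specification = 'centroid'

fault = dtopotools.Fault()
fault.subfaults = [sf]

# Evaluate vertical deformation dZ on a lon/lat grid (static source here)
x = np.linspace(140.0, 144.0, 121)
y = np.linspace(36.0, 40.0, 121)
dtopo = fault.create_dtopography(x, y, times=[1.0])
print('max uplift (m):', dtopo.dZ.max(), ' max subsidence (m):', dtopo.dZ.min())
```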
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul
2017-12-01
This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.
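The stochastic synthesis of slip distributions mentioned above is commonly implemented as spectral synthesis: a random field is filtered so its wavenumber spectrum decays like a von Karman model, then rescaled to a target mean slip. The sketch below shows that idea schematically; the grid, correlation lengths, Hurst exponent, and positivity transform are illustrative stand-ins, not the calibrated models of this study.

```python
# Schematic spectral synthesis of a random slip field (von Karman-like decay).
# Correlation lengths and scaling below are illustrative placeholders.
import numpy as np

nx, nz = 128, 64                 # subfaults along strike / down dip
dx = 2.0                         # subfault size (km)
ax, az = 40.0, 20.0              # correlation lengths (km), illustrative

kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
KX, KZ = np.meshgrid(kx, kz)

# von Karman-like amplitude spectrum (Hurst exponent H = 0.75, illustrative)
H = 0.75
amp = (1.0 + (ax * KX) ** 2 + (az * KZ) ** 2) ** (-(H + 1) / 2)

rng = np.random.default_rng(1)
phase = np.exp(2j * np.pi * rng.random((nz, nx)))
field = np.real(np.fft.ifft2(amp * phase))
field = (field - field.mean()) / field.std()

# Positivity + target mean slip of 5 m via a simple exponential transform
slip = np.exp(0.7 * field)
slip *= 5.0 * slip.size / slip.sum()      # renormalize mean slip to 5 m
print('mean slip (m):', slip.mean(), ' max slip (m):', slip.max())
```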
Southern California Earthquake Center (SCEC) Communication, Education and Outreach Program
NASA Astrophysics Data System (ADS)
Benthien, M. L.
2003-12-01
The SCEC Communication, Education, and Outreach Program (CEO) offers student research experiences, web-based education tools, classroom curricula, museum displays, public information brochures, online newsletters, and technical workshops and publications. This year, much progress has been made on the development of the Electronic Encyclopedia of Earthquakes (E3), a collaborative project with CUREE and IRIS. The E3 development system is now fully operational, and 165 entries are in the pipeline. When complete, information and resources for over 500 Earth science and engineering topics will be included, with connections to curricular materials useful for teaching Earth science, engineering, physics and mathematics. To coordinate activities for the 10-year anniversary of the Northridge Earthquake in 2004 (and beyond), the "Earthquake Country Alliance" is being organized by SCEC CEO to present common messages, to share or promote existing resources, and to develop new activities and products jointly (such as a new version of Putting Down Roots in Earthquake Country). The group includes earthquake science and engineering researchers and practicing professionals, preparedness experts, response and recovery officials, news media representatives, and education specialists. A web portal, http://www.earthquakecountry.info, has been established with links to web pages and descriptions of other resources and services that the Alliance members provide. Another ongoing strength of SCEC is the Summer Intern program, which now has a year-round counterpart with students working on IT projects at USC. Since Fall 2002, over 32 students have participated in the program, including 7 students working with scientists throughout SCEC, 17 students involved in the USC "Earthquake Information Technology" intern program, and 7 students involved in CEO projects. These and other activities of the SCEC CEO program will be presented, along with lessons learned during program design and implementation.
NASA Astrophysics Data System (ADS)
Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.
2010-12-01
A three-dimensional velocity structure model of southwest Japan is provided to simulate long-period ground motions due to hypothetical subduction earthquakes. The model is constructed from numerous geophysical surveys conducted on land and offshore, together with observational studies of natural earthquakes. All available information is incorporated to constrain the crustal and sedimentary structure. Figure 1 shows an example of a cross section with P-wave velocities. The model has been revised through numerous simulations of small to moderate earthquakes so as to achieve good agreement with observed arrival times, amplitudes, and waveforms, including surface waves. Figure 2 shows a comparison between observed (dashed line) and simulated (solid line) waveforms. Low-velocity layers have been added above the seismological basement to reproduce observed records. The thickness of these layers has been adjusted through iterative analysis. The final result is found to agree well with the results of other geophysical surveys, e.g., gravity anomalies. We are planning to make long-period (about 2 to 10 s or longer) simulations of ground motion due to the hypothetical Nankai Earthquake with the 3-D velocity structure model. As the first step, we will simulate the observed ground motions of the most recent event, which occurred in 1946, to check the source model and the newly developed velocity structure model. This project is partly supported by the Integrated Research Project for Long-Period Ground Motion Hazard Maps of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by the National Research Institute for Earth Science and Disaster Prevention (NIED). [Figure 1: An example of a cross section with P-wave velocities. Figure 2: Observed (dashed line) and simulated (solid line) waveforms due to a small earthquake.]
NASA Astrophysics Data System (ADS)
Thomas, Marion Y.; Lapusta, Nadia; Noda, Hiroyuki; Avouac, Jean-Philippe
2014-03-01
Physics-based numerical simulations of earthquakes and slow slip, coupled with field observations and laboratory experiments, can, in principle, be used to determine fault properties and potential fault behaviors. Because of the computational cost of simulating inertial wave-mediated effects, their representation is often simplified. The quasi-dynamic (QD) approach approximately accounts for inertial effects through a radiation damping term. We compare QD and fully dynamic (FD) simulations by exploring the long-term behavior of rate-and-state fault models with and without additional weakening during seismic slip. The models incorporate a velocity-strengthening (VS) patch in a velocity-weakening (VW) zone, to consider rupture interaction with a slip-inhibiting heterogeneity. Without additional weakening, the QD and FD approaches generate qualitatively similar slip patterns with quantitative differences, such as slower slip velocities and rupture speeds during earthquakes and more propensity for rupture arrest at the VS patch in the QD cases. Simulations with additional coseismic weakening produce qualitatively different patterns of earthquakes, with near-periodic pulse-like events in the FD simulations and much larger crack-like events accompanied by smaller events in the QD simulations. This is because the FD simulations with additional weakening allow earthquake rupture to propagate at a much lower level of prestress than the QD simulations. The resulting much larger ruptures in the QD simulations are more likely to propagate through the VS patch, unlike for the cases with no additional weakening. Overall, the QD approach should be used with caution, as the QD simulation results could drastically differ from the true response of the physical model considered.
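For reference, the radiation damping term discussed above enters the quasi-dynamic stress balance in the standard form below (the abstract does not spell out the notation, so take it as the usual convention):

```latex
% Quasi-dynamic stress balance on the fault: quasi-static stress transfer
% minus radiation damping equals rate-and-state frictional strength
\tau_{\mathrm{qs}}(x,t) \;-\; \frac{\mu}{2 c_s}\, V(x,t)
\;=\; \sigma\, f\big(V(x,t),\,\theta(x,t)\big)
```

Here mu is the shear modulus, c_s the shear wave speed, V the slip rate, and sigma f(V, theta) the rate-and-state strength; the FD simulations replace the damping term with full inertial, wave-mediated stress transfer.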
NASA Astrophysics Data System (ADS)
Meng, L.; Zhou, L.; Liu, J.
2013-12-01
The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process, calculated source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion with associated fault rupture properties at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we described the intensity distribution of the Lushan earthquake field. The simulated intensity indicates a maximum value of IX, with the region of intensity VII and above covering almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. The estimation methods based on empirical relationships and the numerical modeling developed in this study have broad application in strong ground motion prediction and intensity estimation, both for earthquake rescue purposes and for understanding the earthquake source process. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
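The Brune circular source model invoked above links the spectral parameters through the standard omega-squared spectrum and corner frequency; the textbook forms are (units as noted, not expressions quoted from the paper):

```latex
% Brune omega-squared displacement spectrum and corner frequency
\Omega(f) \;=\; \frac{\Omega_0}{1 + (f/f_c)^2}, \qquad
f_c \;=\; 4.9\times 10^{6}\,\beta \left(\frac{\Delta\sigma}{M_0}\right)^{1/3}
```

with f_c in Hz when the shear wave speed beta is in km/s, the stress drop Delta-sigma in bars, and the seismic moment M_0 in dyne-cm.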
The Alaska earthquake, March 27, 1964: lessons and conclusions
Eckel, Edwin B.
1970-01-01
One of the greatest earthquakes of all time struck south-central Alaska on March 27, 1964. Strong motion lasted longer than for most recorded earthquakes, and more land surface was dislocated, vertically and horizontally, than by any known previous temblor. Never before were so many effects on earth processes and on the works of man available for study by scientists and engineers over so great an area. The seismic vibrations, which directly or indirectly caused most of the damage, were but surface manifestations of a great geologic event: the dislocation of a huge segment of the crust along a deeply buried fault whose nature and even exact location are still subjects for speculation. Not only was the land surface tilted by the great tectonic event beneath it, with resultant seismic sea waves that traversed the entire Pacific, but an enormous mass of land and sea floor moved several tens of feet horizontally toward the Gulf of Alaska. Downslope mass movements of rock, earth, and snow were initiated. Subaqueous slides along lake shores and seacoasts, near-horizontal movements of mobilized soil (“landspreading”), and giant translatory slides in sensitive clay did the most damage and provided the most new knowledge as to the origin, mechanics, and possible means of control or avoidance of such movements. The slopes of most of the deltas that slid in 1964, and that produced destructive local waves, are still as steep or steeper than they were before the earthquake and hence would be unstable or metastable in the event of another great earthquake. Rockslide avalanches provided new evidence that such masses may travel on cushions of compressed air, but a widely held theory that glaciers surge after an earthquake has not been substantiated. Innumerable ground fissures, many of them marked by copious emissions of water, caused much damage in towns and along transportation routes. Vibration also consolidated loose granular materials. In some coastal areas, local subsidence was superimposed on regional tectonic subsidence to heighten the flooding damage. Ground and surface waters were measurably affected by the earthquake, not only in Alaska but throughout the world. As expected, local geologic conditions largely controlled the extent of structural damage, whether caused directly by seismic vibrations or by secondary effects such as those just described. Intensity was greatest in areas underlain by thick saturated unconsolidated deposits, least on indurated bedrock or permanently frozen ground, and intermediate on coarse well-drained gravel, on morainal deposits, or on moderately indurated sedimentary rocks. Local and even regional geology also controlled the distribution and extent of the earthquake's effects on hydrologic systems. In the conterminous United States, for example, seiches in wells and bodies of surface water were controlled by geologic structures of regional dimension. Devastating as the earthquake was, it had many long-term beneficial effects. Many of these were socioeconomic or engineering in nature; others were of scientific value. Much new and corroborative basic geologic and hydrologic information was accumulated in the course of the earthquake studies, and many new or improved investigative techniques were developed.
Chief among these, perhaps, were the recognition that lakes can be used as giant tiltmeters, the refinement of methods for measuring land-level changes by observing displacements of barnacles and other sessile organisms, and the relating of hydrology to seismology by worldwide study of hydroseisms in surface-water bodies and in wells. The geologic and hydrologic lessons learned from studies of the Alaska earthquake also led directly to better definition of the research needed to further our understanding of earthquakes and of how to avoid or lessen the effects of future ones. Research is needed on the origins and mechanisms of earthquakes, on crustal structure, and on the generation of tsunamis and local waves. Better earthquake-hazard maps, based on improved knowledge of regional geology, fault behavior, and earthquake mechanisms, are needed for the entire country. Their preparation will require the close collaboration of engineers, seismologists, and geologists. Geologic maps of all inhabited places in earthquake-prone parts of the country are also needed by city planners and others, because the direct relationship between local geology and potential earthquake damage is now well understood. Improved and enlarged nets of earthquake-sensing instruments, sited in relation to known geology, are needed, as are many more geodetic and hydrographic measurements. Every large earthquake, wherever located, should be regarded as a full-scale laboratory experiment whose study can give scientific and engineering information unobtainable from any other source. Plans must be made before the event to ensure staffing, funding, and coordination of effort for the scientific and engineering study of future earthquakes. Advice of earth scientists and engineers should be used in the decision-making processes involved in reconstruction after any future disastrous earthquake, as was done after the Alaska earthquake. The volume closes with a selected bibliography and a comprehensive index to the entire series of U.S. Geological Survey Professional Papers 541-546. This is the last in a series of six reports that the U.S. Geological Survey published on the results of a comprehensive geologic study that began, as a reconnaissance survey, within 24 hours after the March 27, 1964, magnitude 9.2 Great Alaska Earthquake and extended, as detailed investigations, through several field seasons. The 1964 Great Alaska earthquake was the largest earthquake in the U.S. since 1700. Professional Paper 546, in one part, describes Lessons and Conclusions.
Effect of GNSS receiver carrier phase tracking loops on earthquake monitoring performance
NASA Astrophysics Data System (ADS)
Clare, Adam; Lin, Tao; Lachapelle, Gérard
2017-06-01
This research focuses on the performance of GNSS receiver carrier phase tracking loops for early earthquake monitoring systems. An earthquake was simulated using a hardware simulator, and position, velocity, and acceleration time histories were obtained to recreate the dynamics of the 2011 Tohoku earthquake. Using a software-defined receiver, GSNRx, tracking bandwidths of 5, 10, 15, 20, 30, 40 and 50 Hz, along with integration times of 1, 5 and 10 ms, were tested. Using the phase lock indicator, an adaptive tracking loop was designed and tested to maximize performance for this application.
Assessment of tsunami hazard for coastal areas of Shandong Province, China
NASA Astrophysics Data System (ADS)
Feng, Xingru; Yin, Baoshu
2017-04-01
Shandong Province is located on the east coast of China and has a coastline of about 3100 km. There are only a few tsunami events recorded in the history of Shandong Province, but tsunami hazard assessment is still necessary given the rapid economic development and increasing population of this area. The objective of this study was to evaluate the potential danger posed by tsunamis for Shandong Province. The numerical simulation method was adopted to assess the tsunami hazard for coastal areas of Shandong Province. The Cornell multi-grid coupled tsunami numerical model (COMCOT) was used, and its efficacy was verified by comparison with three historical tsunami events. The simulated maximum tsunami wave height agreed well with the observational data. Based on previous studies and statistical analyses, multiple earthquake scenarios in eight seismic zones were designed, the magnitudes of which were set as the potential maximum values. Then, the tsunamis they induced were simulated using the COMCOT model to investigate their impact on the coastal areas of Shandong Province. The numerical results showed that the maximum tsunami wave height, which was caused by the earthquake scenario located in the sea area of the Mariana Islands, could reach up to 1.39 m off the eastern coast of Weihai city. The tsunamis from the seismic zones of the Bohai Sea, Okinawa Trough, and Manila Trench could also reach heights of >1 m in some areas, meaning that earthquakes in these zones should not be ignored. The inundation hazard was distributed primarily in some northern coastal areas near Yantai and southeastern coastal areas of the Shandong Peninsula. When considering both the magnitude and arrival time of tsunamis, it is suggested that greater attention be paid to earthquakes that occur in the Bohai Sea. In conclusion, the tsunami hazard facing the coastal area of Shandong Province is not very serious; however, disasters could occur if such events coincided with spring tides or other extreme oceanic conditions. The results of this study will be useful for the design of coastal engineering projects and the establishment of a tsunami warning system for Shandong Province.
NASA Astrophysics Data System (ADS)
2002-09-01
Contents include the following: Deep Electromagnetic Images of Seismogenic Zone of the Chi-Chi (Taiwan) Earthquake; New Techniques for Stress-Forecasting Earthquakes; Aspects of Characteristics of Near-Fault Ground Motions of the 1999 Chi-Chi (Taiwan) Earthquake; Liquefaction Damage and Related Remediation in Wufeng after the Chi-Chi Earthquake; Fines Content Effects on Liquefaction Potential Evaluation for Sites Liquefied during Chi-Chi Earthquake 1999; Damage Investigation and Liquefaction Potential Analysis of Gravelly Soil; Dynamic Characteristics of Soils in Yuan-Lin Liquefaction Area; A Preliminary Study of Earthquake Building Damage and Life Loss Due to the Chi-Chi Earthquake; Statistical Analyses of Relation between Mortality and Building Type in the 1999 Chi-Chi Earthquake; Development of an After Earthquake Disaster Shelter Evaluation Model; Posttraumatic Stress Reactions in Children and Adolescents One Year after the 1999 Taiwan Chi-Chi Earthquake; Changes or Not is the Question: the Meaning of Posttraumatic Stress Reactions One Year after the Taiwan Chi-Chi Earthquake.
An interview with Karl Steinbrugge
Spall, H.
1985-01-01
He has served on numerous national and international committees on earthquake hazards, and he is now a consulting structural engineer, specializing in earthquake hazard evaluation. At the present moment he is chairman of an independent panel of the Federal Emergency Management Agency that is reviewing the National Earthquake Hazards Reduction Program. Henry Spall recently asked Steinbrugge some questions about his long career.
A Study on the Relationship between Disaster and Spectral Intensity
NASA Astrophysics Data System (ADS)
Yeh, Yeong-Tein; Kao, Ching-Yun
2010-05-01
Nowadays, the built environment is becoming so complicated that an index which can assess earthquake damage better than the originally defined intensity scale and PGA is needed. Housner [1] suggested that spectral intensity (SI) can serve as a risk index for an earthquake. Following Housner, earthquake engineers have continued to explore different period ranges for SI and its applications [2-5]. The study of Matsumura [4] shows that SI is a better measure of earthquake intensity over a wide range of frequencies, with a better correlation with damage than peak ground acceleration (appropriate for structures with shorter natural periods) or peak ground velocity (appropriate for structures with longer natural periods). Recently, Jean et al. [6] investigated the earthquake intensity attenuation law and site effects of strong ground motion using earthquake records from the Taiwan area. Their results show that SI is a better earthquake damage index than PGA. This study enhanced the SI concept proposed by Jean et al. [6]. The spectral intensity was separated into three period bands: short period (acceleration-controlled), medium period (velocity-controlled), and long period (displacement-controlled). The average spectral intensity in the short-, medium-, and long-period bands can serve as an earthquake damage index for low-rise buildings, buildings of medium height, and high-rise buildings, respectively. Since an average is meaningful only when the underlying data have small variance, the start and end points of the three bands are calculated statistically so that the data in each band have minimum variance. Finally, the relationship between disaster and spectral intensity for the 1999 Taiwan Chi-Chi earthquake was investigated in this study. [1] Housner, G. W. (1952). "Spectrum intensity of strong-motion earthquakes," in Proc. Symposium on Earthquake and Blast Effects on Structures, EERI, U.C.L.A. [2] Hidalgo, P. and R. W. Clough (1974). "Earthquake simulator study of a reinforced concrete frame," Report UCB/EERC-74/13, EERC, University of California, Berkeley. [3] Kappos, A. J. (1991). "Analytical prediction of the collapse earthquake for R.C. buildings: suggested methodology," Earthq. Eng. Struct. Dyn., 20, 2, pp. 167-176. [4] Matsumura, K. (1992). "On the intensity measure of strong motions related to structural failures," in Proceedings of 10WCEE, 1, pp. 375-380. [5] Martinez-Rueda, J. E. (1998). "Scaling procedure for natural accelerograms based on a system of spectrum intensity scales," Earthq. Spectra, 14, 1. [6] Jean, W. Y., Y. W. Chang, K. L. Wen, and C. H. Loh (2006). "Early estimation of seismic hazard for strong earthquakes in Taiwan," Natural Hazards, 37, pp. 39-53.
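For reference, Housner's original definition [1], which the three-band scheme above generalizes, integrates the pseudo-velocity response spectrum over a fixed period band (commonly evaluated at a damping ratio of 0.2):

```latex
% Housner spectral intensity: pseudo-velocity response spectrum PSV
% integrated over the 0.1-2.5 s period band at damping ratio xi
SI(\xi) \;=\; \int_{0.1}^{2.5} PSV(T,\xi)\, \mathrm{d}T
```

The statistically chosen band edges described above replace the fixed 0.1-2.5 s limits with three sub-bands, one per building-height class.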
NASA Astrophysics Data System (ADS)
Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.
2016-12-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g., based on the 2004 Sumatra-Andaman Earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
A Benchmarking setup for Coupled Earthquake Cycle - Dynamic Rupture - Tsunami Simulations
NASA Astrophysics Data System (ADS)
Behrens, Joern; Bader, Michael; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Vater, Stefan; Wollherr, Stephanie; van Zelst, Iris
2017-04-01
We developed a simulation framework for coupled physics-based earthquake rupture generation with tsunami propagation and inundation on a simplified subduction zone system for the project "Advanced Simulation of Coupled Earthquake and Tsunami Events" (ASCETE, funded by the Volkswagen Foundation). Here, we present a benchmarking setup that can be used for complex rupture models. The workflow begins with a 2D seismo-thermo-mechanical earthquake cycle model representing long-term deformation along a planar, shallowly dipping subduction zone interface. Slip instabilities that approximate earthquakes arise spontaneously along the subduction zone interface in this model. The absolute stress field and material properties for a single slip event are used as initial conditions for a dynamic earthquake rupture model. The rupture simulation is performed with SeisSol, which uses an ADER discontinuous Galerkin discretization scheme with an unstructured tetrahedral mesh. The seafloor displacements resulting from this rupture are transferred to the tsunami model with a simple coastal run-up profile. An adaptive mesh discretizing the shallow water equations with a Runge-Kutta discontinuous Galerkin (RKDG) scheme subsequently allows for an accurate and efficient representation of the tsunami evolution and inundation at the coast. This workflow allows for evaluation of how the rupture behavior affects the hydrodynamic wave propagation and coastal inundation. We present coupled results for differing earthquake scenarios. Examples include megathrust-only ruptures versus ruptures with a splay fault branching off the megathrust near the surface. Coupling to the tsunami simulation component is performed either dynamically (time-dependent) or statically, resulting in differing tsunami wave and inundation behavior. The simplified topographical setup allows for systematic parameter studies and reproducible physical studies.
The QuakeSim Project: Numerical Simulations for Active Tectonic Processes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry
2004-01-01
In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
Recent achievements in real-time computational seismology in Taiwan
NASA Astrophysics Data System (ADS)
Lee, S.; Liang, W.; Huang, B.
2012-12-01
Real-time computational seismology is now achievable, but it requires tight coupling between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the centroid moment tensor (CMT) inversion technique. A real-time online earthquake simulation service (ROS) is also open to researchers and to public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes of an earthquake's occurrence (RMT obtains point-source information in < 120 s; ROS completes a 3D simulation in < 3 minutes). All of these computational results are now posted on the Internet in real time; for more information, visit the real-time computational seismology earthquake report webpage (RCS).
Strong ground motion prediction using virtual earthquakes.
Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C
2014-01-24
Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion.
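The ambient-field method rests on retrieving an empirical Green's function by cross-correlating long noise records between station pairs. The sketch below demonstrates that core step on synthetic data with a known 5 s propagation delay; the windowing is minimal, and the spectral whitening and instrument corrections used in practice are omitted.

```python
# Core step of ambient-field Green's function retrieval: cross-correlate
# noise records at two stations and stack over windows (synthetic demo).
import numpy as np

fs = 20.0                                  # sample rate (Hz)
n_win, win_len = 100, int(60 * fs)         # 100 one-minute windows
delay = int(5.0 * fs)                      # 5 s propagation delay A -> B
rng = np.random.default_rng(2)

stack = np.zeros(2 * win_len - 1)
for _ in range(n_win):
    src = rng.standard_normal(win_len + delay)   # common wavefield
    sta_a = src[delay:] + 0.5 * rng.standard_normal(win_len)   # local noise
    sta_b = src[:win_len] + 0.5 * rng.standard_normal(win_len) # A delayed by 5 s
    stack += np.correlate(sta_a, sta_b, mode='full')

lags = (np.arange(stack.size) - (win_len - 1)) / fs
# Peak near +/- 5 s; the sign depends on correlate's lag convention
print('peak correlation at lag (s):', lags[np.argmax(stack)])
```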
Ottawa Sand for Mechanics of Granular Materials (MGM) Experiment
NASA Technical Reports Server (NTRS)
2000-01-01
What appear to be boulders fresh from a tumble down a mountain are really grains of Ottawa sand, a standard material used in civil engineering tests and also used in the Mechanics of Granular Materials (MGM) experiment. The craggy surface shows how sand grains have faces that can cause friction as they roll and slide against each other, or even stick, forming small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. MGM uses the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grained materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. These images are from an Electron Spectroscopy for Chemical Analysis (ESCA) study conducted by Dr. Binayak Panda of IITRI for Marshall Space Flight Center (MSFC). (Credit: NASA/MSFC)
Earthquakes in Mississippi and vicinity 1811-2010
Dart, Richard L.; Bograd, Michael B.E.
2011-01-01
This map summarizes two centuries of earthquake activity in Mississippi. Work on the Mississippi map was done in collaboration with the Mississippi Department of Environmental Quality, Office of Geology. The earthquake data plotted on the map are from several sources: the Mississippi Department of Environmental Quality, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Arkansas Geological Survey. In addition to earthquake locations, other materials include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Mississippi and parts of adjacent States. Mississippi has undergone a number of felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Mississippi and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event, on December 16, 1811; the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.
NASA Astrophysics Data System (ADS)
Filiatrault, Andre; Sullivan, Timothy
2014-08-01
With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major knowledge gaps that will need to be filled by future research. Furthermore, considering recent trends in earthquake engineering, the paper explores how performance-based seismic design might be conceived for nonstructural components, drawing on recent developments made in the field of seismic design and hinting at the specific considerations required for nonstructural components.
Tsunami hazard assessments with consideration of uncertain earthquake characteristics
NASA Astrophysics Data System (ADS)
Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.
2017-12-01
The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, it must adopt an uncertainty propagation method to determine tsunami uncertainties with a feasible computational cost. In this study we propose a new methodology, which improves the existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, the methodology considers the generation of consistent earthquake slip samples by means of a Karhunen-Loève (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a stochastic reduced-order model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case. We study tsunamis generated at the site of the 2014 Chilean earthquake. We generate earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates consistent earthquake samples with respect to the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations. We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for the 2014 Chilean earthquake. Results show that leading wave measurements fall within the tsunami sample space. At later times, however, there are mismatches between the measured data and the simulated results, suggesting that other sources of uncertainty are as relevant as the uncertainty of the studied earthquake characteristics.
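As a schematic of the K-L-plus-translation sampling described above, the sketch below eigendecomposes an exponential correlation model over subfault centers, draws a truncated Gaussian field, and maps it through a lognormal marginal so slip remains positive. The kernel, correlation length, and marginal parameters are illustrative choices, not those of the study.

```python
# Schematic K-L expansion + translation sampling of earthquake slip
# (exponential correlation and lognormal marginal are illustrative).
import numpy as np
from scipy.stats import norm, lognorm

n_strike, n_dip, dx = 20, 10, 10.0        # subfault grid, spacing (km)
xs, zs = np.meshgrid(np.arange(n_strike) * dx, np.arange(n_dip) * dx)
pts = np.column_stack([xs.ravel(), zs.ravel()])

# Exponential correlation with a 60 km correlation length (illustrative)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
C = np.exp(-d / 60.0)

# Truncated K-L expansion keeping modes that capture ~95% of the variance
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]
m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1

rng = np.random.default_rng(3)
z = rng.standard_normal(m)
g = phi[:, :m] @ (np.sqrt(lam[:m]) * z)    # zero-mean Gaussian field

# Translation process: map the Gaussian marginal to a lognormal slip marginal
# (median 4 m, log-std 0.6 -- placeholder values for an Mw ~8 event)
slip = lognorm(s=0.6, scale=4.0).ppf(norm.cdf(g))
print('modes kept:', m, ' mean slip (m):', slip.mean().round(2))
```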
1999-01-01
This report assesses the status, needs, and associated costs of seismic monitoring in the United States. It sets down the requirement for an effective, national seismic monitoring strategy and an advanced system linking national, regional, and urban monitoring networks. Modernized seismic monitoring can provide alerts of imminent strong earthquake shaking; rapid assessment of distribution and severity of earthquake shaking (for use in emergency response); warnings of a possible tsunami from an offshore earthquake; warnings of volcanic eruptions; information for correctly characterizing earthquake hazards and for improving building codes; and data on response of buildings and structures during earthquakes, for safe, cost-effective design, engineering, and construction practices in earthquake-prone regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofmann, R.B.
1995-09-01
Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During recent decades, a few space experiments have revealed anomalous bursts of charged particles, mainly electrons with energies larger than a few MeV. A possible source of these bursts is low-frequency seismo-electromagnetic emission, which can cause the precipitation of electrons from the lower boundary of their inner belt. Studies of these bursts have also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high-energy physics, we developed a dedicated application, SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 toolkit and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between the earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of testing different models of interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction and shaping the area in which it takes place, assessing the effects of perturbations in the magnetic field on the particle paths, performing back-tracking analysis, and modelling the interaction with electric fields. SEPS is at an advanced development stage and can already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing from the position of the earthquake to see whether there were paths compatible with the detected burst) and with a back-tracking analysis (tracing from the burst detection point and checking compatibility with the position of the associated earthquake).
NASA Astrophysics Data System (ADS)
D'Alessio, M. A.
2010-12-01
A discussion of P- and S-waves seems a ubiquitous part of studying earthquakes in the classroom. Textbooks from middle school through university level typically define the differences between the waves and illustrate the sense of motion. While many students successfully memorize the differences between wave types (often utilizing the first letter as a memory aid), textbooks rarely give tangible examples of how the two waves would "feel" to a person sitting on the ground. One reason for introducing the wave types is to explain how to calculate earthquake epicenters using seismograms and travel time charts -- very abstract representations of earthquakes. Even when the skill is mastered using paper-and-pencil activities or one of the excellent online interactive versions, locating an epicenter simply does not excite many of our students because it evokes little emotional impact, even in students located in earthquake-prone areas. Despite these limitations, huge numbers of students are mandated to complete the task. At the K-12 level, California requires that all students be able to locate earthquake epicenters in Grade 6; in New York, the skill is a required part of the Regents Examination. Recent innovations in earthquake early warning systems around the globe give us the opportunity to address the same content standard, but with substantially more emotional impact on students. I outline a lesson about earthquakes focused on earthquake early warning systems. The introductory activities include video clips of actual earthquakes and emphasize the differences between the way P- and S-waves feel when they arrive (P arrives first, but is weaker). I include an introduction to the principle behind earthquake early warning (including a summary of possible uses of a few seconds warning about strong shaking) and show examples from Japan. Students go outdoors to simulate P-waves, S-waves, and occupants of two different cities who are talking to one another on cell phones. The culminating activity is for students to "design" an early warning system that will protect their school from nearby earthquakes. The better they design the system, the safer they will be. Each team of students receives a map of faults in the area and possible sites for real-time seismometer installation. Given a fixed budget, they must select the best sites for detecting a likely earthquake. After selecting their locations, teams face off two-by-two in a tournament of simulated earthquakes. We created animations of a few simulated earthquakes for our institution and have plans to build a web-based version that will allow others to customize the scenario to their own location and facilitate the competition between teams. Earthquake early warning is both cutting-edge and has huge societal benefits. Instead of teaching our students how to locate epicenters after an earthquake has occurred, we can teach the same content standards while showing them that earthquake science can really save lives.
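The few seconds of warning discussed above come directly from the P/S speed difference; with classroom-style values for a homogeneous crust, the available warning time at epicentral distance d is:

```latex
% Warning time from the S-P arrival difference, minus processing delay t_p
t_{\mathrm{warn}} \;=\; d\left(\frac{1}{v_S}-\frac{1}{v_P}\right) - t_p
\;\approx\; 100\,\mathrm{km}\left(\frac{1}{3.5\,\mathrm{km/s}} - \frac{1}{6.0\,\mathrm{km/s}}\right) - 5\,\mathrm{s}
\;\approx\; 7\,\mathrm{s}
```

The v_P, v_S, distance, and processing delay here are illustrative classroom numbers, not values from the lesson described.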
Assessment of Structural Resistance of Building 4862 to Earthquake and Tornado Forces [SEC 1 and 2]
DOE Office of Scientific and Technical Information (OSTI.GOV)
METCALF, I.L.
1999-12-06
This report presents the results of work done for Hanford Engineering Laboratory under contract Y213-544-12662. LATA performed an assessment of the resistance of building 4862 to earthquake and tornado forces.
Wong, I.; Olig, S.; Dober, M.; Silva, W.; Wright, D.; Thomas, P.; Gregor, N.; Sanford, A.; Lin, K.-W.; Love, D.
2004-01-01
These maps are not intended to be a substitute for site-specific studies for engineering design nor to replace standard maps commonly referenced in building codes. Rather, we hope that these maps will be used as a guide by government agencies; the engineering, urban planning, emergency preparedness, and response communities; and the general public as part of an overall program to reduce earthquake risk and losses in New Mexico.
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis on the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Earthquake Simulator Finds Tremor Triggers
Johnson, Paul
2018-01-16
Using a novel device that simulates earthquakes in a laboratory setting, a Los Alamos researcher has found that seismic waves, the sounds radiated from earthquakes, can induce earthquake aftershocks, often long after a quake has subsided. The research provides insight into how earthquakes may be triggered and how they recur. Los Alamos researcher Paul Johnson and colleague Chris Marone at Penn State have discovered how wave energy can be stored in certain types of granular materials, like those found along certain fault lines across the globe, and how this stored energy can suddenly be released as an earthquake when hit by relatively small seismic waves far beyond the traditional "aftershock zone" of a main quake. Perhaps most surprising, researchers have found that the release of energy can occur minutes, hours, or even days after the sound waves pass; the cause of the delay remains a tantalizing mystery.
Sequential Analysis: Hypothesis Testing and Changepoint Detection
2014-07-11
it is necessary to estimate in situ the geographical coordinates and other parameters of earthquakes. The standard sensor equipment of a three...components. When an earthquake arises, the sensors begin to record several types of seismic waves (body and surface waves), among which the more important...machines and to increased safety norms. Many structures to be monitored, e.g., civil engineering structures subject to wind and earthquakes, aircraft
Validation of ground-motion simulations for historical events using SDoF systems
Galasso, C.; Zareian, F.; Iervolino, I.; Graves, R.W.
2012-01-01
The study presented in this paper is among the first in a series of studies toward the engineering validation of the hybrid broadband ground‐motion simulation methodology by Graves and Pitarka (2010). This paper provides a statistical comparison between seismic demands of single degree of freedom (SDoF) systems subjected to past events using simulations and actual recordings. A number of SDoF systems are selected considering the following: (1) 16 oscillation periods between 0.1 and 6 s; (2) elastic case and four nonlinearity levels, from mildly inelastic to severely inelastic systems; and (3) two hysteretic behaviors, in particular, nondegrading–nonevolutionary and degrading–evolutionary. Demand spectra are derived in terms of peak and cyclic response, as well as their statistics for four historical earthquakes: 1979 Mw 6.5 Imperial Valley, 1989 Mw 6.8 Loma Prieta, 1992 Mw 7.2 Landers, and 1994 Mw 6.7 Northridge.
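A minimal version of the elastic SDoF demand computation underlying such comparisons is sketched below: Newmark average-acceleration integration of one oscillator per period, swept over the paper's 0.1-6 s range. The ground-motion array is a random placeholder standing in for a recorded or simulated accelerogram, and the 5% damping is a common default rather than a value quoted from the paper.

```python
# Elastic SDoF peak-demand spectrum via Newmark average acceleration
# (gamma = 1/2, beta = 1/4); the ground-motion array is a placeholder.
import numpy as np

def newmark_sdof(ag, dt, T, zeta=0.05, beta=0.25, gamma=0.5):
    """Peak relative displacement of a unit-mass elastic SDoF under
    base acceleration ag, via Newmark's average-acceleration method."""
    w = 2.0 * np.pi / T
    m, c, k = 1.0, 2.0 * zeta * w, w * w
    a1 = m / (beta * dt**2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (1.0 / (2.0 * beta) - 1.0) * m + dt * (gamma / (2.0 * beta) - 1.0) * c
    khat = k + a1
    u = v = 0.0
    acc = -ag[0]                       # initial acceleration from equilibrium
    peak = 0.0
    for p in -m * ag[1:]:              # effective earthquake force history
        phat = p + a1 * u + a2 * v + a3 * acc
        u_new = phat / khat
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1.0 - gamma / beta) * v \
                + dt * (1.0 - gamma / (2.0 * beta)) * acc
        acc = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
              - (1.0 / (2.0 * beta) - 1.0) * acc
        u, v = u_new, v_new
        peak = max(peak, abs(u))
    return peak

# Placeholder record: 20 s of noise standing in for an accelerogram (m/s^2)
rng = np.random.default_rng(4)
dt, ag = 0.01, 0.5 * rng.standard_normal(2000)
periods = np.linspace(0.1, 6.0, 30)    # the paper's 0.1-6 s period range
Sd = [newmark_sdof(ag, dt, T) for T in periods]
print('peak spectral displacement (m):', max(Sd))
```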
Simulation and monitoring tools to protect disaster management facilities against earthquakes
NASA Astrophysics Data System (ADS)
Saito, Taiki
2017-10-01
The earthquakes that hit Kumamoto Prefecture in Japan on April 14 and 16, 2016 severely damaged over 180,000 houses, including over 8,000 that were completely destroyed and others that were partially damaged, according to the Cabinet Office's report as of November 14, 2016 [1]. Following these earthquakes, other parts of the world have been struck by earthquakes, including Italy and New Zealand as well as the central part of Tottori Prefecture in October, where the earthquake-induced collapse of buildings has led to severe damage and casualties. The earthquakes in Kumamoto Prefecture, in fact, damaged various disaster management facilities including Uto City Hall, which significantly hindered the city's evacuation and recovery operations. One of the most crucial issues in times of disaster is securing the functions of disaster management facilities such as city halls, hospitals and fire stations. To address this issue, seismic simulations were conducted on the East and West buildings of Toyohashi City Hall using the analysis tool developed by the author, STERA_3D, with ground motion waveform predictions for the Nankai Trough earthquake provided by the Ministry of Land, Infrastructure, Transport and Tourism. As a result, it was found that the buildings have sufficient earthquake resistance. It turned out, however, that the west building is at risk of wall cracks or ceiling panel collapse, while in the east building people would not be able to remain standing through shaking of intensity 7 on the Japanese seismic intensity scale, and cabinets not secured to the floors or walls would topple. Additionally, three IT strong-motion seismometers were installed in the city hall to continuously monitor vibrations. Every five minutes, the vibration data obtained by the seismometers are sent to computers at Toyohashi University of Technology via the Internet, where the analysis tools run simulations in the cloud. If an earthquake strikes, the results of the simulations can be used to assess whether it is safe to continue using the buildings. There are plans to implement further measures against earthquakes, for example by adding monitoring locations including fire stations and evacuation facilities, and by installing a dedicated line for disaster prevention. Accumulating real-time data in the cloud can also improve the accuracy of the simulations.
The 1999 Izmit, Turkey, earthquake: A 3D dynamic stress transfer model of intraearthquake triggering
Harris, R.A.; Dolan, J.F.; Hartleb, R.; Day, S.M.
2002-01-01
Before the August 1999 Izmit (Kocaeli), Turkey, earthquake, theoretical studies of earthquake ruptures and geological observations had provided estimates of how far an earthquake might jump to get to a neighboring fault. Both numerical simulations and geological observations suggested that 5 km might be the upper limit if there were no transfer faults. The Izmit earthquake appears to have followed these expectations. It did not jump across any step-over wider than 5 km and was instead stopped by a narrower step-over at its eastern end and possibly by a stress shadow caused by a historic large earthquake at its western end. Our 3D spontaneous rupture simulations of the 1999 Izmit earthquake provide two new insights: (1) the west- to east-striking fault segments of this part of the North Anatolian fault are oriented so as to be low-stress faults and (2) the easternmost segment involved in the August 1999 rupture may be dipping. An interesting feature of the Izmit earthquake is that a 5-km-long gap in surface rupture and an adjacent 25° restraining bend in the fault zone did not stop the earthquake. The latter observation is a warning that significant fault bends in strike-slip faults may not arrest future earthquakes.
Use of QuakeSim and UAVSAR for Earthquake Damage Mitigation and Response
NASA Technical Reports Server (NTRS)
Donnellan, A.; Parker, J. W.; Bawden, G.; Hensley, S.
2009-01-01
Spaceborne and airborne observations, together with modeling and simulation techniques, are being applied to earthquake risk assessment and response, for mitigation of this natural disaster. QuakeSim is a web-based portal for modeling interseismic strain accumulation using paleoseismic and crustal deformation data. The models are used for understanding strain accumulation and release from earthquakes as well as stress transfer to neighboring faults. Simulations of the fault system can be used for understanding the likelihood and patterns of earthquakes as well as the likelihood of large aftershocks from events. UAVSAR is an airborne L-band InSAR system for collecting crustal deformation data. QuakeSim, UAVSAR, and DESDynI (following launch) can be used for monitoring earthquakes, the associated rupture and damage, and postseismic motions for prediction of aftershock locations.
Numerical Modeling and Forecasting of Strong Sumatra Earthquakes
NASA Astrophysics Data System (ADS)
Xing, H. L.; Yin, C.
2007-12-01
ESyS-Crustal, a finite-element-based computational model and software package, has been developed and applied to simulate complex nonlinear interacting fault systems, with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results clearly indicate that the shallow part of the subduction zone in the Sumatra region between latitudes 6S and 2N has been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event that resulted in the infamous Boxing Day tsunami. Two strong earthquakes that occurred in this region (between 6S and 1S) in 1797 (M8.2) and 1833 (M9.0) are indicative of the high potential for very large destructive earthquakes here, with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the 2007 Sumatra earthquakes occurred, which fell exactly within the predicted zone (see the ACES2006 web site and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far show that there seem to be a few obvious events around the previously locked zone before it ruptures totally, but apparently no indication in the near future of a giant earthquake similar to the 2004 M9 event, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented in the meeting.
NASA Astrophysics Data System (ADS)
Yolsal-Çevikbilen, Seda; Taymaz, Tuncay
2012-04-01
We studied source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 that occurred during 2000-2008 along the Hellenic subduction zone, using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated, and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of tsunami waves, their propagation, and the effects of coastal topography. Current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues and several empirical self-similarity equations, valid at global or local scales, were used to assume plausible source parameters, which constitute the initial and boundary conditions of the simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone fall into three major categories: [1] earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike-slip components were observed at the eastern part of the Hellenic subduction zone, and we suggest that they are probably associated with the overriding Aegean plate. Earthquakes involved in the convergence between the Aegean and Eastern Mediterranean lithospheres, however, showed thrust faulting mechanisms with strike-slip components and shallow focal depths (h < 45 km). Deeper earthquakes occurred mainly in the subducting African plate and presented dominantly strike-slip faulting mechanisms. Slip distributions on fault planes showed both complex and simple rupture propagations, depending on the source mechanism and faulting geometry. We calculated low stress drop values (Δσ < 30 bars) for all earthquakes, implying typically interplate seismic activity in the region. Further, the numerical simulations verified that damaging historical tsunamis along the Hellenic subduction zone can threaten especially the coastal plains of the Crete and Rhodes islands, SW Turkey, Cyprus, the Levant, and the Nile Delta (Egypt). Thus, we tentatively recommend that special care be taken in evaluating tsunami risk in the Eastern Mediterranean region in future studies.
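For context on the reported stress drops, a minimal sketch of the standard Eshelby circular-crack estimate, delta_sigma = (7/16) M0 / R^3, follows; the magnitude and rupture radius below are illustrative assumptions, not values from the study.

    def circular_crack_stress_drop(m0, radius):
        # Eshelby circular-crack estimate: delta_sigma = 7/16 * M0 / R^3 (Pa).
        return 7.0 / 16.0 * m0 / radius ** 3

    # illustrative values, not from the study: an Mw 6.0 event, 7 km radius
    m0 = 10 ** (1.5 * 6.0 + 9.1)             # Hanks-Kanamori moment (N m)
    print(circular_crack_stress_drop(m0, 7e3) / 1e5, "bars")  # ~16 bars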
Earthquakes in Arkansas and vicinity 1699-2010
Dart, Richard L.; Ausbrooks, Scott M.
2011-01-01
This map summarizes approximately 300 years of earthquake activity in Arkansas. It is one in a series of similar State earthquake history maps. Work on the Arkansas map was done in collaboration with the Arkansas Geological Survey. The earthquake data plotted on the map are from several sources: the Arkansas Geological Survey, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Mississippi Department of Environmental Quality. In addition to earthquake locations, other materials presented include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Arkansas and parts of adjacent states. Arkansas has undergone a number of significant felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Arkansas and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event, on December 16, 1811; the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.
Mechanics of Granular Materials (MGM) Investigator
NASA Technical Reports Server (NTRS)
2000-01-01
Key personnel in the Mechanics of Granular Materials (MGM) experiment include Khalid Alshibli, project scientist at NASA's Marshall Space Flight Center (MSFC). Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: MSFC).
1998-01-25
Astronaut James Reilly uses a laptop computer to monitor the Mechanics of Granular Materials (MGM) experiment during STS-89. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: NASA/Marshall Space Flight Center (MSFC)
Installing Mechanics of Granular Materials (MGM) Experiment Test Cell
NASA Technical Reports Server (NTRS)
1996-01-01
Astronaut Carl Walz installs a Mechanics of Granular Materials (MGM) test cell on STS-79. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: NASA/Johnson Space Center
Installing Mechanics of Granular Materials (MGM) Experiment Test Cell
NASA Technical Reports Server (NTRS)
1996-01-01
Astronaut Jay Apt installs a Mechanics of Granular Materials (MGM) test cell on STS-79. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: NASA/Johnson Space Center).
Mechanics of Granular Materials labeled hardware
NASA Technical Reports Server (NTRS)
2000-01-01
Mechanics of Granular Materials (MGM) flight hardware takes two twin double locker assemblies in the Space Shuttle middeck or the Spacehab module. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: NASA/MSFC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prowell, I.; Elgamal, A.; Romanowitz, H.
Demand parameters for turbines, such as tower moment demand, are primarily driven by wind excitation and dynamics associated with operation. For that purpose, computational simulation platforms have been developed, such as FAST, maintained by the National Renewable Energy Laboratory (NREL). For seismically active regions, building codes also require the consideration of earthquake loading. Historically, it has been common to use simple building code approaches to estimate the structural demand from earthquake shaking as an independent loading scenario. Currently, International Electrotechnical Commission (IEC) design requirements include the consideration of earthquake shaking while the turbine is operating. Numerical and analytical tools used to consider earthquake loads for buildings and other static civil structures are not well suited for modeling simultaneous wind and earthquake excitation in conjunction with operational dynamics. Through the addition of seismic loading capabilities to FAST, it is possible to simulate earthquake shaking in the time domain, which allows consideration of non-linear effects such as structural nonlinearities, aerodynamic hysteresis, control system influence, and transients. This paper presents a FAST model of a modern 900-kW wind turbine, which is calibrated based on field vibration measurements. With this calibrated model, both coupled and uncoupled simulations are conducted looking at the structural demand for the turbine tower. Response is compared under the conditions of normal operation and potential emergency shutdown due to earthquake-induced vibrations. The results highlight the availability of a numerical tool for conducting such studies, and provide insights into the combined wind-earthquake loading mechanism.
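A toy illustration of the coupled-loading idea follows: a single-degree-of-freedom tower mass excited simultaneously by a base-acceleration history and a nacelle thrust history, with peak base overturning moment as the demand parameter. This is a minimal sketch, not FAST and not the calibrated 900-kW model; all parameter values are invented.

    import numpy as np

    def tower_moment_demand(ag, thrust, dt, m=1.2e5, k=2.0e6, c=6.0e4, h=80.0):
        # Toy 1-DOF tower: mass m (kg) at hub height h (m), base acceleration
        # ag (m/s^2) plus nacelle thrust (N); central-difference integration.
        # Returns the peak base overturning moment (N m).
        u_prev = u = 0.0
        peak = 0.0
        for agi, f in zip(ag, thrust):
            acc = (f - m * agi - c * (u - u_prev) / dt - k * u) / m
            u_next = 2.0 * u - u_prev + acc * dt ** 2
            peak = max(peak, abs(k * u_next) * h)   # base shear times height
            u_prev, u = u, u_next
        return peak

    rng = np.random.default_rng(1)
    ag = 0.3 * rng.standard_normal(3000)                 # synthetic shaking
    thrust = 3.0e5 + 5.0e4 * rng.standard_normal(3000)   # operating thrust
    print(tower_moment_demand(ag, thrust, 0.01))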
Using the USGS Seismic Risk Web Application to estimate aftershock damage
McGowan, Sean M.; Luco, Nicolas
2014-01-01
The U.S. Geological Survey (USGS) Engineering Risk Assessment Project has developed the Seismic Risk Web Application to combine earthquake hazard and structural fragility information in order to calculate the risk of earthquake damage to structures. Enabling users to incorporate their own hazard and fragility information into the calculations will make it possible to quantify (in near real-time) the risk of additional damage to structures caused by aftershocks following significant earthquakes. Results can quickly be shared with stakeholders to illustrate the impact of elevated ground motion hazard and earthquake-compromised structural integrity on the risk of damage during a short-term, post-earthquake time horizon.
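At its core, such a tool combines a ground-motion hazard with a structural fragility. A minimal sketch follows, assuming a discretized hazard and a lognormal fragility curve; the numbers are illustrative, not USGS data.

    import numpy as np
    from scipy.stats import norm

    def damage_probability(im_levels, hazard_pmf, median, beta):
        # Probability of damage = sum over IM levels of
        # P(IM = im) * P(damage | IM = im), with a lognormal fragility.
        fragility = norm.cdf(np.log(np.asarray(im_levels) / median) / beta)
        return float(np.sum(np.asarray(hazard_pmf) * fragility))

    im = [0.1, 0.2, 0.4, 0.8]            # PGA levels (g)
    pmf = [0.70, 0.20, 0.08, 0.02]       # aftershock hazard over the horizon
    print(damage_probability(im, pmf, median=0.5, beta=0.6))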
NASA Astrophysics Data System (ADS)
Chang, Pei-Fen; Wang, Dau-Chung
2011-08-01
In May 2008, the worst earthquake in more than three decades struck southwest China, killing more than 80,000 people. The complexity of this earthquake makes it an ideal case study to clarify the intertwined issues of ethics in engineering and to help cultivate critical thinking skills. This paper first explores the need to encourage engineering ethics within a cross-cultural context. Next, it presents a systematic model for designing an engineering ethics curriculum based on moral development theory and ethic dilemma analysis. Quantitative and qualitative data from students' oral and written work were collected and analysed to determine directions for improvement. The paper also presents results of an assessment of this interdisciplinary engineering ethics course. This investigation of a disaster is limited strictly to engineering ethics education; it is not intended to assign blame, but rather to spark debate about ethical issues.
Geotechnical effects of the 2015 magnitude 7.8 Gorkha, Nepal, earthquake and aftershocks
Moss, Robb E. S.; Thompson, Eric M.; Kieffer, D Scott; Tiwari, Binod; Hashash, Youssef M A; Acharya, Indra; Adhikari, Basanta; Asimaki, Domniki; Clahan, Kevin B.; Collins, Brian D.; Dahal, Sachindra; Jibson, Randall W.; Khadka, Diwakar; Macdonald, Amy; Madugo, Chris L M; Mason, H Benjamin; Pehlivan, Menzer; Rayamajhi, Deepak; Uprety, Sital
2015-01-01
This article summarizes the geotechnical effects of the 25 April 2015 M 7.8 Gorkha, Nepal, earthquake and aftershocks, as documented by a reconnaissance team that undertook a broad engineering and scientific assessment of the damage and collected perishable data for future analysis. Brief descriptions are provided of ground shaking, surface fault rupture, landsliding, soil failure, and infrastructure performance. The goal of this reconnaissance effort, led by Geotechnical Extreme Events Reconnaissance, is to learn from earthquakes and mitigate hazards in future earthquakes.
1987-09-01
Geological Survey, MS 977, Menlo Park, CA 94025, USA. TURKISH NATIONAL COMMITTEE FOR EARTHQUAKE ENGINEERING, THIRTEENTH REGIONAL SEMINAR ON EARTHQUAKE... In this case the conditional probability P(E|F1) will also depend in general on t. A simple example of a case of this type was developed by the present... These studies took into consideration all the available data concerning the dynamic characteristics of different types of buildings. A first attempt was...
A sophisticated simulation for the fracture behavior of concrete material using XFEM
NASA Astrophysics Data System (ADS)
Zhai, Changhai; Wang, Xiaomin; Kong, Jingchang; Li, Shuang; Xie, Lili
2017-10-01
The development of a powerful numerical model to simulate the fracture behavior of concrete material has long been one of the dominant research areas in earthquake engineering. A reliable model should be able to adequately represent the discontinuous characteristics of cracks and simulate various failure behaviors under complicated loading conditions. In this paper, a numerical formulation, which incorporates a sophisticated rigid-plastic interface constitutive model coupling cohesion softening, contact, friction and shear dilatation into the XFEM, is proposed to describe various crack behaviors of concrete material. An effective numerical integration scheme for accurately assembling the contribution to the weak form on both sides of the discontinuity is introduced. The effectiveness of the proposed method has been assessed by simulating several well-known experimental tests. It is concluded that the numerical method can successfully capture the crack paths and accurately predict the fracture behavior of concrete structures. The influence of mode-II parameters on the mixed-mode fracture behavior is further investigated to better determine these parameters.
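As a small illustration of the cohesive-softening ingredient, a generic linear mode-I traction-separation law follows; it is not the rigid-plastic interface model of the paper, and the strength and fracture-energy values are illustrative.

    def cohesive_traction(w, ft=3.0e6, gf=100.0):
        # Linear mode-I softening: traction decays from the tensile strength
        # ft (Pa) to zero as the opening w (m) grows; the area under the
        # curve equals the fracture energy gf (J/m^2).
        wc = 2.0 * gf / ft                   # critical opening
        if w <= 0.0:
            return ft                        # strength limit before opening
        return ft * max(0.0, 1.0 - w / wc)

    print(cohesive_traction(3e-5))           # partially softened traction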
33 CFR 222.4 - Reporting earthquake effects.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., DEPARTMENT OF DEFENSE ENGINEERING AND DESIGN § 222.4 Reporting earthquake effects. (a) Purpose. This... structural integrity and operational adequacy of major Civil Works structures following the occurrence of...) Applicability. This regulation is applicable to all field operating agencies having Civil Works responsibilities...
DOT National Transportation Integrated Search
2008-12-01
Shortly after the 1994 Northridge Earthquake, Caltrans geotechnical engineers charged with developing site-specific response spectra for high priority California bridges initiated a research project aimed at broadening their perspective from simp...
Estimating the residual axial load capacity of flexure-dominated reinforced concrete bridge columns.
DOT National Transportation Integrated Search
2014-08-01
Extreme events such as earthquakes have the potential to damage hundreds, if not thousands, of bridges on a transportation network. Following an earthquake, the damaged bridges are inspected by engineers sequentially to decide whether or not to c...
1989-10-17
A sand boil or sand volcano measuring 2 m (6.6 ft.) in length erupted in the median of Interstate Highway 80 west of the Bay Bridge toll plaza when ground shaking transformed a loose water-saturated deposit of subsurface sand into a sand-water slurry (liquefaction) in the October 17, 1989, Loma Prieta earthquake. The vented sand contains marine-shell fragments. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: J.C. Tinsley, U.S. Geological Survey)
Bi-directional volcano-earthquake interaction at Mauna Loa Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Walter, T. R.; Amelung, F.
2004-12-01
At Mauna Loa volcano, Hawaii, large-magnitude earthquakes occur mostly at the west flank (Kona area), at the southeast flank (Hilea area), and at the east flank (Kaoiki area). Eruptions at Mauna Loa occur mostly at the summit region and along fissures at the southwest rift zone (SWRZ) or the northeast rift zone (NERZ). Although historic earthquakes and eruptions at these zones appear to correlate in space and time, the mechanisms and implications of an eruption-earthquake interaction were not clear. Our analysis of the available factual data reveals a highly statistically significant correlation of eruption-earthquake pairs, with a random probability of 5-to-15 percent. We clarify this correlation with the help of elastic stress-field models, in which (i) we simulate earthquakes and calculate the resulting normal stress change at the volcanically active zones of Mauna Loa, and (ii) we simulate intrusions in Mauna Loa and calculate the Coulomb stress change at the active fault zones. Our models suggest that Hilea earthquakes encourage dike intrusion in the SWRZ, Kona earthquakes encourage dike intrusion at the summit and in the SWRZ, and Kaoiki earthquakes encourage dike intrusion in the NERZ. Moreover, a dike in the SWRZ encourages earthquakes in the Hilea and Kona areas. A dike in the NERZ may encourage and discourage earthquakes in the Hilea and Kaoiki areas. The modeled stress change patterns coincide remarkably with the patterns of several historic eruption-earthquake pairs, clarifying the mechanisms of bi-directional volcano-earthquake interaction at Mauna Loa. The results imply that at Mauna Loa volcanic activity influences the timing and location of earthquakes, and that earthquakes influence the timing, location and volume of eruptions. In combination with near real-time geodetic and seismic monitoring, these findings may improve volcano-tectonic risk assessment.
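The earthquake-to-intrusion half of such calculations typically reduces to a Coulomb failure stress change resolved on the receiver structure. A minimal sketch follows, assuming the common convention that an unclamping normal-stress change is positive; the abstract does not state its sign conventions or friction value, so these are assumptions.

    def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
        # dCFS = d_tau + mu' * d_sigma_n: shear stress change resolved in the
        # slip direction plus effective friction times the normal stress
        # change (positive = unclamping). Positive dCFS promotes failure.
        return d_tau + mu_eff * d_sigma_n

    print(coulomb_stress_change(0.05e6, 0.02e6))   # Pa; illustrative numbers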
Kavazanjian, Edward; Gutierrez, Angel
2017-10-01
A large-scale centrifuge test of a geomembrane-lined landfill subject to waste settlement and seismic loading was conducted to help validate a numerical model for performance-based design of geomembrane liner systems. The test was conducted using the 240 g-ton centrifuge at the University of California at Davis under the U.S. National Science Foundation Network for Earthquake Engineering Simulation Research (NEESR) program. A 0.05-mm thin-film membrane was used to model the liner. The waste was modeled using a peat-sand mixture. The side slope membrane was underlain by lubricated low-density polyethylene to maximize the difference between the interface shear strength on the top and bottom of the geomembrane and the induced tension in it. Instrumentation included thin-film strain gages to monitor geomembrane strains and accelerometers to monitor seismic excitation. The model was subjected to an input design motion intended to simulate strong ground motion from the 1995 Hyogo-ken Nanbu earthquake. Results indicate that downdrag waste settlement and seismic loading together, and possibly each phenomenon individually, can induce potentially damaging tensile strains in geomembrane liners. The data collected from this test are publicly available and can be used to validate numerical models for the performance of geomembrane liner systems. Published by Elsevier Ltd.
Evaluation of seismic performance of reinforced concrete (RC) buildings under near-field earthquakes
NASA Astrophysics Data System (ADS)
Moniri, Hassan
2017-03-01
Near-field ground motions affect the seismic response of structures far more severely than far-field ground motions, because near-source forward-directivity motions contain long-period pulses; the cumulative effects of far-fault records are therefore minor. The damage and collapse of engineering structures observed in the earthquakes of recent decades show the potential for damage in existing structures under near-field ground motions. One important subject studied by earthquake engineers as part of a performance-based approach is the determination of demand and collapse capacity under near-field earthquakes. Different methods for evaluating seismic structural performance have been suggested along with, and as part of, the development of performance-based earthquake engineering. This study investigated the effects of the distinctive characteristics of near-fault ground motions on the seismic response of reinforced concrete (RC) structures, using incremental dynamic analysis (IDA). Because different ground motions produce different intensity-versus-response curves, the analysis is repeated under multiple ground motions to obtain meaningful statistical averages. The OpenSees software was used to conduct the nonlinear structural evaluations. Numerical modelling showed that near-source effects cause most of the seismic energy from the rupture to arrive in a single coherent long-period pulse of motion, accompanied by permanent ground displacements. Finally, the vulnerability of RC buildings can be evaluated against the effects of pulse-like near-fault ground motions.
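A compact sketch of the IDA procedure follows: scale one record to increasing intensities and track a demand measure, here the peak ductility of a toy elastic-perfectly-plastic SDoF integrated by central differences. This stands in for, and is much simpler than, the OpenSees RC models of the study; all parameter values are invented.

    import numpy as np

    def epp_peak_ductility(ag, dt, period=0.6, fy=1.5, damping=0.05):
        # Elastic-perfectly-plastic SDoF (unit mass): fy is the yield
        # strength per unit mass (m/s^2); returns peak ductility demand.
        wn = 2.0 * np.pi / period
        k, c = wn ** 2, 2.0 * damping * wn
        uy = fy / k
        u_prev = u = up = 0.0                    # up = plastic offset
        peak = 0.0
        for agi in ag:
            fs = np.clip(k * (u - up), -fy, fy)  # restoring force
            up = u - fs / k                      # track yielding
            acc = -agi - c * (u - u_prev) / dt - fs
            u_next = 2.0 * u - u_prev + acc * dt ** 2
            u_prev, u = u, u_next
            peak = max(peak, abs(u))
        return peak / uy

    def ida_curve(ag, dt, scales=(0.25, 0.5, 1.0, 2.0, 4.0)):
        # One IDA branch: scale the record and track the demand measure.
        return [(s, epp_peak_ductility(np.asarray(ag) * s, dt)) for s in scales]

    rng = np.random.default_rng(2)
    print(ida_curve(rng.standard_normal(2000), 0.01))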
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot make physical statements about the seismicity they describe. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning about, and interpretation of, the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, and geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. One (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, while the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze the spatial and temporal clustering of events and the characteristics of system dynamics by means of the physical parameters of the two approaches.
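For reference, formulation (1) builds on the standard Dieterich rate- and state-dependent friction law; a minimal sketch with illustrative parameter values follows.

    import numpy as np

    def rate_state_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-4):
        # mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
        return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

    def aging_law_rate(v, theta, dc=1e-4):
        # state evolution (aging law): d(theta)/dt = 1 - V*theta/Dc
        return 1.0 - v * theta / dc

    v = 1e-5                                  # slip rate (m/s)
    print(rate_state_mu(v, 1e-4 / v))         # steady state: mu0 + (a-b)*ln(v/v0)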
Supercomputing meets seismology in earthquake exhibit
Blackwell, Matt; Rodger, Arthur; Kennedy, Tom
2018-02-14
When the California Academy of Sciences created the "Earthquake: Evidence of a Restless Planet" exhibit, they called on Lawrence Livermore to help combine seismic research with the latest data-driven visualization techniques. The outcome is a series of striking visualizations of earthquakes, tsunamis and tectonic plate evolution. Seismic-wave research is a core competency at Livermore. While most often associated with earthquakes, the research has many other applications of national interest, such as nuclear explosion monitoring, explosion forensics, energy exploration, and seismic acoustics. For the Academy effort, Livermore researchers simulated the San Andreas and Hayward fault events at high resolutions. Such calculations require significant computational resources. To simulate the 1906 earthquake, for instance, visualizing 125 seconds of ground motion required over 1 billion grid points, 10,000 time steps, and 7.5 hours of processor time on 2,048 cores of Livermore's Sierra machine.
Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale
NASA Astrophysics Data System (ADS)
Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.
2016-05-01
The main objective of this paper is to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes the background and construction principles of the scale and presents some case studies in different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In such cases, and in unpopulated areas, ESI offers a unique way to assess a reliable earthquake intensity. Finally, and importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.
Real-Time Earthquake Monitoring with Spatio-Temporal Fields
NASA Astrophysics Data System (ADS)
Whittier, J. C.; Nittel, S.; Subasinghe, I.
2017-10-01
With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism to deliver observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope need to write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open-source data stream engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream, and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlated the detected events with a USGS earthquake event feed. The query results are visualized in real time.
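A plain-Python stand-in for the field-based stream query follows (the actual implementation uses Apache Spark; the station IDs and thresholds here are invented): each stream keeps a sliding window and exposes its maximum displacement, and an event is flagged when enough streams exceed a threshold.

    from collections import deque

    class FieldStream:
        # Sliding-window view of one GPS displacement stream.
        def __init__(self, window=30):
            self.buf = deque(maxlen=window)
        def push(self, displacement):
            self.buf.append(displacement)
        def window_max(self):
            return max(self.buf) if self.buf else 0.0

    def detect_event(streams, threshold=0.05, min_stations=3):
        # Flag an event when enough streams exceed the threshold (m).
        return sum(s.window_max() > threshold for s in streams.values()) >= min_stations

    streams = {sid: FieldStream() for sid in ("GPS_A", "GPS_B", "GPS_C")}
    for s in streams.values():
        s.push(0.08)                          # an 8 cm displacement step
    print(detect_event(streams))              # True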
... coordinates research in support of the PEER mission in performance-based earthquake engineering: broad system dynamic response; assessment of the performance of structural and nonstructural systems; consequences in terms of casualties, capital costs, and post-earthquake functionality; and decision-making to...
Harp, E.L.; Noble, M.A.
1993-01-01
Investigations of earthquakes worldwide show that rock falls are the most abundant type of landslide triggered by earthquakes. An engineering classification originally used in tunnel design, known as the rock mass quality designation (Q), was modified for use in rating the susceptibility of rock slopes to seismically induced failure. Analysis of rock-fall concentrations and Q-values for the 1980 earthquake sequence near Mammoth Lakes, California, defines a well-constrained upper bound showing that the number of rock falls per site decreases rapidly with increasing Q. Because of the similarities in lithology and slope between the Eastern Sierra Nevada Range near Mammoth Lakes and the Wasatch Front near Salt Lake City, Utah, the probabilities derived from analysis of the Mammoth Lakes region were used to predict rock-fall probabilities for rock slopes near Salt Lake City in response to a magnitude 6.0 earthquake. These predicted probabilities were then used to generalize zones of rock-fall susceptibility. -from Authors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Observation of earthquakes is used routinely in monitoring tectonic activity, and also at local scales such as volcano-tectonic and geothermal activity monitoring. Precise hypocenter determination involves finding the location that minimizes the misfit between observed and calculated travel times. When solving this nonlinear inverse problem, simulated annealing can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data cases: regional tectonic, volcano-tectonic, and geothermal field. The travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to analyze reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The hypocenters of the earthquakes also correlate well with geological structure in the study areas. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
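A minimal sketch of the approach follows, using plain simulated annealing with straight-ray travel times in a homogeneous half-space rather than the paper's adaptive variant and ray-tracing shooting (and Python rather than Matlab); velocity, cooling schedule, and geometry are illustrative assumptions.

    import numpy as np

    def sa_hypocenter(stations, t_obs, v=5.0, n_iter=20000, t0=1.0, seed=0):
        # Search for (x, y, z, origin time) minimizing RMS travel-time
        # residuals; straight rays in a homogeneous medium of velocity v (km/s).
        rng = np.random.default_rng(seed)
        def rms(m):
            d = np.linalg.norm(stations - m[:3], axis=1)
            return np.sqrt(np.mean((t_obs - (m[3] + d / v)) ** 2))
        m = np.array([0.0, 0.0, 10.0, 0.0])       # starting model (km, km, km, s)
        e = rms(m)
        for i in range(n_iter):
            temp = t0 * (1.0 - i / n_iter) + 1e-4 # linear cooling schedule
            trial = m + rng.normal(scale=[1.0, 1.0, 1.0, 0.1]) * temp
            et = rms(trial)
            if et < e or rng.random() < np.exp((e - et) / temp):
                m, e = trial, et                  # Metropolis acceptance
        return m, e

    sta = np.array([[0, 20, 0], [15, -5, 0], [-10, 10, 0], [5, 5, 0]], float)
    true = np.array([2.0, 3.0, 8.0, 0.5])
    t_obs = true[3] + np.linalg.norm(sta - true[:3], axis=1) / 5.0
    print(sa_hypocenter(sta, t_obs))              # recovers roughly 'true'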
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, A.
In this project we developed GEN_SRF4, a computer program for generating kinematic rupture models, compatible with the SRF format, using the Irikura and Miyake (2011) asperity-based earthquake rupture model (IM2011, hereafter). IM2011, also known as Irikura's recipe, has been widely used to model and simulate ground motion from earthquakes in Japan. An essential part of the method is its kinematic rupture generation technique, which is based on a deterministic rupture asperity modeling approach. The simplicity of the source model and the efficiency of IM2011 at reproducing ground motion from earthquakes recorded in Japan make it attractive to developers and users of the Southern California Earthquake Center Broadband Platform (SCEC BB platform). Besides writing the code, the objective of our study was to test the transportability of IM2011 to broadband simulation methods used by the SCEC BB platform. Here we test it using the Graves and Pitarka (2010) method, implemented in the platform. We performed broadband (0.1-10 Hz) ground motion simulations for a M6.7 scenario earthquake using rupture models produced with both GEN_SRF4 and the rupture generator of Graves and Pitarka (2016) (GP2016, hereafter). In the simulations we used the same Green's functions and the same high-frequency approach for calculating the low-frequency and high-frequency parts of ground motion, respectively.
Crowdsourced earthquake early warning
Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.
2015-01-01
Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167
Celebi, M.
2004-01-01
The recorded responses of an Anchorage, Alaska, building during four significant earthquakes that occurred in 2002 are studied. Two earthquakes, including the 3 November 2002 M7.9 Denali fault earthquake, with epicenters approximately 275 km from the building, generated long trains of long-period (>1 s) surface waves. The other two smaller earthquakes occurred at subcrustal depths practically beneath Anchorage and produced higher frequency motions. These two pairs of earthquakes have different impacts on the response of the building. Higher modes are more pronounced in the building response during the smaller nearby events. The building responses indicate that the close coupling of translational and torsional modes causes a significant beating effect. It is also possible that there is some resonance occurring due to the site frequency being close to the structural frequency. Identification of dynamic characteristics and behavior of buildings can provide important lessons for future earthquake-resistant designs and retrofit of existing buildings. © 2004, Earthquake Engineering Research Institute.
Towards coupled earthquake dynamic rupture and tsunami simulations: The 2011 Tohoku earthquake.
NASA Astrophysics Data System (ADS)
Galvez, Percy; van Dinther, Ylona
2016-04-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. The seismic stations surrounding the Miyagi region (e.g., MYGH013) show two clear distinct waveforms separated by 40 seconds, suggesting two rupture fronts, possibly due to slip reactivation caused by frictional melting and thermal fluid pressurization effects. We created a 3D dynamic rupture model to reproduce this rupture reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on a slip-weakening friction law with two sudden sequential stress drops (Galvez et al., 2015). Our model starts like a M7-8 earthquake that breaks the trench only dimly; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The seismograms agree roughly with seismic records along the coast of Japan. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The simulated sea floor displacement reaches 8-10 meters of uplift close to the trench, which may be the cause of the devastating tsunami that followed the Tohoku earthquake. To investigate the impact of such a huge uplift, we ran tsunami simulations with the slip-reactivation model, plugging the sea floor displacements into GeoClaw (a finite-volume code for tsunami simulations; George and LeVeque, 2006). Our recent results compare well with the water heights at the tsunami DART buoys 21401, 21413, 21418 and 21419 and show the potential of using fully dynamic rupture results for earthquake-tsunami scenario studies.
NASA Astrophysics Data System (ADS)
Bydlon, S. A.; Dunham, E. M.
2016-12-01
Recent increases in seismic activity in historically quiescent areas such as Oklahoma, Texas, and Arkansas, including large, potentially induced events such as the 2011 Mw 5.6 Prague, OK, earthquake, have spurred the need for investigation into the expected ground motions associated with these seismic sources. The neoteric nature of this seismicity increase corresponds to a scarcity of ground motion recordings within 50 km of earthquakes Mw 3.0 and greater, with increasing scarcity at larger magnitudes. Gathering additional near-source ground motion data will help better constrain regional ground motion prediction equations (GMPEs) and will happen over time, but this leaves open the possibility of damaging earthquakes occurring before the potential ground shaking and seismic hazard in these areas are properly understood. To aid the effort of constraining near-source GMPEs associated with induced seismicity, we integrate synthetic ground motion data from simulated earthquakes into the process. Using the dynamic rupture and seismic wave propagation code waveqlab3d, we perform verification and validation exercises intended to establish confidence in simulated ground motions for use in constraining GMPEs. We verify the accuracy of our ground motion simulator by performing the PEER/SCEC layer-over-halfspace comparison problem LOH.1. Validation exercises to ensure that we are synthesizing realistic ground motion data include comparisons to recorded ground motions for specific earthquakes between Mw 3.0 and 4.0 in target areas of Oklahoma. Using a 3D velocity structure consisting of a 1D structure with additional small-scale heterogeneity, the properties of which are based on well-log data from Oklahoma, we perform ground motion simulations of small (Mw 3.0 - 4.0) earthquakes using point moment tensor sources. We use the resulting synthetic ground motion data to develop GMPEs for small earthquakes in Oklahoma. Preliminary results indicate that ground motions can be amplified if the source is located in the shallow sedimentary sequence rather than in the basement. Source depth could therefore be an important variable to define explicitly in GMPEs instead of being incorporated into traditional distance metrics. Future work will include the addition of dynamic sources to develop GMPEs for large earthquakes.
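Once synthetic motions are in hand, constraining a GMPE reduces to regression on a chosen functional form. A minimal sketch with an assumed form ln(PGA) = c0 + c1*M + c2*ln(R + h) and synthetic data standing in for simulation output:

    import numpy as np

    def fit_gmpe(mags, dists, ln_y, h=5.0):
        # Least-squares fit of ln(Y) = c0 + c1*M + c2*ln(R + h).
        A = np.column_stack([np.ones_like(mags), mags, np.log(dists + h)])
        coeffs, *_ = np.linalg.lstsq(A, ln_y, rcond=None)
        return coeffs

    # synthetic data standing in for simulated ground motions
    rng = np.random.default_rng(3)
    m = rng.uniform(3.0, 4.0, 200)
    r = rng.uniform(2.0, 50.0, 200)
    ln_y = -4.0 + 1.1 * m - 1.6 * np.log(r + 5.0) + 0.3 * rng.standard_normal(200)
    print(fit_gmpe(m, r, ln_y))               # recovers roughly (-4.0, 1.1, -1.6)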
Feeling and Understanding Plate Tectonics - How Can We Attract Museum Visitors' Attention?
NASA Astrophysics Data System (ADS)
Simon, Gilla; Apel, Michael
2017-04-01
Earthquakes, volcanic eruptions and other natural hazards commonly receive attention when news about disastrous events reaches us. The mission of an Earth Science or Natural History Museum, however, goes beyond explaining the causes of natural disasters: it should also present the history of science and cutting-edge research. Since engaging directly with a subject, especially one that seems abstract, is more effective, we realised two new projects where our visitors can feel and understand plate tectonics in a more exciting way. In 2015 we installed an earthquake simulator in our permanent exhibition to allow our visitors the physical experience of an earthquake. Because of structural load restrictions the simulator is housed in a container outside the building, where it can be visited as a booked program upon prior reservation or by joining public tours on Sundays and special occasions. The simulation of six real earthquakes in two spatial directions is accompanied by a movie presenting facts about the earthquake itself (e.g. location, magnitude, damage and victims), but also general information about plate tectonics. This standard program takes about 20 minutes. During an educational program, however, not only the simulator is visited but also the permanent exhibition, where the guide can focus on different aspects and then choose specific earthquakes and information blocks in the simulator. In addition, workshops with experiments are offered for school classes and other groups. This allows us to offer an individual program fitted to the visitor group. In 2016 we converted an old movie room into a state-of-the-art media room. In cooperation with Media Informatics students we developed a quiz with three difficulty levels and various themes such as earthquakes, volcanoes, history and plate tectonics in general. At the start of the quiz, a virtual earthquake destroys a building, which is reconstructed as the participants answer multiple-choice questions correctly; the rebuilding of the house is faster if a group of participants plays together. A first statistical evaluation of the media room shows that the quiz is well accepted by visitors: it is played on average six times per hour, and the abort rate is very low, at less than 10%.
Extreme Magnitude Earthquakes and their Economical Consequences
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.
2011-12-01
The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.
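The probability-of-exceedance step is, at its core, an empirical tail estimate over the Monte Carlo sample. A minimal sketch with an invented intensity distribution and a toy vulnerability function (neither is from the study):

    import numpy as np

    def exceedance_probability(samples, thresholds):
        # Fraction of Monte Carlo samples exceeding each threshold.
        s = np.asarray(samples)
        return [float(np.mean(s > t)) for t in thresholds]

    rng = np.random.default_rng(4)
    pga = rng.lognormal(mean=np.log(0.3), sigma=0.5, size=10000)  # intensity (g)
    print(exceedance_probability(pga, [0.2, 0.4, 0.8]))           # PEI-style
    loss = 1.0 - np.exp(-pga / 0.5)           # toy vulnerability -> damage ratio
    print(exceedance_probability(loss, [0.3, 0.6]))               # PEDEC-style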
Reconnaissance Report, Section 205 Chattooga River Trion, Georgia, Chattooga County
1991-07-01
magnitude, mb, of 7.5, at a distance of about 118 km, in the New Madrid source zone. The earthquake motions estimated to occur at Barkley from an... Volume 4: Liquefaction Susceptibility Evaluation and Post-Earthquake Strength Determination; Volume 5: Stability Evaluation of Geotechnical Structures. The... contributions from ORN. Messrs. Ronald E. Wahl of the Soil and Rock Mechanics Division, Richard S. Olsen, and Dr. M. E. Hynes of the Earthquake Engineering and...
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
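The first step of the approach, retrieving an inter-station impulse response from the ambient field, can be sketched as a symmetrized cross-correlation; this is a simplification of the actual processing, which also uses coda waves and corrects for source depth, mechanism, and duration.

    import numpy as np

    def noise_impulse_response(trace_a, trace_b, dt):
        # Cross-correlate two noise records and average the causal and
        # acausal sides, enforcing the symmetry expected of the response.
        n = len(trace_a)
        x = np.correlate(trace_a, trace_b, mode="full") / n
        mid = n - 1
        sym = 0.5 * (x[mid:] + x[mid::-1])
        return np.arange(len(sym)) * dt, sym

    rng = np.random.default_rng(5)
    src = rng.standard_normal(5000)
    a, b = src, np.roll(src, 25)              # station b sees the field later
    lags, g = noise_impulse_response(a, b, 0.05)
    print(lags[np.argmax(g)])                 # ~ inter-station travel time (s)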
NASA Astrophysics Data System (ADS)
Lapusta, N.
2011-12-01
Studying earthquake source processes is a multidisciplinary endeavor involving a number of subjects, from geophysics to engineering. As a solid mechanician interested in understanding earthquakes through physics-based computational modeling and comparison with observations, I need to educate and attract students from diverse areas. My CAREER award has provided the crucial support for the initiation of this effort. Applying for the award made me go through careful initial planning in consultation with my colleagues and administration from two divisions, an important component of the eventual success of my path to tenure. Then, the long-term support directed at my program as a whole - and not a specific year-long task or subject area - allowed for the flexibility required for the start-up of a multidisciplinary undertaking. My research is directed towards formulating realistic fault models that incorporate state-of-the-art experimental studies, field observations, and analytical models. The goal is to compare the model response - in terms of long-term fault behavior that includes both sequences of simulated earthquakes and aseismic phenomena - with observations, to identify appropriate constitutive laws and parameter ranges. CAREER funding has enabled my group to develop a sophisticated 3D modeling approach that we have used to understand patterns of seismic and aseismic fault slip on the Sunda megathrust in Sumatra, investigate the effect of variable hydraulic properties on fault behavior, with application to the Chi-Chi and Tohoku earthquakes, create a model of the Parkfield segment of the San Andreas fault that reproduces both long-term and short-term features of the M6 earthquake sequence there, and design experiments with laboratory earthquakes, among several other studies. A critical ingredient in this research program has been the fully integrated educational component that allowed me, on the one hand, to expose students from different backgrounds to the multidisciplinary knowledge required for research in my group and, on the other hand, to communicate the field insights to a broader community. A newly developed course on Dynamic Fracture and Frictional Faulting combines geophysical and engineering knowledge at the forefront of current research activities relevant to earthquake studies and involves students in these activities through team-based course projects. The course attracts students from more than ten disciplines and received a student rating of 4.8/5 this past academic year. In addition, the course on Continuum Mechanics was enriched with geophysical references and examples. My group has also been visiting physics classrooms in a neighboring public school that serves mostly underrepresented minorities. The visits were beneficial not only to the high school students but also to the graduate students and postdocs in my group, who got experience in presenting their field in a way accessible to the general public. Overall, the NSF CAREER award program through the Geosciences Directorate (NSF official Eva E. Zanzerkia) has significantly facilitated my development as a researcher and educator and should be either maintained or expanded.
Advanced Simulation of Coupled Earthquake and Tsunami Events
NASA Astrophysics Data System (ADS)
Behrens, Joern
2013-04-01
Tsunami earthquakes represent natural catastrophes that threaten the lives and well-being of societies in solitary and unexpected extreme events, as tragically demonstrated in Sumatra (2004), Samoa (2009), Chile (2010), and Japan (2011). Both phenomena are consequences of the complex system of interactions of tectonic stress, fracture mechanics, rock friction, rupture dynamics, fault geometry, ocean bathymetry, and coastline geometry. The ASCETE project forms an interdisciplinary research consortium that couples the most advanced simulation technologies for earthquake rupture dynamics and tsunami propagation to understand the fundamental conditions of tsunami generation. We report on the latest research results in physics-based dynamic rupture and tsunami wave propagation simulation, using unstructured and adaptive meshes with continuous and discontinuous Galerkin discretization approaches. Coupling both simulation tools - the physics-based dynamic rupture simulation and the hydrodynamic tsunami wave propagation - will enable highly realistic studies of the interaction of rupture dynamics and tsunami impact characteristics.
Sensitivity of Induced Seismic Sequences to Rate-and-State Frictional Processes
NASA Astrophysics Data System (ADS)
Kroll, Kayla A.; Richards-Dinger, Keith B.; Dieterich, James H.
2017-12-01
It is well established that subsurface injection of fluids increases pore fluid pressures and may lead to shear failure along a preexisting fault surface. Concern among oil and gas, geothermal, and carbon storage operators has risen dramatically over the past decade due to the increase in the number and magnitude of induced earthquakes. Efforts to mitigate the risk associated with injection-induced earthquakes include modeling of the interaction between fluids and earthquake faults. Here we investigate this relationship with simulations that couple a geomechanical reservoir model and RSQSim, a physics-based earthquake simulator. RSQSim employs rate- and state-dependent friction (RSF), which enables investigation of the time-dependent nature of earthquake sequences. We explore the effect of two RSF parameters and of normal stress on the spatiotemporal characteristics of injection-induced seismicity. We perform >200 simulations to systematically investigate the effect of these model components on the evolution of induced seismicity sequences and compare the spatiotemporal characteristics of our synthetic catalogs to observations of induced earthquakes. We find that the RSF parameters control the ability of seismicity to migrate away from the injection well, as well as the total number and maximum magnitude of induced events. Additionally, the RSF parameters control the occurrence or absence of premonitory events. Lastly, we find that earthquake stress drops can be modulated by the normal stress and/or the RSF parameters. Insight gained from this study can aid in the further development of models that address best-practice protocols for injection operations, site-specific models of injection-induced earthquakes, and probabilistic hazard and risk assessments.
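RSQSim itself is not reproduced here, but the rate-and-state ingredients the study varies can be illustrated with a textbook quasi-dynamic spring-slider using the aging law; every parameter value below is an illustrative assumption, not an RSQSim input:

```python
# Sketch: Dieterich-Ruina rate-and-state spring-slider with radiation damping.
import numpy as np
from scipy.integrate import solve_ivp

a, b, dc = 0.010, 0.015, 1e-4      # RSF parameters (a - b < 0: velocity weakening)
sigma_n = 50e6                     # effective normal stress (Pa)
k = 1e7                            # loading-system stiffness (Pa/m)
eta = 5e6                          # radiation damping ~ G/(2*c_s) (Pa*s/m)
v_load = 1e-9                      # tectonic loading rate (m/s)

def rhs(t, y):
    v, theta = y
    dtheta = 1.0 - v * theta / dc  # aging law
    # stress balance: k*(v_load - v) = sigma_n*(a*dv/v + b*dtheta/theta) + eta*dv
    dv = (k * (v_load - v) - sigma_n * b * dtheta / theta) / (sigma_n * a / v + eta)
    return [dv, dtheta]

# Start slightly off steady state so the frictional instability can develop.
sol = solve_ivp(rhs, (0.0, 1e9), [2.0 * v_load, dc / v_load], method="LSODA",
                rtol=1e-8, atol=[1e-20, 1e-6], max_step=1e6)
print(f"peak slip rate: {sol.y[0].max():.2e} m/s")  # episodic fast slip = events
```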
Laboratory generated M -6 earthquakes
McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.
2014-01-01
We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high-fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separate this study from traditional acoustic emission analyses and allow these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
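A quick worked check (illustrative radii, not the paper's data) that mm-scale patches with the reported 1–10 MPa stress drops are consistent with standard scaling, using Mw = (2/3) log10(M0) - 6.07 and the circular-crack relation for stress drop:

```python
# Sketch: moment from magnitude, then Eshelby circular-crack stress drop.
def moment_from_mw(mw):
    return 10 ** (1.5 * (mw + 6.07))      # seismic moment, N*m

def stress_drop(m0, radius):
    return (7.0 / 16.0) * m0 / radius**3  # Pa, circular crack (Eshelby, 1957)

m0 = moment_from_mw(-6.0)                 # an M -6 laboratory event
for r_mm in (3.0, 5.0, 8.0):              # assumed patch radii of a few mm
    ds = stress_drop(m0, r_mm * 1e-3)
    print(f"r = {r_mm} mm -> stress drop = {ds/1e6:.1f} MPa")
```

For an M -6 event, patch radii of roughly 5 to 8 mm give stress drops in the observed 1–10 MPa range, so "mm-scale patch" and "earthquake-like stress drop" are mutually consistent.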
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved a previously developed stochastic earthquake rupture modeling method by applying nonparametric co-regionalization, proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and therefore enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also allows us to check the shape of the true input correlation models in stochastic modeling after they have been deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby to predict ground motions and perform seismic hazard assessment more accurately.
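As a rough illustration (our own construction, not the authors' nonparametric method), cross-correlated source fields can be drawn from a linear model of co-regionalization, where permissibility is guaranteed by building the covariance as a Kronecker product of positive semi-definite matrices:

```python
# Sketch: two cross-correlated random source fields on a 1-D fault.
import numpy as np

rng = np.random.default_rng(2)
n, dx, corr_len = 200, 0.5, 5.0          # grid points, spacing (km), corr. length (km)

x = np.arange(n) * dx
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # spatial correlation
B = np.array([[1.0, 0.6],
              [0.6, 1.0]])               # cross-correlation between the two fields

# Covariance of the stacked fields is the Kronecker product B (x) C.
L = np.linalg.cholesky(np.kron(B, C) + 1e-10 * np.eye(2 * n))
fields = (L @ rng.standard_normal(2 * n)).reshape(2, n)
slip_pert, vr_pert = fields              # e.g., slip and rupture-velocity perturbations
print("sample cross-correlation:", np.corrcoef(slip_pert, vr_pert)[0, 1])
```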
Technical guidelines for the implementation of the Advanced National Seismic System
Committee, ANSS Technical Integration
2002-01-01
The Advanced National Seismic System (ANSS) is a major national initiative led by the US Geological Survey that serves the needs of the earthquake monitoring, engineering, and research communities as well as national, state, and local governments, emergency response organizations, and the general public. Legislation authorizing the ANSS was passed in 2000, and low levels of funding for planning and initial purchases of new seismic instrumentation have been appropriated beginning in FY2000. When fully operational, the ANSS will be an advanced monitoring system (modern digital seismographs and accelerographs, communications networks, data collection and processing centers, and well-trained personnel) distributed across the United States that operates with high performance standards, gathers critical technical data, and effectively provides timely and reliable earthquake products, information, and services to meet the Nation’s needs. The ANSS will automatically broadcast timely and authoritative products describing the occurrence of earthquakes, earthquake source properties, the distribution of ground shaking, and, where feasible, broadcast early warnings and alerts for the onset of strong ground shaking. Most importantly, the ANSS will provide earthquake data, derived products, and information to the public, emergency responders, officials, engineers, educators, researchers, and other ANSS partners rapidly and in forms that are useful for their needs.
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard from long-period ground motions associated with Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are amplified particularly on basins. Because the major cities along the Nankai Trough have developed on alluvial plains, it is therefore important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing the strong ground motions; the parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering a range of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined from 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects; these parameters are important because our preliminary simulations are strongly affected by rupture directivity. We apply the system called GMS (Ground Motion Simulator), which simulates seismic wave propagation with a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999). The grid spacing for the shallow region is 200 m horizontally and 100 m vertically; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not sufficiently represent short-period components, the reliable period range of this simulation should be interpreted with caution; we therefore consider periods longer than five seconds in further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between 5 and 20 s. The preliminary simulation shows a large variation of response spectra at a given site, implying that the ground motion is very sensitive to the scenario; studying this variation is necessary to understand the seismic hazard. Our further study will obtain hazard curves for Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
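The stated two-second validity limit follows from a standard finite-difference rule of thumb: the shortest trustworthy wavelength spans several grid points, so T_min ~ n_ppw * dx / v_s_min. A back-of-the-envelope check, assuming a minimum S-wave velocity of 500 m/s (a value not given in the abstract):

```python
# Sketch: minimum resolved period for an FD grid (standard rule of thumb).
dx = 200.0       # horizontal grid spacing in the shallow region (m)
vs_min = 500.0   # assumed minimum S-wave velocity in the model (m/s)
n_ppw = 5        # grid points per minimum wavelength (scheme dependent)

t_min = n_ppw * dx / vs_min
print(f"minimum resolved period ~ {t_min:.1f} s")   # ~2 s, matching the abstract
```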
Premonitory slip and tidal triggering of earthquakes
Lockner, D.A.; Beeler, N.M.
1999-01-01
We have conducted a series of laboratory simulations of earthquakes using granite cylinders containing precut bare fault surfaces at 50 MPa confining pressure. Axial shortening rates between 10^-4 and 10^-6 mm/s were imposed to simulate tectonic loading. Average loading rate was then modulated by the addition of a small-amplitude sine wave to simulate periodic loading due to Earth tides or other sources. The period of the modulating signal ranged from 10 to 10,000 s. For each combination of amplitude and period of the modulating signal, multiple stick-slip events were recorded to determine the degree of correlation between the timing of simulated earthquakes and the imposed periodic loading function. Over the range of parameters studied, the degree of correlation of earthquakes was most sensitive to the amplitude of the periodic loading, with weaker dependence on the period of oscillations and the average loading rate. Accelerating premonitory slip was observed in these experiments and is a controlling factor in determining the conditions under which correlated events occur. In fact, some form of delayed failure is necessary to produce the observed correlations between simulated earthquake timing and characteristics of the periodic loading function. The transition from strongly correlated to weakly correlated model earthquake populations occurred when the amplitude of the periodic loading was approximately 0.05 to 0.1 MPa shear stress (0.03 to 0.06 MPa Coulomb failure function). Lower-amplitude oscillations produced progressively lower correlation levels. Correlations between static stress increases and earthquake aftershocks are found to degrade at similar stress levels. Typical stress variations due to Earth tides are only 0.001 to 0.004 MPa, so that the lack of correlation between Earth tides and earthquakes is also consistent with our findings. A simple extrapolation of our results suggests that approximately 1% of midcrustal earthquakes should be correlated with Earth tides. Triggered seismicity has been reported resulting from the passage of surface waves excited by the Landers earthquake. These transient waves had measured amplitudes in excess of 0.1 MPa at frequencies of 0.05 to 0.2 Hz in regions of notable seismicity increase. Similar stress oscillations in our laboratory experiments produced strongly correlated stick-slip events. We suggest that seemingly inconsistent natural observations of triggered seismicity and absence of tidal triggering indicate that failure is amplitude and frequency dependent. This is the expected result if, as in our laboratory experiments, the rheology of the Earth's crust permits delayed failure.
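The abstract does not name a statistic, but correlation between event times and a periodic load of period T is conventionally quantified with the Schuster test; a minimal sketch on synthetic catalogs:

```python
# Sketch: Schuster test for phase correlation of event times with a period T.
import numpy as np

def schuster_p(event_times, period):
    """p-value that the observed phase clustering arises by chance."""
    phases = 2 * np.pi * (np.asarray(event_times) % period) / period
    n = len(phases)
    d2 = np.cos(phases).sum() ** 2 + np.sin(phases).sum() ** 2
    return np.exp(-d2 / n)   # small p => significant correlation

rng = np.random.default_rng(3)
T = 1000.0                                            # loading period (s)
random_times = rng.uniform(0, 1e6, 200)               # uncorrelated catalog
clustered = (np.arange(200) + rng.normal(0, 0.05, 200)) * T  # phase-locked catalog
print("random    p =", schuster_p(random_times, T))
print("clustered p =", schuster_p(clustered, T))
```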
ERIC Educational Resources Information Center
Baytiyeh, Hoda; Naja, Mohamad K.
2014-01-01
Due to the high market demands for professional engineers in the Arab oil-producing countries, the appetite of Middle Eastern students for high-paying jobs and challenging careers in engineering has sharply increased. As a result, engineering programmes are providing opportunities for more students to enroll on engineering courses through lenient…
Earthquake Shaking - Finding the "Hot Spots"
Field, Edward; Jones, Lucile; Jordan, Tom; Benthien, Mark; Wald, Lisa
2001-01-01
A new Southern California Earthquake Center study has quantified how local geologic conditions affect the shaking experienced in an earthquake. The important geologic factors at a site are softness of the rock or soil near the surface and thickness of the sediments above hard bedrock. Even when these 'site effects' are taken into account, however, each earthquake exhibits unique 'hotspots' of anomalously strong shaking. Better predictions of strong ground shaking will therefore require additional geologic data and more comprehensive computer simulations of individual earthquakes.
The 1906 earthquake and a century of progress in understanding earthquakes and their hazards
Zoback, M.L.
2006-01-01
The 18 April 1906 San Francisco earthquake killed nearly 3000 people and left 225,000 residents homeless. Three days after the earthquake, an eight-person Earthquake Investigation Commission, drawing on some 25 geologists, seismologists, geodesists, biologists, and engineers, as well as some 300 others, started work under the supervision of Andrew Lawson to collect and document physical phenomena related to the quake. On 31 May 1906, the commission published a preliminary 17-page report titled "The Report of the State Earthquake Investigation Commission". The report included the bulk of the geological and morphological descriptions of the faulting and detailed reports on shaking intensity, as well as an impressive atlas of 40 oversized maps and folios. Nearly 100 years after its publication, the Commission Report remains a model for post-earthquake investigations. Because the diverse data sets were so complete and carefully documented, researchers continue to apply modern analysis techniques to learn from the 1906 earthquake. While the earthquake marked a seminal event in the history of California, it also served as the impetus for the birth of modern earthquake science in the United States.
Seismicity of the Earth 1900-2007
Tarr, Arthur C.; Villaseñor, Antonio; Furlong, Kevin P.; Rhea, Susan; Benz, Harley M.
2010-01-01
This map illustrates more than one century of global seismicity in the context of global plate tectonics and the Earth's physiography. Primarily designed for use by earth scientists and engineers interested in earthquake hazards of the 20th and early 21st centuries, this map provides a comprehensive overview of strong earthquakes since 1900. The map clearly identifies the location of the 'great' earthquakes (M8.0 and larger) and the rupture area, if known, of the M8.3 or larger earthquakes. The earthquake symbols are scaled proportional to the moment magnitude and therefore to the area of faulting, thus providing a better understanding of the relative sizes and distribution of earthquakes in the magnitude range 5.5 to 9.5. Plotting the known rupture area of the largest earthquakes also provides a better appreciation of the extent of some of the most famous and damaging earthquakes in modern history. All earthquakes shown on the map were carefully relocated using a standard earth reference model and standardized location procedures, thereby eliminating gross errors and biases in locations of historically important earthquakes that are often found in numerous seismicity catalogs.
Simulation Based Earthquake Forecasting with RSQSim
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.
2016-12-01
We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach
Jaiswal, Kishor; Wald, David J.; Hearne, Mike
2009-01-01
We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would appear quasiperiodic, while at other times, the events can appear more Poissonian. Hence a given paleoseismic or instrumental record may not reflect the long-term seismicity of a fault, which has important implications for hazard assessment.
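A minimal sketch of our reading of the LTFM idea, with invented parameter values: stored strain grows steadily, each event releases only a fraction of it, and the event probability scales with the stored strain, so interval statistics can drift between clustered and quasiperiodic behavior:

```python
# Sketch: Long-Term Fault Memory style recurrence simulation.
import numpy as np

rng = np.random.default_rng(4)
rate, release_frac, gain = 1.0, 0.7, 5e-4   # strain/yr, fractional drop, hazard scale
strain, times = 0.0, []

for year in range(100_000):
    strain += rate
    if rng.random() < min(1.0, gain * strain):  # hazard grows with stored strain
        times.append(year)
        strain *= 1.0 - release_frac            # partial, not total, strain release

intervals = np.diff(times)
print(f"{len(times)} events, mean interval {intervals.mean():.0f} yr, "
      f"CV = {intervals.std() / intervals.mean():.2f}")  # CV > 0 hints at clustering
```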
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, A. J.
This is the final report for United States Geological Survey (USGS) National Earthquake Hazard Reduction Program (NEHRP) Project 08HQGR0022, entitled “Quantifying Uncertainties in Ground Motion Simulations for Scenario Earthquakes on the Hayward-Rodgers Creek Fault System Using the USGS 3D Seismic Velocity Model and Realistic Pseudodynamic Ruptures”. Work for this project involved three-dimensional (3D) simulations of ground motions for Hayward Fault (HF) earthquakes. We modeled moderate events on the HF and used them to evaluate the USGS 3D model of the San Francisco Bay Area. We also contributed to a ground motion modeling effort for a large suite of scenario earthquakes on the HF. Results were presented at conferences (see appendix) and in one peer-reviewed publication (Aagaard et al., 2010).
The structure and elasticity of phase B silicates under high pressure by first principles simulation
NASA Astrophysics Data System (ADS)
Liu, Lei; Yi, Li; Liu, Hong; Li, Ying; Zhuang, Chun-Qiang; Yang, Long-Xing; Liu, Gui-Ping
2018-04-01
Project supported by the Science Fund from the Key Laboratory of Earthquake Prediction, Institute of Earthquake Science, China Earthquake Administration (Grant No. 2016IES010104) and the National Natural Science Foundation of China (Grant Nos. 41174071, 41273073, 41373060, and 41573121).
ERIC Educational Resources Information Center
Haddad, David Elias
2014-01-01
Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that…
Crowdsourced earthquake early warning
Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.
2015-01-01
Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.
NASA Astrophysics Data System (ADS)
Viens, L.; Miyake, H.; Koketsu, K.
2016-12-01
Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with the existing physics-based simulations to improve seismic hazard assessment.
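A minimal sketch (simplified; not the authors' implementation) of the finite-source step described above: one interpolated Green's function per subfault, delayed by the rupture time from the hypocenter and scaled by subfault moment:

```python
# Sketch: finite-source summation of per-subfault Green's functions.
import numpy as np

def finite_source(greens, positions, hypocenter, moments, v_rupture, dt):
    """Sum subfault Green's functions with rupture-time delays.

    greens: (n_sub, n_t) array, one Green's function per subfault
    positions: (n_sub, 2) along-strike/down-dip coordinates (km)
    """
    n_sub, n_t = greens.shape
    out = np.zeros(n_t)
    for g, pos, m0 in zip(greens, positions, moments):
        t_rup = np.linalg.norm(pos - hypocenter) / v_rupture   # rupture delay (s)
        shift = int(round(t_rup / dt))
        out[shift:] += m0 * g[: n_t - shift]
    return out

# Toy inputs: 10 subfaults along strike, impulsive Green's functions.
dt, n_t = 0.1, 2000
greens = np.zeros((10, n_t)); greens[:, 100] = 1.0
positions = np.column_stack([np.arange(10) * 5.0, np.zeros(10)])  # 5 km spacing
trace = finite_source(greens, positions, np.array([0.0, 0.0]),
                      moments=np.full(10, 0.1), v_rupture=2.5, dt=dt)
print("first arrivals (s):", np.flatnonzero(trace)[:5] * dt)
```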
NASA Astrophysics Data System (ADS)
Aochi, Hideo; Burnol, André
2018-05-01
The source mechanism of the ML 4.0 earthquake of 25 April 2016 at Lacq (Aquitaine Basin, southwest France) is analyzed from the available public data and discussed with respect to the geometry of the nearby Lacq gas field. It is one of the largest earthquakes in the area during the past few decades of gas extraction, and the largest since the end of gas exploitation in 2013. The routinely obtained location places its hypocenter inside the gas reservoir. We first analyze its focal mechanism using regional broad-band seismograms recorded within a radius of about 50 km epicentral distance and obtain EW-striking normal faulting above the reservoir. While the solution is stable using regional data only, we observe a large discrepancy, up to 1 Hz, between the data recorded at nearby station URDF and the forward modeling. We then look for the best epicenter position by performing wave propagation simulations and constraining the potential source area with the peak ground velocity (PGV). The resulting epicentral position is a few to several kilometres north or south of station URDF, such that the simulated particle motions are consistent with the observation. The initial motion of the seismograms shows that an epicenter position north of URDF is preferable, indicating the northeast of the Lacq reservoir. This study is an application of full waveform simulations and the characterization of near-field ground motion in terms of an engineering parameter such as PGV. The final solution gives a moment magnitude of Mw 3.9 and a best focal depth of 4 km, which corresponds to the crust above the reservoir rather than its interior. This position is consistent with the tendency of the Coulomb stress change due to compaction at 5 km depth in the crust. This earthquake can therefore be interpreted as a relaxation of the shallow crust caused by deeper gas reservoir compaction, so the occurrence of similar events cannot be excluded in the near future. It would be necessary to continue monitoring such local induced seismicity in order to better understand the reservoir/overburden behavior and to better assess the local seismic hazard even after the end of gas exploitation.
Suitability of rapid energy magnitude determinations for emergency response purposes
NASA Astrophysics Data System (ADS)
Di Giacomo, Domenico; Parolai, Stefano; Bormann, Peter; Grosser, Helmut; Saul, Joachim; Wang, Rongjiang; Zschau, Jochen
2010-01-01
It is common practice in the seismological community to use, especially for large earthquakes, the moment magnitude Mw as the single magnitude parameter for evaluating an earthquake's damage potential. However, as a static measure of earthquake size, Mw does not provide direct information about the released seismic wave energy and its high-frequency content, which is the more relevant information both for engineering purposes and for a rapid assessment of an earthquake's shaking potential. Therefore, we recommend providing disaster management organizations, besides Mw, with sufficiently accurate energy magnitude determinations as soon as possible after large earthquakes. We developed and extensively tested a rapid method for calculating the energy magnitude Me within about 10-15 min of an earthquake's occurrence. The method is based on pre-calculated spectral amplitude decay functions obtained from numerical simulations of Green's functions. After empirical validation, the procedure was applied offline to a large data set of 767 shallow earthquakes grouped according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). The suitability of the proposed approach is discussed by comparing our rapid Me estimates with Mw published by the GCMT as well as with Mw and Me reported by the USGS. Mw is on average slightly larger than our Me for all types of mechanisms. No clear dependence on source mechanism is observed for our Me estimates. In contrast, Me from the USGS is generally larger than Mw for strike-slip earthquakes and generally smaller for the other source types. For ~67 per cent of the event data set, our Me differs by no more than +/-0.3 magnitude units (m.u.) from the respective Me values published by the USGS. However, larger discrepancies (up to 0.8 m.u.) may occur for strike-slip events; a reason for this may be the overcorrection of the energy flux applied by the USGS for this type of earthquake. We follow the original definition of magnitude scales, which does not apply a priori mechanism corrections to measured amplitudes, also because reliable fault-plane solutions are hardly available within 10-15 min of the earthquake origin time. Notably, our uncorrected Me data show a better linear correlation and less scatter with respect to Mw than the Me of the USGS. Finally, by analysing the recordings of representative recent pairs of strong and great earthquakes, we emphasize the importance of combining Mw and Me in the rapid characterization of the seismic source. They are related to different aspects of the source and may occasionally differ by even more than 1 m.u. This highlights the usefulness and importance of providing these two magnitude estimates together for a better assessment of an earthquake's shaking and/or tsunamigenic potential.
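For reference, the IASPEI standard relation between radiated seismic energy Es (in joules) and the energy magnitude, which the abstract uses throughout without stating, is Me = (2/3)(log10 Es - 4.4); a one-line worked example:

```python
# Sketch: energy magnitude from radiated seismic energy (IASPEI standard).
import math

def energy_magnitude(es_joules):
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# Example: Es = 1e15 J gives Me = (2/3) * (15 - 4.4) ~ 7.07.
print(f"Me = {energy_magnitude(1e15):.2f}")
```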
Pore-fluid migration and the timing of the 2005 M8.7 Nias earthquake
Hughes, K.L.H.; Masterlark, Timothy; Mooney, W.D.
2011-01-01
Two great earthquakes have occurred recently along the Sunda Trench, the 2004 M9.2 Sumatra-Andaman earthquake and the 2005 M8.7 Nias earthquake. These earthquakes ruptured over 1600 km of adjacent crust within 3 mo of each other. We quantitatively present poroelastic deformation analyses suggesting that postseismic fluid flow and recovery induced by the Sumatra-Andaman earthquake advanced the timing of the Nias earthquake. Simple back-slip simulations indicate that the megapascal (MPa)-scale pore-pressure recovery is equivalent to 7 yr of interseismic Coulomb stress accumulation near the Nias earthquake hypocenter, implying that pore-pressure recovery of the Sumatra-Andaman earthquake advanced the timing of the Nias earthquake by ~7 yr. That is, in the absence of postseismic pore-pressure recovery, we predict that the Nias earthquake would have occurred in 2011 instead of 2005. © 2011 Geological Society of America.
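The comparison between pore-pressure recovery and interseismic loading rests on the Coulomb failure stress change, dCFS = d_tau - mu*(d_sigma_n - d_p), in which a pore-pressure rise d_p > 0 pushes a fault toward failure. A worked example with illustrative values; the loading rate below is our assumption, chosen to reproduce the ~7 yr equivalence, not the paper's number:

```python
# Sketch: Coulomb failure stress change from a pore-pressure increase.
def coulomb_stress_change(d_tau, d_sigma_n, d_pore, mu=0.6):
    """All stresses in MPa; compression positive for sigma_n."""
    return d_tau - mu * (d_sigma_n - d_pore)

d_p = 1.0                                  # MPa-scale pore-pressure recovery
dcfs = coulomb_stress_change(0.0, 0.0, d_p)
interseismic_rate = 0.085                  # assumed Coulomb accumulation, MPa/yr
print(f"dCFS = {dcfs:.2f} MPa ~ {dcfs / interseismic_rate:.1f} yr of loading")
```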
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riffault, Jeremy; Dempsey, David; Karra, Satish
The goal of hydraulic stimulation is to increase formation permeability in the near vicinity of a well. However, there remain technical challenges around measuring the outcome of this operation. During two Enhanced Geothermal System (EGS) stimulations in South Australia, Paralana in 2011 and Habanero in 2003, extensive catalogs of microseismicity were recovered. It is often assumed that shear failure of existing fractures is the main mechanism behind both the induced earthquakes and any permeability enhancement. This underpins a common notion that the seismically active volume is also the stimulated reservoir. In this paper, we compute the density of earthquake hypocenters and provide evidence that, under certain conditions, this spatiotemporal quantity is a reasonable proxy for pore pressure increase. We then apply an inverse modeling approach that uses the earthquake observations and a modified reservoir simulator to estimate the parameters of a permeability evolution relation. The regime implied by the data indicates that most permeability enhancement occurred very near to the wellbore and was not coincident with the bulk of the seismicity, whose volume was about two orders of magnitude larger. Thus, we conclude that, in some cases, it is possible for permeability enhancement and induced seismicity to be decoupled, in which case the seismically active volume is a poor indicator of the stimulated reservoir. Our results raise serious questions about the effectiveness of hydroshearing as a stimulation mechanism in EGS. Finally, this study extends our understanding of the complex processes linking earthquakes, fluid pressure, and permeability in both natural and engineered settings.
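A minimal sketch (not the authors' code) of the first step described above, estimating a spatial density of hypocenters, here with a Gaussian kernel density estimate on a synthetic microseismic cloud:

```python
# Sketch: hypocenter density as a spatial field, via Gaussian KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
# Synthetic microseismic cloud centered 500 m from an injection well at the origin.
hypocenters = rng.normal(loc=[500.0, 0.0, -3500.0], scale=150.0, size=(1000, 3))

kde = gaussian_kde(hypocenters.T)          # expects shape (n_dims, n_events)
# Evaluate density along a profile away from the well, at reservoir depth.
profile = np.column_stack([np.linspace(0, 1500, 16),
                           np.zeros(16), np.full(16, -3500.0)])
density = kde(profile.T)
print("peak density at x =", profile[np.argmax(density), 0], "m from well")
```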
Math Machines: Using Actuators in Physics Classes
NASA Astrophysics Data System (ADS)
Thomas, Frederick J.; Chaney, Robert A.; Gruesbeck, Marta
2018-01-01
Probeware (sensors combined with data-analysis software) is a well-established part of physics education. In engineering and technology, sensors are frequently paired with actuators—motors, heaters, buzzers, valves, color displays, medical dosing systems, and other devices that are activated by electrical signals to produce intentional physical change. This article describes how a 20-year project aimed at better integration of the STEM disciplines (science, technology, engineering and mathematics) uses brief actuator activities in physics instruction. Math Machines "actionware" includes software and hardware that convert virtually any free-form, time-dependent algebraic function into the dynamic actions of a stepper motor, servo motor, or RGB (red, green, blue) color mixer. With wheels and a platform, the stepper motor becomes LACI, a programmable vehicle. Adding a low-power laser module turns the servo motor into a programmable Pointer. Adding a gear and platform can transform the Pointer into an earthquake simulator.
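A minimal sketch (our illustration, not the Math Machines software) of the core actionware idea: sample a free-form function f(t) and convert it into incremental step commands so a stepper motor's angle tracks f in real time:

```python
# Sketch: turn a time-dependent function into stepper-motor step increments.
import math

STEPS_PER_REV = 200          # typical 1.8-degree stepper motor

def angle_to_steps(angle_deg):
    return round(angle_deg / 360.0 * STEPS_PER_REV)

def schedule(f, t_end=2.0, dt=0.1):
    """Return (time, step increment) pairs driving the motor along f(t) degrees."""
    pos, plan, t = 0, [], 0.0
    while t <= t_end:
        target = angle_to_steps(f(t))
        plan.append((round(t, 2), target - pos))  # incremental command at time t
        pos = target
        t += dt
    return plan

# Example: a sinusoidal sweep, f(t) = 90*sin(pi*t) degrees.
for t, dstep in schedule(lambda t: 90.0 * math.sin(math.pi * t))[:5]:
    print(f"t={t:.1f}s  step by {dstep:+d}")
```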
Benz, H.M.; Smith, R.B.
1988-01-01
The two-dimensional seismic response of the Salt Lake valley to near- and far-field earthquakes has been investigated through simulations of vertically incident plane waves and of normal-faulting earthquakes generated on the basin-bounding Wasatch fault. The plane-wave simulations were compared with observed site amplifications in the Salt Lake valley, based on seismic recordings of nuclear explosions in southern Nevada, which show amplification within the basin 10 times greater than measured values at hard-rock sites. Synthetic seismograms suggest that in the frequency band 0.3 to 1.5 Hz at least one-half of the site amplification can be attributed to the impedance contrast between the basin sediments and the higher-velocity basement rocks. -from Authors
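The impedance-contrast contribution can be checked with the textbook energy-flux scaling for a vertically incident wave, amplification ~ sqrt(rho_rock*v_rock / (rho_sed*v_sed)); the property values below are assumptions, not the study's model:

```python
# Sketch: impedance-contrast amplification for a vertically incident wave.
import math

rho_rock, v_rock = 2700.0, 3000.0    # basement density (kg/m^3) and Vs (m/s)
rho_sed,  v_sed  = 1900.0, 300.0     # assumed soft-sediment values

amp = math.sqrt((rho_rock * v_rock) / (rho_sed * v_sed))
print(f"impedance amplification ~ {amp:.1f}x")   # a factor of several
```

With these assumed values the impedance effect alone yields roughly a factor of 4, i.e., on the order of half of the observed tenfold amplification, consistent with the abstract's conclusion.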
NASA Astrophysics Data System (ADS)
Guo, B.
2017-12-01
Mountain watersheds in western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 caused widespread surface disruption and frequent landslides and debris flows, which further exacerbated the flash flood hazard. Two giant torrent and debris flows occurred due to heavy rainfall after the earthquake, one on August 13, 2010, and the other on August 18, 2010. Flash flood reduction and risk assessment are key issues in post-disaster reconstruction, and hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observations and simulation using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China, and the hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate well the runoff process of small watersheds in mountainous areas. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.
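HEC-1 offers several rainfall-loss options; the SCS curve-number method is a common choice and is sketched here with assumed values rather than the study's calibrated parameters. Runoff is Q = (P - 0.2S)^2 / (P + 0.8S) for P > 0.2S, with potential retention S = 25400/CN - 254 (millimetres):

```python
# Sketch: SCS curve-number rainfall-runoff calculation (metric units).
def scs_runoff_mm(p_mm, cn):
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Earthquake-disturbed slopes behave like a higher curve number (less storage).
for cn in (70, 85, 95):
    print(f"CN={cn}: 150 mm storm -> runoff {scs_runoff_mm(150, cn):.0f} mm")
```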
From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)
NASA Astrophysics Data System (ADS)
Jordan, T. H.
2009-12-01
Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.
NASA Astrophysics Data System (ADS)
Opršal, Ivo; Fäh, Donat; Mai, P. Martin; Giardini, Domenico
2005-04-01
The Basel earthquake of 18 October 1356 is considered one of the most serious earthquakes in Europe in recent centuries (I0 = IX, M ≈ 6.5-6.9). In this paper we present ground motion simulations for earthquake scenarios for the city of Basel and its vicinity. The numerical modeling combines finite-extent pseudodynamic and kinematic source models with the complex local structure in a two-step hybrid three-dimensional (3-D) finite difference (FD) method. The synthetic seismograms are accurate in the frequency band 0-2.2 Hz. The 3-D FD scheme is a linear explicit displacement formulation using an irregular rectangular grid including topography. The finite-extent rupture model is adjacent to the free surface because the Reinach fault has been recognized at the surface through trenching. We test two source models reminiscent of past earthquakes (the 1999 Athens and the 1989 Loma Prieta earthquakes) to represent Mw ≈ 5.9 and Mw ≈ 6.5 events occurring approximately to the south of Basel. To compare the effect of the same wave field arriving at the site from other directions, we also considered the same sources placed east and west of the city. The local structural model is determined from the area's recently established P- and S-wave velocity structure and includes topography. The selected earthquake scenarios show strong ground motion amplification with respect to a bedrock site, in contrast to previous 2-D simulations for the same area. In particular, we found that the edge effects from the 3-D structural model depend strongly on the position of the earthquake source within the modeling domain.
Middle school students' earthquake content and preparedness knowledge - A mixed method study
NASA Astrophysics Data System (ADS)
Henson, Harvey, Jr.
The purpose of this study was to assess the effect of earthquake instruction on students' earthquake content knowledge and preparedness for earthquakes. This study used innovative direct instruction on earthquake science content and concepts, with an inquiry-based group activity on earthquake safety followed by an earthquake simulation and preparedness video, to help middle school students understand and prepare for the regional seismic threat. A convenience sample of 384 sixth- and seventh-grade students at two small middle schools in southern Illinois was used in this study. Qualitative information was gathered using open-ended survey questions, classroom observations, and semi-structured interviews. Quantitative data were collected using a 21-item content questionnaire administered to test students' General Earthquake Knowledge, Local Earthquake Knowledge, and Earthquake Preparedness Knowledge before and after instruction. A pre-test and post-test survey with a 21-item Likert scale was used to collect students' perceptions and attitudes. Qualitative data analysis included quantification of student responses to the open-ended questions and thematic analysis of observation notes and interview transcripts. Quantitative datasets were analyzed using descriptive and inferential statistical methods, including t tests to evaluate the differences in mean scores between paired groups before and after interventions and one-way analysis of variance (ANOVA) to test for differences between the mean scores of the comparison groups. Significant mean differences between groups were further examined using Dunnett's C post hoc statistical analysis. Integration and interpretation of the qualitative and quantitative results revealed a significant increase in general, local, and preparedness earthquake knowledge among middle school students after the interventions. The findings specifically indicated that these students felt most aware and prepared for an earthquake after an intervention consisting of an inquiry-based group discussion on safety, an earthquake content presentation, and an earthquake simulation video on preparedness. Variations of the intervention, including no intervention, were not as effective in significantly increasing students' conceptual learning of earthquake knowledge.
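A minimal sketch (synthetic scores, not the study's data) of the two analyses named above, a paired t test on pre/post scores and a one-way ANOVA across comparison groups:

```python
# Sketch: paired t-test and one-way ANOVA with scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
pre = rng.normal(12, 3, 60)              # pre-test scores (21-item questionnaire)
post = pre + rng.normal(2.5, 2, 60)      # post-test: assumed mean gain of ~2.5 items

t, p = stats.ttest_rel(post, pre)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across three hypothetical intervention groups.
g1, g2, g3 = rng.normal(14, 3, 40), rng.normal(16, 3, 40), rng.normal(13, 3, 40)
f, p = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
```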
Structural control of the upper plate on the down-dip segmentation of subduction dynamics
NASA Astrophysics Data System (ADS)
Shi, Q.; Barbot, S.; Karato, S. I.; Shibazaki, B.; Matsuzawa, T.; Tapponnier, P.
2017-12-01
The geodetic and seismic discoveries of slow earthquakes in subduction zones have provided observational evidence for a transition between megathrust earthquakes and creeping behavior. However, the mechanics behind slow earthquakes, and the periodic differential motion between the subducting slab and the overlying plate below the seismogenic zone, remain controversial. In the Nankai subduction zone, the very-low-frequency earthquakes (VLFEs), megathrust earthquakes, long-term slow earthquakes (durations of months or years), and the episodic tremor and slip (ETS) zone are located within the accretionary prism, the continental upper crust, the continental lower crust, and the uppermost mantle of the overriding plate, respectively. We use the rate-and-state friction law to simulate the periodic occurrence of VLFEs, megathrust earthquakes, and the tremors in the ETS zone, because rock strength is relatively high within these depth ranges. However, it is not feasible to use frictional instabilities to explain the long-term slow earthquakes in the lower crust, where ductile rock physics plays a significant role in large-scale deformation. Here, our numerical simulations show that slow earthquakes at lower-crustal depths may be the result of plastic instabilities in a finite volume of ductile material accompanied by grain-size evolution. As the thickness of the fault zone increases with depth, deformation becomes distributed, and the dynamic equilibrium of grain size, set by the competition between thermally activated grain growth and damage-related grain-size reduction, results in cycles of strain acceleration and strain deficit. In addition, we take into account the elevated pore pressure in the accretionary prism, which is associated with the small stress drops and low-frequency content of VLFEs and may contribute to the occurrence of tsunamigenic earthquakes. Hence, in our numerical simulations of the plate boundary system in Nankai, the down-dip segmentation of the subduction dynamics is attributed to upper plate structure that varies with depth. High pore pressure, grain-size evolution, and changes in the governing rock physics may explain the existence and the periodicity of different slow earthquakes from shallow to deep regions in the subduction zone.
Tsunami Numerical Simulation for Hypothetical Giant or Great Earthquakes along the Izu-Bonin Trench
NASA Astrophysics Data System (ADS)
Harada, T.; Ishibashi, K.; Satake, K.
2013-12-01
We performed tsunami numerical simulations for various giant/great fault models along the Izu-Bonin trench in order to examine the behavior of tsunamis originating in this region and the recurrence pattern of great interplate earthquakes along the Nankai trough off southwest Japan. As a result, large tsunami heights are expected in the Ryukyu Islands and on the Pacific coasts of Kyushu, Shikoku, and western Honshu. The computed large tsunami heights support the hypothesis that the 1605 Keicho Nankai earthquake was not a tsunami earthquake along the Nankai trough but a giant or great earthquake along the Izu-Bonin trench (Ishibashi and Harada, 2013, SSJ Fall Meeting abstract). The Izu-Bonin subduction zone has been regarded as a so-called 'Mariana-type' subduction zone where M>7 interplate earthquakes do not inherently occur. However, since several M>7 outer-rise earthquakes have occurred in this region, and the largest slip of the 2011 Tohoku earthquake (M9.0) took place on the shallow plate interface where strain accumulation had been considered small, the possibility of M>8.5 earthquakes in this region may not be negligible. The latest M7.4 outer-rise earthquake off the Bonin Islands on December 22, 2010, produced small tsunamis on the Pacific coast of Japan except in the Tohoku and Hokkaido districts, and a zone of abnormal seismic intensity in the Kanto and Tohoku districts. Ishibashi and Harada (2013) proposed a working hypothesis that the 1605 Keicho earthquake, usually considered a great tsunami earthquake along the Nankai trough, was a giant/great earthquake along the Izu-Bonin trench, based on the similarity of the distributions of ground shaking and tsunami between this event and the 2010 Bonin earthquake. In this study, to examine the behavior of tsunamis from giant/great earthquakes along the Izu-Bonin trench and to check Ishibashi and Harada's hypothesis, we performed tsunami numerical simulations for fault models along the trench. Tsunami propagation was computed by the finite-difference method applied to the nonlinear long-wave equations with the Coriolis force (Satake, 1995, PAGEOPH) in the area 130-145°E and 25-37°N, using 15-arc-second gridded bathymetry data. Propagation was computed for eight hours after faulting for each fault model. Large tsunamis from the assumed giant/great interplate and outer-rise earthquakes reach the coasts of the Ryukyu Islands and the Pacific coasts of Kyushu, Shikoku, and western Honshu west of Kanto; the simulations therefore support Ishibashi and Harada's hypothesis. At the time of writing, the best, though preliminary, model for reproducing the 1605 tsunami heights is a steep outer-rise fault extending from 26.5° to 29.0°N (300 km long) with an average slip of 16.7 m (Mw 8.6). We will examine tsunami behavior in the Pacific Ocean from this fault model. To test our results, field investigations of tsunami deposits in the Bonin Islands and discussions of plate dynamics and seismogenic characteristics along the Izu-Bonin trench are necessary.
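As a rough illustration of the finite-difference long-wave approach cited above, here is a stripped-down 1-D linear solver on a staggered grid; the study itself used the nonlinear equations with the Coriolis force in 2-D, both of which this sketch omits, and all values are illustrative.

```python
import numpy as np

g, dx, nx = 9.81, 2000.0, 500          # gravity, grid spacing (m), number of cells
h = np.full(nx, 4000.0)                # uniform water depth (m), illustrative
eta = np.exp(-((np.arange(nx) - 250) * dx / 5e4) ** 2)  # initial sea-surface hump (m)
u = np.zeros(nx + 1)                   # depth-averaged velocity at cell faces (m/s)

dt = 0.5 * dx / np.sqrt(g * h.max())   # CFL-limited time step
for _ in range(400):
    # Momentum: du/dt = -g d(eta)/dx ; continuity: d(eta)/dt = -d(h u)/dx
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    eta -= dt * h * (u[1:] - u[:-1]) / dx

print(f"max elevation after {400 * dt:.0f} s: {eta.max():.3f} m")
```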
NASA Astrophysics Data System (ADS)
Xu, Peibin; Wen, Ruizhi; Wang, Hongwei; Ji, Kun; Ren, Yefei
2015-02-01
The Ludian County of Yunnan Province in southwestern China was struck by an MS6.5 earthquake on August 3, 2014, another destructive event following the MS8.0 Wenchuan earthquake in 2008, the MS7.1 Yushu earthquake in 2010, and the MS7.0 Lushan earthquake in 2013. The National Strong-Motion Observation Network System of China collected 74 strong-motion recordings; the maximum peak ground acceleration, 949 cm/s2 in the E-W component, was recorded at station 053LLT in Longtoushan Town. The observed PGAs and spectral ordinates were compared with a ground-motion prediction equation used in China and with the NGA-West2 models developed by the Pacific Earthquake Engineering Research Center. This earthquake is considered the first test case of the applicability of NGA-West2 in China. Results indicate that the observed PGAs and the 5% damped pseudo-response spectral accelerations are significantly lower than the predicted values. A field survey around several typical strong-motion stations verified that the earthquake damage was consistent with the official isoseismal map issued by the China Earthquake Administration.
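The model-versus-observation comparison described above is typically quantified with logarithmic residuals; a minimal sketch follows, with placeholder numbers rather than the Ludian data (the predicted medians would come from evaluating a GMPE at each station's magnitude and distance).

```python
import numpy as np

obs_pga = np.array([520.0, 310.0, 85.0, 22.0])    # observed PGA, cm/s^2 (placeholder)
pred_pga = np.array([900.0, 400.0, 150.0, 60.0])  # GMPE median PGA (placeholder)

# Log residual per station: negative values mean observations fall below the
# prediction, as reported for this earthquake.
residuals = np.log(obs_pga / pred_pga)
print(residuals, residuals.mean())
```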
NASA Astrophysics Data System (ADS)
Kaneda, Yoshiyuki; Ozener, Haluk; Meral Ozel, Nurcan; Kalafat, Dogan; Ozgur Citak, Seckin; Takahashi, Narumi; Hori, Takane; Hori, Muneo; Sakamoto, Mayumi; Pinar, Ali; Oguz Ozel, Asim; Cevdet Yalciner, Ahmet; Tanircan, Gulum; Demirtas, Ahmet
2017-04-01
There have been many destructive earthquakes and tsunamis in the world. Recent events include the 2011 East Japan Earthquake/Tsunami, the 2015 Nepal Earthquake, the 2016 Kumamoto Earthquake in Japan, and, very recently, a destructive earthquake in Central Italy. In Turkey, the destructive 1999 Izmit Earthquake occurred along the North Anatolian Fault (NAF). The NAF crosses the Sea of Marmara, and the only remaining "seismic gap" lies beneath it. Istanbul, a high-population city comparable to Tokyo, is located on the Sea of Marmara, where compound damage including tsunami and liquefaction is expected when the next destructive Marmara earthquake occurs. The seismic risk of Istanbul appears similar to that of Tokyo facing a Nankai Trough or metropolitan earthquake. Japanese and Turkish researchers can share their experiences of past damaging earthquakes and prepare for future large earthquakes in cooperation with each other. Therefore, in 2013 Japan and Turkey agreed to start a multidisciplinary research project, MarDiM SATREPS. The project conducts research aimed at raising preparedness for possible large-scale earthquake and tsunami disasters in the Marmara Region, organized into four research groups with the following goals. 1) The first is the Marmara earthquake source region observational research group, with four sub-groups: seismicity, geodesy, electromagnetics, and trench analyses. Preliminary results, such as seismicity and crustal deformation on the sea floor of the Sea of Marmara, have already been obtained. 2) The second group focuses on scenario research on earthquake occurrence along the North Anatolian Fault and precise tsunami simulation in the Marmara region; its products will be an earthquake occurrence scenario model for the Sea of Marmara and case studies with advanced tsunami simulations for major cities. 3) The third group aims to improve and construct seismic characterizations and damage predictions based on observational research and precise simulations; its results will be very important for disaster countermeasures. 4) The fourth group promotes disaster education using visualizations of the research results; its mission is essential for information dissemination and for practical, effective disaster education in Turkey. The research results from all components will be integrated and utilized for disaster mitigation in the Marmara region and disaster education in Turkey. Updated results of the MarDiM SATREPS Project will be officially presented toward the end of the project period, in March 2018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, Arben
GEN_SRF_4 is a computer program for generating kinematic earthquake rupture models for use in ground-motion modeling and earthquake simulations. The output is an ASCII SRF-formatted file containing the kinematic rupture parameters.
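As a schematic of the kind of output described, the sketch below dumps per-subfault kinematic rupture parameters to an ASCII table; it is not the actual SRF specification, whose header and point-record layout differ, and all values are placeholders.

```python
# Hypothetical writer for a simplified kinematic rupture table (NOT the SRF
# format itself): one row per subfault point source.
rows = [
    # lon, lat, depth_km, strike, dip, rake, slip_m, rupture_time_s
    (140.10, 35.20, 10.0, 210.0, 15.0, 90.0, 2.5, 0.0),
    (140.15, 35.22, 10.5, 210.0, 15.0, 90.0, 3.1, 1.2),
]
with open("rupture_model.txt", "w") as f:
    f.write("# lon lat depth_km strike dip rake slip_m t_rup_s\n")
    for r in rows:
        f.write(" ".join(f"{v:.3f}" for v in r) + "\n")
```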
NASA Astrophysics Data System (ADS)
Wu, Stephen
Earthquake early warning (EEW) systems have been developing rapidly over the past decade. The Japan Meteorological Agency (JMA) operated an EEW system during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and expected shaking intensity around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention in activating mitigation actions and must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications. Existing EEW systems are often based on a deterministic approach and often assume that only a single event occurs within a short period of time, which led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends EEW to concurrent events, which are often observed during the aftershock sequence following a large earthquake. To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that captures the uncertainties in the EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
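The cost-benefit decision idea at the core of the ePAD framework can be sketched as follows: activate a mitigation action when its expected cost, averaged over the uncertain predicted intensity, is lower than that of doing nothing. The lognormal intensity model and the cost numbers are illustrative assumptions, not the ePAD implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# EEW provides an uncertain intensity forecast, modeled here as lognormal PGA (g).
intensity = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=10_000)

C_ACT = 1.0    # fixed cost of acting (e.g., stopping an elevator), arbitrary units
DAMAGE = 50.0  # loss if strong shaking hits an unprepared system (assumed)
THRESH = 0.3   # PGA (g) above which the loss is incurred (assumed)

p_exceed = np.mean(intensity > THRESH)
cost_act = C_ACT               # acting avoids the loss but pays the fixed cost
cost_wait = DAMAGE * p_exceed  # waiting risks the loss with probability p_exceed

print("activate" if cost_act < cost_wait else "do nothing",
      f"(P[PGA > {THRESH} g] = {p_exceed:.2f})")
```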
Nitsche Extended Finite Element Methods for Earthquake Simulation
NASA Astrophysics Data System (ADS)
Coon, Ethan T.
Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems exhibit rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems pose a stern challenge for modelers: static and dynamic equations must be solved on domains with discontinuities along complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult, so most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh that both conforms to fault geometry and satisfies these requirements is an open problem, especially for three-dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e., growing cracks or newly breaking faults), the geometry must be re-meshed at each time step, which can be computationally expensive. The fault-conforming approach is undesirable when complicated meshes are required and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems. Several such methods are being actively developed for use in engineering mechanics involving crack propagation and material failure. While some theory and application of these methods exist, implementations for the simulation of networks of many cracks have not yet been considered. For my thesis, I implement and extend one such method, the eXtended Finite Element Method (XFEM), for use in static and dynamic models of fault networks. Once this machinery is developed, it is applied to open questions regarding the behavior of networks of faults, including distributed deformation in fault systems and ensembles of magnitude, location, and frequency in repeated ruptures. The theory of XFEM is augmented to allow the solution of problems with alternating regimes: static solves for elastic stress conditions and short, dynamic earthquakes on networks of faults. This is accomplished using Nitsche's approach for implementing boundary conditions. Finally, an optimization problem is developed to determine tractions along the fault, enabling the calculation of frictional constraints and the rupture front. The method is verified via a series of static, quasistatic, and dynamic problems. Armed with this technique, we examine several problems within the earthquake cycle in which geometry is crucial. We first perform quasistatic simulations on a community fault model of Southern California and model the slip distribution across that system. We find that the distribution of deformation across faults compares reasonably well with slip rates across the region, as constrained by geologic data.
We find that geometry can provide constraints on friction; by considering the minimization of shear strain across the zone as a function of friction and plate loading direction, we infer bounds on fault strength in the region. We then consider the repeated-rupture problem, modeling the full earthquake cycle over the course of many events on several fault geometries and studying the effect of geometry on statistical metrics of event ensembles. Finally, this thesis is a proof of concept for the XFEM in earthquake cycle models on fault systems. We identify strengths and weaknesses of the method, note places for future improvement, discuss the feasibility of its use in three dimensions, and find the method to be a strong candidate for future crustal deformation simulations.
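For reference, Nitsche's approach mentioned above can be stated schematically for a model Poisson problem with a Dirichlet condition u = g on a surface Γ that need not coincide with element boundaries (the thesis applies the analogous construction to fault tractions):

```latex
% Nitsche's weakly imposed boundary condition (model problem): find u such that
\int_\Omega \nabla u \cdot \nabla v \, d\Omega
  - \int_\Gamma (\partial_n u)\, v \, d\Gamma
  - \int_\Gamma (\partial_n v)\,(u - g)\, d\Gamma
  + \frac{\gamma}{h} \int_\Gamma (u - g)\, v \, d\Gamma
  = \int_\Omega f\, v \, d\Omega \qquad \forall v,
% where h is the local mesh size and \gamma is a penalty parameter taken
% large enough to ensure coercivity of the bilinear form.
```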
Strong motion seismology in Mexico
NASA Astrophysics Data System (ADS)
Singh, S. K.; Ordaz, M.
1993-02-01
Since 1985, digital accelerographs have been installed along a 500 km segment above the Mexican subduction zone, at some inland sites which form an attenuation line between the Guerrero seismic gap and Mexico City, and in the Valley of Mexico. These networks have recorded a few large earthquakes and many moderate and small earthquakes. Analysis of the data has permitted a significant advance in the understanding of source characteristics, wave propagation and attenuation, and site effects. This, in turn, has permitted reliable estimations of ground motions from future earthquakes. This paper presents a brief summary of some important results which are having a direct bearing on current earthquake engineering practice in Mexico.
Modified two-layer social force model for emergency earthquake evacuation
NASA Astrophysics Data System (ADS)
Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi
2018-02-01
Studies of crowd behavior, together with computer simulation, provide an effective basis for architectural design and crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce the group gathering process. First, this paper studies evacuation videos from the Luan'xian earthquake in 2012 and extends the study of group organization patterns to higher densities. Taking advantage of the model's strength in crowd-gathering simulation, a new grouping and guidance method based on crowd dynamics is also proposed. Second, a real-life grouping situation in earthquake evacuation is simulated and reproduced. Compared with the fundamental social force model and an existing guided-crowd model, the modified model reduces congestion time and more faithfully reflects group behaviors. The experimental results also show that a stable group pattern and a suitable leader can decrease collisions and allow a safer evacuation process.
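The underlying social force idea (Helbing-style) can be sketched briefly: each pedestrian is driven toward a desired velocity and repelled by neighbors. The parameter values below follow common choices in the literature and are not the paper's calibrated two-layer model.

```python
import numpy as np

TAU, A, B = 0.5, 2000.0, 0.08  # relaxation time (s), repulsion strength (N), range (m)
MASS, R = 80.0, 0.3            # pedestrian mass (kg) and body radius (m)

def social_force(pos, vel, goal_dir, v_desired=1.34):
    """Net force on each pedestrian; pos, vel, goal_dir have shape (n, 2)."""
    drive = MASS * (v_desired * goal_dir - vel) / TAU  # goal-seeking driving term
    rep = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - np.delete(pos, i, axis=0)         # vectors from the others
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        rep[i] = np.sum(A * np.exp((2 * R - dist) / B) * d / dist, axis=0)
    return drive + rep

# One explicit Euler step would be: vel += dt * F / MASS; pos += dt * vel
```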
Fast Computation of Ground Motion Shaking Map Based on the Modified Stochastic Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Shen, W.; Zhong, Q.; Shi, B.
2012-12-01
Rapid regional MMI (Modified Mercalli Intensity) mapping soon after a moderate-to-large earthquake is crucial for loss estimation, emergency services, and the planning of emergency action by the government. Many countries devote attention to the technology of rapid MMI estimation, and it has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has developed along with advances in computational technology and earthquake science, and computational simulation of strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, because strong-motion observations are sparse or entirely missing in some areas, strong ground motion simulation has become an important means of quantitative estimation of shaking intensity. Among the available simulation models, the stochastic finite-fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In a finite-fault model, a large fault is divided into N subfaults, each considered a small point source. The ground motions contributed by each subfault are calculated by the stochastic point-source method developed by Boore and then summed at the observation point, with proper time delays, to obtain the ground motion from the entire fault. Motazedian and Atkinson later proposed the concept of dynamic corner frequency; with this approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the EXSIM program developed by Motazedian and Atkinson has been modified for local or regional computation of strong-motion parameters such as PGA, PGV, and PGD, which are essential for MMI estimation. To make the results more reasonable, we consider the impact of V30 on shaking intensity; comparisons between simulated and observed MMI for the 2004 Mw 6.0 Parkfield earthquake, the 2008 Mw 7.9 Wenchuan earthquake, and the 1976 Mw 7.6 Tangshan earthquake agree fairly well. Taking the Parkfield earthquake as an example, the simulated results capture the directivity effect and the influence of the shallow velocity structure well, and the simulated data agree well with the network data and NGA (Next Generation Attenuation) models. The computation time depends on the number of subfaults and the number of grid points. For the 2004 Mw 6.0 Parkfield earthquake, the computed grid was 2.5° × 2.5° with 0.025° spacing, and the total time consumed was about 1.3 hours. For the 2008 Mw 7.9 Wenchuan earthquake, the grid was 10° × 10° with 0.05° spacing, more than 40,000 grid points in total, and the total time consumed was about 7.5 hours. For the 1976 Mw 7.6 Tangshan earthquake, the grid was 4° × 6° with 0.05° spacing, and the total time consumed was about 2.1 hours. The CPU used runs at 3.40 GHz, and such computation times could be further reduced by using GPU computing and other parallel computing techniques; this is our next focus.
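The dynamic corner frequency concept credited above to Motazedian and Atkinson can be written compactly: the corner frequency falls as the cumulative ruptured moment grows. A minimal sketch, using the classic unit convention (shear-wave velocity in km/s, stress drop in bars, moment in dyne-cm) and illustrative values:

```python
def corner_frequency(n_ruptured, m0_subfault_avg, beta=3.5, stress_drop=50.0):
    """Dynamic corner frequency (Hz) after n_ruptured subfaults have slipped.

    beta: shear-wave velocity (km/s); stress_drop: bars; moments: dyne-cm.
    """
    m0_cumulative = n_ruptured * m0_subfault_avg
    return 4.9e6 * beta * (stress_drop / m0_cumulative) ** (1.0 / 3.0)

# Early in rupture the corner frequency is high; it decreases as cumulative
# moment grows, which is how the approach conserves total radiated energy
# over a wide range of subfault sizes.
m0_sub = 1e24  # dyne-cm, illustrative average subfault moment
print(corner_frequency(1, m0_sub), corner_frequency(100, m0_sub))
```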
Earthquake Risk Mitigation in the Tokyo Metropolitan area
NASA Astrophysics Data System (ADS)
Hirata, N.; Sakai, S.; Kasahara, K.; Nakagawa, S.; Nanjo, K.; Panayotopoulos, Y.; Tsuruoka, H.
2010-12-01
Seismic disaster risk mitigation in urban areas is a challenge that requires collaboration among scientific, engineering, and social-science fields. Examples of collaborative efforts include research on detailed plate structure with identification of all significant faults; development of dense seismic networks; strong ground motion prediction using information on near-surface seismic site effects and fault models; earthquake-resistant structures; and cross-discipline infrastructure for effective risk mitigation immediately after catastrophic events. The risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because this plate produced past mega-thrust earthquakes, such as the 1703 Genroku earthquake (magnitude M8.0) and the 1923 Kanto earthquake (M7.9), which caused 105,000 fatalities. An M7 or greater (M7+) earthquake in this area today has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that such an M7+ earthquake would cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) in economic loss, and the Earthquake Research Committee of Japan evaluates its probability of occurrence at 70% within 30 years. To mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area (2007-2011) was launched in collaboration with scientists, engineers, and social scientists at institutions nationwide. The results obtained in the respective fields will be integrated by project termination to improve the information base for the seismic risk mitigation strategy in the Tokyo metropolitan area. In this talk, we outline our project as an example of collaborative research on earthquake risk mitigation, and we extend the discussion to work in progress and scientific results obtained so far at the Earthquake Research Institute (ERI). ERI hosts the scientific part of the project, focusing on characterization of the plate structure and source faults in and around the Tokyo metropolitan area. One topic is the ongoing deployment of seismic stations constituting the Metropolitan Seismic Observation network (MeSO-net); we have deployed 226 stations at 2-5 km spacing. Based on seismic data from the MeSO-net, we aim to reveal the detailed geometry of the subducting PSP.
NASA Astrophysics Data System (ADS)
Shanker, D.; Paudyal; Singh, H.
2010-12-01
Effective mitigation requires not only a basic understanding of the earthquake phenomenon and of the resistance offered by designed structures, but also an understanding of socio-economic factors, the engineering properties of indigenous materials, local skills, and technology transfer models. It is important that the engineering aspects of mitigation be made part of public policy documents. Earthquakes are, and have long been, regarded as one of the worst enemies of mankind. Because of the very nature of the energy release, damage is evident, but it does not culminate in a disaster unless it strikes a populated area. Mitigation may be defined as the reduction in severity of something; earthquake disaster mitigation therefore implies measures that help reduce the severity of damage caused by earthquakes to life, property, and the environment. While "earthquake disaster mitigation" usually refers primarily to interventions that strengthen the built environment, "earthquake protection" is now considered to include the human, social, and administrative aspects of reducing earthquake effects. It should be noted that reduction of earthquake hazards through prediction is considered one of the effective measures, and much effort is spent on prediction strategies; however, prediction does not guarantee safety, and even a correct prediction of damage to life and property on such a large scale warrants the use of other aspects of mitigation. While earthquake prediction may be of some help, mitigation remains the main focus of attention of civil society. The present study suggests that anomalous seismic activity and earthquake swarms existed prior to medium-size earthquakes in the Nepal Himalaya. The mainshocks were preceded by quiescence periods, which indicate the occurrence of future seismic activity. In all cases, the identified episodes of anomalous seismic activity were characterized by an extremely high annual earthquake frequency compared with the preceding normal and following gap episodes, and the character of the events in such an episode is causally related to the magnitude and time of occurrence of the forthcoming earthquake. It is observed that a shorter preparatory time period corresponds to a smaller mainshock, and vice versa. Western Nepal and the adjoining Tibet region are potential sites of future medium-size earthquakes. Accordingly, it is estimated that an earthquake with M 6.5 ± 0.5 may occur at any time from now until December 2011 in Western Nepal, within an area bounded by 29.3°-30.5°N and 81.2°-81.9°E, in the focal depth range of 10-30 km.
NASA Astrophysics Data System (ADS)
Gong, Jianhua; McGuire, Jeffrey J.
2018-01-01
The interactions between the North American, Pacific, and Gorda plates at the Mendocino Triple Junction (MTJ) create one of the most seismically active regions in North America. Earthquakes there rupture all three plate boundaries but also include considerable intraplate seismicity reflecting the strong internal deformation of the Gorda plate. Understanding the stress levels that drive these ruptures and estimating the locking state of the subduction interface are especially important for regional earthquake hazard assessment. However, owing to the lack of offshore seismic and geodetic instruments, the rupture process of only a few large earthquakes near the MTJ has been studied in detail, and the locking state of the subduction interface is not well constrained. In this paper, we first use the second-moments inversion method to study the rupture process of the January 28, 2015 Mw 5.7 earthquake on the Mendocino transform fault, which was unusually well recorded by both onshore and offshore strong-motion instruments. We estimate the rupture dimensions to be approximately 6 km by 3 km, corresponding to a stress drop of ∼4 MPa for a crack model. Next, we investigate the frictional state of the subduction interface by simulating the afterslip that would be expected there as a result of the stress changes from the 2015 earthquake and a 2010 Mw 6.5 intraplate earthquake within the subducted Gorda plate. We simulate afterslip scenarios for a range of depths of the down-dip end of the locked zone, defined as the transition to velocity-strengthening friction, and calculate the corresponding surface deformation expected at onshore GPS monuments. We can rule out a very shallow down-dip limit owing to the lack of a detectable signal at onshore GPS stations following the 2010 earthquake. Our simulations indicate that the locking depth on the slab surface is at least 14 km, which suggests that the next M8 earthquake rupture will likely reach the coastline and that strong shaking should be expected there.
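The ∼4 MPa figure comes from a crack model relating seismic moment to rupture dimensions. A rough consistency sketch using the classic Eshelby circular-crack relation follows; the paper's second-moments estimate uses a different geometry factor, so agreement here is only order-of-magnitude.

```python
def circular_crack_stress_drop(m0_newton_m, radius_m):
    """Eshelby circular-crack stress drop (Pa): 7 M0 / (16 a^3)."""
    return 7.0 * m0_newton_m / (16.0 * radius_m ** 3)

# Seismic moment for Mw 5.7 via the Hanks-Kanamori relation, M0 in N*m.
M0 = 10 ** (1.5 * 5.7 + 9.05)
# Using ~3 km as a characteristic radius of the 6 km x 3 km rupture gives a
# value of the same order as the ~4 MPa quoted above.
print(circular_crack_stress_drop(M0, 3000.0) / 1e6, "MPa")
```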
Are seismic hazard assessment errors and earthquake surprises unavoidable?
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir
2013-04-01
Why do earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships commonly used to assess seismic hazard. The time span of physically reliable seismic history is still a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes any kind of reliable probabilistic statement about narrowly localized seismic hazard premature. Moreover, the seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when tests of statistical significance are applied. Seismic events, including mega-earthquakes, cluster, displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal and is definitely far from uniform, even within a single segment of a fault zone. Such a situation contradicts the generally accepted assumptions used for analytically tractable or computer simulations and complicates the design of reliable methodologies for realistic earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can MISLEAD TO SCIENTIFICALLY GROUNDLESS APPLICATIONS, which is unwise and extremely dangerous when assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes unfortunately discloses the gross inadequacy of this "probabilistic" product, which appears UNACCEPTABLE FOR ANY KIND OF RESPONSIBLE SEISMIC RISK EVALUATION AND KNOWLEDGEABLE DISASTER PREVENTION. The self-evident shortcomings and failures of GSHAP appeal to all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from first principles, including the background methodologies involved, such that there becomes: (a) a demonstrated and sufficient justification of hazard assessment protocols; (b) a more complete learning of the actual range of earthquake hazards to local communities and populations; and (c) more ethically responsible control over how seismic hazard and seismic risk assessment is implemented to protect public safety. It follows that the international project GEM is on the wrong track if it continues to base seismic risk estimates on the standard method of assessing seismic hazard. The situation is not hopeless and could be improved dramatically using available geological, geomorphological, seismic, and tectonic evidence and data combined with deterministic pattern recognition methodologies, specifically when intending to PREDICT THE PREDICTABLE, though not the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks-and-faults has already led to methodologies of neo-deterministic seismic hazard analysis and intermediate-term, middle- to narrow-range earthquake prediction algorithms tested in real-time applications over recent decades. This proves that contemporary science can do a better job in disclosing natural hazards, assessing risks, and delivering such information in advance of extreme catastrophes, which are LOW-PROBABILITY EVENTS THAT HAPPEN WITH CERTAINTY. Geoscientists must begin shifting the minds of the community from pessimistic disbelief to the optimistic, challenging issues of neo-deterministic hazard predictability.
Documentation for the Southeast Asia seismic hazard maps
Petersen, Mark; Harmsen, Stephen; Mueller, Charles; Haller, Kathleen; Dewey, James; Luco, Nicolas; Crone, Anthony; Lidke, David; Rukstales, Kenneth
2007-01-01
The U.S. Geological Survey (USGS) Southeast Asia Seismic Hazard Project originated in response to the 26 December 2004 Sumatra earthquake (M9.2) and the resulting tsunami that caused significant casualties and economic losses in Indonesia, Thailand, Malaysia, India, Sri Lanka, and the Maldives. During the course of this project, several great earthquakes ruptured subduction zones along the southern coast of Indonesia (fig. 1) causing additional structural damage and casualties in nearby communities. Future structural damage and societal losses from large earthquakes can be mitigated by providing an advance warning of tsunamis and introducing seismic hazard provisions in building codes that allow buildings and structures to withstand strong ground shaking associated with anticipated earthquakes. The Southeast Asia Seismic Hazard Project was funded through a United States Agency for International Development (USAID)—Indian Ocean Tsunami Warning System to develop seismic hazard maps that would assist engineers in designing buildings that will resist earthquake strong ground shaking. An important objective of this project was to discuss regional hazard issues with building code officials, scientists, and engineers in Thailand, Malaysia, and Indonesia. The code communities have been receptive to these discussions and are considering updating the Thailand and Indonesia building codes to incorporate new information (for example, see notes from Professor Panitan Lukkunaprasit, Chulalongkorn University in Appendix A).
A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva
2018-03-01
The evaluation of evidence from geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrences of large earthquakes are then followed by quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to represent series of large earthquakes that occur in clusters. The model is flexible enough to reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and it can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard is estimated by a hazard function comprising three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard toward the next large-earthquake cluster, and a constant hazard representing the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
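The three-part hazard function described above can be sketched with illustrative stand-in functional forms (an exponential decay after the last cluster, a sigmoid rise toward the next, and a constant background); these are not the paper's calibrated model.

```python
import numpy as np

def hazard(t, lam_bg=0.05, h0=0.4, decay=0.02, h1=0.3, t_next=500.0, k=0.01):
    """Total hazard rate at t years after the last large-earthquake cluster.

    All parameter values are illustrative placeholders.
    """
    post_cluster = h0 * np.exp(-decay * t)                 # decreasing branch
    pre_cluster = h1 / (1.0 + np.exp(-k * (t - t_next)))   # increasing branch
    return post_cluster + pre_cluster + lam_bg             # plus constant background

print(hazard(np.array([0.0, 100.0, 500.0, 900.0])))
```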
Mechanics of Granular Materials (MGM)
NASA Technical Reports Server (NTRS)
2000-01-01
The packing of particles can change radically during cyclic loading such as in an earthquake or when shaking a container to compact a powder. A large hole (1) is maintained by the particles sticking to each other. A small, counterclockwise strain (2) collapses the hole, and another large strain (3) forms more new holes, which collapse when the strain reverses (4). Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (after T.L. Youd, Packing Changes and Liquefaction Susceptibility, Journal of the Geotechnical Engineering Division, 103: GT8, 918-922, 1977) (Credit: NASA/Marshall Space Flight Center.) (Credit: University of Colorado at Boulder).
ERIC Educational Resources Information Center
Bautista, Nazan Uludag; Peters, Kari Nichole
2010-01-01
Can students build a house that is cost effective and strong enough to survive strong winds, heavy rains, and earthquakes? First graders in Ms. Peters's classroom worked like engineers to answer this question. They participated in a design challenge that required them to plan like engineers and build strong and cost-effective houses that would fit…
2000-05-05
A test cell for the Mechanics of Granular Materials (MGM) experiment is tested for long-term storage with water in the system as planned for STS-107. This view shows the top of the sand column with the metal platen removed. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: University of Colorado at Boulder
2000-05-05
A test cell for the Mechanics of Granular Materials (MGM) experiment is tested for long-term storage with water in the system as planned for STS-107. This view shows the compressed sand column with the protective water jacket removed. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: University of Colorado at Boulder
Mechanics of Granular Materials (MGM) Test Cell
NASA Technical Reports Server (NTRS)
1998-01-01
A test cell for the Mechanics of Granular Materials (MGM) experiment is shown approximately 20 and 60 minutes after the start of an experiment on STS-89. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: NASA/Marshall Space Flight Center (MSFC)
Mechanic of Granular Materials (MGM) Investigator
NASA Technical Reports Server (NTRS)
2000-01-01
Key personnel in the Mechanics of Granular Materials (MGM) experiment are Mark Lankton (Program Manager at the University of Colorado at Boulder), Susan Batiste (research assistant, UCB), and Stein Sture (principal investigator). Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder).
Mechanics of Granular Materials Test Cell
NASA Technical Reports Server (NTRS)
1998-01-01
A test cell for the Mechanics of Granular Materials (MGM) experiment is shown from all three sides by its video camera during STS-89. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: NASA/Marshall Space Flight Center (MSFC)
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2017-01-01
Earthquake ground‐motion recordings are scarce in the central and eastern United States (CEUS) for large‐magnitude events and at close distances. We use two different simulation approaches, a deterministic physics‐based method and a site‐based stochastic method, to simulate ground motions over a wide range of magnitudes. Drawing on previous results for the modeling of recordings from the 2011 Mw 5.8 Mineral, Virginia, earthquake and using the 2001 Mw 7.6 Bhuj, India, earthquake as a tectonic analog for a large magnitude CEUS event, we are able to calibrate the two simulation methods over this magnitude range. Both models show a good fit to the Mineral and Bhuj observations from 0.1 to 10 Hz. Model parameters are then adjusted to obtain simulations for Mw 6.5, 7.0, and 7.6 events in the CEUS. Our simulations are compared with the 2014 U.S. Geological Survey weighted combination of existing ground‐motion prediction equations in the CEUS. The physics‐based simulations show comparable response spectral amplitudes and a fairly similar attenuation with distance. The site‐based stochastic simulations suggest a slightly faster attenuation of the response spectral amplitudes with distance for larger magnitude events and, as a result, slightly lower amplitudes at distances greater than 200 km. Both models are plausible alternatives and, given the few available data points in the CEUS, can be used to represent the epistemic uncertainty in modeling of postulated CEUS large‐magnitude events.
UNLV’s environmentally friendly Science and Engineering Building is monitored for earthquake shaking
Kalkan, Erol; Savage, Woody; Reza, Shahneam; Knight, Eric; Tian, Ying
2013-01-01
The University of Nevada Las Vegas’ (UNLV) Science and Engineering Building is at the cutting edge of environmentally friendly design. As a result of a recent effort by the U.S. Geological Survey’s National Strong Motion Project in cooperation with UNLV, the building is now also at the forefront of buildings equipped with structural monitoring systems that measure response during earthquakes. This is particularly important because it is the first such building in Las Vegas. The seismic instrumentation will provide essential data to better understand the structural performance of buildings, especially in this seismically active region.
Barkan, Roy; ten Brink, Uri S.
2010-01-01
The 18 November 1867 Virgin Island earthquake and the tsunami that closely followed caused considerable loss of life and damage in several places in the northeast Caribbean region. The earthquake was likely a manifestation of the complex tectonic deformation of the Anegada Passage, which cuts across the Antilles island arc between the Virgin Islands and the Lesser Antilles. In this article, we attempt to characterize the 1867 earthquake with respect to fault orientation, rake, dip, fault dimensions, and first tsunami wave propagating phase, using tsunami simulations that employ high-resolution multibeam bathymetry. In addition, we present new geophysical and geological observations from the region of the suggested earthquake source. Results of our tsunami simulations based on relative amplitude comparison limit the earthquake source to be along the northern wall of the Virgin Islands basin, as suggested by Reid and Taber (1920), or on the carbonate platform north of the basin, and not in the Virgin Islands basin, as commonly assumed. The numerical simulations suggest the 1867 fault was striking 120°–135° and had a mixed normal and left-lateral motion. First propagating wave phase analysis suggests a fault striking 300°–315° is also possible. The best-fitting rupture length was found to be relatively small (50 km), probably indicating the earthquake had a moment magnitude of ∼7.2. Detailed multibeam echo sounder surveys of the Anegada Passage bathymetry between St. Croix and St. Thomas reveal a scarp, which cuts the northern wall of the Virgin Islands basin. High-resolution seismic profiles further indicate it to be a reasonable fault candidate. However, the fault orientation and the orientation of other subparallel faults in the area are more compatible with right-lateral motion. For the other possible source region, no clear disruption in the bathymetry or seismic profiles was found on the carbonate platform north of the basin.
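The stated link between a 50 km rupture length and a moment magnitude of ∼7.2 follows from the standard moment definition M0 = μAD and the Hanks-Kanamori magnitude scale; the rigidity, down-dip width, and average slip below are illustrative assumptions, not values from the article.

```python
import math

mu = 3.0e10          # crustal rigidity, Pa (assumed)
L, W = 50e3, 25e3    # rupture length and assumed down-dip width, m
D = 2.0              # assumed average slip, m

M0 = mu * L * W * D                       # seismic moment, N*m
Mw = (2.0 / 3.0) * math.log10(M0) - 6.03  # Hanks-Kanamori moment magnitude
print(f"Mw ~ {Mw:.1f}")                   # ~7.2 under these assumptions
```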
NASA Astrophysics Data System (ADS)
Wirth, E. A.; Frankel, A. D.; Vidale, J. E.; Stone, I.; Nasser, M.; Stephenson, W. J.
2017-12-01
The Cascadia subduction zone has a long history of M8 to M9 earthquakes, inferred from coastal subsidence, tsunami records, and submarine landslides. These megathrust earthquakes occur mostly offshore, and an improved characterization of the megathrust is critical for accurate seismic hazard assessment in the Pacific Northwest. We run numerical simulations of 50 magnitude 9 earthquake rupture scenarios on the Cascadia megathrust, using a 3-D velocity model based on geologic constraints and regional seismicity, as well as active and passive source seismic studies. We identify key parameters that control the intensity of ground shaking and resulting seismic hazard. Variations in the down-dip limit of rupture (e.g., extending rupture to the top of the non-volcanic tremor zone, compared to a completely offshore rupture) result in a 2-3x difference in peak ground acceleration (PGA) for the inland city of Seattle, Washington. Comparisons of our simulations to paleoseismic data suggest that rupture extending to the 1 cm/yr locking contour (i.e., mostly offshore) provides the best fit to estimates of coastal subsidence during previous Cascadia earthquakes, but further constraints on the down-dip limit from microseismicity, offshore geodetics, and paleoseismic evidence are needed. Similarly, our simulations demonstrate that coastal communities experience a four-fold increase in PGA depending upon their proximity to strong-motion-generating areas (i.e., high strength asperities) on the deeper portions of the megathrust. An improved understanding of the structure and rheology of the plate interface and accretionary wedge, and better detection of offshore seismicity, may allow us to forecast locations of these asperities during a future Cascadia earthquake. In addition to these parameters, the seismic velocity and attenuation structure offshore also strongly affects the resulting ground shaking. This work outlines the range of plausible ground motions from an M9 Cascadia earthquake, and highlights the importance of offshore studies for constraining critical parameters and seismic hazard in the Pacific Northwest.
Simulation of scenario earthquake influenced field by using GIS
Zuo, H.-Q.; Xie, L.-L.; Borcherdt, R.D.
1999-01-01
The method for estimating the site effect on ground motion specified by Borcherdt (1994a, 1994b) is briefly introduced in the paper. This method, together with detailed geological and site-classification data for the San Francisco Bay area of California, United States, is applied with GIS technology to simulate the ground-motion field of a scenario earthquake, and software for the simulation has been developed. The paper is a partial result of a cooperative research project between the China Seismological Bureau and the U.S. Geological Survey.
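The Borcherdt-style amplification used in such GIS mapping scales as a power of the ratio between a reference rock velocity and the site's mean shear-wave velocity. In the sketch below, the reference velocity and exponent are illustrative; Borcherdt (1994) tabulates values that depend on the period band and the input shaking level.

```python
def amplification(v30, v_ref=1050.0, m=0.36):
    """Site amplification factor from mean 30-m shear-wave velocity (m/s).

    v_ref and m are illustrative placeholders for Borcherdt's tabulated values.
    """
    return (v_ref / v30) ** m

for v30 in (1050, 560, 270, 150):  # rock, stiff soil, soft soil, very soft soil
    print(v30, round(amplification(v30), 2))
```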
NASA Astrophysics Data System (ADS)
Futagami, Toru; Omoto, Shohei; Hamamoto, Kenichirou
This research describes risk communication aimed at improving local disaster-prevention capability in Gobusho town, Marugame city, the only high-density urban area in Kagawa Prefecture. Specifically, local key persons and the authors report practice-based research on improving local disaster-prevention capability through a community PDCA cycle, including the formation of voluntary disaster-management organizations and the implementation of emergency drills, applying a fire-spreading simulation system for a large earthquake. The fire-spreading simulation system under development by the authors also serves as a support system for the local community's business continuity planning (BCP); its role and remaining issues are described.
NASA Astrophysics Data System (ADS)
de Groot, R. M.; Benthien, M. L.
2006-12-01
The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Such visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7, the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), visualization should in theory make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.
NASA Astrophysics Data System (ADS)
Toke, N.; Johnson, A.; Nelson, K.
2010-12-01
Earthquakes are one of the most widely covered geologic processes in the media. As a result, students, even at the middle school level, arrive in the classroom with preconceptions about the importance and hazards posed by earthquakes. Earthquakes therefore represent not only an attractive topic for engaging students when introducing tectonics, but also a means of helping students understand the relationships between geologic processes, society, and engineering solutions. Facilitating understanding of the fundamental connections between science and society is important for the preparation of future scientists and engineers as well as informed citizens. Here, we present a week-long lesson designed to be implemented in five one-hour sessions with classes of ~30 students. It consists of two inquiry-based mapping investigations, motivational presentations, and short readings that describe fundamental models of plate tectonics, faults, and earthquakes. The readings also provide examples of engineering solutions, such as the Alaskan oil pipeline, which withstood multi-meter surface offset in the 2002 Denali Earthquake. The first inquiry-based investigation is a lesson on tectonic plates. Working in small groups, each group receives a different world map plotting topography together with one of the following data sets: GPS plate-motion vectors, the locations and types of volcanoes, or the locations and types of earthquakes. Using these maps and an accompanying explanation of the data, each group's task is to map plate-boundary locations. Each group then presents a ~10 minute summary of the type of data they used and their interpretation of the tectonic plates, with a poster and their mapping results. Finally, the instructor facilitates a class discussion about how the data types could be combined to understand more about plate boundaries. Using student interpretations of real data allows student misconceptions to become apparent. Throughout the exercise we record student preconceptions and post them to a bulletin board. During the tectonics unit we use these preconceptions as teaching tools. We also archive the misconceptions via a website which will be available for use by the broader geoscience education community. The second student investigation focuses on understanding the impact earthquakes have on nearby cities, using the example of the 2009 southern San Andreas Fault (SAF) ShakeOut scenario. Students again break into groups, and each group is given an aspect of urban infrastructure to study relative to the underlying geology and the location of nearby faults. Their goal is to uncover potential urban infrastructure issues related to a major earthquake on the SAF. For example, students map transportation routes crossing the fault, the locations of hospitals relative to forecasted shaking hazards, the locations of poverty-stricken areas relative to shaking hazards, and utilities relative to fault crossings. Again, students are tasked with explaining their investigations and analyses to the class, with ample time for discussion about potential ways to solve the problems identified through their investigations.
NASA Astrophysics Data System (ADS)
Cydzik, K.; Hamilton, D.; Stenner, H. D.; Cattarossi, A.; Shrestha, P. L.
2009-12-01
The May 12, 2008 M7.9 Wenchuan Earthquake in Sichuan Province, China killed almost 90,000 people and affected a population of over 45.5 million throughout western China. Shaking caused the destruction of five million buildings, many of them homes and schools, and damaged 21 million other structures, inflicting devastating impacts on communities. Landslides, a secondary effect of the shaking, caused much of the devastation. Debris flows buried schools and homes, rock falls crushed cars, and rockslides, landslides, and rock avalanches blocked streams and rivers, creating massive, unstable landslide dams that formed "quake lakes" upstream of the blockages. Impassable roads made emergency access slow and extremely difficult. Collapses of buildings and structures large and small took the lives of many. Damage to infrastructure impaired communication, cut off water supplies and electricity, and put authorities on high alert as the integrity of large engineered dams was reviewed. During our field reconnaissance three months after the disaster, evidence of the extent of the tragedy was undeniably apparent. Observing the damage throughout Sichuan reminded us that earthquakes in the United States and throughout the world routinely cause widespread damage and destruction to lives, property, and infrastructure. The focus of this poster is to present observations and findings from our field reconnaissance regarding the scale of earthquake destruction with respect to slope failures, landslide dams, damage to infrastructure (e.g., schools, engineered dams, buildings, roads, rail lines, and water resources facilities), human habitation within the region, and the mitigation and response effort following this catastrophe. This is presented in the context of the policy measures that could be developed to reduce the risks of similar catastrophes. The rapid response of the Chinese government and the mobilization of the Chinese People's Liberation Army to help the communities affected by the earthquake have allowed survivors to begin rebuilding their lives. However, the long-term impact of the earthquake continues to make headlines. Post-earthquake landslides and debris flows initiated by storm events have continued to inflict devastation on the region. Events such as the Wenchuan Earthquake provide unique opportunities for engineers, scientists, and policy makers to collaborate in exploring the details of natural hazards and developing sound policies to protect lives and property in the future.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise from errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests that the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
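As a rough cross-check on the intensity and peak-velocity misfits quoted above, a widely used PGV-to-intensity regression (Wald et al., 1999; assumed here, and not necessarily the relation the authors used) implies that multiplicative PGV biases of 16% and 25-35% do map to roughly one-fourth and one-half MMI unit:

```python
import math

# Wald et al. (1999) California regression, valid roughly for MMI V-VIII:
#   MMI = 3.47 * log10(PGV [cm/s]) + 2.35
# A multiplicative bias in PGV therefore maps to an additive MMI bias:
for pgv_bias in (1.16, 1.25, 1.35):
    print(f"{pgv_bias:.2f}x PGV bias -> {3.47 * math.log10(pgv_bias):.2f} MMI units")
# prints ~0.22, ~0.34, ~0.45 MMI units
```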
Aagaard, Brad T.; Graves, Robert W.; Rodgers, Arthur; Brocher, Thomas M.; Simpson, Robert W.; Dreger, Douglas; Petersson, N. Anders; Larsen, Shawn C.; Ma, Shuo; Jachens, Robert C.
2010-01-01
We simulate long-period (T>1.0–2.0 s) and broadband (T>0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7–7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault, we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions, compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area, with about 50% of the urban area experiencing modified Mercalli intensity VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland earthquake and the 2007 Mw 5.45 Alum Rock earthquake show that the U.S. Geological Survey’s Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area for Hayward fault earthquakes, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions for the suite of scenarios exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute much of this difference to the seismic velocity structure in the San Francisco Bay area and how the NGA models account for basin amplification; the NGA relations may underpredict amplification in shallow sedimentary basins. The simulations also suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by increasing the areal extent of rupture directivity with period.
NASA Astrophysics Data System (ADS)
Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.
2018-05-01
On 25 April 2015, a hazardous earthquake of moment magnitude 7.9 occurred in Nepal. The earthquake was recorded by accelerographs installed in the Kumaon region in the Himalayan state of Uttarakhand, at distances of about 420-515 km from the epicenter. A modified semi-empirical technique for modeling finite faults has been used in this paper to simulate the strong-motion records of this earthquake at these stations. Source parameters of a Nepal aftershock were also calculated using the Brune model in the present study and were then used in the modeling of the main shock. The values of seismic moment and stress drop obtained from the Brune model for the aftershock are 8.26 × 10^25 dyn cm and 10.48 bar, respectively. The simulated earthquake time series were compared with the observed records, and the comparison of the full waveforms and their response spectra was used to finalize the rupture parameters and the rupture location. Based on the compared results, the rupture propagated in the NE-SW direction from the hypocenter with a rupture velocity of 3.0 km/s, at a depth of 12 km, about 80 km NW of Kathmandu.
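For reference, a minimal sketch of the standard Brune-model relations behind such source parameters (the formulas are textbook; the shear-wave speed would be an assumed input, and the "implied radius" below simply inverts the stress-drop relation using the values quoted above):

```python
import math

# Brune (1970) relations as commonly applied:
#   source radius r = 2.34 * beta / (2 * pi * fc)
#   stress drop   d_sigma = 7 * M0 / (16 * r**3)
def brune_radius(beta_m_s, fc_hz):
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def brune_stress_drop(m0_nm, radius_m):
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)  # in Pa

# Example: invert the aftershock values quoted above
# (8.26e25 dyn cm = 8.26e18 N m; 10.48 bar = 1.048e6 Pa):
m0 = 8.26e18
d_sigma = 10.48 * 1e5
r = (7.0 * m0 / (16.0 * d_sigma)) ** (1.0 / 3.0)
print(f"implied source radius ~ {r / 1e3:.1f} km")
```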
Tsunamigenic earthquake simulations using experimentally derived friction laws
NASA Astrophysics Data System (ADS)
Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.
2018-03-01
Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.
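For readers unfamiliar with how friction laws enter elastodynamic rupture simulations, here is the textbook linear slip-weakening law as a baseline sketch; it is not the experimentally derived, lithology- and depth-dependent law used in the study above, only the simplest form such simulations commonly start from:

```python
def slip_weakening_strength(sigma_n, mu_s, mu_d, slip, d_c):
    """Linear slip-weakening fault strength: friction falls from the static
    coefficient mu_s to the dynamic coefficient mu_d over a critical slip
    distance d_c. All values here are illustrative."""
    mu = mu_d + (mu_s - mu_d) * max(0.0, 1.0 - slip / d_c)
    return mu * sigma_n

# Strength drops from mu_s*sigma_n to mu_d*sigma_n as slip accumulates:
print(slip_weakening_strength(50e6, 0.6, 0.2, 0.0, 0.4))  # 30 MPa (static)
print(slip_weakening_strength(50e6, 0.6, 0.2, 0.4, 0.4))  # 10 MPa (fully weakened)
```

Depth-dependent laws like the one in the study replace the constant coefficients with functions of lithology and depth, which is what confines slip to particular layers.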
NASA Astrophysics Data System (ADS)
Huang, Ying; Bevans, W. J.; Xiao, Hai; Zhou, Zhi; Chen, Genda
2012-04-01
During or after an earthquake, building systems often experience large strains due to shaking effects, as observed during recent earthquakes, causing permanent inelastic deformation. In addition to the inelastic deformation induced by the earthquake itself, post-earthquake fires associated with electrical short circuits and leaking gas devices can further strain the already damaged structures, potentially leading to progressive collapse of buildings. Under these harsh environments, measurements on the affected building by various sensors can provide only limited structural health information. Finite element model analysis, on the other hand, if validated by predesigned experiments, can provide detailed information on the behavior of the entire structure. In this paper, a temperature-dependent nonlinear 3-D finite element model (FEM) of a one-story steel frame is set up in ABAQUS based on the steel material properties cited from EN 1993-1-2 and the AISC manuals. The FEM is validated by testing the modeled steel frame in simulated post-earthquake fire environments. Comparisons between the FEM analysis and the experimental results show that the FEM predicts the structural behavior of the steel frame in post-earthquake fire conditions reasonably well. With experimental validation, FEM analysis of critical structures could be used to predict their behavior in these harsh environments, better assisting firefighters in their rescue efforts and saving fire victims.
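The temperature dependence enters such an FEM mainly through reduction factors on steel properties. As an illustration, the following interpolates the yield-strength reduction factor k_y as commonly tabulated from EN 1993-1-2 (the values below are quoted from memory and should be verified against the code itself):

```python
import numpy as np

# Yield-strength reduction factors k_y for carbon steel, as commonly
# tabulated from EN 1993-1-2 (verify against the code before use).
TEMP_C = np.array([20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200])
K_Y    = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.78, 0.47, 0.23, 0.11, 0.06, 0.04, 0.02, 0.0])

def yield_reduction(temp_c):
    # EN 1993-1-2 permits linear interpolation between tabulated points
    return float(np.interp(temp_c, TEMP_C, K_Y))

print(yield_reduction(550))  # ~0.625: steel retains ~60% of its yield strength at 550 C
```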
Post-earthquake building safety inspection: Lessons from the Canterbury, New Zealand, earthquakes
Marshall, J.; Jaiswal, Kishor; Gould, N.; Turner, F.; Lizundia, B.; Barnes, J.
2013-01-01
The authors discuss some of the unique aspects and lessons of the New Zealand post-earthquake building safety inspection program that was implemented following the Canterbury earthquake sequence of 2010–2011. The post-event safety assessment program was one of the largest and longest programs undertaken in recent times anywhere in the world. The effort engaged hundreds of engineering professionals throughout the country, and also sought expertise from outside, to perform post-earthquake structural safety inspections of more than 100,000 buildings in the city of Christchurch and the surrounding suburbs. While the building safety inspection procedure implemented was analogous to the ATC 20 program in the United States, many modifications were proposed and implemented in order to assess the large number of buildings that were subjected to strong and variable shaking during a period of two years. This note discusses some of the key aspects of the post-earthquake building safety inspection program and summarizes important lessons that can improve future earthquake response.
Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Highway Systems
Yashinsky, Mark
1998-01-01
This paper summarizes the impact of the Loma Prieta earthquake on highway systems. City streets, urban freeways, county roads, state routes, and the national highway system were all affected. There was damage to bridges, roads, tunnels, and other highway structures. The most serious damage occurred in the cities of San Francisco and Oakland, 60 miles from the fault rupture. The cost to repair and replace highways damaged by this earthquake was $2 billion. About half of this cost was to replace the Cypress Viaduct, a long, elevated double-deck expressway that had a devastating collapse which resulted in 42 deaths and 108 injuries. The earthquake also resulted in some positive changes for highway systems. Research on bridges and earthquakes began to be funded at a much higher level. Retrofit programs were started to upgrade the seismic performance of the nation's highways. The Loma Prieta earthquake changed earthquake policy and engineering practice for highway departments not only in California, but all over the world.
Celsi, R.; Wolfinbarger, M.; Wald, D.
2005-01-01
The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree to which earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to the experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience toward the reported earthquake magnitude. The anchoring effect is moderated by the degree to which individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.
Reduction of earthquake risk in the united states: Bridging the gap between research and practice
Hays, W.W.
1998-01-01
Continuing efforts under the auspices of the National Earthquake Hazards Reduction Program are under way to improve earthquake risk assessment and risk management in earthquake-prone regions of Alaska, California, Nevada, Washington, Oregon, Arizona, Utah, Wyoming, and Idaho, the New Madrid and Wabash Valley seismic zones in the central United States, the southeastern and northeastern United States, Puerto Rico, Virgin Islands, Guam, and Hawaii. Geologists, geophysicists, seismologists, architects, engineers, urban planners, emergency managers, health care specialists, and policymakers are having to work at the margins of their disciplines to bridge the gap between research and practice and to provide a social, technical, administrative, political, legal, and economic basis for changing public policies and professional practices in communities where the earthquake risk is unacceptable. © 1998 IEEE.
A global building inventory for earthquake loss estimation and risk management
Jaiswal, K.; Wald, D.; Porter, K.
2010-01-01
We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature. © 2010, Earthquake Engineering Research Institute.
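A toy illustration of the kind of country-level record such an inventory holds; the field names and fractions here are invented for illustration and do not reflect the actual PAGER schema:

```python
# Hypothetical, simplified inventory record (names and values invented):
inventory = {
    "country": "Exampleland",
    "residential_urban": {
        "unreinforced_masonry": 0.35,
        "reinforced_concrete_frame": 0.40,
        "wood_frame": 0.25,
    },
}

def dominant_type(distribution):
    """Return the building type with the largest inventory fraction."""
    return max(distribution, key=distribution.get)

print(dominant_type(inventory["residential_urban"]))  # reinforced_concrete_frame
```

In loss estimation, distributions like this are convolved with per-type fragility functions and the shaking field to estimate casualties and losses.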
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we develop an extended stochastic technique for seismic hazard assessment. The technique builds on the stochastic method of Boore (2003), "Simulation of ground motion using the stochastic method" (Pure and Applied Geophysics 160:635-676). The essential aim of the extended stochastic technique is to simulate ground motion in order to minimize the consequences of future earthquakes. The first step of the technique is to define the seismic sources that most affect the study area. The maximum expected magnitude is then defined for each of these seismic sources, followed by an estimate of the ground motion using an empirical attenuation relationship. Finally, site amplification is included in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this technique at the cities of Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta to predict the ground motion, and at Cairo, Zagazig and Damietta to estimate the maximum peak ground acceleration under actual soil conditions. In addition, median response spectra for 0.5, 1, 5, 10 and 20 % damping are estimated using the extended stochastic simulation technique. The highest calculated acceleration at bedrock conditions is found at Suez, with a value of 44 cm s⁻²; the acceleration values decrease toward the north of the study area, reaching 14.1 cm s⁻² at Damietta. This agrees with, and is comparable to, the results of previous seismic hazard studies of northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
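A heavily simplified, uncalibrated sketch of the core stochastic-method idea (Boore, 2003): shape a random-phase spectrum with an omega-squared source, anelastic path attenuation, geometric spreading and a kappa site filter, then invert to a time series and read off PGA. Every parameter value below is a placeholder and the amplitude scaling is only schematic, not a substitute for the calibrated procedure:

```python
import numpy as np

def stochastic_pga(m0_nm, fc_hz, r_km, beta_km_s=3.5, rho=2700.0,
                   q=600.0, kappa=0.04, dur_s=20.0, dt=0.005, seed=1):
    """Toy point-source stochastic simulation in the spirit of Boore (2003)."""
    rng = np.random.default_rng(seed)
    n = int(dur_s / dt)
    f = np.fft.rfftfreq(n, dt)
    f[0] = 1e-6                                   # avoid division by zero at DC
    beta_m = beta_km_s * 1e3
    const = (0.55 * 2.0 * 0.707) / (4 * np.pi * rho * beta_m ** 3)
    source = const * m0_nm * (2 * np.pi * f) ** 2 / (1 + (f / fc_hz) ** 2)
    path = np.exp(-np.pi * f * r_km / (q * beta_km_s)) / (r_km * 1e3)
    site = np.exp(-np.pi * kappa * f)
    spectrum = source * path * site * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    acc = np.fft.irfft(spectrum * n, n)           # schematic normalization
    return np.abs(acc).max()

print(f"toy PGA ~ {stochastic_pga(8e17, 0.3, 60.0):.3f} m/s^2")
```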
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, D.; Kintzer, F.C.
1977-11-01
The correlation between ground motion and building damage was investigated for the San Fernando earthquake of 1971. A series of iso-intensity maps was compiled to summarize the ground motion in terms of the Blume Engineering Intensity Scale (EIS). This involved the analysis of ground motion records from 62 stations in the Los Angeles area. Damage information for low-rise buildings was obtained in the form of records of loans granted by the Small Business Administration to repair earthquake damage. High-rise damage evaluations were based on direct inquiry and building inspection. Damage factors (ratio of damage repair cost to building value) were calculated and summarized on contour maps. A statistical study was then undertaken to determine relationships between ground motion and damage factor. Several parameters for ground motion were considered and evaluated by means of correlation coefficients.
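The statistical step amounts to correlating a ground-motion measure against the damage factor; a minimal sketch with invented sample values (not the San Fernando data):

```python
import numpy as np

# Hypothetical paired observations: a ground-motion intensity measure and the
# damage factor (repair cost / building value) for a set of buildings.
intensity = np.array([4.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
damage_factor = np.array([0.00, 0.01, 0.02, 0.05, 0.08, 0.15, 0.22])

r = np.corrcoef(intensity, damage_factor)[0, 1]
print(f"correlation coefficient: {r:.2f}")
```

Repeating this for each candidate ground-motion parameter identifies which one best explains the observed damage.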
ShakeNet: a portable wireless sensor network for instrumenting large civil structures
Kohler, Monica D.; Hao, Shuai; Mishra, Nilesh; Govindan, Ramesh; Nigbor, Robert
2015-08-03
We report our findings from a U.S. Geological Survey (USGS) National Earthquake Hazards Reduction Program-funded project to develop and test a wireless, portable, strong-motion network of up to 40 triaxial accelerometers for structural health monitoring. The overall goal of the project was to record ambient vibrations for several days from USGS-instrumented structures. Structural health monitoring has important applications in fields like civil engineering and the study of earthquakes. The emergence of wireless sensor networks provides a promising means to such applications. However, while most wireless sensor networks are still in the experimentation stage, very few take into consideration the realistic earthquake engineering application requirements. To collect comprehensive data for structural health monitoring for civil engineers, high-resolution vibration sensors and sufficient sampling rates should be adopted, which makes it challenging for current wireless sensor network technology in the following ways: processing capabilities, storage limit, and communication bandwidth. The wireless sensor network has to meet expectations set by wired sensor devices prevalent in the structural health monitoring community. For this project, we built and tested an application-realistic, commercially based, portable, wireless sensor network called ShakeNet for instrumentation of large civil structures, especially for buildings, bridges, or dams after earthquakes. Two to three people can deploy ShakeNet sensors within hours after an earthquake to measure the structural response of the building or bridge during aftershocks. ShakeNet involved the development of a new sensing platform (ShakeBox) running a software suite for networking, data collection, and monitoring. Deployments reported here on a tall building and a large dam were real-world tests of ShakeNet operation, and helped to refine both hardware and software.
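To see why bandwidth and storage are the binding constraints for such a network, here is a back-of-envelope load estimate; the 40 triaxial sensors come from the text, while the sample rate and bit depth are assumptions for illustration:

```python
# Aggregate raw data rate for a ShakeNet-like deployment.
n_sensors, channels = 40, 3       # 40 triaxial accelerometers (from the text)
sample_rate_hz = 200              # assumed sampling rate
bits_per_sample = 24              # assumed ADC resolution
bps = n_sensors * channels * sample_rate_hz * bits_per_sample
print(f"aggregate raw data rate: {bps / 1e6:.2f} Mbit/s")   # ~0.58 Mbit/s
print(f"per day: {bps * 86400 / 8 / 1e9:.1f} GB")           # ~6.2 GB/day
```

Even these modest-looking rates must be sustained continuously over multi-hop wireless links, which is what distinguishes structural monitoring from typical low-rate sensor-network applications.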
DEVELOPMENT OF USER-FRIENDLY SIMULATION SYSTEM OF EARTHQUAKE INDUCED URBAN SPREADING FIRE
NASA Astrophysics Data System (ADS)
Tsujihara, Osamu; Gawa, Hidemi; Hayashi, Hirofumi
In the simulation of earthquake-induced urban spreading fire, an analytical model of the target area must be produced, in addition to the analysis of fire spread and the presentation of the results. To promote the use of such simulation, it is important that the simulation system be easy to use and that the analysis results be demonstrated through realistic presentation. In this study, a simulation system is developed based on a Petri-net algorithm, in which the target area can be modeled through simple operations and the analytical results are presented as realistic 3-D animation.
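A toy marking-based sketch of the Petri-net idea applied to fire spread: buildings are places, a token marks "burning", and a transition fires probabilistically toward unburnt neighbours. The adjacency and spread probability are invented for illustration and are far simpler than the cited system:

```python
import random

# Toy Petri-net-style fire-spread step over a row of four buildings.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
burning = {0}                       # initial marking: building 0 ignites

def step(burning, p_spread=0.5, rng=random.Random(42)):
    new = set(burning)
    for b in burning:
        for nb in neighbours[b]:
            if nb not in burning and rng.random() < p_spread:
                new.add(nb)         # transition fires: token placed at nb
    return new

for t in range(4):
    burning = step(burning)
    print(t, sorted(burning))
```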
NASA Astrophysics Data System (ADS)
Özyaşar, M.; Özlüdemir, M. T.
2011-06-01
Global Navigation Satellite Systems (GNSS) are space-based positioning techniques widely used in geodetic applications. Geodetic networking accomplished by engineering surveys constitutes one of these tasks. Geodetic networks serve as the base of all kinds of geodetic implementations, from cadastral plans to the relevant surveying processes during the realization of engineering applications. Geodetic networks consist of control points positioned in a defined reference frame. In fact, such positional information can be useful for other studies as well. One such field is geodynamics, which uses the changes in the positions of control stations within a network over a certain time period to understand the characteristics of tectonic movements. In Turkey, which is located in tectonically active zones and struck by major earthquakes quite frequently, the positional information obtained in engineering surveys can be very useful for earthquake-related studies. For this purpose, a GPS (Global Positioning System) network of 650 stations distributed over Istanbul (Istanbul GPS Triangulation Network, abbreviated IGNA), covering the northern part of the North Anatolian Fault Zone (NAFZ), was established in 1997 and measured in 1999. From 1998 to 2004, the IGNA network was extended to 1888 stations covering an area of about 6000 km2, the whole administrative area of Istanbul. All 1888 stations within the IGNA network were remeasured in 2005. The two campaigns have 452 common points, and between the two campaigns two major earthquakes took place, on 17 August and 12 November 1999, with Richter-scale magnitudes of 7.4 and 7.2, respectively. Several studies conducted to estimate the horizontal and vertical displacements caused by these earthquakes on the NAFZ are discussed in this paper. In geodynamic projects carried out before the 1999 earthquakes, an annual average velocity of 2-2.5 cm was estimated for the stations along the NAFZ. Studies carried out using GPS observations in the same area after these earthquakes indicated that point displacements vary depending on their distance to the epicentres of the earthquakes, but the directions of the point displacements are similar. The results obtained through the analysis of the IGNA network also show a common trend in the directions of point displacements in the study area. In this paper, past studies of the tectonics of the Marmara region are summarised and the results of the displacement analysis of the IGNA network are discussed.
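Campaign-to-campaign displacement analysis reduces, per common station, to differencing adjusted coordinates; a minimal sketch with hypothetical local coordinates (not IGNA values):

```python
import numpy as np

# Hypothetical local (east, north) coordinates of one control point in two
# campaigns, in metres (values invented for illustration):
p1999 = np.array([412345.678, 4541234.567])
p2005 = np.array([412345.712, 4541234.541])

d = p2005 - p1999
length = np.hypot(*d)
azimuth = np.degrees(np.arctan2(d[0], d[1])) % 360  # from north, clockwise
print(f"displacement {length * 1000:.1f} mm toward azimuth {azimuth:.0f} deg")
```

In practice the differences are tested against their propagated uncertainties before being interpreted as tectonic motion rather than survey noise.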
GIA induced intraplate seismicity in northern Central Europe
NASA Astrophysics Data System (ADS)
Brandes, Christian; Steffen, Holger; Steffen, Rebekka; Wu, Patrick
2015-04-01
Though northern Central Europe is regarded as a low-seismicity area (Leydecker and Kopera, 1999), several historic earthquakes with intensities of up to VII affected the area in the last 1200 years (Leydecker, 2011). The trigger for these seismic events has not yet been sufficiently investigated. Based on the combination of historic earthquake epicentres with the most recent fault maps, we show that the historic seismicity concentrated at major reverse faults. There is no evidence for significant historic earthquakes along normal faults in northern Central Europe. The spatial and temporal distribution of earthquakes (clusters that shift from time to time) implies that northern Central Europe behaves like a typical intraplate tectonic region, as demonstrated for other intraplate settings (Liu et al., 2011). We utilized finite element models that describe the process of glacial isostatic adjustment (GIA) to analyse the fault behaviour. We use the change in Coulomb Failure Stress (dCFS) to represent the minimum stress required to reach faulting. A negative dCFS value indicates that the fault is stable, while a positive value means that GIA stress is potentially available to induce faulting or cause fault instability or failure unless released temporarily by an earthquake. The results imply that many faults in Central Europe are postglacial faults, even though they developed outside the glaciated area. This is supported by the characteristics of the dCFS graphs, which indicate the likelihood that an earthquake is related to GIA: almost all graphs show a change from negative to positive values during the deglaciation phase. This observation sheds new light on the distribution of post-glacial faults in general. Based on field data and the numerical simulations, we developed the first consistent model that can explain the occurrence of deglaciation seismicity and more recent historic earthquakes in northern Central Europe. In this model, the historic seismicity in northern Central Europe can be regarded as a kind of aftershock sequence of the GIA-induced seismicity. References: Leydecker, G. and Kopera, J.R. Seismological hazard assessment for a site in Northern Germany, an area of low seismicity. Engineering Geology 52, 293-304 (1999). Leydecker, G. Erdbebenkatalog für die Bundesrepublik Deutschland mit Randgebieten für die Jahre 800-2008 [Earthquake catalogue for the Federal Republic of Germany and adjacent areas for the years 800-2008]. Geologisches Jahrbuch Reihe E, 198 pp. (2011). Liu, M., Stein, S. and Wang, H. 2000 years of migrating earthquakes in north China: How earthquakes in midcontinents differ from those at plate boundaries. Lithosphere 3, 128-132 (2011).
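The dCFS quantity used above is conventionally the shear-stress change plus an effective-friction-weighted normal-stress change resolved on the fault plane; a minimal sketch (sign conventions vary between studies, and the effective friction coefficient below is an assumed placeholder):

```python
def coulomb_failure_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """dCFS = d_tau + mu_eff * d_sigma_n, with d_tau the shear-stress change
    in the slip direction and d_sigma_n the normal-stress change taken
    positive in extension (unclamping). Positive dCFS loads the fault."""
    return d_tau + mu_eff * d_sigma_n

# Example: 0.3 MPa of added shear stress against 0.2 MPa of clamping:
print(coulomb_failure_stress_change(0.3e6, -0.2e6))  # +0.22 MPa -> destabilizing
```

In a GIA model, d_tau and d_sigma_n are evaluated through the loading and deglaciation history, which is what produces the negative-to-positive dCFS transition described above.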
The effect of segmented fault zones on earthquake rupture propagation and termination
NASA Astrophysics Data System (ADS)
Huang, Y.
2017-12-01
A fundamental question in earthquake source physics is what controls the nucleation and termination of an earthquake rupture. Besides stress heterogeneities and variations in frictional properties, damaged fault zones (DFZs) that surround major strike-slip faults can contribute significantly to earthquake rupture propagation. Previous earthquake rupture simulations usually characterize DFZs as several-hundred-meter-wide layers with lower seismic velocities than the host rocks, and find that earthquake ruptures in DFZs can exhibit slip pulses and oscillating rupture speeds that ultimately enhance high-frequency ground motions. However, real DFZs are more complex than uniform low-velocity structures, and show along-strike variations of damage that may be correlated with historical earthquake ruptures. These segmented structures can either prohibit or assist rupture propagation and significantly affect the final sizes of earthquakes. For example, recent dense array data recorded at the San Jacinto fault zone suggest the existence of three prominent DFZs across the Anza seismic gap and the south section of the Clark branch, while no prominent DFZs were identified near the ends of the Anza seismic gap. To better understand earthquake rupture in segmented fault zones, we will present dynamic rupture simulations that calculate the time-varying rupture process physically by considering the interactions between fault stresses, fault frictional properties, and material heterogeneities. We will show that whether an earthquake rupture can break through the intact rock outside the DFZ depends on the nucleation size of the earthquake and the rupture propagation distance in the DFZ. Moreover, the material properties of the DFZ, the stress conditions along the fault, and the frictional properties of the fault also have a critical impact on rupture propagation and termination. We will also present scenarios of San Jacinto earthquake ruptures and show the parameter space that is favorable for rupture propagation through the Anza seismic gap. Our results suggest that a priori knowledge of the properties of segmented fault zones is of great importance for predicting the sizes of future large earthquakes on major faults.
Comparison of ground motions from hybrid simulations to nga prediction equations
Star, L.M.; Stewart, J.P.; Graves, R.W.
2011-01-01
We compare simulated motions for a Mw 7.8 rupture scenario on the San Andreas Fault known as the ShakeOut event, two permutations with different hypocenter locations, and a Mw 7.15 Puente Hills blind thrust scenario, to median and dispersion predictions from empirical NGA ground motion prediction equations. We find the simulated motions attenuate faster with distance than is predicted by the NGA models for periods less than about 5.0 s. After removing this distance-attenuation bias, the average residuals of the simulated events (i.e., event terms) are generally within the scatter of empirical event terms, although the ShakeOut simulation appears to be a high static stress drop event. The intraevent dispersion in the simulations is lower than NGA values at short periods and abruptly increases at 1.0 s due to different simulation procedures at short and long periods. The simulated motions have a depth-dependent basin response similar to the NGA models, and also show complex effects in which stronger basin response occurs when the fault rupture transmits energy into a basin at a low angle, which is not predicted by the NGA models. Rupture directivity effects are found to scale with the isochrone parameter. © 2011, Earthquake Engineering Research Institute.
Compendium of Abstracts on Statistical Applications in Geotechnical Engineering.
1983-09-01
Research in the application of probabilistic and statistical methods to soil mechanics, rock mechanics, and engineering geology problems has grown markedly... probability, statistics, soil mechanics, rock mechanics, and engineering geology. 2. The purpose of this report is to make available to the U.S. ... Topics covered include: deformation; dynamic response analysis; seepage, soil permeability and piping; earthquake engineering, seismology, settlement and heave; and seismic risk analysis.
Numerical simulation of the 1976 Ms7.8 Tangshan Earthquake
NASA Astrophysics Data System (ADS)
Li, Zhengbo; Chen, Xiaofei
2017-04-01
An Ms 7.8 earthquake struck Tangshan in 1976, causing more than 240,000 deaths and almost destroying the whole city. Numerous studies indicated that the surface rupture zone extends 8 to 11 km south of Tangshan City. The fault system is composed of more than ten NE-trending, right-lateral strike-slip, left-stepping echelon faults, with a general strike direction of N30°E. However, recent work proposes that the surface ruptures appeared over a larger area. To simulate the rupture process closer to the real situation, the curvilinear-grid finite-difference method presented by Zhang et al. (2006, 2014), which can handle the free surface and complex geometry, was used to investigate the dynamic rupture and ground motion of the Tangshan earthquake. Based on data from field surveys, seismic sections, boreholes and trenching results given by different studies, several fault geometry models were established. The intensities, seismic waveforms and displacements resulting from the simulations of the different models were compared with the observed data. The comparison of these models reveals details of the rupture process of the Tangshan earthquake and implies that super-shear rupture may have occurred, which is important for a better understanding of this complicated rupture process and of the seismic hazard distribution of this earthquake.
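The cited curvilinear-grid finite-difference method is a 3-D, topography-capable scheme; as a one-dimensional toy stand-in, here is the standard velocity-stress staggered-grid update that such codes generalize (all parameter values are illustrative):

```python
import numpy as np

def fd1d_wave(nx=400, nt=800, dx=50.0, dt=0.005, beta=2000.0, rho=2500.0):
    """Minimal 1-D velocity-stress staggered-grid finite-difference scheme.
    CFL check: beta*dt/dx = 0.2 < 1, so the scheme is stable."""
    mu = rho * beta ** 2
    v = np.zeros(nx)          # particle velocity on the main grid
    s = np.zeros(nx - 1)      # shear stress on the staggered grid
    for it in range(nt):
        s += dt * mu * np.diff(v) / dx
        v[1:-1] += dt * np.diff(s) / (dx * rho)
        v[nx // 2] += np.exp(-((it * dt - 0.5) / 0.05) ** 2)  # source wavelet
    return v

print(f"peak |v| after propagation: {np.abs(fd1d_wave()).max():.3e}")
```

Curvilinear-grid formulations replace the uniform spacing with a coordinate mapping so the top grid surface can follow real topography, which is what makes the mountain-ridge amplification effects in these studies resolvable.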
NASA Technical Reports Server (NTRS)
1979-01-01
During NASA's Apollo program, it was necessary to subject the mammoth Saturn V launch vehicle to extremely forceful vibrations to assure the moon booster's structural integrity in flight. Marshall Space Flight Center assigned vibration testing to a contractor, the Scientific Services and Systems Group of Wyle Laboratories, Norco, California. Wyle-3S, as the group is known, built a large facility at Huntsville, Alabama, and equipped it with an enormously forceful shock and vibration system to simulate the liftoff stresses the Saturn V would encounter. Saturn V is no longer in service, but Wyle-3S has found spinoff utility for its vibration facility. It is now being used to simulate earthquake effects on various kinds of equipment, principally equipment intended for use in nuclear power generation. Government regulations require that such equipment demonstrate its ability to survive earthquake conditions. In the upper left photo, Wyle-3S is preparing to conduct an earthquake test on a 25-ton diesel generator built by Atlas Polar Company, Ltd., Toronto, Canada, for emergency use in a Canadian nuclear power plant. Being readied for test in the lower left photo is a large circuit breaker to be used by Duke Power Company, Charlotte, North Carolina. Electro-hydraulic and electro-dynamic shakers in and around the pit simulate earthquake forces.
NASA Astrophysics Data System (ADS)
Zhu, Gengshang; Zhang, Zhenguo; Wen, Jian; Zhang, Wei; Chen, Xiaofei
2013-08-01
The earthquake that occurred in Lushan County on 20 April 2013 caused heavy casualties and economic losses. In order to understand how the seismic energy propagated during this earthquake and how it caused the seismic hazard, we simulated the strong ground motions from a representative kinematic source model by Zhang et al. (Chin J Geophys 56(4):1408-1411, 2013) for this earthquake. To include topographic effects, we used the curved-grid finite-difference method of Zhang and Chen (Geophys J Int 167(1):337-353, 2006) and Zhang et al. (Geophys J Int 190(1):358-378, 2012) to implement the simulations. Our results indicate that the majority of the seismic energy was concentrated in the epicentral area and the neighboring Sichuan Basin, causing intensities of IX and VII degrees, respectively. Due to the strong topographic effects of the mountains, the seismic intensity in the border area from northeastern Baoxing County to Lushan County also reached IX degrees. Moreover, the strong influence of topography amplified the ground shaking at mountain ridges, which readily triggers landslides. These results are quite similar to those observed in the 2008 Wenchuan earthquake, which also occurred in a mountainous area with strong topography.
The Virtual Data Center Tagged-Format Tool - Introduction and Executive Summary
Evans, John R.; Squibb, Melinda; Stephens, Christopher D.; Savage, W.U.; Haddadi, Hamid; Kircher, Charles A.; Hachem, Mahmoud M.
2008-01-01
This Report introduces and summarizes the new Virtual Data Center (VDC) Tagged Format (VTF) Tool, which was developed by a diverse group of seismologists, earthquake engineers, and information technology professionals for internal use by the COSMOS VDC and other interested parties for the exchange, archiving, and analysis of earthquake strong-ground-motion data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackwell, Matt; Rodgers, Arthur; Kennedy, Tom
When the California Academy of Sciences created the "Earthquake: Evidence of a Restless Planet" exhibit, they called on Lawrence Livermore to help combine seismic research with the latest data-driven visualization techniques. The outcome is a series of striking visualizations of earthquakes, tsunamis and tectonic plate evolution. Seismic-wave research is a core competency at Livermore. While most often associated with earthquakes, the research has many other applications of national interest, such as nuclear explosion monitoring, explosion forensics, energy exploration, and seismic acoustics. For the Academy effort, Livermore researchers simulated the San Andreas and Hayward fault events at high resolutions. Such calculations require significant computational resources. To simulate the 1906 earthquake, for instance, visualizing 125 seconds of ground motion required over 1 billion grid points, 10,000 time steps, and 7.5 hours of processor time on 2,048 cores of Livermore's Sierra machine.
Simulation of tsunamis from great earthquakes on the Cascadia subduction zone.
Ng, M K; Leblond, P H; Murty, T S
1990-11-30
Large earthquakes occur episodically in the Cascadia subduction zone. A numerical model has been used to simulate and assess the hazards of a tsunami generated by a hypothetical earthquake of magnitude 8.5 associated with rupture of the northern sections of the subduction zone. Wave amplitudes on the outer coast are closely related to the magnitude of sea-bottom displacement (5.0 meters). Some amplification, up to a factor of 3, may occur in some coastal embayments. Wave amplitudes in the protected waters of Puget Sound and the Strait of Georgia are predicted to be only about one fifth of those estimated on the outer coast.
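Coastal amplification factors like the ones quoted above can be compared against the classic Green's-law shoaling estimate; a minimal sketch, assuming frictionless, gradually varying depth and illustrative depth values:

```python
def greens_law_amplitude(a0, h0, h1):
    """Green's law: amplitude scales as (h0/h1)**0.25 as a long wave
    shoals from depth h0 to depth h1 (idealized, frictionless)."""
    return a0 * (h0 / h1) ** 0.25

# A 1 m wave at 2000 m depth grows to ~3.4 m by 15 m depth:
print(round(greens_law_amplitude(1.0, 2000.0, 15.0), 1))
```

Resonance in embayments, which the numerical model captures, can add further amplification beyond this simple shoaling effect.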
NASA Astrophysics Data System (ADS)
Van Daele, Maarten; Araya-Cornejo, Cristian; Pille, Thomas; Meyer, Inka; Kempf, Philipp; Moernaut, Jasper; Cisternas, Marco
2017-04-01
One of the main challenges in seismically active regions is differentiating paleo-earthquakes resulting from different fault systems, such as the megathrust versus intraplate faults in subduction settings. Such differentiation is, however, key for hazard assessments based on paleoseismic records. Laguna Lo Encañado (33.7°S; 70.3°W; 2492 m a.s.l.) is located in the Central Chilean Andes, 50 km east of Santiago de Chile, a metropolis with about 7,000,000 inhabitants. During the last century the study area experienced three large megathrust earthquakes (1906, 1985 and 2010) and two intraplate earthquakes (1945 and 1958) (Lomnitz, 1960). While the megathrust earthquakes cause Modified Mercalli Intensities (MMIs) of VI to VII at the lake (Van Daele et al., 2015), the intraplate earthquakes cause peak MMIs of up to IX (Sepúlveda et al., 2008). Here we present a turbidite record of Laguna Lo Encañado going back to 1900 AD. While geophysical data (3.5 kHz subbottom seismic profiles and side-scan sonar data) provide bathymetry and an overview of the sedimentary environment, we study 15 short cores in order to understand the depositional processes resulting in the encountered lacustrine turbidites. All of the mentioned earthquakes triggered turbidites in the lake, which are all linked to slumps in proximal areas and thus result from mass wasting of the subaquatic slopes. However, turbidites linked to the intraplate earthquakes are additionally covered by turbidites of a finer-grained, more clastic nature. We link the latter to post-seismic erosion of onshore landslides, which need higher MMIs to be triggered than subaquatic mass movements (Howarth et al., 2014). While intraplate earthquakes can cause MMIs up to IX and higher, megathrust earthquakes do not cause sufficiently high MMIs at the lake to trigger voluminous onshore landslides. Hence, the presence of these post-seismic turbidites allows turbidites triggered by intraplate earthquakes to be distinguished from those triggered by megathrust earthquakes. These findings are an important step forward in the interpretation of lacustrine turbidites in subduction settings, and will eventually improve hazard assessments based on such paleoseismic records in the study area and in other subduction zones. References: Howarth et al., 2014. Lake sediments record high intensity shaking that provides insight into the location and rupture length of large earthquakes on the Alpine Fault, New Zealand. Earth and Planetary Science Letters 403, 340-351. Lomnitz, 1960. A study of the Maipo Valley earthquakes of September 4, 1958, Second World Conference on Earthquake Engineering, Tokyo and Kyoto, Japan, pp. 501-520. Sepúlveda et al., 2008. New Findings on the 1958 Las Melosas Earthquake Sequence, Central Chile: Implications for Seismic Hazard Related to Shallow Crustal Earthquakes in Subduction Zones. Journal of Earthquake Engineering 12, 432-455. Van Daele et al., 2015. A comparison of the sedimentary records of the 1960 and 2010 great Chilean earthquakes in 17 lakes: Implications for quantitative lacustrine palaeoseismology. Sedimentology 62, 1466-1496.
The Engineering Strong Ground Motion Network of the National Autonomous University of Mexico
NASA Astrophysics Data System (ADS)
Velasco Miranda, J. M.; Ramirez-Guzman, L.; Aguilar Calderon, L. A.; Almora Mata, D.; Ayala Hernandez, M.; Castro Parra, G.; Molina Avila, I.; Mora, A.; Torres Noguez, M.; Vazquez Larquet, R.
2014-12-01
The coverage, design, operation and monitoring capabilities of the strong ground motion program at the Institute of Engineering (IE) of the National Autonomous University of Mexico (UNAM) are presented. Started in 1952, the seismic instrumentation, initially intended to bolster earthquake engineering projects in Mexico City, has evolved into the largest strong ground motion monitoring system in the region. Today, it provides information not only to engineering projects, but also to the near real-time risk mitigation systems of the country, and enhances the general understanding of the effects and causes of earthquakes in Mexico. The IE network includes more than 100 free-field stations and several buildings, covering the largest urban centers and zones of significant seismicity in Central Mexico. Approximately one-fourth of those stations continuously transmit the observed accelerations to a processing center in Mexico City; the rest require either periodic visits for manual data recovery or remote interrogation, with later processing and cataloging. In this research, we document the procedures and telecommunications systems used systematically to recover information. Additionally, we analyze the spatial distribution of the free-field accelerographs, the quality of the instrumentation, and the recorded ground motions. The evaluation criteria are based on: 1) the uncertainty in the generation of ground-motion parameter maps due to the spatial distribution of the stations, 2) the potential of the array to provide location and magnitude estimates for earthquakes with magnitudes greater than Mw 5, and 3) the adequacy of the network for the development of ground motion prediction equations for interplate and intraslab earthquakes. We conclude that the monitoring system requires a spatial redistribution of stations, additional stations, and a substantial improvement in the instrumentation and telecommunications. Finally, we present an integral plan to improve the current network's monitoring capabilities.
An Investigation on the Crustal Deformations in Istanbul after Eastern Marmara Earthquakes in 1999
NASA Astrophysics Data System (ADS)
Ozludemir, M.; Ozyasar, M.
2008-12-01
Since the introduction of the GPS technique in the mid-1970s there have been great advances in positioning. Today such Global Navigation Satellite Systems (GNSS) based positioning techniques are widely used in daily geodetic applications. High-order geodetic network measurements are one such application. Such networks are established to provide a reliable infrastructure for all kinds of geodetic work, from the production of cadastral plans to the surveying processes during the construction of engineering structures. In fact, the positional information obtained in such engineering surveys can be useful for other studies as well. One such field is geodynamics, where positional information can be valuable for understanding the characteristics of tectonic movements. In Turkey, which is located in tectonically active zones and has major earthquakes quite frequently, the positional information obtained in engineering surveys can be very useful for earthquake-related studies. In this paper an example of such engineering surveys is discussed: the Istanbul GPS (Global Positioning System) Network, first established in 1997 and remeasured in 2005. Between these two measurement campaigns two major earthquakes took place, on August 17 and November 12, 1999, with magnitudes of 7.4 and 7.2, respectively. In the first measurement campaign in 1997, a network of about 700 points was measured, while in the second campaign in 2005 more than 1800 points were positioned, with common points existing between the two campaigns. The network covers the whole Istanbul area of about 6000 km2. All network points are located on the Eurasian plate to the north of the North Anatolian Fault Zone. In this study, the horizontal and vertical movements are presented and compared with the results obtained in geodynamic studies.
Response and recovery lessons from the 2010-2011 earthquake sequence in Canterbury, New Zealand
Pierepiekarz, Mark; Johnston, David; Berryman, Kelvin; Hare, John; Gomberg, Joan S.; Williams, Robert A.; Weaver, Craig S.
2014-01-01
The impacts and opportunities that result when low-probability moderate earthquakes strike an urban area similar to many throughout the US were vividly conveyed in a one-day workshop in which social and Earth scientists, public officials, engineers, and an emergency manager shared their experiences of the earthquake sequence that struck the city of Christchurch and surrounding Canterbury region of New Zealand in 2010-2011. Without question, the earthquake sequence has had unprecedented impacts in all spheres on New Zealand society, locally to nationally--10% of the country's population was directly impacted and losses total 8-10% of their GDP. The following paragraphs present a few lessons from Christchurch.
2013-02-01
Kathmandu, Nepal (Contract W911NF-12-1-0282). In the past, big earthquakes in Nepal (see Figure 1.1) have caused a huge number of casualties and damage to structures. The Great Nepal-Bihar... UBC Earthquake Engineering Research Facility, 2235 East Mall, Vancouver, BC, Canada V6T 1Z4.
NASA Astrophysics Data System (ADS)
Major, J. R.; Liu, Z.; Harris, R. A.; Fisher, T. L.
2011-12-01
Using Dutch records of geophysical events in Indonesia over the past 400 years, together with tsunami modeling, we identify tsunami sources that have caused severe devastation in the past and are likely to recur in the near future. The earthquake history of western Indonesia has received much attention since the 2004 Sumatra earthquakes and subsequent events. However, strain rates along a variety of plate boundary segments are just as high in eastern Indonesia, where the earthquake history has not been investigated. Due to the rapid population growth in this region it is essential and urgent to evaluate its earthquake and tsunami hazards. Arthur Wichmann's 'Earthquakes of the Indian Archipelago' shows that there were 30 significant earthquakes and 29 tsunamis between 1629 and 1877. One of the largest and best documented is the great earthquake and tsunami affecting the Banda Islands on 1 August 1629. It caused severe damage from a 15 m tsunami that arrived at the Banda Islands about half an hour after the earthquake. The earthquake was also recorded 230 km away in Ambon, but no tsunami is mentioned there. This event was followed by at least 9 years of aftershocks. The combination of these observations indicates that the earthquake was most likely a mega-thrust event. We use numerical simulation of the tsunami to locate the potential sources of the 1629 mega-thrust event and evaluate the tsunami hazard in eastern Indonesia. The numerical simulation was first tested to establish the tsunami run-up amplification factor for this region through simulations of the 1992 Flores Island (Hidayat et al., 1995) and 2006 Java (Kato et al., 2007) earthquake events. The results yield tsunami run-up amplification factors of 1.5 and 3, respectively. However, the Java earthquake is a unique case of slow rupture that was hardly felt. The fault parameters of recent earthquakes in the Banda region are used for the models. The modeling narrows the possible sources of a mega-thrust event the size of the one in 1629 to the Seram and Timor Troughs. For the Seram Trough source, a Mw 8.8 event produces run-up heights in the Banda Islands of 15.5 m with an arrival time of 17 minutes. For a Timor Trough earthquake near the Tanimbar Islands, a Mw 9.2 event is needed to produce a 15 m run-up height, with an arrival time of 25 minutes. The main problem with the Timor Trough source is that it predicts run-up heights in Ambon of 10 m, which would likely have been recorded. Therefore, we conclude that the most likely source of the 1629 mega-thrust earthquake is the Seram Trough. No large earthquakes have been reported along the Seram Trough for over 200 years, although high rates of strain are measured across it. This study suggests that earthquakes triggered on this fault zone could be extremely devastating to eastern Indonesia. We strive to raise awareness among local governments not to underestimate the natural hazards of this region, based on lessons learned from the 2004 Sumatra and 2011 Tohoku tsunamigenic mega-thrust earthquakes.
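Arrival times like the 17 and 25 minutes cited above follow to first order from the shallow-water long-wave speed c = sqrt(g h); a sketch with illustrative distance and depth values (not values from the study):

```python
import math

def tsunami_travel_minutes(distance_km, depth_m):
    """Long-wave (shallow-water) speed c = sqrt(g*h): a first-order check
    on whether a candidate source matches a reported arrival time."""
    c = math.sqrt(9.81 * depth_m)                # wave speed in m/s
    return distance_km * 1000.0 / c / 60.0

# E.g., a source ~200 km away across ~4000 m deep water arrives in ~17 min:
print(round(tsunami_travel_minutes(200.0, 4000.0)))
```

Matching both the arrival time and the run-up height is what lets the modeling discriminate between the Seram and Timor Trough candidates.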
Akhlaghi, Tohid
2014-01-01
Evaluation of the accuracy of the pseudostatic approach is governed by the accuracy with which the simple pseudostatic inertial forces represent the complex dynamic inertial forces that actually exist in an earthquake. In this study, the Upper San Fernando and Kitayama earth dams, which were designed using the pseudostatic approach and damaged during the 1971 San Fernando and 1995 Kobe earthquakes, respectively, were investigated and analyzed. Finite element models of the dams were prepared based on the detailed available data and the results of in situ and laboratory material tests. Dynamic analyses were conducted to simulate the earthquake-induced deformations of the dams using the Plaxis finite element code. The pseudostatic seismic coefficients used in the design and analyses of the dams were then compared with the seismic coefficients obtained from the dynamic analyses of the simulated models, as well as with other available pseudostatic correlations. Based on these comparisons, the accuracy and reliability of the pseudostatic seismic coefficients are evaluated and discussed. PMID:24616636
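For context, the pseudostatic approach replaces the earthquake load with a static horizontal force k_h W on the sliding mass; a textbook sliding-block factor-of-safety sketch (not the Plaxis dam analyses above), with invented parameter values:

```python
import math

def pseudostatic_fs(weight, beta_deg, phi_deg, cohesion, base_len, kh):
    """Factor of safety for a rigid block on a plane inclined at beta,
    with friction angle phi, cohesion c over base length L, and a
    horizontal pseudostatic force kh * W. Textbook form, values invented."""
    b, p = math.radians(beta_deg), math.radians(phi_deg)
    normal = weight * math.cos(b) - kh * weight * math.sin(b)
    driving = weight * math.sin(b) + kh * weight * math.cos(b)
    resisting = cohesion * base_len + normal * math.tan(p)
    return resisting / driving

print(round(pseudostatic_fs(1000.0, 20.0, 35.0, 5.0, 10.0, 0.15), 2))  # ~1.39
```

The study's comparison amounts to asking what value of k_h, used in a calculation like this, reproduces the deformations that the full dynamic analysis predicts.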
NASA Astrophysics Data System (ADS)
Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.
2012-12-01
We present the analysis of low-frequency (<1 Hz) simulations of historical and hypothetical earthquakes in Central Mexico, using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding of the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in recent decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates of the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g., Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a kinematic rupture for a Mw=8.4 earthquake in the Guerrero Gap with the method of Liu et al. (2006) and computed the ground motion. We summarize our results by presenting ground motion parameter maps and the potential population and infrastructure exposure to large Modified Mercalli Intensities (>VI).
HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes
NASA Astrophysics Data System (ADS)
Carlson, J. M.; Hillers, G.; Archuleta, R. J.
2006-12-01
We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular the mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and the associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints such as the criticality concept. The discussion is flanked by numerical simulations of a 2D fault model, in which we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault, and thus its capability of generating a large (system-wide) event, assuming likely heterogeneous distributions of hypocenters and stresses, respectively.
Gori, Paula L.
1993-01-01
INTERACTIVE WORKSHOPS: ESSENTIAL ELEMENTS OF THE EARTHQUAKE HAZARDS RESEARCH AND REDUCTION PROGRAM IN THE WASATCH FRONT, UTAH: Interactive workshops provided the forum and stimulus necessary to foster collaboration among the participants in the multidisciplinary, 5-yr program of earthquake hazards reduction in the Wasatch Front, Utah. The workshop process validated well-documented social science theories on the importance of interpersonal interaction, including interaction between researchers and users of research, to increase the probability that research will be relevant to the user's needs and, therefore, more readily used. REDUCING EARTHQUAKE HAZARDS IN UTAH: THE CRUCIAL CONNECTION BETWEEN RESEARCHERS AND PRACTITIONERS: Complex scientific and engineering studies must be translated for and transferred to nontechnical personnel for use in reducing earthquake hazards in Utah. The three elements needed for effective translation (likelihood of occurrence, location, and severity of potential hazards) and the three elements needed for effective transfer (delivery, assistance, and encouragement) are described and illustrated for Utah. The importance of evaluating and revising earthquake hazard reduction programs and their components is emphasized. More than 30 evaluations of various natural hazard reduction programs and techniques are introduced. This report was prepared for research managers, funding sources, and evaluators of the Utah earthquake hazard reduction program who are concerned about effectiveness. An overview of the Utah program is provided for those researchers, engineers, planners, and decision makers, both public and private, who are committed to reducing human casualties, property damage, and interruptions of socioeconomic systems. PUBLIC PERCEPTIONS OF THE IMPLEMENTATION OF EARTHQUAKE MITIGATION POLICIES ALONG THE WASATCH FRONT IN UTAH: The earthquake hazard potential along the Wasatch Front in Utah has been well defined by a number of scientific and engineering studies. Translated earthquake hazard maps have also been developed to identify areas that are particularly vulnerable to various causes of damage such as ground shaking, surface rupturing, and liquefaction. The implementation of earthquake hazard reduction plans is now under way in various communities in Utah. The results of a survey presented in this paper indicate that technical public officials (planners and building officials) have an understanding of the earthquake hazards and how to mitigate the risks. Although the survey shows that the general public has a slightly lower concern about the potential for economic losses, they recognize the potential problems and can support a number of earthquake mitigation measures. The study suggests that many community groups along the Wasatch Front, including volunteer groups, business groups, and elected and appointed officials, are ready for action-oriented educational programs. These programs could lead to a significant reduction in the risks associated with earthquake hazards. A DATA BASE DESIGNED FOR URBAN SEISMIC HAZARDS STUDIES: A computerized data base has been designed for use in urban seismic hazards studies conducted by the U.S. Geological Survey. The design includes file structures for 16 linked data sets, which contain geological, geophysical, and seismological data used in preparing relative ground response maps of large urban areas. The data base is organized along relational data base principles.
A prototype urban hazards data base has been created for evaluation in two urban areas currently under investigation: the Wasatch Front region of Utah and the Puget Sound area of Washington. The initial implementation of the urban hazards data base was accomplished on a microcomputer using dBASE III Plus software and transferred to minicomputers and a work station. A MAPPING OF GROUND-SHAKING INTENSITIES FOR SALT LAKE COUNTY, UTAH: This paper documents the development of maps showing a
Empirical improvements for estimating earthquake response spectra with random‐vibration theory
Boore, David; Thompson, Eric M.
2012-01-01
The stochastic method of ground‐motion simulation is often used in combination with the random‐vibration theory to directly compute ground‐motion intensity measures, thereby bypassing the more computationally intensive time‐domain simulations. Key to the application of random‐vibration theory to simulate response spectra is determining the duration (Drms) used in computing the root‐mean‐square oscillator response. Boore and Joyner (1984) originally proposed an equation for Drms , which was improved upon by Liu and Pezeshk (1999). Though these equations are both substantial improvements over using the duration of the ground‐motion excitation for Drms , we document systematic differences between the ground‐motion intensity measures derived from the random‐vibration and time‐domain methods for both of these Drms equations. These differences are generally less than 10% for most magnitudes, distances, and periods of engineering interest. Given the systematic nature of the differences, however, we feel that improved equations are warranted. We empirically derive new equations from time‐domain simulations for eastern and western North America seismological models. The new equations improve the random‐vibration simulations over a wide range of magnitudes, distances, and oscillator periods.
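A minimal sketch of the random-vibration calculation described here, assuming a toy Fourier amplitude spectrum and a fixed peak factor (in practice the peak factor is computed from the spectral moments, and Drms comes from equations such as those derived in the paper):

```python
import numpy as np

def rvt_peak_response(freqs, fas, f_osc, damping=0.05, d_rms=10.0,
                      peak_factor=2.5):
    """Peak SDOF oscillator response via random-vibration theory.

    freqs, fas : Fourier amplitude spectrum of ground acceleration
    d_rms      : duration used for the rms response (the Drms of the text)
    peak_factor: peak-to-rms ratio, fixed here for brevity
    """
    # Squared modulus of the SDOF relative-displacement transfer function
    r = freqs / f_osc
    h2 = 1.0 / ((2.0 * np.pi * f_osc) ** 4 *
                ((1.0 - r**2) ** 2 + (2.0 * damping * r) ** 2))
    # Parseval: rms response from the integral of |H|^2 |A|^2
    m0 = 2.0 * np.trapz(h2 * fas**2, freqs)
    y_rms = np.sqrt(m0 / d_rms)
    return peak_factor * y_rms

# Hypothetical flat acceleration FAS between 0.1 and 20 Hz
f = np.linspace(0.1, 20.0, 2000)
fas = np.full_like(f, 50.0)
sd = rvt_peak_response(f, fas, f_osc=1.0)
print(f"1-s oscillator spectral displacement ~ {sd:.2f} (FAS units x s)")
```

The systematic time-domain vs. random-vibration differences discussed above enter through d_rms; swapping in a better Drms equation changes only that one line.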
Strong Ground Motion Analysis and Afterslip Modeling of Earthquakes near Mendocino Triple Junction
NASA Astrophysics Data System (ADS)
Gong, J.; McGuire, J. J.
2017-12-01
The Mendocino Triple Junction (MTJ) is one of the most seismically active regions in North America, in response to the ongoing motions between the North America, Pacific and Gorda plates. Earthquakes near the MTJ occur on multiple types of faults due to the interaction boundaries between the three plates and the strong internal deformation within them. Understanding the stress levels that drive earthquake rupture on the various types of faults and estimating the locking state of the subduction interface are especially important for earthquake hazard assessment. However, due to a lack of direct offshore seismic and geodetic records, only a few earthquakes' rupture processes have been well studied, and the locking state of the subducted slab is not well constrained. In this study we first use the second moment inversion method to study the rupture process of the January 28, 2015 Mw 5.7 strike-slip earthquake on the Mendocino transform fault, using strong ground motion records from the Cascadia Initiative community experiment as well as onshore seismic networks. We estimate the rupture dimension to be about 6 km by 3 km and the stress drop to be 7 MPa on the transform fault. Next we investigate the frictional locking state of the subduction interface through afterslip simulations based on coseismic rupture models of this 2015 earthquake and of a Mw 6.5 intraplate earthquake inside the Gorda plate, whose slip distribution was inverted using the onshore geodetic network in a previous study. Different depths at which velocity-strengthening frictional properties begin downdip of the locked zone are used to simulate afterslip scenarios and predict the corresponding onshore surface deformation (GPS) movements. Our simulations indicate that the locking depth on the slab surface is at least 14 km, which confirms that the next M8 earthquake rupture will likely reach the coastline and that strong shaking should be expected near the coast.
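For context, a back-of-the-envelope stress-drop estimate can be made from the reported magnitude and rupture dimensions using the Eshelby circular-crack formula; this is not the second-moments calculation used in the study, but it yields a value of the same order as the reported 7 MPa.

```python
import numpy as np

def moment_from_mw(mw):
    """Seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

def circular_crack_stress_drop(m0, radius_m):
    """Eshelby static stress drop for a circular crack."""
    return 7.0 * m0 / (16.0 * radius_m ** 3)

m0 = moment_from_mw(5.7)
# Effective radius of a 6 km x 3 km rupture patch (equal-area circle)
r_eff = np.sqrt(6e3 * 3e3 / np.pi)
dsigma = circular_crack_stress_drop(m0, r_eff)
print(f"M0 = {m0:.2e} N*m, r_eff = {r_eff/1e3:.1f} km, "
      f"stress drop ~ {dsigma/1e6:.1f} MPa")
```

This yields roughly 13 MPa, the same order of magnitude as the 7 MPa obtained from the second-moments analysis.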
The next new Madrid earthquake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atkinson, W.
1988-01-01
Scientists who specialize in the study of Mississippi Valley earthquakes say that the region is overdue for a powerful tremor that will cause major damage and undoubtedly some casualties. The inevitability of a future quake and the lack of preparation by both individuals and communities provided the impetus for this book. It brings together applicable information from many disciplines: history, geology and seismology, engineering, zoology, politics and community planning, economics, environmental science, sociology, and psychology and mental health to provide a perspective of the myriad impacts of a major earthquake on the Mississippi Valley. The author addresses such basic questions as: What, actually, are earthquakes? How do they occur? Can they be predicted, perhaps even prevented? He also addresses those steps that individuals can take to improve their chances for survival both during and after an earthquake.
Dynamics of folding: Impact of fault bend folds on earthquake cycles
NASA Astrophysics Data System (ADS)
Sathiakumar, S.; Barbot, S.; Hubbard, J.
2017-12-01
Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore in the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surfaces, i.e., folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affects critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory. Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the earthquake cycle.
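The simulations described here evolve slip under rate-and-state friction; a minimal sketch of the Dieterich-Ruina constitutive law with the aging law follows, with illustrative (not study-specific) parameter values.

```python
import numpy as np

def rsf_friction(v, theta, mu0=0.6, a=0.010, b=0.015, dc=0.01, v0=1e-6):
    """Rate-and-state friction coefficient (Dieterich-Ruina form)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def theta_rate(v, theta, dc=0.01):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/dc."""
    return 1.0 - v * theta / dc

# Steady state at the plate-loading velocity: theta_ss = dc / v
v_load = 1e-9            # m/s, the order of a plate-convergence rate
theta_ss = 0.01 / v_load
print(rsf_friction(v_load, theta_ss))  # steady-state friction at v_load
# Here a - b < 0 (velocity weakening), the regime that permits stick slip
```

The fault-bend folding effect described above enters such simulations by modulating the loading velocity applied to each fault segment.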
NASA Astrophysics Data System (ADS)
Kostenko, I. S.; Zaytsev, A. I.; Minaev, D. D.; Kurkin, A. A.; Pelinovsky, E. N.; Oshmarina, O. E.
2018-01-01
Observational data on the September 5, 1971, earthquake that occurred near Moneron Island (Sakhalin) have been analyzed, and a numerical simulation of the tsunami induced by this earthquake has been conducted. The tsunami source identified in this study indicates that the observational data are in good agreement with the results of calculations based on the shallow-water equations.
DOT National Transportation Integrated Search
1994-02-01
The report contains an assessment of existing port infrastructure related to United States-Mexico trade, planned infrastructure improvements, an identification of current trade and transportation flows, and an assessment of emerging trade corridors. ...
The ShakeOut scenario: A hypothetical Mw7.8 earthquake on the Southern San Andreas Fault
Porter, K.; Jones, L.; Cox, D.; Goltz, J.; Hudnut, K.; Mileti, D.; Perry, S.; Ponti, D.; Reichle, M.; Rose, A.Z.; Scawthorn, C.R.; Seligson, H.A.; Shoaf, K.I.; Treiman, J.; Wein, A.
2011-01-01
In 2008, an earthquake-planning scenario document was released by the U.S. Geological Survey (USGS) and California Geological Survey that hypothesizes the occurrence and effects of a Mw7.8 earthquake on the southern San Andreas Fault. It was created by more than 300 scientists and engineers. Fault offsets reach 13 m, with up to 8 m at lifeline crossings. Physics-based modeling was used to generate maps of shaking intensity, with peak ground velocities of 3 m/sec near the fault and exceeding 0.5 m/sec over 10,000 km2. A custom HAZUS-MH analysis and 18 special studies were performed to characterize the effects of the earthquake on the built environment. The scenario posits 1,800 deaths and 53,000 injuries requiring emergency room care. Approximately 1,600 fires are ignited, resulting in the destruction of 200 million square feet of the building stock, the equivalent of 133,000 single-family homes. Fire contributes $87 billion in property and business interruption loss, out of the total $191 billion in economic loss, with most of the rest coming from shake-related building and content damage ($46 billion) and business interruption loss from water outages ($24 billion). Emergency response activities are depicted in detail, in an innovative grid showing activities versus time, a new format introduced in this study. © 2011, Earthquake Engineering Research Institute.
NASA Astrophysics Data System (ADS)
Perry, S.; Jordan, T.
2006-12-01
Our undergraduate research program, SCEC/UseIT, an NSF Research Experience for Undergraduates site, provides software for earthquake researchers and educators, movies for outreach, and ways to strengthen the technical career pipeline. SCEC/UseIT motivates diverse undergraduates towards science and engineering careers through team-based research in the exciting field of earthquake information technology. UseIT provides the cross-training in computer science/information technology (CS/IT) and geoscience needed to make fundamental progress in earthquake system science. Our high and increasing participation of women and minority students is crucial given the nation's precipitous enrollment declines in CS/IT undergraduate degree programs, especially among women. UseIT also casts a "wider, farther" recruitment net that targets scholars interested in creative work but not traditionally attracted to summer science internships. Since 2002, SCEC/UseIT has challenged 79 students in three dozen majors from as many schools with difficult, real-world problems that require collaborative, interdisciplinary solutions. Interns design and engineer open-source software, creating increasingly sophisticated visualization tools (see "SCEC-VDO," session IN11), which are employed by SCEC researchers, in new curricula at the University of Southern California, and by outreach specialists who make animated movies for the public and the media. SCEC-VDO would be a valuable tool for research-oriented professional development programs.
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Research Institute for Earth Science and Disaster Prevention (NIED) is installing a cable network system for earthquake and tsunami observation (S-net) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operates cable network systems of seismometers and pressure sensors (DONET and DONET2). These are the densest observation network systems located on top of the source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has traditionally depended on the estimation of earthquake parameters, such as the epicenter, depth, and magnitude. Recently, a tsunami forecast method has been developed that estimates the tsunami source from tsunami waveforms observed at ocean-bottom pressure sensors. However, when many pressure sensors separated by 30 km lie on top of the source area, it is not necessary to estimate the tsunami source or earthquake source to compute the tsunami. Instead, a tsunami simulation can be initiated directly from the dense tsunami observations. Differences in observed tsunami heights over a time interval at ocean-bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation is initiated from this estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012), using observed tsunami waveforms and coseismic deformation observed by GPS and ocean-bottom sensors, is used in this study. The ocean surface deformation is computed from the source model and used as the initial condition of a tsunami simulation. By assuming that this computed tsunami is a real tsunami observed at ocean-bottom sensors, a new tsunami simulation is carried out using the above method. Stations in the assumed distribution (each separated by 15 min of arc, about 30 km) record tsunami waveforms that were actually computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 seconds after the origin time of the earthquake. The Near-field Tsunami Inundation forecast method (Gusman et al. 2014) was used to estimate the tsunami inundation along the Sanriku coast. The result shows that the observed tsunami inundation is well explained by the estimated inundation, and that it takes about 10 minutes from the origin time of the earthquake to estimate the tsunami inundation. The new method developed in this paper is thus very effective for real-time tsunami forecasting.
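A toy sketch of the initialization idea, assuming a hypothetical set of gauges and using plain inverse-distance weighting in place of the height-difference estimation actually used in the paper:

```python
import numpy as np

def idw_surface(station_xy, station_eta, grid_x, grid_y, power=2.0):
    """Inverse-distance interpolation of gauge tsunami heights onto a
    grid, a simple stand-in for the height-field estimation step."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    eta = np.zeros_like(gx)
    wsum = np.zeros_like(gx)
    for (sx, sy), h in zip(station_xy, station_eta):
        d = np.hypot(gx - sx, gy - sy) + 1e-6   # avoid divide-by-zero
        w = d ** (-power)
        eta += w * h
        wsum += w
    return eta / wsum

# Hypothetical gauges 30 km apart with tsunami heights (m) at one instant
stations = [(0e3, 0e3), (30e3, 0e3), (0e3, 30e3), (30e3, 30e3)]
heights = [0.8, 1.5, 0.3, 1.1]
x = np.linspace(-15e3, 45e3, 61)
y = np.linspace(-15e3, 45e3, 61)
eta0 = idw_surface(stations, heights, x, y)
# eta0 would then initialize a shallow-water tsunami simulation
```

The appeal of the approach is that no source inversion sits between the observations and the forward simulation.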
NASA Astrophysics Data System (ADS)
Karimzadeh, Shaghayegh; Askan, Aysegul
2018-04-01
Located within a basin structure at the conjunction of the North East Anatolian, North Anatolian and Ovacik Faults, the Erzincan city center (Turkey) is one of the most hazardous regions in the world. The combination of the seismotectonic and geological settings of the region has resulted in a series of significant seismic events, including the 1939 (Ms 7.8) and 1992 (Mw = 6.6) earthquakes. The devastating 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available; thus, only a limited number of studies exist on that earthquake. The 1992 event, despite the sparse local network at that time, has been studied extensively. This study aims to simulate the 1939 Erzincan earthquake using available regional seismic and geological parameters. Despite the several uncertainties involved, such an effort to quantitatively model the 1939 earthquake is promising, given the historical reports of extensive damage and fatalities in the area. The results of this study are expressed in terms of anticipated acceleration time histories at certain locations, the spatial distribution of selected ground motion parameters, and felt intensity maps of the region. Simulated motions are first compared against empirical ground motion prediction equations derived from both local and global datasets. Next, anticipated intensity maps of the 1939 earthquake are obtained using local correlations between peak ground motion parameters and felt intensity values. Comparisons of the estimated intensity distributions with the corresponding observed intensities indicate a reasonable modeling of the 1939 earthquake.
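As an illustration of converting simulated peak ground motion to felt intensity, the sketch below uses the Wald et al. (1999) California PGV relation; the study itself relies on local Turkish correlations, so these coefficients are only a stand-in.

```python
import numpy as np

def mmi_from_pgv(pgv_cm_s):
    """Felt intensity from PGV via Wald et al. (1999),
    MMI = 3.47*log10(PGV) + 2.35 (PGV in cm/s)."""
    return 3.47 * np.log10(pgv_cm_s) + 2.35

for pgv in (2.0, 10.0, 50.0):
    print(f"PGV = {pgv:5.1f} cm/s -> MMI ~ {mmi_from_pgv(pgv):.1f}")
```

Applying such a relation cell-by-cell to a simulated PGV map produces the anticipated intensity maps described above.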
Modeling fast and slow earthquakes at various scales
IDE, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138
Performance evaluation of existing building structure with pushover analysis
NASA Astrophysics Data System (ADS)
Handana, MAP; Karolina, R.; Steven
2018-02-01
In the management of building infrastructure, buildings commonly suffer damage during their service period for several reasons, earthquakes being among the most common. A building is planned to perform for a certain service life, but during that life it remains vulnerable to damage from various causes. Any damage should be detected as early as possible, because damage can spread, triggering and exacerbating further deterioration. The most recent concept in earthquake engineering is Performance Based Earthquake Engineering (PBEE). PBEE is divided into two parts, Performance Based Seismic Design (PBSD) and Performance Based Seismic Evaluation (PBSE). One PBSE method is nonlinear pushover analysis. Pushover analysis is a nonlinear static analysis in which the design earthquake's influence on the building structure is represented by static loads applied at the center of mass of each floor. These loads are increased gradually until the loading causes the first yielding (plastic hinge) within the structure; loading then continues, producing large post-elastic deformations until the structure reaches its target condition, with successive plastic hinges forming at other locations.
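A minimal sketch of the pushover building block described above, using an idealized single-degree-of-freedom bilinear backbone with hypothetical stiffness and yield values; a real pushover traces such a capacity curve for the full multistory model with successive hinge formation.

```python
import numpy as np

def bilinear_base_shear(drift, k_el, v_yield, alpha=0.05):
    """Base shear of an idealized SDOF with a bilinear (elastic /
    post-yield) backbone; yielding here stands in for the first
    plastic hinge of the full structure."""
    d_yield = v_yield / k_el
    if drift <= d_yield:
        return k_el * drift
    return v_yield + alpha * k_el * (drift - d_yield)

# Hypothetical frame: elastic stiffness 50 kN/mm, yield shear 800 kN
k, vy = 50.0, 800.0
drifts = np.linspace(0.0, 100.0, 11)          # roof displacement, mm
for d in drifts:
    print(f"{d:6.1f} mm  {bilinear_base_shear(d, k, vy):8.1f} kN")
# The printed pairs trace the pushover capacity curve
```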
Analysis of post-earthquake landslide activity and geo-environmental effects
NASA Astrophysics Data System (ADS)
Tang, Chenxiao; van Westen, Cees; Jetten, Victor
2014-05-01
Large earthquakes can cause huge losses to human society through ground shaking, fault rupture, and the high density of co-seismic landslides that can be triggered in mountainous areas. In areas affected by such large earthquakes, the landslide threat continues after the event, because co-seismic landslides may be reactivated by high-intensity rainfall. Huge amounts of landslide material remain on the slopes, leading to a high frequency of landslides and debris flows after earthquakes that threaten lives and create great difficulties for post-seismic reconstruction in the affected regions. Without critical information such as the frequency and magnitude of landslides after a major earthquake, reconstruction planning and hazard mitigation are difficult. The area hit by the Mw 7.9 Wenchuan earthquake in 2008, Sichuan province, China, shows some typical examples of poor reconstruction planning due to lack of information: huge debris flows destroyed several reconstructed settlements. This research aims to analyze the decay in post-seismic landslide activity in areas hit by a major earthquake, taking the area affected by the 2008 Wenchuan earthquake as the study area. The study will analyze the factors that control post-earthquake landslide activity through quantification of landslide volume changes as well as through numerical simulation of their initiation process, to obtain a better understanding of the potential threat of post-earthquake landslides as a basis for mitigation planning. The research will make use of high-resolution stereo satellite images, UAV surveys, and Terrestrial Laser Scanning (TLS) to obtain multi-temporal DEMs to monitor changes in loose sediments and post-seismic landslide activity. A debris flow initiation model will be developed that incorporates the volume of source materials, vegetation re-growth, and the intensity-duration of the triggering precipitation, and that evaluates different initiation mechanisms such as erosion and landslide reactivation. The initiation model will be integrated with a run-out model to simulate the dynamic process of post-earthquake debris flows in the study area for a future period and to predict the decay of landslide activity.
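A common ingredient of such initiation models is a rainfall intensity-duration threshold; the sketch below uses Caine's (1980) global coefficients purely for illustration, since post-earthquake thresholds are typically much lower and locally calibrated.

```python
def exceeds_id_threshold(intensity_mm_h, duration_h,
                         alpha=14.82, beta=0.39):
    """Rainfall intensity-duration triggering check, I >= alpha*D^(-beta).
    Defaults are Caine's (1980) global threshold; a post-earthquake
    model would use lower, locally calibrated coefficients."""
    return intensity_mm_h >= alpha * duration_h ** (-beta)

print(exceeds_id_threshold(20.0, 2.0))   # short, intense storm -> True
print(exceeds_id_threshold(2.0, 24.0))   # long, gentle rain -> False
```

In the planned model, the threshold coefficients themselves would decay over time as loose material is exhausted and vegetation regrows.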
Determination of source process and the tsunami simulation of the 2013 Santa Cruz earthquake
NASA Astrophysics Data System (ADS)
Park, S. C.; Lee, J. W.; Park, E.; Kim, S.
2014-12-01
In order to understand the characteristics of large tsunamigenic earthquakes, we analyzed the earthquake source process of the 2013 Santa Cruz earthquake and simulated the resulting tsunami. We first estimated a fault length of about 200 km using the 3-day aftershock distribution and a source duration of about 110 seconds using the duration of high-frequency energy radiation (Hara, 2007). The moment magnitude was estimated to be 8.0 using the formula of Hara (2007). From these results of 200 km fault length and 110 seconds source duration, we used an initial rupture velocity of 1.8 km/s for the teleseismic waveform inversions. The teleseismic body wave inversion was carried out using the inversion package of Kikuchi and Kanamori (1991). Teleseismic P waveform data from 14 stations were used, and a band-pass filter of 0.005 ~ 1 Hz was applied. Our best-fit solution indicated that the earthquake occurred on the northwesterly striking (strike = 305) and shallowly dipping (dip = 13) fault plane. The focal depth was determined to be 23 km, indicating a shallow event. A moment magnitude of 7.8 was obtained, somewhat smaller than the result obtained above and that of a previous study (Lay et al., 2013). A large slip area was seen around the hypocenter. Using the slip distribution obtained by the teleseismic waveform inversion, we calculated the surface deformations using the formulas of Okada (1985) and took them as the initial displacement of the sea surface for the tsunami. The tsunami simulation was then carried out using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) code and 1 min-grid bathymetry data from the General Bathymetric Chart of the Oceans (GEBCO). According to the tsunami simulation, most of the tsunami waves propagated to the southwest and northeast, perpendicular to the fault strike. DART buoy data were used to verify our simulation. In the presentation, we will discuss the results of the source process and tsunami simulation in more detail and compare them with the previous study.
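A sketch of the tsunami initialization step described above, assuming a hypothetical okada85_vertical helper (an implementation of the Okada (1985) formulas, which are too long to reproduce here):

```python
import numpy as np

# Hypothetical helper assumed available from an Okada (1985)
# implementation; it returns the vertical seafloor displacement (m)
# on a grid for one rectangular dislocation:
# def okada85_vertical(x, y, depth, strike, dip, rake,
#                      length, width, slip): ...

def tsunami_initial_condition(subfaults, x, y, okada85_vertical):
    """Sum the vertical seafloor displacements of all subfaults and use
    the result as the initial sea-surface elevation (the usual rigid
    transfer assumption from seabed to water surface)."""
    eta0 = np.zeros((y.size, x.size))
    for sf in subfaults:
        eta0 += okada85_vertical(x, y, **sf)
    return eta0

# Example subfault from an inverted slip model (values illustrative only)
subfault = dict(depth=23e3, strike=305.0, dip=13.0, rake=90.0,
                length=25e3, width=20e3, slip=4.0)
```

The resulting eta0 field is what a COMCOT-style shallow-water solver takes as its starting sea state.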
Historical earthquake research in Austria
NASA Astrophysics Data System (ADS)
Hammerl, Christa
2017-12-01
Austria has moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average period of recurrence is about 75 years. For this reason historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly, along with the most important studies, and, as an example of a recently completed case study, one of the strongest earthquakes in Austria's past, the earthquake of 17 July 1670, is presented. Research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi
2014-01-01
We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, spanning 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is feasible and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters are forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
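A toy sketch of the grid-search CMT idea: because the moment tensor enters linearly, each grid node reduces to a linear least-squares problem, and the node with the best variance reduction wins. Names and dimensions below are illustrative, not the RMT system's actual interfaces.

```python
import numpy as np

def grid_search_mt(data, greens_by_node):
    """Grid-based CMT inversion sketch: at each node solve the linear
    least-squares problem d = G m for the six moment-tensor elements
    and keep the node with the highest variance reduction."""
    best = (-np.inf, None, None)
    for node, G in greens_by_node.items():   # G: (n_samples, 6)
        m, *_ = np.linalg.lstsq(G, data, rcond=None)
        resid = data - G @ m
        vr = 1.0 - np.sum(resid**2) / np.sum(data**2)
        if vr > best[0]:
            best = (vr, node, m)
    return best  # (variance reduction, best node, moment tensor)

# Hypothetical toy problem: two candidate grid nodes
rng = np.random.default_rng(0)
G_true = rng.standard_normal((200, 6))
m_true = rng.standard_normal(6)
data = G_true @ m_true
nodes = {"node_A": G_true, "node_B": rng.standard_normal((200, 6))}
vr, node, m = grid_search_mt(data, nodes)
print(node, f"VR = {vr:.3f}")   # node_A recovers the source, VR ~ 1
```

Precomputing the Green's functions for every node is what makes the 117-s turnaround feasible.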
ERIC Educational Resources Information Center
Cavlazoglu, Baki; Stuessy, Carol L.
2017-01-01
Stakeholders in STEM education have called for integrating engineering content knowledge into STEM-content classrooms. To answer the call, stakeholders in science education announced a new framework, Next Generation Science Standards, which focuses on the integration of science and engineering in K-12 science education. However, research indicates…
Update on the Center for Engineering Strong Motion Data
NASA Astrophysics Data System (ADS)
Haddadi, H. R.; Shakal, A. F.; Stephens, C. D.; Oppenheimer, D. H.; Huang, M.; Leith, W. S.; Parrish, J. G.; Savage, W. U.
2010-12-01
The U.S. Geological Survey (USGS) and the California Geological Survey (CGS) established the Center for Engineering Strong-Motion Data (CESMD, Center) to provide a single access point for earthquake strong-motion records and station metadata from the U.S. and international strong-motion programs. The Center has operational facilities in Sacramento and Menlo Park, California, to receive, process, and disseminate records through the CESMD web site at www.strongmotioncenter.org. The Center currently is in the process of transitioning the COSMOS Virtual Data Center (VDC) to integrate its functions with those of the CESMD for improved efficiency of operations, and to provide all users with a more convenient one-stop portal to both U.S. and important international strong-motion records. The Center is working with COSMOS and international and U.S. data providers to improve the completeness of site and station information, which is needed to most effectively employ the recorded data. The goal of all these and other new developments is to continually improve access by the earthquake engineering community to strong-motion data and metadata world-wide. The CESMD and its Virtual Data Center (VDC) provide tools to map earthquakes and recording stations, to search raw and processed data, to view time histories and spectral plots, to convert data file formats, and to download data and a variety of information. The VDC is now being upgraded to convert the strong-motion data files from different seismic networks into a common standard tagged format in order to facilitate importing earthquake records and station metadata into the CESMD database. An important new feature being developed is the automatic posting of Internet Quick Reports at the CESMD web site. This feature will allow users, and emergency responders in particular, to view strong-motion waveforms and download records within a few minutes after an earthquake occurs. Currently the CESMD and its Virtual Data Center provide selected strong-motion records from 17 countries. The Center has proved to be highly useful for providing data to scientists, engineers, policy makers, and emergency response teams around the world.
Forecast model for great earthquakes at the Nankai Trough subduction zone
Stuart, W.D.
1988-01-01
An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.
Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake
Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J.; Petersen, M.D.
2004-01-01
The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
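A minimal sketch of the statistical model described here, fitting a gamma distribution to hypothetical loss ratios and evaluating a conditional probability of loss, using scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical insured-loss ratios (loss / insured value) for one ZIP code
rng = np.random.default_rng(1)
losses = rng.gamma(shape=0.8, scale=0.05, size=500)

# Fit a gamma distribution (location pinned at zero, as for losses)
shape, loc, scale = stats.gamma.fit(losses, floc=0.0)

# Conditional probability that the loss ratio exceeds 10%
p_exceed = stats.gamma.sf(0.10, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.3f}, P(loss>10%)={p_exceed:.3f}")
```

In the study, the fitted shape and scale parameters are themselves regressed on ground-motion estimates, giving loss curves conditional on shaking level.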
Preparing for a "Big One": The great southern California shakeout
Jones, L.M.; Benthien, M.
2011-01-01
The Great Southern California ShakeOut was a week of special events featuring the largest earthquake drill in United States history. On November 13, 2008, over 5 million Southern Californians pretended that the magnitude-7.8 ShakeOut scenario earthquake was occurring and practiced actions derived from the results of the ShakeOut Scenario, to reduce the impact of a real San Andreas Fault event. The communications campaign was based on four principles: 1) consistent messaging from multiple sources; 2) visual reinforcement; 3) encouragement of "milling"; and 4) focus on concrete actions. The goals of the ShakeOut established in spring 2008 were: 1) to register 5 million people to participate in the drill; 2) to change the culture of earthquake preparedness in Southern California; and 3) to reduce earthquake losses in Southern California. Over 90% of the registrants surveyed the next year reported improvement in earthquake preparedness at their organization as a result of the ShakeOut. © 2011, Earthquake Engineering Research Institute.
Buchanan-Banks, Jane M.; Collins, Donley S.
1994-01-01
The heavily populated Puget Sound region in the State of Washington has experienced moderate to large earthquakes in the recent past (Nuttli, 1952; Mullineaux and others, 1967). Maps showing the thickness of unconsolidated sedimentary deposits are useful aids in delineating areas where damage to engineered structures can result from increased shaking during these earthquakes. Basins containing thick deposits of unconsolidated materials can amplify earthquake waves and cause far more damage to structures than the same waves passing through bedrock (Singh and others, 1988; Algermissen and others, 1985). Configurations of deep sedimentary basins can also cause reflection and magnification of earthquake waves in ways still not fully understood and presently under investigation (Frankel and Vidale, 1992).
Visualizing the ground motions of the 1906 San Francisco earthquake
Chourasia, A.; Cutchin, S.; Aagaard, Brad T.
2008-01-01
With advances in computational capabilities and the refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal, and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground motion simulation of the great 1906 San Francisco earthquake. We have employed state-of-the-art animation tools for visualization of the ground motions with a high degree of accuracy and visual realism. © 2008 Elsevier Ltd.
Simulation of rockfalls triggered by earthquakes
Kobayashi, Y.; Harp, E.L.; Kagawa, T.
1990-01-01
A computer program to simulate the downslope movement of boulders in rolling or bouncing modes has been developed and applied to actual rockfalls triggered by the Mammoth Lakes, California, earthquake sequence in 1980 and the Central Idaho earthquake in 1983. In order to reproduce a movement mode where bouncing predominated, we introduced an artificial unevenness to the slope surface by adding a small random number to the interpolated value at the mid-points between adjacent surveyed points. Three hundred simulations were computed for each site by changing the random number series, which determined travel distances and bouncing intervals. The movement of the boulders was, in general, rather erratic depending on the random numbers employed, and the results should be regarded not as deterministic but as stochastic. The closest agreement between calculated and actual movements was obtained at the site with the most detailed and accurate topographic measurements. © 1990 Springer-Verlag.
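A sketch of the artificial-unevenness step described above, assuming hypothetical survey data: each interpolated mid-point receives a small random perturbation, and repeating the construction with different seeds gives the 300 stochastic realizations.

```python
import numpy as np

def perturbed_profile(x_survey, z_survey, amplitude=0.1, seed=None):
    """Densify a surveyed slope profile and add a small random number
    to each interpolated mid-point, mimicking the artificial unevenness
    used to promote bouncing in the simulations."""
    rng = np.random.default_rng(seed)
    x_mid = 0.5 * (x_survey[:-1] + x_survey[1:])
    z_mid = 0.5 * (z_survey[:-1] + z_survey[1:])
    z_mid += rng.uniform(-amplitude, amplitude, size=z_mid.size)
    # Interleave surveyed points and perturbed mid-points
    x = np.empty(x_survey.size + x_mid.size)
    z = np.empty_like(x)
    x[0::2], x[1::2] = x_survey, x_mid
    z[0::2], z[1::2] = z_survey, z_mid
    return x, z

# Hypothetical surveyed slope; 300 realizations as in the study
xs = np.linspace(0.0, 200.0, 21)
zs = 100.0 - 0.5 * xs
profiles = [perturbed_profile(xs, zs, seed=i) for i in range(300)]
```

Each realization would then drive one run of the rolling/bouncing boulder model, and the ensemble of runout distances is interpreted statistically.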
Earthquake Education in Prime Time
NASA Astrophysics Data System (ADS)
de Groot, R.; Abbott, P.; Benthien, M.
2004-12-01
Since 2001, the Southern California Earthquake Center (SCEC) has collaborated on several video production projects that feature important topics related to earthquake science, engineering, and preparedness. These projects have also fostered many fruitful and sustained partnerships with a variety of organizations that have a stake in hazard education and preparedness. The Seismic Sleuths educational video first appeared in spring 2001 on Discovery Channel's Assignment Discovery. Seismic Sleuths is based on a highly successful curriculum package developed jointly by the American Geophysical Union and the Department of Homeland Security's Federal Emergency Management Agency. The California Earthquake Authority (CEA) and the Institute for Business and Home Safety supported the video project. Summer Productions, a company with a reputation for quality science programming, produced the Seismic Sleuths program in close partnership with scientists, engineers, and preparedness experts. The program has aired on the National Geographic Channel as recently as fall 2004. Currently, SCEC is collaborating with Pat Abbott, a geology professor at San Diego State University (SDSU), on the video project Written In Stone: Earthquake Country - Los Angeles. Partners on this project include the California Seismic Safety Commission, SDSU, SCEC, CEA, and the Insurance Information Network of California. This video incorporates live-action demonstrations, vivid animations, and a compelling host (Abbott) to tell the story of earthquakes in the Los Angeles region. The Written in Stone team has also developed a comprehensive educator package that includes the video, maps, lesson plans, and other supporting materials. We will present the process that facilitates the creation of visually effective, factually accurate, and entertaining video programs. We acknowledge the need to have a broad understanding of the literature related to communication, media studies, science education, and hazard response to create a program that is both educational and provides a public service. Seismic Sleuths and Written in Stone are the harbingers of a new genre of earthquake programs that are the antithesis of the 1974 film Earthquake and the 2004 miniseries 10.5. Film producers and those in the earthquake education community are demonstrating that it is possible to tell an exciting story, inspire awareness, and encourage empowerment without sensationalism.
NASA Astrophysics Data System (ADS)
Taymaz, Tuncay; Yolsal-Çevikbilen, Seda; Ulutaş, Ergin
2016-04-01
The finite-fault source rupture models and numerical simulations of the tsunami waves generated by the 28 October 2012 Queen Charlotte Islands (Mw 7.8) and 16 September 2015 Illapel, Chile (Mw 8.3) earthquakes are presented. These subduction zone earthquakes had reverse faulting mechanisms with small strike-slip components, clearly reflecting the characteristics of convergence zones. The finite-fault slip models of the 2012 Queen Charlotte and 2015 Chile earthquakes are estimated from a back-projection method that uses teleseismic P-waveforms to integrate the direct P-phase with phases reflected from structural discontinuities near the source. Non-uniform rupture models of the fault plane obtained from the finite-fault modeling are used to describe the vertical displacement of the seabed. In general, the vertical displacement of the water surface is considered to be the same as the ocean-bottom displacement, and this initial water-surface deformation is assumed to give rise to the tsunami waves; in this study it was calculated using an elastic dislocation algorithm. The results of the numerical tsunami simulations are compared with tide gauge and Deep-ocean Assessment and Reporting of Tsunamis (DART) buoy records. De-tiding, de-trending, low-pass and high-pass filters were applied to detect the tsunami waves in the deep-ocean sensor and tide gauge records. As an example, the observed records and the results of the simulations showed that the 2012 Queen Charlotte Islands earthquake generated tsunami waves of about 1 meter in Maui and Hilo (Hawaii) 5 hours and 30 minutes after the earthquake. Furthermore, the calculated amplitudes and time series of the tsunami waves of the recent 2015 Illapel (Chile) earthquake exhibit good agreement with the records of tide and DART gauges, except at the stations Valparaiso and Pichidangui (Chile). This project is supported by The Scientific and Technological Research Council of Turkey (TUBITAK Project No: CAYDAG-114Y066).
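A sketch of the record-conditioning step, assuming a synthetic gauge record and a single zero-phase high-pass in place of the full de-tiding/de-trending/band-pass chain (whose corner values the abstract does not give):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detide_highpass(pressure, fs, cutoff_hz=1.0 / 7200.0, order=4):
    """Remove tides and drift from a gauge record with a zero-phase
    Butterworth high-pass (cutoff ~ 2 h period here, an assumption)."""
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    return filtfilt(b, a, pressure)

# Hypothetical 1-sample-per-minute record: tide + small tsunami pulse
fs = 1.0 / 60.0
t = np.arange(0.0, 12 * 3600.0, 60.0)
tide = 0.5 * np.sin(2.0 * np.pi * t / (12.42 * 3600.0))
tsunami = 0.05 * np.exp(-((t - 6 * 3600.0) / 600.0) ** 2)
clean = detide_highpass(tide + tsunami, fs)   # tsunami pulse survives
```

Only after such conditioning can centimeter-scale tsunami signals be compared meaningfully against the simulated waveforms.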
NASA Astrophysics Data System (ADS)
Maeda, T.; Furumura, T.; Noguchi, S.; Takemura, S.; Iwai, K.; Lee, S.; Sakai, S.; Shinohara, M.
2011-12-01
The fault rupture of the 2011 Tohoku (Mw 9.0) earthquake spread approximately 550 km by 260 km, with a long source rupture duration of ~200 s. For such a large earthquake with a complicated source rupture process, the radiation of seismic waves from the source rupture and the initiation of the tsunami due to the coseismic deformation are considered to be very complicated. In order to understand this complicated sequence of seismic waves, coseismic deformation and tsunami, we proposed a unified approach for the total modeling of earthquake-induced phenomena in a single numerical scheme based on a finite-difference method simulation (Maeda and Furumura, 2011). This simulation model solves the equation of motion based on linear elastic theory, with equilibrium between quasi-static pressure and gravity in the water column. The height of the tsunami is obtained from this simulation as the vertical displacement of the ocean surface. In order to simulate the seismic waves, ocean acoustics, coseismic deformation, and tsunami of the 2011 Tohoku earthquake, we assembled a high-resolution 3D heterogeneous subsurface structural model of northern Japan. The simulation area is 1200 km x 800 km horizontally and 120 km in depth, discretized with a grid interval of 1 km in the horizontal directions and 0.25 km in the vertical direction. We adopt the source-rupture model proposed by Lee et al. (2011), which was obtained by joint inversion of teleseismic, near-field strong motion, and coseismic deformation data. To conduct such a large-scale simulation, we fully parallelized our simulation code based on a domain-partitioning procedure, which achieved good speed-up in parallel computing on up to 8192 cores with a parallel efficiency of 99.839%. The simulation clearly demonstrates the process by which seismic waves radiate from the complicated source rupture over the fault plane and propagate through the heterogeneous structure of northern Japan. The generation of the tsunami from coseismic ground deformation at the sea floor and its subsequent propagation are also well demonstrated. The simulation further shows that a very large slip of up to 40 m at the shallow plate boundary near the trench pushes up the sea floor as the source rupture propagates, and the highly elevated sea surface gradually starts propagating as a tsunami under gravity. The simulated vertical-component displacement waveform matches very consistently the ocean-bottom pressure gauge record installed just above the source fault area (Maeda et al., 2011). Strong reverberation of the ocean-acoustic waves between the sea surface and sea bottom, particularly near the Japan Trench, is confirmed in the simulation for a long time after the source rupture ends. Accordingly, long wave trains of high-frequency ocean-acoustic waves develop and overlap the later tsunami waveforms, as found in the observations.
NASA Astrophysics Data System (ADS)
Pulido, N.; Tavera, H.; Aguilar, Z.; Chlieh, M.; Calderon, D.; Sekiguchi, T.; Nakai, S.; Yamazaki, F.
2012-12-01
We have developed a methodology for the estimation of slip scenarios for megathrust earthquakes based on a model of the interseismic coupling (ISC) distribution in subduction margins obtained from geodetic data, as well as information on the recurrence of historical earthquakes. This geodetic slip model (GSM) delineates the long-wavelength asperities within the megathrust. For the simulation of strong ground motion it becomes necessary to introduce short-wavelength heterogeneities into the source slip in order to efficiently simulate high-frequency ground motions. To achieve this we elaborate "broadband" source models constructed by combining the GSM with several short-wavelength slip distributions obtained from a von Karman PSD function with random phases. Our application of the method to the Central Andes in Peru shows that this region presently has the potential of generating an earthquake with moment magnitude 8.9, with a peak slip of 17 m and a source area of approximately 500 km along strike and 165 km along dip. For the strong motion simulations we constructed 12 broadband slip models and considered 9 possible hypocenter locations for each model. We performed strong motion simulations for the whole Central Andes region (Peru), spanning an area from the Nazca Ridge (16°S) to the Mendana fracture zone (9°S). For this purpose we use the hybrid strong motion simulation method of Pulido et al. (2004), improved to handle a general slip distribution. Our simulated PGA and PGV distributions indicate that a region of at least 500 km along the coast of the Central Andes is subjected to an MMI intensity of approximately 8, for the slip model that yielded the largest ground motions among the 12 slip models considered, averaged over all assumed hypocenter locations. This result is in agreement with the macroseismic intensity distribution estimated for the great 1746 earthquake (M~9) in the Central Andes (Dorbath et al. 1990). Our results indicate that the simulated PGA and PGV for all slip scenarios for the Central Andes, for an average soil condition, exhibit amplitudes and attenuation characteristics with distance similar to the PGA and PGV values observed during the 2010 Maule (Mw 8.8) and 2011 Tohoku-oki (Mw 9.0) earthquakes. Our results clearly indicate that the simulated ground motions for scenarios with deep rupture nucleation (~40 km) are consistently smaller than those obtained for shallower rupture nucleation. We also performed strong ground motion simulations for metropolitan Lima using the aforementioned slip scenarios, incorporating site amplifications obtained from several microtremor array surveys conducted at representative geotechnical zones in the city. Our simulated PGA and PGV in Lima reach values of 1000 cm/s^2 and 80 cm/s. Our results show that the largest values of PGA (at the Puente Piedra district, northern Lima) are related to short-period site effects, whereas the largest values of PGV are related to large site amplifications at periods from 1 s to 1.5 s (at the Callao, Villa el Salvador and La Molina districts). Our results also indicate that the simulated PGA and PGV in central Lima (Parque de la Reserva) are on average 2 to 3 times larger than the values recorded by a strong motion instrument installed at this location during the 1974 (Mw 8.0) and 1966 (Mw 8.0) earthquakes offshore Lima.
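A sketch of generating a short-wavelength slip distribution from a von Karman power spectrum with random phases, as described above; the grid size, correlation lengths and Hurst exponent are illustrative, and the blending with the geodetic slip model is omitted.

```python
import numpy as np

def von_karman_slip(nx, ny, dx, ax, ay, hurst=0.75, seed=None):
    """Random slip field with a von Karman power spectrum and uniform
    random phases (taking the real part of the inverse FFT is a common
    shortcut that slightly relaxes the exact target spectrum)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2.0 * np.pi
    ky = np.fft.fftfreq(ny, d=dx) * 2.0 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = (ax * KX) ** 2 + (ay * KY) ** 2
    psd = (ax * ay) / (1.0 + k2) ** (hurst + 1.0)  # 2-D von Karman shape
    phase = np.exp(2j * np.pi * rng.random((ny, nx)))
    slip = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    slip -= slip.min()                 # shift to non-negative slip
    return slip / slip.max()           # normalized; rescale to target moment

slip = von_karman_slip(nx=128, ny=64, dx=2.0, ax=20.0, ay=10.0, seed=3)
```

Drawing new phase realizations yields the family of broadband slip models whose ground-motion spread the study explores.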
NASA Astrophysics Data System (ADS)
Kaneda, Y.; Ozener, H.
2015-12-01
The destructive 1999 Izmit earthquake occurred near the Marmara Sea. The Marmara Sea deserves particular attention because of a seismic gap in the North Anatolian Fault. Istanbul is located on the Marmara Sea, so if the next earthquake occurs near Istanbul, fatal damage will result. Japan and Turkey can share their experiences from past damaging earthquakes and prepare for future large earthquakes in cooperation with each other. For earthquakes in the Tokyo and Istanbul areas, destructive events near high-population cities, there are common disaster research questions and countermeasures. For disaster mitigation, we are pursuing multidisciplinary research. The goals of this SATREPS project are as follows: to develop disaster mitigation policy and strategies based on multidisciplinary research activities; to provide decision makers with newly found knowledge for its implementation in current regulations; to organize disaster education programs in order to increase disaster awareness in Turkey; and to contribute to the evaluation of active-fault studies in Japan. The project is composed of four research groups. The first is the Marmara earthquake source region observational research group, with four sub-themes: seismicity, geodesy, electromagnetics, and trench analyses. The second group focuses on scenario research on earthquake occurrence along the North Anatolian Fault and precise tsunami simulation in the Marmara region. The aims of the third group are the improvement and construction of seismic characterizations and damage predictions based on observational research and precise simulations. The fourth group promotes disaster education using visuals of the research results. In this SATREPS project, we will integrate these research results for disaster mitigation in the Marmara region and disaster education in Turkey, and we will present the updated results of the project.
Borcherdt, R.D.; Glassmoyer, Gary; Cranswick, Edward
1989-01-01
The earthquakes of December 7, 1988, near Spitak, Armenia SSR, serve as another grim reminder of the serious hazard that earthquakes pose throughout the world. We extend our heartfelt sympathies to the families of the earthquake victims and intend that our cooperative scientific endeavours will help reduce losses in future earthquakes. Only through a better understanding of earthquake hazards can earthquake losses be reduced for all peoples in seismically active regions of the world. The tragic consequences of these earthquakes remind scientists and public officials alike of their urgent responsibilities to understand and mitigate the effects of earthquakes. On behalf of the U.S. Geological Survey, I would like to express appreciation to our Soviet colleagues for their kind invitation to participate in joint scientific and engineering studies. Without their cooperation and generous assistance, the conduct of these studies would not have been possible. This report provides seismologic and geologic data collected during the time period December 21, 1988, through February 2, 1989. These data are presented in their entirety to expedite analysis of the data set for inferences regarding hazard mitigation actions, applicable not only in Armenia but other regions of the world exposed to high seismic risk.
Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike
2011-01-01
Since the launch of the USGS's Prompt Assessment of Global Earthquakes for Response (PAGER) system in the fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 minutes. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies guide their responses in the immediate aftermath of a significant earthquake. To account for the wide variability and uncertainty associated with inventory, structural vulnerability, and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
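[Editor's note] A hedged sketch in the spirit of the empirical fatality model described above: a lognormal fatality-rate curve applied to the population exposed at each shaking intensity. The parameters theta and beta are country-specific calibrations in PAGER; the values and exposure figures below are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

def fatality_rate(mmi, theta=12.0, beta=0.25):
    # lognormal fatality-rate curve nu(S) = Phi(ln(S/theta)/beta); parameters assumed
    return norm.cdf(np.log(mmi / theta) / beta)

def estimated_fatalities(exposure_by_mmi, theta=12.0, beta=0.25):
    # exposure_by_mmi: {MMI level: population exposed at that level}
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure_by_mmi.items())

exposure = {6: 2_000_000, 7: 500_000, 8: 120_000, 9: 15_000}  # invented exposure
print(f"Estimated fatalities: {estimated_fatalities(exposure):.0f}")
```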
NASA Astrophysics Data System (ADS)
Kaneda, Y.; Kawaguchi, K.; Araki, E.; Matsumoto, H.; Nakamura, T.; Nakano, M.; Kamiya, S.; Ariyoshi, K.; Baba, T.; Ohori, M.; Hori, T.; Takahashi, N.; Kaneko, S.; Donet Research; Development Group
2010-12-01
Yoshiyuki Kaneda, Katsuyoshi Kawaguchi*, Eiichiro Araki*, Shou Kaneko*, Hiroyuki Matsumoto*, Takeshi Nakamura*, Masaru Nakano*, Shinichirou Kamiya*, Keisuke Ariyoshi*, Toshitaka Baba*, Michihiro Ohori*, Narumi Takahashi*, and Takane Hori** *Earthquake and Tsunami Research Project for Disaster Prevention, Leading Project, Japan Agency for Marine-Earth Science and Technology (JAMSTEC) **Institute for Research on Earth Evolution, Japan Agency for Marine-Earth Science and Technology (JAMSTEC) DONET (Dense Ocean floor Network for Earthquakes and Tsunamis) is a real-time monitoring system for the Tonankai seismogenic zones around the Nankai trough, southwestern Japan. We began developing DONET to perform real-time monitoring of crustal activity there and to support an advanced early warning system. DONET will provide important and useful data for understanding the Nankai trough mega-thrust earthquake seismogenic zones and for improving the accuracy of earthquake recurrence cycle simulations. The main features of the DONET concept are as follows. 1) Redundancy, extendability and advanced maintenance, using a looped cable system, junction boxes and ROVs/AUVs. DONET has 20 observatories, incorporated in a dual land-station concept. We have also developed an ROV for 10 km cable extensions and heavy-weight operations. 2) Multiple kinds of sensors to observe broadband phenomena such as long-period tremors, very low frequency earthquakes and strong motions of mega-thrust earthquakes over M8: each DONET observatory is therefore equipped with a broadband seismometer, an accelerometer, a hydrophone, a precise pressure gauge, a differential pressure gauge and a thermometer. 3) Speedy detection, evaluation and notification of earthquakes and tsunamis: the DONET system will be deployed around the Tonankai seismogenic zone. 4) Provision of ocean floor crustal deformation data derived from pressure sensors: at the same time, the development of a data assimilation method using DONET data is very important for improving the recurrence cycle simulation model. 5) Understanding of the interaction between the crust and upper mantle around the Nankai trough subduction zone. We will deploy DONET not only in the Tonankai seismogenic zone but also DONET2, with higher voltages, in the Nankai seismogenic zone west of the Nankai trough. The total system will be deployed to understand the seismic linkage between the Tonankai and Nankai earthquakes. Using DONET and DONET2 data, we will be able to observe crustal activity and pre- and post-seismic slip of the Tonankai and Nankai earthquakes, and we will improve the recurrence cycle simulation model with the advanced data assimilation method. We have already constructed one DONET observatory and observed several earthquakes and tsunamis. We will introduce the details of DONET/DONET2 and some observed data.
NASA Astrophysics Data System (ADS)
Tokida, Ken-Ichi; Tanimoto, Ryusuke
In the 2011 Off the Pacific Coast of Tohoku Earthquake, tsunami strikes caused very severe damage to civil engineering structures along the rias and plains coasts of the Pacific Ocean. For the urgent repair and future reconstruction of these structures, their fundamental damage characteristics should be clarified through field surveys from which effective lessons can be drawn. In this paper, several important lessons on the resistance characteristics of 13 earth structures, such as river dykes and sand banks, obtained from field surveys conducted by the authors are presented. Because many dug pools were formed by tsunami overflow at the back side of earth embankments and sea walls in this earthquake, the fundamental characteristics of 10 dug pools are investigated through the field survey so that their effects can be estimated quantitatively in the future. Furthermore, through a field survey conducted at a representative site named Idoura, the scale and waterbed conditions of the natural canals and the strength characteristics of the river dykes and the base ground neighboring the natural canals are measured in detail and discussed. These fundamental lessons and data on the earth banks and dug pools can be used to simulate the effects of the dug pools and to assess artificial canals as a hard countermeasure to reduce tsunami height and/or force.
NASA Astrophysics Data System (ADS)
Yan, Shi; Zhang, Hai
2005-05-01
The magnetorheological (MR) damper is one of the smart control devices widely used in civil engineering structures. In this paper, such dampers are applied to an elevated highway bridge (EHB) with rubber-bearing-supported piers to mitigate damage to the bridge during severe earthquake ground motion. The dynamic model and equation of motion for the EHB system are set up theoretically, and an LQR semi-active control algorithm for the seismic response of the EHB system is developed to effectively reduce the responses of the structure. A nonlinear model of the piers that accounts for stiffness degradation is adopted, and numerical simulations are carried out with a Matlab program. The number and locations as well as the maximum control forces of the MR dampers, the most important parameters for the controlled system, are determined; the rubber bearings and the connection details of the dampers also play an important role in the control efficiency. A real EHB structure located in Anshan city, Liaoning province, China is used as an example and analyzed under different earthquake records. The results show that combining rubber-bearing isolation with the semi-active MR control technique effectively reduces the seismic responses of the EHB system under earthquake ground motion. The locations of the MR dampers and the structural parameters strongly influence the effectiveness of the structural vibration control.
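[Editor's note] A minimal sketch of LQR gain design with semi-active clipping for a controllable damper, assuming a single-degree-of-freedom stand-in for the bridge pier. The mass, stiffness, damping, weighting matrices and force limit are invented; the paper's actual EHB model is multi-degree-of-freedom and nonlinear.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 2.0e5, 1.0e4, 8.0e6          # kg, N*s/m, N/m (assumed)
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])        # state: [displacement, velocity]
B = np.array([[0.0], [1.0 / m]])        # control force enters as acceleration

Q = np.diag([1.0e8, 1.0e6])             # penalize drift more than velocity (assumed)
R = np.array([[1.0e-6]])                # penalty on control force (assumed)

P = solve_continuous_are(A, B, Q, R)    # Riccati solution
K = np.linalg.solve(R, B.T @ P)         # optimal gain, u = -K x

def mr_force(x, f_max=2.0e5):
    """Semi-active clipping: an MR damper can only dissipate energy, so the
    commanded LQR force is applied only when it opposes the motion."""
    u = float(-K @ x)
    if u * x[1] < 0:                    # force opposes velocity: admissible
        return float(np.clip(u, -f_max, f_max))
    return 0.0
```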
Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Building Structures
Çelebi, Mehmet
1998-01-01
Several approaches are used to assess the performance of the built environment following an earthquake -- preliminary damage surveys conducted by professionals, detailed studies of individual structures, and statistical analyses of groups of structures. Reports of damage that are issued by many organizations immediately following an earthquake play a key role in directing subsequent detailed investigations. Detailed studies of individual structures and statistical analyses of groups of structures may be motivated by particularly good or bad performance during an earthquake. Beyond this, practicing engineers typically perform stress analyses to assess the performance of a particular structure to vibrational levels experienced during an earthquake. The levels may be determined from recorded or estimated ground motions; actual levels usually differ from design levels. If a structure has seismic instrumentation to record response data, the estimated and recorded response and behavior of the structure can be compared.
Analysis of ground-motion simulation big data
NASA Astrophysics Data System (ADS)
Maeda, T.; Fujiwara, H.
2016-12-01
We developed a parallel distributed processing system that applies big data analysis to large-scale ground motion simulation data. The system uses ground-motion index values and earthquake scenario parameters as input. We used peak ground velocity and velocity response spectra as the ground-motion indices; the index values are calculated from our simulation data. We used simulated long-period ground motion waveforms at about 80,000 meshes, calculated by a three-dimensional finite difference method based on 369 earthquake scenarios of a great earthquake in the Nankai Trough. These scenarios were constructed by considering the uncertainty of source model parameters such as source area, rupture starting point, asperity location, rupture velocity, fmax and slip function, and we used these parameters as the earthquake scenario parameters. The system first carries out clustering of the earthquake scenarios in each mesh by the k-means method. The number of clusters is determined in advance using hierarchical clustering by Ward's method. The scenario clustering results are converted to a 1-D feature vector whose dimension is the number of scenario pairs: if two scenarios belong to the same cluster, the corresponding component of the feature vector is 1; otherwise it is 0. The feature vector represents the 'response' of a mesh to the assumed earthquake scenario group. Next, the system performs clustering of the meshes by the k-means method using the feature vector of each mesh obtained previously; here the number of clusters is given arbitrarily. The clustering of scenarios and meshes is performed by parallel distributed processing with Hadoop and Spark, respectively. In this study, we divided the meshes into 20 clusters. The meshes in each cluster are geographically concentrated, so the system can extract regions in which the meshes have a similar 'response' as clusters. For each cluster, it is possible to determine particular scenario parameters that characterize the cluster. In other words, by using this system we can objectively obtain the critical scenario parameters of the ground-motion simulation for each evaluation point. This research was supported by CREST, JST.
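[Editor's note] A small single-machine sketch of the two-stage clustering described above (the real system runs distributed on Hadoop/Spark). Array sizes, cluster counts and the placeholder ground-motion index are assumptions, reduced so the sketch runs quickly.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_scen, n_mesh = 40, 200                      # reduced from 369 scenarios / ~80,000 meshes
pgv = rng.lognormal(size=(n_mesh, n_scen))    # placeholder ground-motion index values

pairs = list(combinations(range(n_scen), 2))  # feature dimension = number of scenario pairs

def mesh_feature(pgv_row, n_clusters=5):
    # Stage 1: cluster the scenarios at one mesh by their index value, then
    # encode, for every scenario pair, whether the two share a cluster (1) or not (0).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pgv_row.reshape(-1, 1))
    return np.array([labels[i] == labels[j] for i, j in pairs], dtype=float)

features = np.vstack([mesh_feature(row) for row in pgv])

# Stage 2: cluster the meshes by their pair-membership feature vectors.
mesh_labels = KMeans(n_clusters=20, n_init=10).fit_predict(features)
```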
Source characterization and dynamic fault modeling of induced seismicity
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Young, R. P.
2017-12-01
In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal for assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region; outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to an event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
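[Editor's note] For readers unfamiliar with rate-and-state friction, here is a minimal sketch of the Dieterich (aging-law) form that governs models of this type. Parameter values are illustrative, not those of the study.

```python
import numpy as np

mu0, a, b = 0.6, 0.010, 0.015   # reference friction; a < b gives velocity weakening
V0, Dc = 1e-6, 1e-4             # reference slip rate (m/s), characteristic slip (m)

def friction(V, theta):
    # mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def theta_dot(V, theta):
    # aging law for the state variable: d(theta)/dt = 1 - V*theta/Dc
    return 1.0 - V * theta / Dc

# At steady state (theta = Dc/V): mu_ss = mu0 + (a - b)*ln(V/V0); with a < b,
# friction drops as slip accelerates, which is what lets the VW patch stick-slip.
V = 1e-5
print(friction(V, Dc / V))      # steady-state friction at V = 10*V0
```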
Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results
NASA Astrophysics Data System (ADS)
Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.
2013-12-01
Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance for properly quantifying and mitigating seismic hazards. Estimates of earthquake probability are complicated by the uncertainty of whether a rupture will stop at or jump a fault step-over, which affects both the magnitude and the frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we incorporate this fine-scale fault structure, defined through seismological, geologic and geodetic means, in the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation, and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows examination of the mechanisms that control them. We investigate how the spatial and temporal distribution of aftershocks is affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.
Variability of recurrence interval for New Zealand surface-rupturing paleoearthquakes
NASA Astrophysics Data System (ADS)
Nicol, A., , Prof; Robinson, R., Jr.; Van Dissen, R. J.; Harvison, A.
2015-12-01
Recurrence interval (RI) for successive earthquakes on individual faults is recorded by paleoseismic datasets for surface-rupturing earthquakes, which in New Zealand have magnitudes of >Mw ~6 to 7.2 depending on the thickness of the brittle crust. The New Zealand faults examined have mean RI of ~130 to 8500 yr, with an upper bound censored by the sample duration (<30 kyr) and an inverse relationship to fault slip rate. Frequency histograms, probability density functions (PDFs) and coefficient of variation (CoV = standard deviation/arithmetic mean) values have been used to quantify RI variability for geological and simulated earthquakes on >100 New Zealand active faults. RI for individual faults can vary by more than an order of magnitude. CoV of RI for paleoearthquake data comprising 4-10 events ranges from ~0.2 to 1 with a mean of 0.6±0.2. These values are generally comparable to simulated earthquakes (>100 events per fault) and suggest that RI ranges from quasi-periodic (e.g., ~0.2-0.5) to random (e.g., ~1.0). Comparison of earthquake simulations and paleoearthquake data indicates that the mean and CoV of RI can be strongly influenced by sampling artefacts, including the magnitude of completeness, the dimensionality of spatial sampling and the duration of the sample period. Despite these sampling issues, RI for the best of the geological data (i.e., >6 events) and earthquake simulations are described by log-normal or Weibull distributions with long recurrence tails (~3 times the mean) and provide a basis for quantifying real RI variability (rather than sampling artefacts). Our analysis indicates that CoV of RI is negatively related to fault slip rate. These data are consistent with the notion that fault interaction and associated stress perturbations arising from slip on larger faults are more likely to advance or retard future slip on smaller faults than vice versa.
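[Editor's note] A short sketch of the two statistics at the heart of this abstract: the coefficient of variation of a recurrence-interval sample and a Weibull fit of the kind used to describe long recurrence tails. The interval values below are invented.

```python
import numpy as np
from scipy.stats import weibull_min

ri = np.array([310., 450., 820., 1400., 520., 980., 2600.])  # recurrence intervals, yr (invented)

cov = ri.std(ddof=1) / ri.mean()          # CoV = sample std / mean
print(f"mean RI = {ri.mean():.0f} yr, CoV = {cov:.2f}")
# CoV ~ 0.2-0.5 suggests quasi-periodic recurrence; CoV ~ 1 looks random (Poisson-like).

shape, loc, scale = weibull_min.fit(ri, floc=0.0)  # Weibull fit, location fixed at zero
print(f"Weibull shape = {shape:.2f}, scale = {scale:.0f} yr")
```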
Detection of co-seismic earthquake gravity field signals using GRACE-like mission simulations
NASA Astrophysics Data System (ADS)
Sharifi, Mohammad Ali; Shahamat, Abolfazl
2017-05-01
Since the launch of the GRACE satellite mission in 2002, the Earth's gravity field and its temporal variations have been measured with much closer inspection. Although these variations are mainly due to mass transfer associated with land water storage, they can also arise from mass movements related to natural phenomena including earthquakes, volcanic eruptions, melting of polar ice caps and glacial isostatic adjustment. This paper therefore examines which earthquake parameters GRACE-like satellite missions are most sensitive to. For this purpose, the parameters of the recent Maule earthquake and of the Alaska earthquake of 1964 were chosen, and several of their parameters were then varied. The GRACE-like sensitivity is assessed by simulating the earthquakes and the gravity changes they cause, using dislocation theory for a half-space Earth. The faulting parameters varied include fault length, width, depth and average slip. The results show that GRACE-like satellite missions tend to be more sensitive to fault width than to fault length, and more sensitive to dip variations than to the other parameters. This article can be useful to designers of upcoming mission scenarios and to seismologists studying fault parameters.
NASA Astrophysics Data System (ADS)
Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.
2018-05-01
Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand), surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate that the probability of an M≥7.8 earthquake over the year following the Kaikoura earthquake increases by 1.3-18 times relative to the pre-Kaikoura probability, with an absolute probability in the range of 0.6-7%. We find that the probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.
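[Editor's note] A deliberately crude clock-advance illustration of how a transient stress perturbation can raise a one-year earthquake probability. This is a simplification, not the paper's simulation-based method, and every number below is an assumption.

```python
import numpy as np

lam0 = 0.01           # background annual rate of M>=7.8 (assumed)
stress_rate = 10.0    # long-term stressing rate, kPa/yr (assumed)
delta_tau = 50.0      # transient stress step from nearby events, kPa (assumed)

clock_advance = delta_tau / stress_rate   # years of earthquake cycle "gained"
lam = lam0 * (1.0 + clock_advance)        # crude rate amplification

p0 = 1.0 - np.exp(-lam0)  # pre-event one-year probability (Poisson)
p1 = 1.0 - np.exp(-lam)   # post-event one-year probability
print(f"probability gain: {p1 / p0:.1f}x (absolute: {p1:.1%})")
```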
The effect of material heterogeneities in long term multiscale seismic cycle simulations
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Richards-Dinger, K. B.; Dieterich, J. H.
2016-12-01
A fundamental part of simulating earthquake cycles in large-scale multicycle earthquake simulators is the pre-computation of elastostatic Green's functions, collected into the stiffness matrix (K). The stiffness matrices are typically based on the elastostatic solutions of Okada (1992), Gimbutas et al. (2012), or similar. While these analytic solutions are computationally very fast, they are limited to modeling a homogeneous isotropic half-space. It is thus unknown how such simulations may be affected by the material heterogeneity that characterizes the earth medium. We are currently working on estimating the effects of heterogeneous material properties in the earthquake simulator RSQSim (Richards-Dinger and Dieterich, 2012). To do so, we calculate elastostatic solutions in a heterogeneous medium using the finite element (FE) method instead of the analytical solutions. The investigated region is a 400 x 400 km area centered on the Anza zone in southern California. The fault system geometry is based on that of the UCERF3 deformation models in the area of interest, which we implement in a finite element mesh using Trelis 15. The heterogeneous elastic structure is based on available tomographic data (seismic wavespeeds and density) for the region (SCEC CVM and Allam et al., 2014). For computation of the Green's functions we use the open source FE code Defmod (https://bitbucket.org/stali/defmod/wiki/Home) to calculate the elastostatic solutions due to unit slip on each patch. Earthquake slip on the fault plane is implemented through linear constraint equations (Ali et al., 2014; Kyriakopoulos et al., 2013; Aagaard et al., 2015), specifically via Lagrange multipliers. The elementary responses are collected into the "heterogeneous" stiffness matrix Khet and used in RSQSim in place of the Okada-based matrix. Finally, we compare the RSQSim results based on the "heterogeneous" Khet with results from Khom (a stiffness matrix generated from the same mesh as Khet but with homogeneous material properties). Estimating the effect of heterogeneous material properties on the seismic cycles simulated by RSQSim is a needed experiment that will allow us to evaluate the impact of heterogeneities in earthquake simulators.
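[Editor's note] A minimal sketch of the bookkeeping step described above: unit-slip responses from FE runs are stacked column-by-column into a stiffness matrix K, so stress changes everywhere follow from slip anywhere. The function fe_unit_slip_response is a hypothetical placeholder standing in for a Defmod (or other FE) run; all numbers are invented.

```python
import numpy as np

n_patches = 100

def fe_unit_slip_response(j):
    """Placeholder: shear-stress change on every patch due to unit slip on
    patch j, as computed in a (possibly heterogeneous) FE model."""
    rng = np.random.default_rng(j)
    col = -np.abs(rng.normal(size=n_patches)) * 1e5   # Pa per m of slip (invented)
    col[j] = -3e6                                     # self-stiffness dominates
    return col

# Column j of K is the medium's response to unit slip on patch j.
K_het = np.column_stack([fe_unit_slip_response(j) for j in range(n_patches)])

slip = np.zeros(n_patches)
slip[40:60] = 1.5                       # trial slip distribution, m
stress_change = K_het @ slip            # stress change (Pa) on every patch
```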
NASA Astrophysics Data System (ADS)
van der Lee, S.; Tekverk, K.; Rooney, K.; Boxerman, J.
2013-12-01
We designed and will present a lesson plan to teach students STEM concepts through seismology. The plan addresses the Next Generation Science Standards in the Framework for K-12 Science Education as well as the AAAS Benchmarks for Science Literacy. The plan can be executed at a facility with a seismometer in a research facility or university on a field trip, but it can also be used in a school setting with a school seismometer. Within the lesson plan, the students first use technology to obtain earthquake location data and map them. Next, the students learn about the science of earthquakes, followed by an engineering activity in which the students design a hypothetical seismometer and interact with an actual seismometer and live data display. Lastly, the students use mathematics to locate an earthquake through trilateration. The lesson plan has been fine-tuned through implementation with over 150 students from grades 3-12 in the Chicago area.
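[Editor's note] The trilateration exercise mentioned above amounts to finding the point whose distances to three stations match three measured epicentral distances. A small least-squares sketch, with invented station coordinates and distances:

```python
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0], [80.0, 10.0], [30.0, 70.0]])  # station positions, km (invented)
distances = np.array([50.0, 52.0, 41.0])                       # epicentral distances, km (invented)

def residuals(xy):
    # mismatch between each station's distance to the trial point and the measured distance
    return np.linalg.norm(stations - xy, axis=1) - distances

epicenter = least_squares(residuals, x0=np.array([20.0, 20.0])).x
print(f"epicenter ~ ({epicenter[0]:.1f}, {epicenter[1]:.1f}) km")
```

In a classroom the distances would come from S-P arrival-time differences at each station; the least-squares solve simply replaces the compass-and-circles construction.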
NASA Astrophysics Data System (ADS)
Todoriki, Masaru; Furumura, Takashi; Maeda, Takuto
2017-01-01
We investigated the effects of sea water on the propagation of seismic waves using a 3-D finite-difference simulation of seismic wave propagation following offshore earthquakes. When using a 1-D layered structure, the simulation results showed strong S- to P-wave conversion at the sea bottom; accordingly, S-wave energy was dramatically decreased by the sea water layer. This sea-water de-amplification effect had a strong frequency dependence, resembling a low-pass filter whose cut-off frequency and damping coefficients were defined by the thickness of the sea water layer. The sea water also acted to elongate the duration of the Rayleigh wave packet. The importance of the sea water layer in modelling offshore earthquakes was further demonstrated by a simulation using a realistic 3-D velocity structure model, with and without sea water, for a shallow (h = 14 km) outer-rise Nankai Trough event, the 2004 SE Off Kii Peninsula earthquake (Mw = 7.2). Synthetic seismograms generated by the model including sea water were in accordance with observed seismograms for the later, longer-period motions, particularly those in the shape of Rayleigh waves.
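[Editor's note] As a back-of-envelope proxy for the thickness dependence noted above, one can look at the quarter-wavelength resonance of the water column (P velocity of sea water ~1.5 km/s). This is an illustration of how a frequency scale emerges from water depth, not the paper's derivation of the filter parameters.

```python
VP_WATER = 1.5  # km/s, approximate P velocity of sea water

def water_resonance_hz(h_km, n=1):
    # n-th odd quarter-wavelength resonance of a water column of depth h
    return (2 * n - 1) * VP_WATER / (4.0 * h_km)

for h in (1.0, 2.0, 4.0):  # water depths, km
    print(f"h = {h:.0f} km -> f1 ~ {water_resonance_hz(h):.3f} Hz")
```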
Ground motion-simulations of 1811-1812 New Madrid earthquakes, central United States
Ramirez-Guzman, L.; Graves, Robert; Olsen, Kim B.; Boyd, Oliver; Cramer, Chris H.; Hartzell, Stephen; Ni, Sidao; Somerville, Paul G.; Williams, Robert; Zhong, Jinquan
2015-01-01
The region covered by our simulation domain encompasses a large portion of the CUS centered on the NMSZ, including several major metropolitan areas. Based on our simulations, more than eight million people living and working near the NMSZ would experience potentially damaging ground motion and modified Mercalli intensities ranging from VI to VIII if a repeat of the 1811–1812 earthquakes occurred today. Moreover, the duration of strong ground shaking in the greater Memphis metropolitan area could last from 30 to more than 60 s, depending on the magnitude and epicenter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin; Slaughter, Andrew; Veeraraghavan, Swetha
Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON) is a finite element application that aims at analyzing the response of 3-D soil-structure systems to natural and man-made hazards such as earthquakes, floods and fire. MASTODON currently focuses on the simulation of seismic events and has the capability to perform extensive ‘source-to-site’ simulations including earthquake fault rupture, nonlinear wave propagation and nonlinear soil-structure interaction (NLSSI) analysis. MASTODON is being developed to be a dynamic probabilistic risk assessment framework that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment.
Induced seismicity provides insight into why earthquake ruptures stop.
Galis, Martin; Ampuero, Jean Paul; Mai, P Martin; Cappa, Frédéric
2017-12-01
Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.
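[Editor's note] A hedged illustration of a volume-magnitude scaling for self-arrested injection-induced ruptures. A power-law form M0_max = gamma * dV^1.5 is assumed here, with an invented prefactor gamma (which in reality depends on prestress and friction); consult the paper for the actual derivation and constants. The moment-to-magnitude conversion is the standard one.

```python
import numpy as np

def max_magnitude(dV_m3, gamma=5e9):
    # assumed scaling: maximum seismic moment of a self-arrested rupture, N*m
    m0 = gamma * dV_m3 ** 1.5
    # standard moment magnitude: Mw = (2/3)(log10 M0 - 9.1), M0 in N*m
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

for dV in (1e3, 1e5, 1e7):  # injected volumes, m^3 (illustrative)
    print(f"dV = {dV:.0e} m^3 -> Mw_max ~ {max_magnitude(dV):.1f}")
```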
Modeling, Forecasting and Mitigating Extreme Earthquakes
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
NASA Astrophysics Data System (ADS)
Bhloscaidh, Mairead Nic; McCloskey, John; Pelling, Mark; Naylor, Mark
2013-04-01
Until expensive engineering solutions become more universally available, the objective targeting of resources at demonstrably effective, low-cost interventions might help reverse the trend of increasing mortality in earthquakes. Death tolls in earthquakes are the result of complex interactions between physical effects, such as the exposure of the population to strong shaking, and the resilience of the exposed population along with supporting critical infrastructures and institutions. The identification of socio-economic factors that contribute to earthquake mortality is crucial to identifying and developing successful risk management strategies. Here we develop a quantitative methodology to assess more objectively the ability of communities to withstand earthquake shaking, focusing in particular on those cases where risk management performance appears to exceed or fall below expectations based on economic status. Using only published estimates of the shaking intensity and population exposure for each earthquake, data that are available for earthquakes in countries irrespective of their level of economic development, we develop a model for mortality based on the contribution of population exposure to shaking only. This represents an attempt to remove, as far as possible, the physical causes of mortality from our analysis (we consider earthquake engineering to reduce building collapse among the socio-economic influences). The systematic part of the variance with respect to this model can therefore be expected to be dominated by socio-economic factors. We find, as expected, that this purely physical analysis partitions countries in terms of basic socio-economic measures, for example GDP, focusing analytical attention on the power of economic measures to explain variance in observed distributions of earthquake risk. The model allows the definition of a vulnerability index which, although it broadly demonstrates the expected income dependence of vulnerability to strong shaking, also identifies both anomalously resilient and anomalously vulnerable countries. We argue that this approach has the potential to direct sociological investigations to expose the underlying causes of the observed non-economic differentiation of vulnerability. At one level, closer study of the earthquakes represented by these data points might expose local or national interventions that are increasing the resilience of communities to strong shaking in the absence of major national investment. Ultimately it may contribute to the development of a quantitative evaluation of risk management effectiveness at the national level that can be used better to target and track risk management investments.
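[Editor's note] A minimal sketch of the exposure-only mortality model and residual-based vulnerability index described above: regress log deaths on log population exposure, then read anomalous resilience or vulnerability from the residuals. All data values are invented.

```python
import numpy as np

exposure = np.array([1e4, 5e4, 2e5, 1e6, 3e6])   # people exposed to strong shaking (invented)
deaths = np.array([3., 40., 90., 2000., 1500.])  # observed fatalities per event (invented)

# Fit log10(deaths) = a + b * log10(exposure) by least squares.
X = np.vstack([np.ones_like(exposure), np.log10(exposure)]).T
coef, *_ = np.linalg.lstsq(X, np.log10(deaths), rcond=None)

predicted = X @ coef
vulnerability_index = np.log10(deaths) - predicted   # residual per event
# Positive residual: more deaths than exposure alone explains (anomalously
# vulnerable); negative residual: anomalously resilient.
print(np.round(vulnerability_index, 2))
```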
Stability assessment of structures under earthquake hazard through GRID technology
NASA Astrophysics Data System (ADS)
Prieto Castrillo, F.; Boton Fernandez, M.
2009-04-01
This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation, and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements/changes to the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The metadata of these records is also stored in the GRID federated database. This metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically-based algorithm from the available metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with ongoing initiatives to build a European earthquake record database. In this context, GRID enables collaborative analysis of shared seismic data and results among different institutions.
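[Editor's note] The per-accelerogram job reduces to integrating a damped oscillator driven by a ground-motion record and reporting the maximum displacement. A self-contained sketch with an invented synthetic record and assumed structural parameters (the framework itself encapsulates this in Java; the sketch below is illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, zeta = 2 * np.pi * 1.0, 0.05   # 1 Hz structure, 5% damping (assumed)
t = np.linspace(0, 20, 2001)
ag = 0.3 * 9.81 * np.random.default_rng(0).standard_normal(t.size)  # synthetic record, m/s^2

def rhs(ti, y):
    # y = [u, v]; SDOF equation: u'' + 2*zeta*omega*u' + omega^2*u = -a_g(t)
    a = np.interp(ti, t, ag)
    return [y[1], -2 * zeta * omega * y[1] - omega**2 * y[0] - a]

sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.01)
print(f"max displacement: {np.abs(sol.y[0]).max():.3f} m")
```

Collecting this maximum over many records, binned by magnitude, is what feeds the statistical collapse-probability step described above.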
NASA Astrophysics Data System (ADS)
Brizzi, S.; Sandri, L.; Funiciello, F.; Corbi, F.; Piromallo, C.; Heuret, A.
2018-03-01
The observed maximum magnitude of subduction megathrust earthquakes is highly variable worldwide. One key question is which conditions, if any, favor the occurrence of giant earthquakes (Mw ≥ 8.5). Here we carry out a multivariate statistical study in order to investigate the factors affecting the maximum magnitude of subduction megathrust earthquakes. We find that the trench-parallel extent of subduction zones and the thickness of trench sediments provide the largest discriminating capability between subduction zones that have experienced giant earthquakes and those having significantly lower maximum magnitude. Monte Carlo simulations show that the observed spatial distribution of giant earthquakes cannot be explained by pure chance to a statistically significant level. We suggest that the combination of a long subduction zone with thick trench sediments likely promotes a great lateral rupture propagation, characteristic of almost all giant earthquakes.
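[Editor's note] A sketch of the kind of Monte Carlo test alluded to above: randomly reassign the giant events to subduction zones and ask how often chance alone reproduces the observed association with long zones. The zone data and the chosen test statistic are invented stand-ins for the study's actual variables.

```python
import numpy as np

rng = np.random.default_rng(42)
n_zones = 30
length = rng.uniform(500, 7000, n_zones)     # trench-parallel extent, km (invented)
has_giant = np.zeros(n_zones, bool)
has_giant[np.argsort(length)[-5:]] = True    # suppose giants sit in the 5 longest zones

obs = length[has_giant].mean()               # observed statistic: mean length of giant-hosting zones

# Null hypothesis: the 5 giant events land in random zones.
null = np.array([length[rng.choice(n_zones, 5, replace=False)].mean()
                 for _ in range(10000)])
p_value = (null >= obs).mean()
print(f"p = {p_value:.4f}")                  # small p: pattern unlikely to be pure chance
```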
Rupture evolution of the 2006 Java tsunami earthquake and the possible role of splay faults
NASA Astrophysics Data System (ADS)
Fan, Wenyuan; Bassett, Dan; Jiang, Junle; Shearer, Peter M.; Ji, Chen
2017-11-01
The 2006 Mw 7.8 Java earthquake was a tsunami earthquake, exhibiting frequency-dependent seismic radiation along strike. High-frequency global back-projection results suggest two distinct rupture stages. The first stage lasted ∼65 s with a rupture speed of ∼1.2 km/s, while the second stage lasted from ∼65 to 150 s with a rupture speed of ∼2.7 km/s. High-frequency radiators resolved with back-projection during the second stage spatially correlate with splay fault traces mapped from residual free-air gravity anomalies. These splay faults also colocate with a major tsunami source associated with the earthquake inferred from tsunami first-crest back-propagation simulation. These correlations suggest that the splay faults may have been reactivated during the Java earthquake, as has been proposed for other tsunamigenic earthquakes, such as the 1944 Mw 8.1 Tonankai earthquake in the Nankai Trough.
Ground motion simulations in Marmara (Turkey) region from 3D finite difference method
NASA Astrophysics Data System (ADS)
Aochi, Hideo; Ulrich, Thomas; Douglas, John
2016-04-01
In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw 4.5) and found that the 3D structure significantly improves the waveforms compared to a 1D layered model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas and Aochi, BSSA, 2016), and to provide synthetic seismograms for blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated into broadband ground-motion simulations, tsunami generation and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and visualizations of the results are published on the project's homepage. These simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although the refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models and the simulations are available on demand.
Mechanics of Granular Materials (MGM) Test Cell
NASA Technical Reports Server (NTRS)
2000-01-01
A test cell for the Mechanics of Granular Materials (MGM) experiment is tested for long-term storage with water in the system as planned for STS-107. This view shows the compressed sand column with the protective water jacket removed. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: University of Colorado at Boulder
Mechanics of Granular Materials (MGM) Cell
NASA Technical Reports Server (NTRS)
1996-01-01
One of three Mechanics of Granular Materials (MGM) test cells after flight on STS-79 and before impregnation with resin. Note that the sand column has bulged in the middle, and that the top of the column is several inches lower than the top of the plastic enclosure. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: University of Colorado at Boulder
Mechanics of Granular Materials (MGM) Test Cell
NASA Technical Reports Server (NTRS)
2000-01-01
A test cell for the Mechanics of Granular Materials (MGM) experiment is tested for long-term storage with water in the system as planned for STS-107. This view shows the top of the sand column with the metal platen removed. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: University of Colorado at Boulder
NASA Astrophysics Data System (ADS)
Sun, Y.; Luo, G.
2017-12-01
Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and the spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and the spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and of the viscosity of the middle-lower crust and upper mantle on the model results. Model results show that earthquakes and fault interactions increase the Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially within a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
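[Editor's note] The fault-interaction quantity invoked above is the Coulomb failure stress change, dCFS = d_tau + mu' * d_sigma_n (unclamping positive). A one-function sketch with invented values; real models resolve these changes on 3-D fault geometries:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    # d_tau: shear stress change in the receiver fault's slip direction (Pa)
    # d_sigma_n: normal stress change, positive = unclamping (Pa)
    # mu_eff: effective friction coefficient (assumed)
    return d_tau + mu_eff * d_sigma_n

dcfs = coulomb_stress_change(d_tau=0.05e6, d_sigma_n=-0.02e6)
print(f"dCFS = {dcfs / 1e6:+.3f} MPa")  # positive brings the fault closer to failure
```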
DOT National Transportation Integrated Search
2007-02-01
This document is the conference program of the 5th National Seismic Conference on Bridges and Highways. The conference was held in San Francisco on September 18-20, 2006 and attracted over 300 engineers, academicians, and students from around the world...
NASA Astrophysics Data System (ADS)
Orpin, Alan R.; Rickard, Graham J.; Gerring, Peter K.; Lamarche, Geoffroy
2016-05-01
Devastating tsunami over the last decade have significantly heightened awareness of the potential consequences and vulnerability of low-lying Pacific islands and coastal regions. Our appraisal of the potential tsunami hazard for the atolls of the Tokelau Islands is based on a tsunami source-propagation-inundation model using Gerris Flow Solver, adapted from the companion study by Lamarche et al. (2015) for the islands of Wallis and Futuna. We assess whether there is potential for tsunami flooding on any of the village islets from a selection of 14 earthquake-source experiments. These earthquake sources are primarily based on the largest Pacific earthquakes of Mw ≥ 8.1 since 1950 and other large credible sources of tsunami that may impact Tokelau. Earthquake-source location and moment magnitude are related to tsunami-wave amplitudes and tsunami flood depths simulated for each of the three atolls of Tokelau. This approach yields instructive results for a community advisory but is not intended to be fully deterministic. Rather, the underlying aim is to identify credible sources that present the greatest potential to trigger an emergency response. Results from our modelling show that wave fields are channelled by the bathymetry of the Pacific basin in such a way that the swathes of the highest waves sweep immediately northeast of the Tokelau Islands. Our limited simulations suggest that trans-Pacific tsunami from distant earthquake sources to the north of Tokelau pose the most significant inundation threat. In particular, our assumed worst-case scenario for the Kuril Trench generated maximum modelled-wave amplitudes in excess of 1 m, which may last a few hours and include several wave trains. Other sources can impact specific sectors of the atolls, particularly distant earthquakes from Chile and Peru, and regional earthquake sources to the south. Flooding is dependent on the wave orientation and direct alignment to the incoming tsunami. Our "worst-case" tsunami simulations of the Tokelau Islands suggest that dry areas remain around the villages, which are typically built on a high islet. Consistent with the oral history of little or no perceived tsunami threat, simulations from the recent Tohoku and Chile earthquake sources suggest only limited flooding around low-lying islets of the atoll. Where potential tsunami flooding is inferred from the modelling, recommended minimum evacuation heights above local sea level are compiled, with particular attention paid to variations in tsunami flood depth around the atolls, subdivided into directional quadrants around each atoll. However, complex wave behaviours around the atolls, islets, tidal channels and within the lagoons are also observed in our simulations. Wave amplitudes within the lagoons may exceed 50 cm, increasing any inundation and potential hazards on the inner shoreline of the atolls, which in turn may influence evacuation strategies. Our study shows that indicative simulation studies can be achieved even with only basic field information. In part, this is due to the spatially and vertically limited topography of the atoll, short reef flat and steep seaward bathymetry, and the simple depth profile of the lagoon bathymetry.
Hays, Walter W.
1979-01-01
In accordance with the provisions of the Earthquake Hazards Reduction Act of 1977 (Public Law 95-124), the U.S. Geological Survey has developed comprehensive plans for producing information needed to assess seismic hazards and risk on a national scale in fiscal years 1980-84. These plans are based on a review of the needs of Federal Government agencies, State and local government agencies, engineers and scientists engaged in consulting and research, professional organizations and societies, model code groups, and others. The Earthquake Hazards Reduction Act provided an unprecedented opportunity for participation in a national program by representatives of State and local governments, business and industry, the design professions, and the research community. The USGS and the NSF (National Science Foundation) have major roles in the national program. The ultimate goal of the program is to reduce losses from earthquakes. Implementation of USGS research in the Earthquake Hazards Reduction Program requires the close coordination of responsibility between Federal, State and local governments. The projected research plan in national seismic hazards and risk for fiscal years 1980-84 will be accomplished by USGS and non-USGS scientists and engineers. The latter group will participate through grants and contracts. The research plan calls for (1) national maps based on existing methods, (2) improved definition of earthquake source zones nationwide, (3) development of improved methodology, (4) regional maps based on the improved methodology, and (5) post-earthquake investigations. Maps and reports designed to meet the needs, priorities, concerns, and recommendations of various user groups will be the products of this research and provide the technical basis for improved implementation.
NASA Astrophysics Data System (ADS)
Wein, A. M.; Berryman, K. R.; Jolly, G. E.; Brackley, H. L.; Gledhill, K. R.
2015-12-01
The 2010-2011 Canterbury Earthquake Sequence began with the 4th September 2010 Darfield earthquake (Mw 7.1). Perhaps because there were no deaths, the mood of the city and the government was that high standards of earthquake engineering in New Zealand protected us, and there was a confident attitude to response and recovery. The demand for science and engineering information was of interest but not seen as crucial to policy, business or the public. The 22nd February 2011 Christchurch earthquake (Mw 6.2) changed all that; there was a significant death toll and many injuries. There was widespread collapse of older unreinforced and two relatively modern multi-storey buildings, and major disruption to infrastructure. The contrast in the interest and relevance of the science could not have been greater compared to 5 months previously. Magnitude 5+ aftershocks over a 20 month period resulted in confusion, stress, an inability to define a recovery trajectory, major concerns about whether insurers and reinsurers would continue to provide cover, very high levels of media interest from New Zealand and around the world, and high levels of political risk. As the aftershocks continued there was widespread speculation as to what the future held. During the sequence, the science and engineering sector sought to coordinate and offer timely and integrated advice. However, other than GeoNet, the national geophysical monitoring network, there were few resources devoted to communication, with the result that it was almost always reactive. With hindsight we have identified the need to resource information gathering and synthesis, execute strategic assessments of stakeholder needs, undertake proactive communication, and develop specific information packages for the diversity of users. Overall this means substantially increased resources. Planning is now underway for the science sector to adopt the New Zealand standardised CIMS (Coordinated Incident Management System) structure for management and communication during a crisis, which should help structure and resource the science response needs in future major events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bache, T.C.; Swanger, H.J.; Shkoller, B.
1981-07-01
This report summarizes three efforts performed during the past fiscal year. The first of these efforts is a study of the theoretical behavior of the regional seismic phase Lg in various tectonic provinces. Synthetic seismograms are used to determine the sensitivity of Lg to source and medium properties. The primary issues addressed concern the relationship of regional Lg characteristics to the crustal attenuation properties, the comparison of the Lg in many crustal structures and the source depth dependence of Lg. The second effort described is an expansion of the capabilities of the three-dimensional finite difference code TRES. The present capabilities are outlined with comparisons of the performance of the code on three computer systems. The last effort described is the development of an algorithm for simulation of the near-field ground motions from the 1971 San Fernando, California, earthquake. A computer code implementing this algorithm has been provided to the Mission Research Corporation for simulation of the acoustic disturbances from such an earthquake.
Progress in Computational Simulation of Earthquakes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert
2006-01-01
GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.
Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management
Jaiswal, Kishor; Wald, David J.
2008-01-01
Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992) and fatality rates are likely to continue to rise with increased population and urbanization of global settlements, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed and non-engineered dwellings which have high susceptibility to collapse during earthquakes. Moreover, with increasing urbanization half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation of this work, we hope that the global building inventory database described herein will find widespread use for other mitigation efforts as well. For a real-time earthquake impact alert system, such as the U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the contribution of building stock, its relative vulnerability, and its distribution are vital components for determining the extent of casualties during an earthquake. It is evident from large deadly historical earthquakes that the distribution of vulnerable structures and their occupancy level during an earthquake control the severity of human losses. For example, though the number of strong earthquakes in California is comparable to that of Iran, the total earthquake-related casualties in California during the last 100 years are dramatically lower than the casualties from several individual Iranian earthquakes. The relatively low casualty count in California is attributed mainly to the fact that more than 90 percent of the building stock in California is made of wood and is designed to withstand moderate to large earthquakes (Kircher, Seligson and others, 2006). In contrast, the Iranian building stock, 80 percent of which is adobe or non-engineered masonry with poor lateral-load-resisting systems, succumbs even at moderate levels of ground shaking. Consequently, the heavy death toll for the 2003 Bam, Iran earthquake, which claimed 31,828 lives (Ghafory-Ashtiany and Mousavi, 2005), is directly attributable to such poorly resistant construction, and future events will produce comparable losses unless practices change.
Similarly, multistory, precast-concrete framed buildings caused heavy casualties in the 1988 Spitak, Armenia earthquake (Bertero, 1989); weaker masonry and reinforced-concrete framed construction designed for gravity loads with soft first stories dominated losses in the Bhuj, India earthquake of 2001 (Madabhushi and Haigh, 2005); and adobe and weak masonry dwellings in Peru controlled the death toll in the Peru earthquake of 2007 (Taucer, J. and others, 2007). Spence (2007), after conducting a brief survey of the most lethal earthquakes since 1960, found that building collapse remains a major cause of earthquake mortality and that unreinforced masonry buildings are one of the most…
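The three-step PAGER-style loss estimate described above (shaking hazard, exposure by building type, collapse casualties) can be illustrated with a minimal sketch. The function name, fragility curves, and all rates below are hypothetical placeholders for illustration, not PAGER's actual models or values.

```python
def estimate_casualties(mmi, population, building_fractions, collapse_prob, fatality_rate):
    """Rough casualty estimate for one exposed settlement.

    mmi:                shaking intensity estimated at the site
    building_fractions: dict type -> fraction of people in that building type
    collapse_prob:      dict type -> P(collapse | mmi) for that type
    fatality_rate:      dict type -> P(death | collapse) for occupants
    """
    total = 0.0
    for btype, frac in building_fractions.items():
        occupants = population * frac
        total += occupants * collapse_prob[btype](mmi) * fatality_rate[btype]
    return total

# Illustrative fragility: collapse probability rises with intensity.
frag = {
    "adobe": lambda mmi: min(1.0, max(0.0, (mmi - 6.0) / 3.0)),
    "wood":  lambda mmi: min(1.0, max(0.0, (mmi - 8.5) / 3.0)),
}
deaths = estimate_casualties(
    mmi=8.0,
    population=100_000,
    building_fractions={"adobe": 0.8, "wood": 0.2},
    collapse_prob=frag,
    fatality_rate={"adobe": 0.1, "wood": 0.01},
)
print(f"illustrative casualty estimate: {deaths:.0f}")
```

The adobe-dominated case yields thousands of estimated deaths while the wood-dominated case yields almost none, mirroring the Iran/California contrast drawn in the abstract.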
NASA Astrophysics Data System (ADS)
Yang, Kun; Xu, Quan-li; Peng, Shuang-yun; Cao, Yan-bo
2008-10-01
Based on a necessity analysis of GIS applications in earthquake disaster prevention, this paper discusses in depth the spatial integration of urban earthquake disaster loss evaluation models and visualization technologies, using network development methods such as COM/DCOM, ActiveX and ASP, as well as spatial database development methods such as OO4O and ArcSDE based on the ArcGIS software packages. In addition, following software engineering principles, a design for an urban earthquake emergency response decision support system based on GIS technologies is proposed, covering the system's logical structure, technical approach, realization methods and functional structure. Finally, the user interfaces of the test system are presented.
Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations
Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.
2017-01-01
Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Nakamura, T.; Miyoshi, T.
2015-12-01
The May 30, 2015 Bonin Islands, Japan earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations at epicentral distances between 30 and 90 degrees. The broadband original data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with a side length of 50 km. We obtain source rupture models for both nodal planes, one with high dip angle (74 degrees) and one with low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source rupture models using the spectral-element method (Komatitsch & Tromp, 2001). We use the new Earth Simulator system at JAMSTEC to compute the synthetic seismograms. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 3.8 seconds and longer requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake really did occur below the 660 km discontinuity. In our present forward simulations, the source rupture model with the low-angle dipping fault plane appears to better explain the observations.
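The preprocessing step mentioned above (integrating broadband records to displacement and band-pass filtering at 0.002-1 Hz) might look roughly like the following sketch. The function and demo trace are illustrative assumptions; a real workflow would also deconvolve the instrument response before integration.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

def velocity_to_displacement(vel, dt, fmin=0.002, fmax=1.0):
    """Integrate a velocity seismogram to displacement, then band-pass
    it in the 0.002-1 Hz band quoted in the abstract."""
    vel = vel - vel.mean()                            # crude detrend
    disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)
    nyq = 0.5 / dt
    b, a = butter(4, [fmin / nyq, fmax / nyq], btype="band")
    return filtfilt(b, a, disp)                       # zero-phase filter

# Synthetic demo trace: a 0.1 Hz sinusoid sampled at 20 Hz for 10 minutes.
dt = 0.05
t = np.arange(0, 600, dt)
vel = np.sin(2 * np.pi * 0.1 * t)
disp = velocity_to_displacement(vel, dt)
```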
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.
2016-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC, Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.
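As one concrete example of the kind of consistency testing CSEP performs, the sketch below implements a Poisson number test (N-test): does the observed count of target earthquakes fall within the range implied by a forecast's expected rate? The forecast rate and observed count here are made-up values for illustration.

```python
from scipy.stats import poisson

def n_test(forecast_rate, n_observed):
    """Two one-sided quantile scores for forecast/observation consistency."""
    delta1 = 1.0 - poisson.cdf(n_observed - 1, forecast_rate)  # P(X >= N)
    delta2 = poisson.cdf(n_observed, forecast_rate)            # P(X <= N)
    return delta1, delta2

# Forecast expects 10 target events; 16 were observed.
d1, d2 = n_test(forecast_rate=10.0, n_observed=16)
# The forecast is judged inconsistent if either score is very small.
print(f"delta1={d1:.3f}, delta2={d2:.3f}")
```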
NASA Astrophysics Data System (ADS)
Perry, S.; Benthien, M.; Jordan, T. H.
2005-12-01
The SCEC/UseIT internship program is training the next generation of earthquake scientists, with methods that can be adapted to other disciplines. UseIT interns work collaboratively, in multi-disciplinary teams, conducting computer science research that is needed by earthquake scientists. Since 2002, the UseIT program has welcomed 64 students, in some two dozen majors, at all class levels, from schools around the nation. Each summer's work is posed as a "Grand Challenge." The students then organize themselves into project teams, decide how to proceed, and pool their diverse talents and backgrounds. They have traditional mentors, who provide advice and encouragement, but they also mentor one another, and this has proved to be a powerful relationship. Most begin with fear that their Grand Challenge is impossible, and end with excitement and pride about what they have accomplished. The 22 UseIT interns in summer 2005 were primarily computer science and engineering majors, with others in geology, mathematics, English, digital media design, physics, history, and cinema. The 2005 Grand Challenge was to "build an earthquake monitoring system" to aid scientists who must visualize rapidly evolving earthquake sequences and convey information to emergency personnel and the public. Most UseIT interns were engaged in software engineering, bringing new datasets and functionality to SCEC-VDO (Virtual Display of Objects), a 3D visualization software that was prototyped by interns the previous year, using Java3D and an extensible, plug-in architecture based on the Eclipse Integrated Development Environment. Other UseIT interns used SCEC-VDO to make animated movies, and experimented with imagery in order to communicate concepts and events in earthquake science. One movie-making project included the creation of an assessment to test the effectiveness of the movie's educational message. Finally, one intern created an interactive, multimedia presentation of the UseIT program.
Stochastic ground-motion simulations for the 2016 Kumamoto, Japan, earthquake
NASA Astrophysics Data System (ADS)
Zhang, Long; Chen, Guangqi; Wu, Yanqiang; Jiang, Han
2016-11-01
On April 15, 2016, Kumamoto, Japan, was struck by a large earthquake sequence, leading to severe casualties and building damage. The stochastic finite-fault method based on a dynamic corner frequency has been applied to perform ground-motion simulations for the 2016 Kumamoto earthquake. There are 53 high-quality KiK-net stations available in the Kyushu region, and we employed records from all stations to determine region-specific source, path and site parameters. The calculated S-wave attenuation for the Kyushu region beneath the volcanic and non-volcanic areas can be expressed as Q_s = (85.5 ± 1.5)f^(0.68±0.01) and Q_s = (120 ± 5)f^(0.64±0.05), respectively. The effects of lateral S-wave velocity and attenuation heterogeneities on the ground-motion simulations were investigated. Site amplifications were estimated using the corrected cross-spectral ratio technique. The zero-distance kappa value was estimated to be 0.0514 ± 0.0055 s using the spectral decay method. The stress drop of the mainshock based on the USGS slip model was estimated optimally to have a value of 64 bars. Our finite-fault model with optimized parameters was validated through the good agreement of observations and simulations at all stations. The attenuation characteristics of the simulated peak ground accelerations were also successfully captured by the ground-motion prediction equations. Finally, the ground motions at two destructively damaged regions, Kumamoto Castle and Minami Aso village, were simulated. We conclude that the stochastic finite-fault method with well-determined parameters can reproduce the ground-motion characteristics of the 2016 Kumamoto earthquake in both the time and frequency domains. This work is necessary for seismic hazard assessment and mitigation.
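The core idea of stochastic ground-motion simulation of this kind can be sketched in a few lines: shape windowed white noise to a target amplitude spectrum while keeping random phase. This is a minimal, Boore-style illustration under assumed parameters; the paper's dynamic-corner-frequency finite-fault method additionally sums subfault contributions and includes path (Q, geometric spreading), kappa, and site terms omitted here.

```python
import numpy as np

def stochastic_acceleration(f0=1.0, duration=20.0, dt=0.01, seed=0):
    """Windowed Gaussian noise shaped to an omega-squared source spectrum
    with corner frequency f0 (all values illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    noise = rng.standard_normal(n) * np.hanning(n)    # windowed white noise
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, dt)
    # Omega-squared acceleration spectrum, unnormalized.
    target = (2 * np.pi * freqs) ** 2 / (1.0 + (freqs / f0) ** 2)
    # Impose the target amplitude, keep the random phase of the noise.
    spec *= target / np.abs(spec).clip(min=1e-12)
    return np.fft.irfft(spec, n)

acc = stochastic_acceleration()
```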
Tsunami Elevation Predictions for American Samoa.
1980-09-01
The tide gauge at Pago Pago recorded the tsunami from the earthquake of May 13, 1953 in Costa Rica (Microfiche Collection of Tsunami Mareograms 1952-1975). Scientific abstracts and indexes relevant to earthquakes, tsunamis, and geology were also reviewed.
Texas Should Require Homeland Security Standards for High-Speed Rail
2015-12-01
Japanese trains, engineered with earthquakes in mind, all came to a safe stop during the 2011 Fukushima disaster without loss of life, even as the earthquake and tsunami devastated parts of Japan and caused the consequential breach of the Fukushima nuclear reactor (Ichiro Fujisaki, "Japan's Recovery Six Months after the Earthquake, Tsunami and Nuclear Crisis," Brookings Institution).
3D Bedrock Structure of Bornova Plain and Its surroundings (İzmir/Western Turkey)
NASA Astrophysics Data System (ADS)
Pamuk, Eren; Gönenç, Tolga; Özdağ, Özkan Cevdet; Akgün, Mustafa
2018-01-01
An earthquake record on engineering bedrock is needed to perform soil deformation analysis. This record can be obtained in different ways (seismographs on engineering bedrock, soil transfer functions, or scenario earthquakes). The S-wave velocity (Vs) profile must be known at least down to engineering bedrock in order to calculate soil transfer functions accurately and completely. In addition, 2D or 3D models of the soil and of the engineering and seismic bedrock are needed for soil response analyses. These models are used to determine changes in the amplitude and frequency content of earthquake waves, depending on the seismic impedance from seismic bedrock to the ground surface and on basin effects. In this context, it is important to use multiple in situ geophysical techniques to create the soil-bedrock models. In this study, 2D and 3D soil-bedrock models of Bornova plain and its surroundings (Western Turkey), an area of high seismic risk, were obtained by a combined survey of surface wave and microgravity methods. Results of the study show that engineering bedrock depths in the middle part of Bornova plain range from 200 to 400 m, while the southern and northern parts, which are covered by limestone and andesite, exhibit engineering bedrock (Vs > 760 m/s) character. In addition, seismic bedrock (Vs > 3000 m/s) depth changes from 550 to 1350 m. The predominant period values obtained from the single-station microtremor method change from 0.45 to 1.6 s and exceed 1 s in the middle part of Bornova plain, where the basin is deeper. Bornova plain has very thick sedimentary units with very low Vs values above engineering bedrock. In addition, sudden changes are observed at layer interfaces in both horizontal and vertical directions.
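The link between the reported Vs profiles and predominant periods can be illustrated with the quarter-wavelength rule, T0 = 4·Σ(h_i/Vs_i), summed over the soil layers above engineering bedrock. The layer values below are illustrative assumptions, not measurements from the Bornova study.

```python
def site_period(thicknesses_m, velocities_ms):
    """Fundamental site period from layered thickness/velocity pairs."""
    return 4.0 * sum(h / v for h, v in zip(thicknesses_m, velocities_ms))

# e.g. ~200 m of soft sediments over engineering bedrock (Vs > 760 m/s):
T0 = site_period([30, 170], [250, 600])   # -> about 1.6 s
print(f"T0 = {T0:.2f} s")
```

A thicker, slower column gives a longer period, consistent with the >1 s values reported over the deepest part of the basin.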
A stochastic automata network for earthquake simulation and hazard estimation
NASA Astrophysics Data System (ADS)
Belubekian, Maya Ernest
1998-11-01
This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying the earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after a largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto. The sensitivity analysis of the model results to the variation in basic parameters shows that the maximum magnitude has the most significant impact on the hazard, especially for long forecast periods.
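The Poisson-model comparison described above hinges on a simple property: under a Poisson model the probability of at least one event in a window of t years depends only on the long-term rate, P = 1 − exp(−λt), no matter when the last large event occurred. A small sketch with an illustrative recurrence rate:

```python
import math

def poisson_prob(rate_per_year, t_years):
    """P(at least one event in t years) under a memoryless Poisson model."""
    return 1.0 - math.exp(-rate_per_year * t_years)

# Illustrative: a fault producing large events every ~150 years on average.
print(poisson_prob(1.0 / 150.0, 30))   # ~0.18 for a 30-year window
```

A time-dependent model such as the one in this dissertation instead lowers this probability just after the largest event and raises it during a long gap, which is exactly the bias the abstract attributes to the Poisson model.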
NASA Astrophysics Data System (ADS)
Baba, T.; Ashi, J.; Kanamatsu, T.; Imai, K.; Yamashita, K.
2017-12-01
"SHINCHO-KI" is an ancient document that records tsunami damages caused by the 1512 Eisho earthquake, the 1605 Keicho earthquake, the 1707 Hoei earthquake and the 1854 Ansei-Nankai earthquake in Shishikui, where is located along the coast of the southeastern part of Shikoku, facing to the Nankai trough. According to SHINCHO-KI, 3700 people were dead in Shishikui by the tsunami during the 1512 Eisho earthquake. However, no evidence was found for the occurrence of the 1512 Eisho earthquake except for SHINCHO-KI, while the other earthquakes were recorded in many ancient documents in the southwestern Japan. To investigate the source mechanism of the 1512 Eisho earthquake, we carefully read a bathymetric chart and found a scarp with a height of about 400 m and a width of about 6000 m at a position about 24 km offshore in the southeastern direction from Shishikui. We also carried out a survey by using a deep-towed sub-bottom profiler (SBP) on ROV NSS during the R/V Hakuho-maru KH-16-5 cruise. The result shows detailed structures possibly caused by a recent landslide. The vertical displacement of the strata was measured to be about 50 m. By considering these results, we simulated the 1512 Eisho tsunami generated by a submarine mass failure. The topographic data in Shishikui which is needed in the calculation was made from the present data. But we removed the artificial structures such as wave breakers and altered coastlines by referring to old map images. In the numerical simulation, the initial sea surface deformation was obtained by the method proposed by Watts et al. (2005), and the tsunami propagation was calculated by solving the nonlinear shallow water equations with dispersive (Boussinesq) term on a finite difference scheme. We solved the advection terms by using the third-order upwind difference to avoid artificial viscosity. The numerical simulation estimated the maximum tsunami height of about 6m and moderate inundation on land in Shishikui by the 1512 Eisho tsunami.
Insights into earthquake hazard map performance from shaking history simulations
NASA Astrophysics Data System (ADS)
Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.
2017-12-01
Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher-than-mapped shaking — arises by chance or reflects biases in the map. Due to this problem, there are limits to how well we can expect hazard maps to predict future shaking, as well as to our ability to test the performance of a hazard map based on available observations.
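The shaking-history experiment described above can be reduced to a short Monte Carlo sketch: draw many synthetic histories from an assumed hazard model and count how often each history's maximum exceeds the "mapped" value implied by that same model. The Poisson rate and lognormal shaking model below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
rate, years, n_hist = 0.2, 50, 10_000   # events/yr; window length; histories

def history_max(t_years):
    """Largest shaking (in g) at one site over one synthetic history."""
    n = rng.poisson(rate * t_years)
    if n == 0:
        return 0.0
    # Per-event shaking: lognormal, median 0.05 g, sigma_ln = 0.8 (assumed).
    return np.exp(np.log(0.05) + 0.8 * rng.standard_normal(n)).max()

maxima = np.array([history_max(years) for _ in range(n_hist)])
mapped = np.quantile(maxima, 0.90)      # the "map" value implied by the model
print("fraction of histories exceeding the map:", (maxima > mapped).mean())
```

By construction roughly 10% of histories exceed the map, which is the verification result; the wide scatter of `maxima` across histories is the validation problem the abstract describes.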
Barbour, Andrew J.; Norbeck, Jack H.; Rubinstein, Justin L.
2017-01-01
The 2016 Mw 5.8 Pawnee earthquake occurred in a region with active wastewater injection into a basal formation group. Prior to the earthquake, fluid injection rates at most wells were relatively steady, but newly collected data show significant increases in injection rate in the years leading up to the earthquake. For the same time period, the total volumes of injected wastewater were roughly equivalent between variable-rate and constant-rate wells. To understand the possible influence of these changes in injection, we simulate the variable-rate injection history and its constant-rate equivalent in a layered poroelastic half-space to explore the interplay between pore-pressure effects and poroelastic effects on the fault leading up to the mainshock. In both cases, poroelastic stresses contribute a significant proportion of Coulomb failure stresses on the fault compared to pore-pressure increases alone, but the resulting changes in seismicity rate, calculated using a rate-and-state frictional model, are many times larger when poroelastic effects are included, owing to enhanced stressing rates. In particular, the variable-rate simulation predicts more than an order of magnitude increase in seismicity rate above background rates compared to the constant-rate simulation with equivalent volume. The observed cumulative density of earthquakes prior to the mainshock within 10 km of the injection source exhibits remarkable agreement with seismicity predicted by the variable-rate injection case.
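A rate-and-state seismicity-rate calculation of the kind used here can be sketched by integrating Dieterich's (1994) state variable under a prescribed Coulomb stressing history and mapping it to a rate relative to background. This is a hedged illustration with made-up stressing histories and parameter values, not the paper's calibrated model.

```python
import numpy as np

def seismicity_rate(stressing_rate, dt, a_sigma, s0):
    """R/r from Dieterich's evolution law d(gamma)/dt = (1 - gamma*Sdot)/(a*sigma).

    stressing_rate: Coulomb stressing-rate history [MPa/yr]
    a_sigma:        constitutive product a*sigma [MPa]
    s0:             background (reference) stressing rate [MPa/yr]
    """
    gamma = 1.0 / s0                        # steady-state initial condition
    rates = np.empty_like(stressing_rate)
    for i, sdot in enumerate(stressing_rate):
        gamma += dt * (1.0 - gamma * sdot) / a_sigma
        rates[i] = 1.0 / (gamma * s0)       # seismicity rate relative to r
    return rates

dt, a_sigma, s0 = 0.01, 0.05, 0.001
t = np.arange(0, 10, dt)
ramped = s0 + 0.004 * t / 10.0              # increasing stressing rate
constant = np.full_like(t, s0 + 0.002)      # constant-rate equivalent
print(seismicity_rate(ramped, dt, a_sigma, s0)[-1],
      seismicity_rate(constant, dt, a_sigma, s0)[-1])
```

The enhanced stressing rate late in the ramped history drives the rate well above the constant-rate case, which is the qualitative behavior reported for the variable-rate wells.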
Earthquake hazards on the cascadia subduction zone.
Heaton, T H; Hartzell, S H
1987-04-10
Large subduction earthquakes on the Cascadia subduction zone pose a potential seismic hazard. Very young oceanic lithosphere (10 million years old) is being subducted beneath North America at a rate of approximately 4 centimeters per year. The Cascadia subduction zone shares many characteristics with subduction zones in southern Chile, southwestern Japan, and Colombia, where comparably young oceanic lithosphere is also subducting. Very large subduction earthquakes, ranging in energy magnitude (Mw) between 8 and 9.5, have occurred along these other subduction zones. If the Cascadia subduction zone is also storing elastic energy, a sequence of several great earthquakes (Mw 8) or a giant earthquake (Mw 9) would be necessary to fill this 1200-kilometer gap. The nature of strong ground motions recorded during subduction earthquakes of Mw less than 8.2 is discussed. Strong ground motions from even larger earthquakes (Mw up to 9.5) are estimated by simple simulations. If large subduction earthquakes occur in the Pacific Northwest, relatively strong shaking can be expected over a large region. Such earthquakes may also be accompanied by large local tsunamis.
Pitarka, Arben; Graves, Robert; Irikura, Kojiro; Miyake, Hiroe; Rodgers, Arthur
2017-01-01
We analyzed the performance of the Irikura and Miyake (Pure and Applied Geophysics 168(2011):85–104, 2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground motion simulation methodology of Graves and Pitarka (Bulletin of the Seismological Society of America 100(5A):2095–2123, 2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0–20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (Bulletin of the Seismological Society of America, 2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground motion prediction equations (GMPEs) over the frequency band 0.1–10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1–3 s, where the IM2011 motions are about 20–30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1–3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study. We concluded that the IM2011 rupture generator performs well in ground motion simulations using the Graves and Pitarka hybrid method, and we therefore recommend that it be considered for inclusion in the framework used by the Southern California Earthquake Center broadband simulation platform.
Build an Earthquake City! Grades 6-8.
ERIC Educational Resources Information Center
Rushton, Erik; Ryan, Emily; Swift, Charles
In this activity, students build a city out of sugar cubes, bouillon cubes, and gelatin cubes. The city is then put through simulated earthquakes to see which cube structures withstand the shaking the best. This activity requires a 50-minute time period for completion. (Author/SOE)
Stephenson, W.J.; Frankel, A.D.; Odum, J.K.; Williams, R.A.; Pratt, T.L.
2006-01-01
A shallow bedrock fold imaged by a 1.3-km-long high-resolution shear-wave seismic reflection profile in west Seattle focuses seismic waves arriving from the south. This focusing may cause a pocket of amplified ground shaking and the anomalous chimney damage observed in the earthquakes of 1949, 1965 and 2001. The 200-m bedrock fold at ~300-m depth is caused by deformation across an inferred fault within the Seattle fault zone. Ground motion simulations, using the imaged geologic structure and northward-propagating north-dipping plane wave sources, predict a peak horizontal acceleration pattern that matches that observed in strong motion records of the 2001 Nisqually event. Additionally, a pocket of chimney damage reported for both the 1965 and the 2001 earthquakes generally coincides with a zone of simulated amplification caused by focusing. This study further demonstrates the significant impact shallow (<1 km) crustal structures can have on earthquake ground-motion variability.
NASA Astrophysics Data System (ADS)
Karabulut, Savas; Cinku, Mualla; Tezel, Okan; Hisarli, Mumtaz; Ozcep, Ferhat; Tun, Muammer; Avdan, Ugur; Ozel, Oguz; Acikca, Ahmet; Aygordu, Ozan; Benli, Aral; Kesisyan, Arda; Yilmaz, Hakan; Varici, Cagri; Ozturkan, Hasan; Ozcan, Cuneyt; Kivrak, Ali
2015-04-01
Social Responsibility Projects (SRP) are important tools for contributing to the development of communities and to applied educational science. Researchers dealing with engineering studies generally focus on technical specifications. However, when the subject is earthquakes, engineers should also consider the social and educational components besides the technical aspects. If scientific projects are carried out in collaboration with city municipalities, they can reach a wide range of people. Turkey is one of the most seismically active regions and has experienced destructive earthquakes. The 1999 Marmara earthquake was responsible for the loss of more than 18,000 people. The destructive damage occurred in buildings constructed on problematic soils, which remains one of the most important issues to be solved in Turkey. In addition to the large earthquakes that have occurred along the major segments of the North and East Anatolian Fault Zones due to the northward excursion of Anatolia, the extensional regime of the Aegean region is also characterized by earthquakes occurring on a number of strike-slip and normal faults. The Dikili area within the eastern Aegean extensional region experienced a large earthquake in 1939 (M 6.8), and seismic activity in the area remains high. Many places, such as the Kabakum village, were relocated to their present positions after this earthquake. The probability of an earthquake hazard in Dikili is considerably high today. Therefore, it is very important to predict the soil behaviour and engineering problems in this area using Geographic Information System (GIS) tools. For this purpose we conducted a project in collaboration with the Dikili Municipality in İzmir (Turkey) to address the following issues: a) possible disaster mitigation in light of earthquake-soil-structure interaction, b) geo-engineering problems (i.e., soil liquefaction, soil settlement, soil bearing capacity, soil amplification), c) the basin structure and possible faults of the Dikili district, d) risk to cultivated areas due to salt-water intrusion, e) the tectonic activity of the study area from the Miocene to the present. During this study a number of measurements were carried out to address these problems. Single-station microtremor (H/V) measurements following Nakamura's technique were applied at 222 points; the results provide maps of soil fundamental frequency, soil amplification and sediment thickness derived using empirical relationships. The Spatial Autocorrelation Technique (SPAC) was carried out at 11 sites with a Guralp CG-5 seismometer to estimate the shear-wave velocity-depth model down to the seismological bedrock. Multi-channel Analysis of Surface Waves (MASW), the Microtremor Array Method (MAM) and the seismic refraction method were applied at 121 sites with a SARA-DoReMi seismograph. Soil-liquefaction-induced settlements were evaluated within the framework of shallow soil engineering problems. Vertical Electrical Sounding (VES), carried out with a Scintrex SARIS resistivity instrument, was used to delineate salty, drinkable, and hot/cold groundwater, the locations of possible faults, and the bedrock depth. To define the areas influenced by salt water, the induced polarization (IP) method was applied at 34 sites.
The basin structure and the probable faults of the study area were determined by applying gravity measurements at 248 points with a CG-5 Autograv meter. Evaluation of the combined data is very important for producing microzonation maps. We therefore integrated all of the data into the GIS database and prepared a large variety of maps.
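The single-station H/V (Nakamura) computation used in the survey above amounts to a smoothed Fourier amplitude ratio of the horizontal components to the vertical. The sketch below is an illustrative implementation under the assumption that the inputs are already windowed, detrended microtremor records; real processing would average over many windows and reject transients.

```python
import numpy as np

def hv_ratio(ns, ew, z, dt, smooth=11):
    """Smoothed H/V spectral ratio from three equal-length components."""
    spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(len(x))))
    h = np.sqrt(spec(ns) * spec(ew))          # geometric-mean horizontal
    v = spec(z)
    k = np.ones(smooth) / smooth              # simple moving-average smoothing
    hv = np.convolve(h, k, "same") / np.convolve(v, k, "same").clip(1e-12)
    freqs = np.fft.rfftfreq(len(ns), dt)
    return freqs, hv

# The frequency of the main peak of hv approximates the soil fundamental
# frequency mapped at the 222 measurement points.
```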
PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.
Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming
2016-05-31
Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observation errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. Firstly, data from an IGS tracking station were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.
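The sliding-window idea can be sketched as follows: when a station is known to be static between events, averaging the PPP coordinate series within a moving window suppresses unmodeled errors, while the raw series is retained for detecting sudden coseismic motion. This is a hedged illustration of the general principle, not the paper's algorithm; the window length is an arbitrary placeholder.

```python
import numpy as np

def sliding_window_mean(coords, window=120):
    """coords: (n_epochs, 3) ENU position series; returns smoothed series."""
    k = np.ones(window) / window
    return np.column_stack(
        [np.convolve(coords[:, i], k, mode="same") for i in range(3)]
    )

# A jump between consecutive smoothed windows that exceeds the noise floor
# can then be flagged as deformation or as earthquake shaking onset.
```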
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as to complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).
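A standard statistic when comparing synthetic and observed catalogs of this kind is the Gutenberg-Richter b-value; Aki's maximum-likelihood estimator with Utsu's binning correction is a common choice. The catalog values below are illustrative placeholders.

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki maximum-likelihood b-value for magnitudes at or above the
    completeness threshold mc; dm/2 corrects for magnitude binning (Utsu)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

print(b_value([2.1, 2.4, 2.2, 3.0, 2.7, 3.5, 2.3, 4.1], mc=2.0))
```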
Earthquakes trigger the loss of groundwater biodiversity
NASA Astrophysics Data System (ADS)
Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; di Cioccio, Alessia; di Lorenzo, Tiziana; Petitta, Marco; di Carlo, Piero
2014-09-01
Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.
NASA Astrophysics Data System (ADS)
Norbeck, Jack H.; Horne, Roland N.
2018-05-01
The maximum expected earthquake magnitude is an important parameter in seismic hazard and risk analysis because of its strong influence on ground motion. In the context of injection-induced seismicity, the processes that control how large an earthquake will grow may be influenced by operational factors under engineering control as well as natural tectonic factors. Determining the relative influence of these effects on maximum magnitude will impact the design and implementation of induced seismicity management strategies. In this work, we apply a numerical model that considers the coupled interactions of fluid flow in faulted porous media and quasidynamic elasticity to investigate the earthquake nucleation, rupture, and arrest processes for cases of induced seismicity. We find that under certain conditions, earthquake ruptures are confined to a pressurized region along the fault with a length-scale that is set by injection operations. However, earthquakes are sometimes able to propagate as sustained ruptures outside of the zone that experienced a pressure perturbation. We propose a faulting criterion that depends primarily on the state of stress and the earthquake stress drop to characterize the transition between pressure-constrained and runaway rupture behavior.
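The failure bookkeeping behind models like the one above rests on the Coulomb criterion: the Coulomb stress change on a fault patch combines shear stress, normal stress (compression positive), and pore-pressure changes. The sketch below is a minimal illustration with made-up values, not the paper's coupled flow-elasticity model.

```python
def coulomb_stress_change(d_tau, d_sigma_n, d_p, mu=0.6):
    """dCFS = d_tau - mu * (d_sigma_n - d_p); failure is promoted when > 0."""
    return d_tau - mu * (d_sigma_n - d_p)

# Pore pressure alone (no stress transfer) can push a patch toward failure:
print(coulomb_stress_change(d_tau=0.0, d_sigma_n=0.0, d_p=0.5))  # +0.3 MPa
```

In the pressure-constrained regime described above, dCFS is positive only inside the pressurized zone, so ruptures arrest at its edge; when the background stress state is close enough to failure, rupture can continue outside it as a runaway event.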
Incorporating natural hazard assessments into municipal master-plans; case-studies from Israel
NASA Astrophysics Data System (ADS)
Katz, Oded
2010-05-01
The active Dead Sea Rift (DSR) runs along the length of Israel, making the entire state susceptible to earthquake-related hazards. Current building codes generally acknowledge seismic hazards and direct engineers towards earthquake-resistant structures. However, hazard mapping on a scale fit for municipal/governmental planning is subject to local initiative and is currently not mandatory, as seems necessary. In the following, a few cases of seismic-hazard evaluation made by the Geological Survey of Israel are presented, emphasizing the reasons for their initiation and the way results were incorporated (or not). The first case is a qualitative seismic-hazard micro-zonation commissioned by the municipality of Jerusalem as part of a new master plan. This work resulted in maps (1:50,000; GIS format) identifying areas prone to (1) amplification of seismic shaking due to site characteristics (outcrops of soft rocks or steep topography) and (2) earthquake-induced landslide (EILS) hazard. Results were validated using reports from the 1927 M=6.2 earthquake that originated along the DSR about 30 km east of Jerusalem. Although the hazard maps were accepted by municipal authorities, practical use by geotechnical engineers working within the frame of the new master plan was not significant. The main reason is apparently a difference of opinion between the city engineers responsible for implementing the new master plan and the geologists responsible for the hazard evaluation. The second case involves evaluation of EILS hazard for two towns located further north along the DSR, Zefat and Tiberias. Both were heavily damaged more than once by strong earthquakes in past centuries. Work was carried out as part of a governmental seismic-hazard reduction program. The results include maps (1:10,000 scale) of sites with high EILS hazard identified within city limits. Maps (in GIS format) were sent to city engineers with reports explaining the methods and results. As far as we know, widespread implementation of the maps within municipal master plans never came about, and there was no open discussion between city engineers and the Geological Survey. The main reasons are apparently (1) a lack, until recently, of mandatory building codes requiring incorporation of EILS hazard; (2) budget priorities; and (3) failure to involve municipality personnel in planning and executing the EILS hazard evaluation. These cases demonstrate that for seismic-hazard data to be incorporated and implemented within municipal master plans there needs to be (1) active involvement of municipal officials and engineers from the early planning stages of the evaluation campaign, and (2) a priori dedication of funds towards implementation of the evaluation results.
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.
2017-12-01
Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
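Extracting multi-event statistics from a long synthetic catalog of this kind reduces to simple empirical counting. The sketch below estimates the conditional probability that a large event is followed by another within a fixed interval; the catalog arrays would be simulator output, and the random placeholders here are purely illustrative.

```python
import numpy as np

def conditional_prob(times, mags, m_thresh=7.0, dt_years=50.0):
    """Empirical P(another M>=m_thresh event within dt_years | one occurred)."""
    t = np.sort(times[mags >= m_thresh])
    if len(t) < 2:
        return 0.0
    return (np.diff(t) <= dt_years).mean()

rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(10.0, size=100_000))  # ~1-Myr toy catalog
mags = 5.0 + rng.exponential(0.5, size=100_000)         # GR-like magnitude tail
print(conditional_prob(times, mags))
```

Comparing such conditional probabilities between RSQSim and UCERF3, including their sensitivity to the rate-and-state parameters, is the kind of epistemic-uncertainty exercise the abstract describes.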
Induced seismicity provides insight into why earthquake ruptures stop
Galis, Martin; Ampuero, Jean Paul; Mai, P. Martin; Cappa, Frédéric
2017-01-01
Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures. PMID:29291250
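The scaling logic described above can be illustrated under the stated assumption that the largest self-arrested seismic moment grows as a power of injected volume, M0_max = gamma * dV^1.5 (the form proposed by Galis et al.); gamma is a site-dependent prefactor and the value used below is purely illustrative.

```python
import math

def mw_from_moment(m0_newton_meters):
    """Hanks-Kanamori moment magnitude from seismic moment in N*m."""
    return (math.log10(m0_newton_meters) - 9.05) / 1.5

gamma = 1.5e9                     # illustrative prefactor, not a fitted value
for dv in (1e3, 1e5, 1e7):        # injected volume [m^3]
    print(dv, round(mw_from_moment(gamma * dv**1.5), 2))
```

Each factor of 100 in injected volume adds two magnitude units under this scaling, which is why the relation can be checked against observed maximum magnitudes over a broad range of volumes.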
NASA Astrophysics Data System (ADS)
Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang
2017-05-01
In this study, an empirical transfer function (ETF), defined as the difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained by a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The baseline stochastic point-source synthetics can be treated as reference rock-site motions in order to isolate site effects. The parameters of the stochastic point-source approach related to source and path effects are collected from previous well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for the synthetic motions remains widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF are similar to those found in previous studies, which indicates that the baseline synthetic model is suitable for the reference rock conditions in the Taipei Basin. The dominant frequency contour corresponds to the shape of the bottom of the geological basement (the top of the Tertiary), which is the base of the Sungshan formation. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by an amplification contour at 0.5 Hz, while the high-amplification area shifts to the basin's edge at 2.0 Hz. Three target earthquakes with different source conditions relative to the ETF database, including shallow small-magnitude events, shallow and relatively large-magnitude events, and deep small-magnitude events, are tested to verify the site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with higher magnitudes, but is not suitable for deep earthquakes. Finally, one of the most significant shallow large-magnitude earthquakes, the 1999 Chi-Chi earthquake in Taiwan, is used for verification. A finite-fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process of the Chi-Chi earthquake, and the ETF-based site-correction function is applied to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The prediction shows good agreement in both the time and frequency domains, and the prediction level is the same as that of the site-corrected ground motion prediction equation.
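Constructing and applying an ETF of this kind can be sketched as the smoothed ratio of observed to synthetic Fourier amplitude spectra at a station, later multiplied onto new synthetics to impose the site response. The function names below are illustrative, and the inputs are assumed to be equal-length accelerograms.

```python
import numpy as np

def build_etf(observed, synthetic, smooth=9):
    """Smoothed observed/synthetic Fourier amplitude spectral ratio."""
    k = np.ones(smooth) / smooth
    amp = lambda x: np.convolve(np.abs(np.fft.rfft(x)), k, "same")
    return amp(observed) / amp(synthetic).clip(1e-12)

def apply_etf(synthetic, etf):
    """Site-correct a new synthetic by scaling its amplitude spectrum."""
    spec = np.fft.rfft(synthetic)
    return np.fft.irfft(spec * etf, len(synthetic))
```

Because the ETF is built from shallow small events, applying it to deep events extrapolates outside its calibration range, consistent with the verification results reported above.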
NASA Astrophysics Data System (ADS)
Bayraktar, Başak; Özer Sözdinler, Ceren; Necmioǧlu, Öcal; Meral Özel, Nurcan
2017-04-01
The Marmara Sea and its surroundings form one of the most populated areas in Turkey. Many densely populated cities, such as the megacity Istanbul with a population of more than 14 million, and a great number of industrial facilities of the largest capacity and potential, refineries, ports and harbors are located along the coasts of the Marmara Sea. The region is highly seismically active, and there has been a wide range of studies regarding its fault mechanisms, seismic activity, earthquakes and triggered tsunamis. Historical documents reveal that the region has experienced many earthquakes and tsunamis in the past; according to Altinok et al. (2011), 35 tsunami events occurred in the Marmara Sea between 330 BC and 1999. Since earthquakes are expected in the Marmara Sea from the future rupture of segments of the North Anatolian Fault (NAF), the possibility of earthquake-generated tsunamis at specific return periods should be investigated. This study aims to perform probabilistic tsunami hazard analysis for the Marmara Sea. For this purpose, the possible sources of tsunami scenarios are specified by compiling earthquake catalogues, historical records and scientific studies conducted in the region. After compiling all these data, a synthetic earthquake and tsunami catalogue is prepared using Monte Carlo simulations. For specific return periods, the possible epicenters, rupture lengths, widths and displacements are determined with Monte Carlo simulations, treating the angles of the fault segments as deterministic. For each earthquake of the synthetic catalogue, the tsunami wave heights will be calculated at specific locations along the Marmara Sea. As a further objective, this study will determine tsunami hazard curves for specific locations in the Marmara Sea, including the tsunami wave heights and their probability of exceedance. This work is supported by the SATREPS-MarDim Project (Earthquake and Tsunami Disaster Mitigation in the Marmara Region and Disaster Education in Turkey) and JICA (Japan International Cooperation Agency). The authors would like to acknowledge the project MARsite - New Directions in Seismic Hazard assessment through Focused Earth Observation in the Marmara Supersite (FP7-ENV.2012 6.4-2, Grant 308417 - see NH2.3/GMPV7.4/SM7.7). The authors also would like to acknowledge Prof. Dr. Mustafa Erdik and Prof. Dr. Sinan Akkar for their valuable feedback and guidance throughout this study.
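The Monte Carlo catalog generation step described above can be sketched as sampling magnitudes from a truncated Gutenberg-Richter law by inverse-CDF sampling and deriving rupture dimensions from empirical scaling. All parameter values below are illustrative assumptions, not the study's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_magnitudes(n, b=1.0, m_min=6.0, m_max=7.6):
    """Inverse-CDF sampling of a doubly truncated Gutenberg-Richter law."""
    u = rng.random(n)
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - c * u) / beta

def rupture_length_km(mw):
    """Wells and Coppersmith (1994)-style subsurface rupture-length scaling,
    log10(L) = -2.44 + 0.59*Mw (treated here as an illustrative choice)."""
    return 10.0 ** (-2.44 + 0.59 * mw)

mags = sample_magnitudes(100_000)
lengths = rupture_length_km(mags)
```

Epicenters along the NAF segments and rupture widths/displacements would be drawn analogously, and each sampled rupture then feeds a tsunami simulation to build the hazard curves.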
2008 United States National Seismic Hazard Maps
Petersen, M.D.
2008-01-01
The U.S. Geological Survey recently updated the National Seismic Hazard Maps by incorporating new seismic, geologic, and geodetic information on earthquake rates and associated ground shaking. The 2008 versions supersede those released in 1996 and 2002. These maps are the basis for seismic design provisions of building codes, insurance rate structures, earthquake loss studies, retrofit priorities, and land-use planning. Their use in design of buildings, bridges, highways, and critical infrastructure allows structures to better withstand earthquake shaking, saving lives and reducing disruption to critical activities following a damaging event. The maps also help engineers avoid costs from over-design for unlikely levels of ground motion.
Studies of Fault Interactions and Regional Seismicity Using Numerical Simulations
NASA Astrophysics Data System (ADS)
Yikilmaz, Mehmet Burak
Numerical simulations are routinely used for weather and climate forecasting. It is desirable to simulate regional seismicity for seismic hazard analysis. One such simulation tool is the Virtual California earthquake simulator. We have used Virtual California (VC) to study various aspects of fault interaction and analyzed the statistics of synthetically generated earthquake recurrence times and magnitudes. The first chapter of this dissertation investigates the behavior of seismicity simulations using three relatively simple models involving a straight strike-slip fault. We show that a series of historical earthquakes observed along the Nankai Trough in Japan exhibits patterns similar to those obtained in our model II. In the second chapter we utilize Virtual California to study regional seismicity in northern California. We generate synthetic catalogs of seismicity using a composite simulation. We use these catalogs to analyze frequency-magnitude and recurrence-interval statistics at both regional and fault-specific levels and compare our modeled rates of seismicity and spatial variability with observations. The final chapter explores the jump distance for a propagating rupture over a stepping strike-slip fault. Our study indicates that for separation distances between 2.5 and 5.5 km, the percentage of events that jump from one fault to the next decreases significantly. We find that these step-over distance values are in good agreement with geologically observed values.
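Two standard statistics used for such catalog comparisons are the Gutenberg-Richter b-value and inter-event time variability; the sketch below implements the Aki (1965) maximum-likelihood b-value estimator and a recurrence-interval summary, assuming only plain arrays of magnitudes and event times (not the Virtual California data structures):

```python
import numpy as np

def aki_b_value(mags, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965) for a
    catalog above completeness magnitude m_c."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

def recurrence_stats(event_times):
    """Mean and coefficient of variation of inter-event times, a basic
    check of periodic vs. clustered recurrence in a synthetic catalog."""
    dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return dt.mean(), dt.std() / dt.mean()
```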
New Zealand’s deadliest quake sounds alarm for cities on fault lines
Kalkan, Erol
2012-01-01
The catastrophic Christchurch earthquake is a strong reminder to engineers and scientists of the hazards posed by fault lines, both mapped and unknown, near major cities. In February 2011, the relatively moderate earthquake that struck the cities of Christchurch and Lyttelton in the Canterbury region of New Zealand's South Island surprised many with its destructive power. The magnitude 6.2 temblor killed 181 people, 118 of whom were killed in the collapse of a single building in the city center. The quake damaged or destroyed more than 100,000 buildings. It was the deadliest quake to strike the nation in 80 years, since the 1931 earthquake that struck the Napier and Hastings area of the North Island. The Christchurch quake was part of the aftershock sequence following the September 2010 magnitude 7.1 earthquake near Darfield, 40 kilometers west of the city. The Darfield earthquake was in a sparsely populated area and caused no loss of life. By contrast, the Christchurch earthquake was generated on a fault in close proximity to the city.
"Did you feel it?" Intensity data: A surprisingly good measure of earthquake ground motion
Atkinson, G.M.; Wald, D.J.
2007-01-01
The U.S. Geological Survey is tapping a vast new source of engineering seismology data through its "Did You Feel It?" (DYFI) program, which collects online citizen responses to earthquakes. To date, more than 750,000 responses have been compiled in the United States alone. The DYFI data make up in quantity what they may lack in scientific quality and offer the potential to resolve longstanding issues in earthquake ground-motion science. Such issues have been difficult to address due to the paucity of instrumental ground-motion data in regions of low seismicity. In particular, DYFI data provide strong evidence that earthquake stress drops, which control the strength of high-frequency ground shaking, are higher in the central and eastern United States (CEUS) than in California. Higher earthquake stress drops, coupled with lower attenuation of shaking with distance, result in stronger overall shaking over a wider area and thus more potential damage for CEUS earthquakes in comparison to those of equal magnitude in California - a fact also definitively captured with these new DYFI data and maps.
NASA Astrophysics Data System (ADS)
Allison, K. L.; Dunham, E. M.
2017-12-01
We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence intervals, nucleation depths, and down-dip coseismic slip limits. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation. We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the effective viscosity. The resulting temperature changes may reduce the width of the shear zone in the lower crust and upper mantle, and reduce the effective viscosity.
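For reference, the effective viscosity of a power-law material can be written as eta_eff = sigma / (2*strain_rate) with strain_rate = A * sigma^n * exp(-Q/(R*T)); the helper below evaluates this under assumed flow-law parameters (A, n, and Q must come from the experimental literature, e.g., for quartz-diorite or olivine; none of these values are given in the abstract):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def effective_viscosity(sigma_pa, temp_k, A, n, Q):
    """Effective viscosity eta = sigma / (2 * strain_rate) for the
    power-law rheology strain_rate = A * sigma^n * exp(-Q / (R*T)).
    A [Pa^-n s^-1], n [-], and Q [J/mol] are flow-law parameters to be
    taken from published experiments (illustrative placeholders here)."""
    strain_rate = A * sigma_pa**n * np.exp(-Q / (R * temp_k))
    return sigma_pa / (2.0 * strain_rate)
```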
Borcherdt, R.D.; Glassmoyer, Gary; Cranswick, Edward
1989-01-01
The earthquakes of December 7, 1988, near Spitak, Armenian SSR, serve as another grim reminder of the serious hazard that earthquakes pose throughout the world. We extend our heartfelt sympathies to the families of the earthquake victims and intend that our cooperative scientific endeavours will help reduce losses in future earthquakes. Only through a better understanding of earthquake hazards can earthquake losses be reduced for all peoples in seismically active regions of the world. The tragic consequences of these earthquakes remind scientists and public officials alike of their urgent responsibilities to understand and mitigate the effects of earthquakes. On behalf of the U.S. Geological Survey, I would like to express appreciation to our Soviet colleagues for their kind invitation to participate in joint scientific and engineering studies. Without their cooperation and generous assistance, the conduct of these studies would not have been possible. This report provides seismologic and geologic data collected during the period December 21, 1988, through February 2, 1989. These data are presented in their entirety to expedite analysis of the data set for inferences regarding hazard mitigation actions, applicable not only in Armenia but also in other regions of the world exposed to high seismic risk.
ERIC Educational Resources Information Center
Selin, Helaine
1993-01-01
Describes scientific and technical accomplishments of the Chinese in developing earthquake detection procedures, paper making, and medicine and of Islamic people in developing astronomy and mechanical engineering. (PR)
NASA Astrophysics Data System (ADS)
Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes
2010-05-01
Earthquakes are among the most fearsome events of nature due to their unexpected occurrence, against which no spiritual means offer protection. The only way of preserving life and property is applying earthquake-resistant construction methods. Ancient Greek architects of public buildings applied steel clamps embedded in lead casing to hold together columns and masonry walls during frequent earthquakes in the Aegean region. Elastic steel provided strength, while plastic lead casing absorbed minor shifts of blocks without fracturing the rigid stone. Romans invented concrete and built buildings of all sizes as single, inflexible units. Masonry surrounding and decorating the concrete core of a wall did not bear load. Concrete resisted minor shaking, yielding only to forces higher than its fracture limits. Roman building traditions survived the Dark Ages, and 12th-century Crusader castles erected in earthquake-prone Syria survive until today in reasonably good condition. Concrete and steel clamping persisted side by side in the Roman Empire. Concrete was used for cheap construction as compared with masonry. Applying lead-encased steel increased costs and was avoided whenever possible. Columns of the various forums in Italian Pompeii mostly lack steel fittings despite being situated in a well-known earthquake-prone area. Whether the frequent recurrence of earthquakes in the Naples region was known to the inhabitants of Pompeii might be a matter of debate. Seemingly the shock of the AD 62 earthquake was not enough to prompt the application of well-known protective engineering methods throughout the reconstruction of the city before the AD 79 volcanic catastrophe. An independent engineering tradition developed on the island of Java (Indonesia). The mortar-less construction technique of 8th-9th century Hindu masonry shrines around Yogyakarta would allow scattering of blocks during earthquakes. To prevent dilapidation, an intricate mortise-and-tenon system was carved into adjacent faces of blocks. Only the outermost layer was treated this way; the core of the shrines was made of simple rectangular blocks. The system resisted both in-plane and out-of-plane shaking quite well, as proven by the survival of many shrines for more than a millennium, and by fracturing of blocks instead of displacement during the 2006 Yogyakarta earthquake. Systematic use or disuse of known earthquake-resistant techniques in any one society depends on the perception of earthquake risk and on available financial resources. Earthquake-resistant construction practice is significantly more expensive than regular construction. Perception is influenced mostly by short individual and longer social memory. If earthquake recurrence time is longer than the preservation of social memory, and damaging quakes fade into the past, societies commit the same construction mistakes again and again. The length of this memory is possibly about a generation's lifetime. Events occurring less frequently than every 25-30 years can be readily forgotten, and the risk of recurrence considered negligible, not worth the costs of safe construction practices (an example being recurring flash floods in Hungary). Frequent earthquakes maintain safe construction practices, like the Java masonry technique throughout at least two centuries, and like the Fachwerk tradition on modern Aegean Samos throughout 500 years of political and technological development. (OTKA K67583)
NASA Astrophysics Data System (ADS)
Jiao, L.; Chan, C. H.; Tapponnier, P.
2017-12-01
The role of seamounts in generating earthquakes has been debated: some studies suggest that seamounts could be truncated to generate megathrust events, while others indicate that subducting seamounts could segment the megathrust and so reduce the maximum size of megathrust earthquakes. The debate is highly relevant for the seamounts discovered along the Mentawai patch of the Sunda Trench, where previous studies have suggested that a megathrust earthquake will likely occur within decades. In order to model the dynamic behavior of the Mentawai patch, we simulated forearc faulting caused by seamount subduction using the Discrete Element Method. Our models show that rupture behavior in the subduction system is dominated by the stiffness of the overriding plate. When stiffness is low, a seamount can act as a barrier to rupture propagation, resulting in several smaller (M ≤ 8.0) events. If, however, stiffness is high, a seamount can cause a megathrust earthquake (M8 class). In addition, we show that a splay fault in the subduction environment could only develop when a seamount is present, and a larger offset along a splay fault is expected when the stiffness of the overriding plate is higher. Our dynamic models are not only consistent with previous findings from seismic profiles and earthquake activity, but also better constrain the rupture behavior of the Mentawai patch, thus contributing to subsequent seismic hazard assessment.
Graves, R.W.; Aagaard, Brad T.; Hudnut, K.W.; Star, L.M.; Stewart, J.P.; Jordan, T.H.
2008-01-01
Using the high-performance computing resources of the Southern California Earthquake Center, we simulate broadband (0-10 Hz) ground motions for three Mw 7.8 rupture scenarios of the southern San Andreas fault. The scenarios incorporate a kinematic rupture description with the average rupture speed along the large slip portions of the fault set at 0.96, 0.89, and 0.84 times the local shear wave velocity. Consistent with previous simulations, a southern hypocenter efficiently channels energy into the Los Angeles region along the string of basins south of the San Gabriel Mountains. However, we find the basin ground motion levels are quite sensitive to the prescribed rupture speed, with peak ground velocities at some sites varying by over a factor of two for variations in average rupture speed of about 15%. These results have important implications for estimating seismic hazards in Southern California and emphasize the need for improved understanding of earthquake rupture processes. Copyright 2008 by the American Geophysical Union.
Science & Technology Review September 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radousky, H B
2006-07-18
This month's issue has the following articles: (1) Simulations Help Plan for Large Earthquakes--Commentary by Jane C. S. Long; (2) Re-creating the 1906 San Francisco Earthquake--Supercomputer simulations of Bay Area earthquakes are providing insight into the great 1906 quake and future temblors along several faults; (3) Decoding the Origin of a Bioagent--The microstructure of a bacterial organism can be linked to the methods used to formulate the pathogen; (4) A New Look at How Aging Bones Fracture--Livermore scientists find that the increased risk of fracture from osteoporosis may be due to a change in the physical structure of trabecular bone; and (5) Fusion Targets on the Double--Advances in precision manufacturing allow the production of double-shell fusion targets with submicrometer tolerances.
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.
2014-12-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. More than one thousand near-field strong-motion stations across Japan (K-NET and KiK-net) revealed complex ground motion patterns attributed to source effects, capturing detailed information on the rupture process. The seismic stations surrounding the Miyagi region (e.g., MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong motion data performed by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots at fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting in large earthquakes the temperature rises high enough to cause melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden sequential stress drops. Our model starts like an M7-8 earthquake that barely breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a mega-thrust event. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of uplift close to the trench, which may be the cause of the devastating tsunami that followed the Tohoku earthquake. To investigate the impact of such a large uplift, we ran tsunami simulations with the slip reactivation model using sam(oa)2 (Meister et al., 2012), a state-of-the-art finite-volume framework for simulating the resulting tsunami waves.
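The two-stage weakening ingredient can be sketched as a piecewise-linear strength-versus-slip curve; the function below is a minimal illustration of that idea under assumed parameter names (tau_s, tau_d1, tau_d2, dc1, dc2, and d_reactivate are hypothetical labels, not SPECFEM3D inputs):

```python
def two_stage_slip_weakening(slip, tau_s, tau_d1, tau_d2, dc1, d_reactivate, dc2):
    """Fault strength vs. slip: a linear slip-weakening drop from the
    static level tau_s to tau_d1 over slip distance dc1, a plateau,
    then a second sudden weakening stage from tau_d1 to tau_d2 once
    slip exceeds d_reactivate -- a sketch of the 'two sequential
    stress drops' idea described in the abstract."""
    if slip < dc1:
        return tau_s - (tau_s - tau_d1) * slip / dc1
    if slip < d_reactivate:
        return tau_d1
    if slip < d_reactivate + dc2:
        return tau_d1 - (tau_d1 - tau_d2) * (slip - d_reactivate) / dc2
    return tau_d2
```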
NASA Astrophysics Data System (ADS)
Harada, Tomoya; Satake, Kenji; Furumura, Takashi
2017-04-01
We carried out tsunami numerical simulations in the western Pacific Ocean and East China Sea in order to examine the behavior of massive tsunamis outside Japan arising from the hypothetical M 9 tsunami source models along the Nankai Trough proposed by the Cabinet Office of the Japanese government (2012). The distributions of MTHs (maximum tsunami heights for 24 h after the earthquakes) on the east coast of China, the east coast of the Philippine Islands, and the north coast of New Guinea Island show peaks of approximately 1.0-1.7 m, 4.0-7.0 m, and 4.0-5.0 m, respectively. They are significantly higher than those from the 1707 Ho'ei earthquake (M 8.7), the largest earthquake along the Nankai trough in recent Japanese history. Moreover, the MTH distributions vary with the location of the large slip(s) in the tsunami source models even though the three coasts are far from the Nankai trough. Large slip(s) in the Nankai segment mainly contribute to the MTHs, while large slip(s) or splay faulting in the Tokai segment hardly affect the MTHs. The tsunami source models were developed in response to the unexpected occurrence of the 2011 Tohoku Earthquake, with 11 models along the Nankai trough, and the simulated MTHs along the Pacific coasts of western Japan from these models exceed 10 m, with a maximum height of 34.4 m. Tsunami propagation was computed by the finite-difference method for the non-linear long-wave equations with the Coriolis force and bottom friction (Satake, 1995) in the area 115-155°E and 8°S-40°N. Because the water depth of the East China Sea is shallower than 200 m, tsunami propagation there is likely to be affected by ocean bottom friction. The 30 arc-second gridded bathymetry data provided by the General Bathymetric Chart of the Oceans (GEBCO-2014) are used. To capture the long propagation of the tsunamis, we simulated them for 24 hours after the earthquakes. This study was supported by the "New disaster mitigation research project on Mega thrust earthquakes around Nankai/Ryukyu subduction zones", a project of Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT).
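A toy 1D analogue of the finite-difference long-wave scheme (staggered grid, leapfrog-style updates, reflective ends) is shown below; it omits the Coriolis and bottom-friction terms of the actual 2D model and is a sketch of the scheme class, not the Satake (1995) code:

```python
import numpy as np

def shallow_water_1d(h, eta0, dx, dt, n_steps, g=9.81):
    """Staggered-grid finite-difference solution of the 1D *linear*
    long-wave equations: eta at cell centers, volume flux q at edges.
    Stability requires dt < dx / sqrt(g * h.max()) (CFL condition)."""
    eta = eta0.copy()
    q = np.zeros(len(eta) + 1)             # flux at cell edges; ends stay 0
    h_edge = np.empty_like(q)
    h_edge[1:-1] = 0.5 * (h[:-1] + h[1:])  # depth interpolated to edges
    h_edge[0], h_edge[-1] = h[0], h[-1]
    for _ in range(n_steps):
        # momentum: dq/dt = -g * h * d(eta)/dx
        q[1:-1] -= g * h_edge[1:-1] * dt * (eta[1:] - eta[:-1]) / dx
        # continuity: d(eta)/dt = -dq/dx
        eta -= dt * (q[1:] - q[:-1]) / dx
    return eta
```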
NASA Astrophysics Data System (ADS)
Blank, D. G.; Morgan, J.
2017-12-01
Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrates both their complexity and our limited understanding of fault processes and their controls. Numerical modeling provides a useful tool to simulate earthquakes and related slip events, and to make direct observations and correlations among the properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and of what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn induces slip along the fault. A wide range of slip behaviors is observed, ranging from creep to stick-slip. We characterize slip events by duration, stress drop, rupture area, and slip magnitude, and correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events, both spatially and temporally, for comparison with slip processes on natural faults.
NASA Astrophysics Data System (ADS)
Shen, W. H.; Luo, Y.; Jiao, Q. S.
2018-04-01
On August 8, 2017, an earthquake of M 7.0 occurred at Jiuzhaigou. Based on Sentinel-1 satellite InSAR data, we obtained the coseismic deformation field and inverted for the source slip model. Results show that this event was dominated by strike slip, and the total released seismic moment is 8.06 × 10^18 N m, equivalent to an earthquake of Mw 6.57. We calculated static stress changes along the strike and dip directions, and the static stress analysis shows that the average stress drop is at a low level, which may be responsible for the low level of ground motion during the Jiuzhaigou earthquake. The coseismic Coulomb stress changes were calculated based on the inverted slip model, revealing that 82.59% of aftershocks are located in the Coulomb stress increasing area and that 78.42% of all aftershocks may have been triggered by the mainshock, indicating that the mainshock had a significant triggering effect on the subsequent aftershocks. Based on a stochastic finite-fault model, we simulated regional peak ground acceleration (PGA), peak ground velocity (PGV), and intensity, and the results capture the basic features of the ground motion patterns. Moreover, the simulated results reflect a clear rupture directivity effect.
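As a quick consistency check on the quoted values, the Hanks and Kanamori (1979) relation converts the inverted moment to moment magnitude:

```python
import math

def moment_magnitude(m0_newton_m):
    """Hanks & Kanamori (1979): Mw = (2/3) * log10(M0 [dyn cm]) - 10.7,
    with 1 N m = 1e7 dyn cm."""
    return (2.0 / 3.0) * math.log10(m0_newton_m * 1e7) - 10.7

print(round(moment_magnitude(8.06e18), 2))  # -> 6.57, matching the abstract
```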
High Attenuation Rate for Shallow, Small Earthquakes in Japan
NASA Astrophysics Data System (ADS)
Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe
2017-09-01
We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium due to the dominance of surface waves.
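To illustrate the geometric-spreading contrast the authors invoke, the sketch below combines r^(-1) (body-wave-like) or r^(-0.5) (surface-wave-like) spreading with anelastic attenuation exp(-pi*f*r/(Q*beta)); the Q, beta, and frequency values are generic crustal assumptions, not the study's parameters:

```python
import numpy as np

def amplitude_decay(r_km, f_hz, q=150.0, beta_kms=3.5, geom_exp=1.0):
    """Relative amplitude vs. hypocentral distance: geometric spreading
    r**(-geom_exp) times anelastic attenuation exp(-pi*f*r/(Q*beta)).
    geom_exp ~ 1 mimics body-wave decay, ~ 0.5 surface-wave decay."""
    r = np.asarray(r_km, dtype=float)
    return r**(-geom_exp) * np.exp(-np.pi * f_hz * r / (q * beta_kms))

r = np.array([10.0, 50.0, 100.0, 200.0])
body = amplitude_decay(r, f_hz=1.0, geom_exp=1.0)     # small-event-like decay
surface = amplitude_decay(r, f_hz=1.0, geom_exp=0.5)  # surface-wave dominated
print(body / body[0], surface / surface[0])           # surface decays more slowly
```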
Topographical and geological amplification: case studies and engineering implications
Celebi, M.
1991-01-01
Topographical and geological amplification that occurred during past earthquakes is quantified using spectral ratios of recorded motions. Several cases are presented from the 1985 Chilean and Mexican earthquakes as well as the 1983 Coalinga (California) and 1987 Superstition Hills (California) earthquakes. The strong motions recorded in Mexico City during the 1985 Michoacan earthquake are supplemented by ambient motions recorded within Mexico City to quantify the now well-known resonant frequencies of the Mexico City lakebed. Topographical amplification in Canal Beagle (Chile), Coalinga, and Superstition Hills (California) is quantified using ratios derived from the aftershocks following the earthquakes. A special dense array was deployed to record the aftershocks in each case. The implications of both geological and topographical amplification are discussed in light of current code provisions. The observed geological amplification has already influenced code provisions. Suggestions are made to the effect that the codes should include further provisions to take amplification due to topography into account. © 1991.
Haeussler, Peter J.; Schwartz, D.P.; Dawson, T.E.; Stenner, Heidi D.; Lienkaemper, J.J.; Cinti, F.; Montone, Paola; Sherrod, B.; Craw, P.
2004-01-01
On 3 November 2002, an M7.9 earthquake produced 340 km of surface rupture on the Denali and two related faults in Alaska. The rupture proceeded from west to east and began with a 40-km-long break on a previously unknown thrust fault. Estimates of surface slip on this thrust are 3-6 m. Next came the principal surface break along ~218 km of the Denali fault. Right-lateral offsets averaged around 5 m and increased eastward to a maximum of nearly 9 m. The fault also ruptured beneath the trans-Alaska oil pipeline, which withstood almost 6 m of lateral offset. Finally, slip turned southeastward onto the Totschunda fault. Right-lateral offsets are up to 3 m, and the surface rupture is about 76 km long. This three-part rupture ranks among the longest strike-slip events of the past two centuries. The earthquake is typical when compared to other large earthquakes on major intracontinental strike-slip faults. © 2004, Earthquake Engineering Research Institute.
Why is Probabilistic Seismic Hazard Analysis (PSHA) still used?
NASA Astrophysics Data System (ADS)
Mulargia, Francesco; Stark, Philip B.; Geller, Robert J.
2017-03-01
Even though it has never been validated by objective testing, Probabilistic Seismic Hazard Analysis (PSHA) has been widely used for almost 50 years by governments and industry in applications with lives and property hanging in the balance, such as deciding safety criteria for nuclear power plants, making official national hazard maps, developing building code requirements, and determining earthquake insurance rates. PSHA rests on assumptions now known to conflict with earthquake physics; many damaging earthquakes, including the 1988 Spitak, Armenia, event and the 2011 Tohoku, Japan, event, have occurred in regions rated relatively low-risk by PSHA hazard maps. No extant method, including PSHA, produces reliable estimates of seismic hazard. Earthquake hazard mitigation should be recognized to be inherently political, involving a tradeoff between uncertain costs and uncertain risks. Earthquake scientists, engineers, and risk managers can make important contributions to the hard problem of allocating limited resources wisely, but government officials and stakeholders must take responsibility for the risks of accidents due to natural events that exceed the adopted safety criteria.
Saving lives through better design standards
Çelebi, Mehmet; Spudich, Paul A.; Page, Robert A.; Stauffer, Peter H.
1995-01-01
Over the past 30 years, scientists have put together a more complete picture of how the ground shakes during earthquakes. They have learned that shaking near the source of earthquakes is far more severe than once thought and that soft ground shakes more strongly than hard rock. This knowledge has enabled engineers to improve design standards so that structures are better able to survive strong earthquakes. When the 1989 Loma Prieta earthquake struck, 42 people tragically lost their lives in the collapse of a half-mile-long section of the Cypress structure, an elevated double-decker freeway in Oakland, California. Yet adjacent parts of this structure withstood the magnitude 6.9 temblor. Why? The part that collapsed was built on man-made fill over soft mud, whereas adjacent sections stood on older, firmer sand and gravel deposits. Following the collapse, scientists set out instruments in the area to record the earthquake's many strong aftershocks. These instruments showed that the softer ground shook more forcefully than the firmer material, even twice as violently.
NASA Astrophysics Data System (ADS)
Fang, Yi; Huang, Yahong
2017-12-01
Estimating sand liquefaction potential according to design codes is an important part of geotechnical design. However, the result sometimes fails to match the damage observed in actual earthquakes. Based on the damage from the Tangshan earthquake and engineering geological conditions, three typical sites were chosen. The sand liquefaction probability was evaluated at the three sites using the method in the Code for Seismic Design of Buildings, and the results were compared with the sand liquefaction actually observed in the earthquake. The results show that the difference between code-based liquefaction estimates and the observed earthquake damage is mainly attributable to two aspects. The primary reasons include the disparity between the seismic fortification intensity and the actual seismic shaking, changes in the groundwater level, the thickness of the overlying non-liquefied soil layer, local site effects, and human error. Meanwhile, although the judgment methods in the codes are broadly applicable, the limitations of the underlying data and the qualitative nature of the judgment formulas are a further source of the above discrepancy.
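For comparison, a widely used alternative to code-table screening is the Seed-Idriss simplified procedure; the sketch below computes the cyclic stress ratio and a liquefaction factor of safety (this is the standard textbook method, named plainly, not the Chinese code formula discussed above):

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified cyclic stress ratio:
    CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d, with the
    Liao-Whitman depth-reduction approximation for r_d."""
    if depth_m <= 9.15:
        r_d = 1.0 - 0.00765 * depth_m
    else:
        r_d = 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def factor_of_safety(crr, csr):
    """FS = CRR / CSR; FS < 1 flags likely liquefaction for the design
    shaking level (CRR comes from SPT/CPT resistance correlations)."""
    return crr / csr
```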
Ji, C.; Helmberger, D.V.; Wald, D.J.
2004-01-01
Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, refined in successive phases, progressively improve the match to waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further refined (Phase III) by using calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.
A Tsunami Model for Chile for (Re) Insurance Purposes
NASA Astrophysics Data System (ADS)
Arango, Cristina; Rara, Vaclav; Puncochar, Petr; Trendafiloski, Goran; Ewing, Chris; Podlaha, Adam; Vatvani, Deepak; van Ormondt, Maarten; Chandler, Adrian
2014-05-01
Catastrophe models help (re)insurers to understand the financial implications of catastrophic events such as earthquakes and tsunamis. In earthquake-prone regions such as Chile, (re)insurers need more sophisticated tools to quantify the risks facing their businesses, including models with the ability to estimate secondary losses. The 2010 (M8.8) Maule (Chile) earthquake highlighted the need for quantifying losses from secondary perils such as tsunamis, which can contribute to the overall event losses but are not often modelled. This paper presents some key modelling aspects of a new earthquake catastrophe model for Chile developed by Impact Forecasting in collaboration with Aon Benfield Research partners, focusing on the tsunami component. The model has the capability to treat tsunami as a secondary peril: losses due to earthquake ground shaking and induced tsunamis along the Chilean coast are quantified both probabilistically and for historical scenarios. The model is implemented in the Impact Forecasting catastrophe modelling platform, ELEMENTS. The probabilistic modelling of earthquake-induced tsunamis uses a stochastic event set that is consistent with the seismic (ground shaking) hazard developed for Chile, representing simulations of earthquake occurrence patterns for the region. Criteria for selecting tsunamigenic events from the stochastic event set are proposed which take into consideration earthquake location, depth, and the resulting seabed vertical displacement and tsunami inundation depths at the coast. The source modelling software RuptGen by Babeyko (2007) was used to calculate the static seabed vertical displacement resulting from earthquake slip. More than 3,600 events were selected for tsunami simulations. Deep and shallow water wave propagation is modelled using the Delft3D modelling suite, state-of-the-art software developed by Deltares. The Delft3D-FLOW module is used in 2-dimensional hydrodynamic simulation settings with non-steady flow. Earthquake-induced static seabed vertical displacement is used as an input boundary condition to the model. The model is hierarchically set up with three nested domain levels, with 250 domains in total covering the entire Chilean coast. Spatial grid-cell resolution is equal to the native SRTM resolution of approximately 90 m. In addition to the stochastic events, the 1960 (M9.5) Valdivia and 2010 (M8.8) Maule earthquakes are modelled. The modelled tsunami inundation map for the 2010 Maule event is validated through comparison with real observations. The vulnerability component consists of an extensive damage-curve database, including curves for buildings, contents, and business interruption for 21 occupancies, 24 structural types, and two secondary modifiers, building height and period of construction. The building damage curves are developed using a load-based method in which the building's capacity to resist tsunami loads is treated as equivalent to its design earthquake load capacity. The contents damage and business interruption curves are developed using a deductive approach, i.e., the HAZUS flood vulnerability and business function restoration models are adapted for detailed occupancies and then assigned to the dominant structural types in Chile. The vulnerability component is validated through overall back-testing of the model against observed aggregated earthquake and tsunami losses for client portfolios from the 2010 Maule earthquake.
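The core vulnerability step such models apply can be sketched as interpolation on a depth-damage curve followed by a loss calculation; the curve values below are hypothetical placeholders, not the Impact Forecasting curves:

```python
import numpy as np

# Hypothetical tsunami depth-damage curve: inundation depth [m] ->
# mean damage ratio. Real curves are occupancy- and structure-specific.
DEPTHS_M = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])
MDR = np.array([0.00, 0.05, 0.15, 0.40, 0.75, 0.95])

def tsunami_loss(inundation_depth_m, replacement_value):
    """Ground-up loss = replacement value x interpolated mean damage
    ratio -- the basic vulnerability step of a catastrophe model."""
    mdr = np.interp(inundation_depth_m, DEPTHS_M, MDR)
    return replacement_value * mdr

print(tsunami_loss(1.5, 1_000_000.0))  # depth of 1.5 m on a $1M building
```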
Lee, William H K.
2016-01-01
Rotational seismology is an emerging study of all aspects of rotational motions induced by earthquakes, explosions, and ambient vibrations. It is of interest to several disciplines, including seismology, earthquake engineering, geodesy, and earth-based detection of Einstein's gravitational waves. Rotational effects of seismic waves, together with rotations caused by soil-structure interaction, have been observed for centuries (e.g., rotated chimneys, monuments, and tombstones). Figure 1a shows the rotated monument to George Inglis observed after the 1897 Great Shillong earthquake. This monument had the form of an obelisk rising over 19 metres high from a 4 metre base. During the earthquake, the top part broke off and the remnant of some 6 metres rotated about 15° relative to the base. The study of rotational seismology began only recently, when sensitive rotational sensors became available thanks to advances in aeronautical and astronomical instrumentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinozuka, M.; Rose, A.; Eguchi, R.T.
1998-12-31
This monograph examines the potential effects of a repeat of the New Madrid earthquake on the metropolitan Memphis area. The authors developed a case study of the impact of such an event on the electric power system, and analyzed how this disruption would affect society. In nine chapters and 189 pages, the book traces the impacts of catastrophic earthquakes through a curtailment of utility lifeline services to the host regional economy and beyond. The monograph's chapters include: modeling the Memphis economy; seismic performance of electric power systems; spatial analysis techniques for linking physical damage to economic functions; earthquake vulnerability and emergency preparedness among businesses; direct economic impacts; regional economic impacts; socioeconomic and interregional impacts; lifeline risk reduction; and public policy formulation and implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savich, A. I., E-mail: office@geodyn.ru; Burdina, N. A., E-mail: nina-burdina@mail.ru
Analysis of published data on the fundamental parameters of actual accelerograms of strong earthquakes, namely peak ground acceleration A_max, predominant period T_pr, and duration τ_0.5 at the 0.5 A_max level, determined that, for earthquakes of intensity greater than 6.5-7.0, the relationship between these quantities is sufficiently well described by the parameters B = A·T·τ and C = A·τ·T^(-1.338), the former of which depends little on earthquake intensity I and is almost completely determined by the earthquake magnitude, while the latter, on the contrary, depends weakly on magnitude and is determined principally by the quantity I. Methods are proposed for using the parameters B and C to improve the reliability of determining the parameters of accelerograms used to calculate the seismic resistance of hydraulic engineering facilities.
Towards to Resilience Science -Research on the Nankai trough seismogenic zone-
NASA Astrophysics Data System (ADS)
Kaneda, Yoshiyuki; Shiraki, Wataru; Fujisawa, Kazuhito; Tokozakura, Eiji
2017-04-01
For the last few decades, many destructive earthquakes and tsunamis have occurred around the world. Based on lessons learnt from the 2004 Sumatra earthquake/tsunami, the 2010 Chilean earthquake/tsunami, and the 2011 East Japan earthquake/tsunami, we recognized the importance of real-time monitoring of earthquakes and tsunamis for disaster mitigation. More recently, the Kumamoto earthquake occurred in 2016. This destructive earthquake showed that multiple strong motions, including the foreshock and the main shock, can severely damage buildings. Furthermore, we recognize that recovery and revival are very important and difficult: in the Tohoku area damaged by large tsunamis, recovery has been in progress for more than 5 years since the 2011 Tohoku earthquake. Therefore, we have to prepare plans before the next destructive disaster, such as a Nankai trough megathrust earthquake. As a disaster countermeasure, we would like to propose Disaster Mitigation Science, encompassing engineering, science, medicine, and social sciences such as sociology, informatics, law, literature, art, and psychology. For urgent evacuations, there are several kinds of real-time monitoring systems, such as DART buoys and ocean-floor networks. In particular, real-time monitoring using multiple kinds of sensors, such as accelerometers, broadband seismometers, pressure gauges, differential pressure gauges, hydrophones, and thermometers, is indispensable for earthquake/tsunami monitoring. Furthermore, using multiple kinds of sensors, we can analyze and estimate broadband crustal activities around megathrust earthquake seismogenic zones. Therefore, we deployed DONET1 and DONET2, which are dense ocean-floor networks around the Nankai trough, southwestern Japan. We will explain Resilience Science and real-time monitoring systems around the Nankai trough seismogenic zone.
A Virtual Tour of the 1868 Hayward Earthquake in Google Earth™
NASA Astrophysics Data System (ADS)
Lackey, H. G.; Blair, J. L.; Boatwright, J.; Brocher, T.
2007-12-01
The 1868 Hayward earthquake has been overshadowed by the subsequent 1906 San Francisco earthquake that destroyed much of San Francisco. Nonetheless, a modern recurrence of the 1868 earthquake would cause widespread damage to the densely populated Bay Area, particularly in the east Bay communities that have grown up virtually on top of the Hayward fault. Our concern is heightened by paleoseismic studies suggesting that the recurrence interval for the past five earthquakes on the southern Hayward fault is 140 to 170 years. Our objective is to build an educational web site that illustrates the cause and effect of the 1868 earthquake drawing on scientific and historic information. We will use Google Earth™ software to visually illustrate complex scientific concepts in a way that is understandable to a non-scientific audience. This web site will lead the viewer from a regional summary of the plate tectonics and faulting system of western North America, to more specific information about the 1868 Hayward earthquake itself. Text and Google Earth™ layers will include modeled shaking of the earthquake, relocations of historic photographs, reconstruction of damaged buildings as 3-D models, and additional scientific data that may come from the many scientific studies conducted for the 140th anniversary of the event. Earthquake engineering concerns will be stressed, including population density, vulnerable infrastructure, and lifelines. We will also present detailed maps of the Hayward fault, measurements of fault creep, and geologic evidence of its recurrence. Understanding the science behind earthquake hazards is an important step in preparing for the next significant earthquake. We hope to communicate to the public and students of all ages, through visualizations, not only the cause and effect of the 1868 earthquake, but also modern seismic hazards of the San Francisco Bay region.
Knowledge base about earthquakes as a tool to minimize strong events consequences
NASA Astrophysics Data System (ADS)
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Alexander; Kijko, Andrzej
2017-04-01
The paper describes the structure and content of a knowledge base on the physical and socio-economic consequences of damaging earthquakes, which may be used for calibration of near-real-time loss assessment systems based on simulation models for shaking intensity, damage to buildings, and casualty estimates. Such calibration compensates for factors that reduce the reliability of expected damage and loss assessments in "emergency" mode. The knowledge base contains descriptions of past earthquakes' consequences for the area under study. It also includes the distribution of the built environment and population at the time of event occurrence. Computer simulation of the events recorded in the knowledge base allows determination of sets of regional calibration coefficients, including ratings of seismological surveys, peculiarities of shaking intensity attenuation, and changes in building stock and population distribution, in order to minimize the error of loss estimates for damaging earthquakes in "emergency" mode. References: 1. Larionov, V., Frolova, N.: Peculiarities of seismic vulnerability estimations. In: Natural Hazards in Russia, volume 6: Natural Risks Assessment and Management, Publishing House "Kruk", Moscow, 120-131, 2003. 2. Frolova, N., Larionov, V., Bonnin, J.: Data Bases Used in Worldwide Systems for Earthquake Loss Estimation in Emergency Mode: Wenchuan Earthquake. In: Proc. TIEMS2010 Conference, Beijing, China, 2010. 3. Frolova, N. I., Larionov, V. I., Bonnin, J., Sushchev, S. P., Ugarov, A. N., Kozlov, M. A.: Loss Caused by Earthquakes: Rapid Estimates. Natural Hazards, vol. 84, ISSN 0921-030, DOI 10.1007/s11069-016-2653
FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.
Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.
1985-01-01
The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.
NASA Astrophysics Data System (ADS)
Fan, W.; Bassett, D.; Denolle, M.; Shearer, P. M.; Ji, C.; Jiang, J.
2017-12-01
The 2006 Mw 7.8 Java earthquake was a tsunami earthquake, exhibiting frequency-dependent seismic radiation along strike. High-frequency global back-projection results suggest two distinct rupture stages. The first stage lasted 65 s with a rupture speed of 1.2 km/s, while the second stage lasted from 65 to 150 s with a rupture speed of 2.7 km/s. In addition, P-wave high-frequency radiated energy and fall-off rates indicate a rupture transition at 60 s. High-frequency radiators resolved with back-projection during the second stage spatially correlate with splay fault traces mapped from residual free-air gravity anomalies. These splay faults also collocate with a major tsunami source associated with the earthquake inferred from tsunami first-crest back-propagation simulation. These correlations suggest that the splay faults may have been reactivated during the Java earthquake, as has been proposed for other tsunamigenic earthquakes, such as the 1944 Mw 8.1 Tonankai earthquake in the Nankai Trough.
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010. Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
NASA Astrophysics Data System (ADS)
Gischig, Valentin S.
2015-09-01
Earthquakes caused by fluid injection into deep underground reservoirs constitute an increasingly recognized risk to populations and infrastructure. Quantitative assessment of induced seismic hazard, however, requires estimating the maximum possible magnitude earthquake that may be induced during fluid injection. Here I seek constraints on an upper limit for the largest possible earthquake using source-physics simulations that consider rate-and-state friction and hydromechanical interaction along a straight homogeneous fault. Depending on the orientation of the pressurized fault in the ambient stress field, different rupture behaviors can occur: (1) uncontrolled rupture-front propagation beyond the pressure front or (2) rupture-front propagation arresting at the pressure front. In the first case, fault properties determine the earthquake magnitude, and the upper magnitude limit may be similar to natural earthquakes. In the second case, the maximum magnitude can be controlled by carefully designing and monitoring injection and thus restricting the pressurized fault area.
Assessing the Utility of and Improving USGS Earthquake Hazards Program Products
NASA Astrophysics Data System (ADS)
Gomberg, J. S.; Scott, M.; Weaver, C. S.; Sherrod, B. L.; Bailey, D.; Gibbons, D.
2010-12-01
A major focus of the USGS Earthquake Hazards Program (EHP) has been the development and implementation of products and information meant to improve earthquake hazard assessment, mitigation and response for a myriad of users. Many of these products rely on the data and efforts of the EHP and its partner scientists who are building the Advanced National Seismic System (ANSS). We report on a project meant to assess the utility of many of these products and information, conducted collaboratively by EHP scientists and Pierce County Department of Emergency Management staff. We have conducted focus group listening sessions with members of the engineering, business, medical, media, risk management, and emergency response communities as well as participated in the planning and implementation of earthquake exercises in the Pacific Northwest. Thus far we have learned that EHP and ANSS products satisfy many of the needs of engineers and some planners, and information is widely used by media and the general public. However, some important communities do not use these products despite their intended application for their purposes, particularly county and local emergency management and business communities. We have learned that products need to convey more clearly the impact of earthquakes, in everyday terms. Users also want products (e.g. maps, forecasts, etc.) that can be incorporated into tools and systems they use regularly. Rather than simply building products and posting them on websites, products need to be actively marketed and training provided. We suggest that engaging users prior to and during product development will enhance their usage and effectiveness.
Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity
NASA Astrophysics Data System (ADS)
Miah, Md Mamun
This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture this multiphysics interaction by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous, saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The coupled poroelastic modeling capability is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault under the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation, attributed to the increased total stress in the domain and to not accounting for pressure on the fault. This issue is resolved in the final chapter by simulating a single-event earthquake dynamic rupture. The simulation results show that fluid pressure promotes slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that increasing the injection well distance delays slip nucleation and rupture propagation on the fault.
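The sequential coupling pattern described above can be illustrated with a self-contained toy: a "flow" update computes pressure, a "mechanics" update computes Terzaghi effective stress, and a stress-dependent permeability closes the loop. All physics and constants below are illustrative stand-ins, not the TOUGH2/PyLith models:

```python
import numpy as np

def run_coupled_toy(n_steps, dt, p0=1.0e6, sigma_total=30.0e6):
    """Toy 0D sequential (fixed-stress-style) coupling loop: each step
    does a flow solve, then a mechanics solve, then a permeability
    update -- the same ordering as the TOUGH2-PyLith scheme, with
    stand-in physics."""
    p, k = p0, 1.0e-15                 # pressure [Pa], permeability [m^2]
    sigma_eff = sigma_total - p
    for _ in range(n_steps):
        # "flow" step: injection source minus permeability-scaled leak-off
        p += dt * (1.0e3 - 1.0e8 * k * p)
        # "mechanics" step: Terzaghi effective stress
        sigma_eff = sigma_total - p
        # permeability model: decreases exponentially with effective stress
        k = 1.0e-15 * np.exp(-sigma_eff / 50.0e6)
    return p, sigma_eff, k

print(run_coupled_toy(n_steps=1000, dt=10.0))
```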
DOT National Transportation Integrated Search
1995-01-01
In order to obtain regional perspective on the major problems and issues to be addressed, a series of nine regional round tables were convened across the nation. One of these was held in Norfolk, VA, on June 11, 1993. The primary focus of this meetin...
Changes in Science Teachers' Conceptions and Connections of STEM Concepts and Earthquake Engineering
ERIC Educational Resources Information Center
Cavlazoglu, Baki; Stuessy, Carol
2017-01-01
The authors find justification for integrating science, technology, engineering, and mathematics (STEM) in the complex problems that today's students will face as tomorrow's STEM professionals. Teachers with individual subject-area specialties in the STEM content areas have limited experience in integrating STEM. In this study, the authors…
NASA Astrophysics Data System (ADS)
Noda, H.
2016-05-01
Pressure solution creep (PSC) is an important elementary process in rock friction at high temperatures, where the solubilities of rock-forming minerals are significantly large. It significantly changes the frictional resistance and enhances time-dependent strengthening. A recent microphysical model for PSC-involved friction of clay-quartz mixtures, which can explain a transition between dilatant and non-dilatant deformation (d-nd transition), was modified here and implemented in dynamic earthquake sequence simulations. The original model resulted in essentially a kind of rate- and state-dependent friction (RSF) law, but assumed a constant friction coefficient for clay, resulting in zero instantaneous rate dependence in the dilatant regime. In this study, an instantaneous rate dependence for the clay friction coefficient was introduced, consistent with experiments, resulting in a friction law suitable for earthquake sequence simulations. In addition, a term for time-dependent strengthening due to PSC was added, which makes the friction law logarithmically rate-weakening in the dilatant regime. The width of the zone in which clasts overlap or, equivalently, the interface porosity involved in PSC serves as the state variable. Such a concrete physical meaning of the state variable is a great advantage in future modelling studies incorporating other physical processes such as hydraulic effects. Earthquake sequence simulations with different pore pressure distributions demonstrated that excess pore pressure at depth causes deeper rupture propagation with smaller slip per event and a shorter recurrence interval. The simulated ruptures were arrested a few kilometres below the point of pre-seismic peak stress at the d-nd transition and did not propagate spontaneously into the region of pre-seismic non-dilatant deformation. PSC weakens the fault against slow deformation and thus such a region cannot produce a dynamic stress drop. Dynamic rupture propagation further down to the brittle-plastic transition, evidenced by geological observations, would require even smaller frictional resistance at coseismic slip rates, suggesting the importance of implementing dynamic weakening activated at coseismic slip rates for more realistic simulation of earthquake sequences. The present models produced much smaller afterslip at the deeper parts of arrested ruptures than those with logarithmic RSF laws, because of a more significant rate-strengthening effect due to linearly viscous PSC. Detailed investigation of afterslip would give a clue to understanding the deformation mechanism which controls the shear resistance of the fault in a region of arrest of earthquake ruptures.
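For orientation, the generic RSF law the model reduces to can be written as mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc) with the Dieterich aging law for the state variable; the parameter values below are standard textbook numbers, not the paper's clay-quartz values:

```python
import numpy as np

def rsf_friction(v, theta, mu0=0.6, a=0.010, b=0.015, dc=1e-4, v0=1e-6):
    """Rate- and state-dependent friction coefficient:
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc). With b > a the fault is
    steady-state rate-weakening, the regime that permits earthquakes."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def aging_law(v, theta, dc=1e-4):
    """Dieterich aging law for state evolution: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - v * theta / dc
```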
NASA Astrophysics Data System (ADS)
Meng, L.; Shi, B.
2011-12-01
The New Zealand earthquake of February 21, 2011 (Mw 6.1) occurred in the South Island, New Zealand, with the epicenter at longitude 172.70°E and latitude 43.58°S and a depth of 5 km. The Mw 6.1 earthquake occurred on a previously unknown blind fault with oblique-thrust faulting, 9 km south of Christchurch, the third largest city of New Zealand, striking from east to west (United States Geological Survey, USGS, 2011). The earthquake killed at least 163 people and caused extensive damage to buildings in Christchurch. The peak ground acceleration (PGA) observed at station Heathcote Valley Primary School (HVSC), 1 km from the epicenter, reached almost 2.0 g. The ground-motion observations suggest that this buried earthquake source generated much higher near-fault ground motion. In this study, we have analyzed the earthquake source spectral parameters based on the strong motion observations and estimated the near-fault ground motion based on Brune's circular fault model. The results indicate that the large ground motion may be caused by a high dynamic stress drop, Δσ_d (or effective stress drop, as named by Brune), in the major source rupture region. In addition, a dynamical composite source model (DCSM) has been developed to simulate the near-fault strong ground motion with associated fault rupture properties from the kinematic point of view. For comparison purposes, we also conducted broadband ground motion predictions for station HVSC; the synthetic time histories produced for this station agree well with the observations in waveform, peak values, and frequency content, which clearly indicates that the high dynamic stress drop during the fault rupture may play an important role in the anomalous ground-motion amplification. The preliminary simulation results for station HVSC show that the synthetic seismograms have a realistic appearance in waveform and time duration relative to the observations, especially for the vertical component. Synthetic Fourier spectra are reasonably similar to those of the recordings. The simulated PGA values of the vertical and S26W components are consistent with those recorded, while for the S64E component the PGA derived from our simulation is smaller than observed. The Fourier spectra of the synthetics and observations are very similar for all three components of the acceleration time histories, except that for the vertical component the spectrum derived from the synthetic data is smaller than that from the observation above 10 Hz. Both the theoretical study and the numerical simulation indicate that, for the 2011 Mw 6.1 New Zealand earthquake, the high dynamic stress drop during the source rupture process could play an important role in the anomalous ground-motion amplification, in addition to other site-related seismic effects. Composite source modeling based on the simple Brune pulse model can provide good insight into the source-related rupture processes of a moderate-sized earthquake.
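For reference, the Brune omega-squared source model links the stress drop the authors invoke to the corner frequency and spectral shape; the sketch below uses the standard relations fc = 0.4906 * beta * (dsigma/M0)^(1/3) and M0 / (1 + (f/fc)^2) in SI units, with illustrative values (a moment of ~Mw 6.1 and a 10 MPa stress drop), not the study's inverted parameters:

```python
import numpy as np

def brune_spectrum(f, m0, stress_drop, beta=3500.0):
    """Brune omega-squared displacement source spectrum.
    Corner frequency fc = 0.4906 * beta * (dsigma / M0)**(1/3)
    (beta in m/s, dsigma in Pa, M0 in N m, fc in Hz); spectrum
    amplitude M0 / (1 + (f/fc)^2). No path or site terms included."""
    fc = 0.4906 * beta * (stress_drop / m0) ** (1.0 / 3.0)
    return m0 / (1.0 + (f / fc) ** 2), fc

freqs = np.logspace(-1, 1.5, 200)
spec, fc = brune_spectrum(freqs, m0=1.8e18, stress_drop=10e6)  # ~Mw 6.1, 10 MPa
print(f"corner frequency ~ {fc:.2f} Hz")
```

A higher stress drop at fixed moment pushes the corner frequency up and thus enriches the high-frequency content, which is the mechanism the abstract proposes for the anomalously strong shaking at HVSC.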