An alternative approach for computing seismic response with accidental eccentricity
NASA Astrophysics Data System (ADS)
Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu
2014-09-01
Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which in turn requires either the time-consuming computation of the natural vibration of eccentric structures or a static displacement solution obtained by applying an approximate equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called Rayleigh-Ritz Projection MRSA (RRP-MRSA), is developed from MRSA and two strategies: (a) an RRP method for the fast calculation of approximate modes of eccentric structures; and (b) an approach to assembling the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested on engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, well beyond that of ETM-MRSA, but is also more economical. RRP-MRSA can thus take the place of current accidental eccentricity computations in seismic design.
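The Rayleigh-Ritz projection idea can be sketched in a few lines: the modes of the nominal (non-eccentric) structure serve as a reduced basis onto which the stiffness and the perturbed mass matrix are projected, so that only a small eigenproblem has to be solved per eccentric case. A minimal sketch with assumed toy matrices (not the paper's formulation):

```python
import numpy as np
from scipy.linalg import eigh

def rrp_modes(K, M_ecc, Phi):
    """Approximate modes of an eccentric structure: project stiffness K and
    perturbed mass M_ecc onto a Ritz basis Phi (modes of the nominal
    structure) and solve the small reduced generalized eigenproblem."""
    Kr = Phi.T @ K @ Phi          # reduced stiffness
    Mr = Phi.T @ M_ecc @ Phi      # reduced mass
    lam, Q = eigh(Kr, Mr)
    return lam, Phi @ Q           # back-project the Ritz coordinates

# toy 4-DOF shear-building matrices
n = 4
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * 1e6
M = np.diag([1000.0] * n)
lam0, Phi = eigh(K, M)            # nominal modes used as the Ritz basis
M_ecc = M.copy()
M_ecc[0, 0] *= 1.05               # 5% mass perturbation mimicking eccentricity
lam_rrp, _ = rrp_modes(K, M_ecc, Phi)
lam_ref, _ = eigh(K, M_ecc)       # full-size reference solution
```

With the full nominal basis the projection is exact; the computational saving in practice comes from truncating Phi to the first few modes, which makes the reduced eigenproblem far smaller than the full one.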
NASA Astrophysics Data System (ADS)
Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2018-03-01
Two main stages of seismic modeling are geological model building and the numerical computation of the seismic response for that model. The quality of the computed seismic response is partly determined by the type of model that is built, so the model-building approach is as important as the seismic forward numerical method. Here, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model-building tool. This model, with all its properties, is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm, which creates synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and to integrate the lithology at the borehole scale with the 2-D seismic data.
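The SOM step can be illustrated with a toy example: attribute vectors drawn around three assumed facies centres are clustered by a tiny one-dimensional SOM. This is a generic SOM sketch with invented data, not the attribute set or map size used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic 2-D "seismic attribute" vectors around three facies centres
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])

# 1-D SOM with 3 nodes (one per expected facies)
nodes = rng.standard_normal((3, 2))
n_epochs = 200
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)               # decaying learning rate
    sigma = max(0.1, 1.5 * (1.0 - epoch / n_epochs))  # shrinking neighbourhood
    for x in rng.permutation(X):
        bmu = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))  # best-matching unit
        for j in range(len(nodes)):
            h = np.exp(-((j - bmu) ** 2) / (2.0 * sigma ** 2))   # neighbourhood weight
            nodes[j] += lr * h * (x - nodes[j])

# assign each attribute vector to its nearest node = facies label
labels = np.array([int(np.argmin(np.linalg.norm(nodes - x, axis=1))) for x in X])
```

Each node ends up near one cluster centre, so the label map plays the role of the extracted facies volume that is then handed to the forward-modeling algorithm.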
Seismic Analysis Capability in NASTRAN
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.
1984-01-01
Seismic analysis is a technique which pertains to loading described in terms of boundary accelerations. Earthquake shocks to buildings are the type of excitation which usually comes to mind when one hears the word seismic, but the technique also applies to a broad class of acceleration excitations applied at the base of a structure, such as vibration shaker testing or shocks to machinery foundations. Four different solution paths are available in NASTRAN for seismic analysis: Direct Seismic Frequency Response, Direct Seismic Transient Response, Modal Seismic Frequency Response, and Modal Seismic Transient Response. This capability, at present, is invoked not as separate rigid formats, but as pre-packaged ALTER packets to existing Rigid Formats 8, 9, 11, and 12. These ALTER packets are included with the delivery of the NASTRAN program and are stored on the computer as a library of callable utilities. The user calls one of these utilities and merges it into the Executive Control Section of the data deck; any of the four options is then invoked by setting parameter values in the Bulk Data.
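The modal transient path reduces the base-excited structure to decoupled single-degree-of-freedom equations, one per mode. A minimal sketch of integrating one such modal equation for a step base acceleration (a generic illustration, not NASTRAN's actual integrator):

```python
import numpy as np

def modal_response(omega, zeta, gamma, ag, dt):
    """Semi-implicit Euler integration of one modal equation of a base-excited
    structure:  q'' + 2*zeta*omega*q' + omega**2 * q = -gamma * ag(t),
    where gamma is the modal participation factor and ag the base acceleration."""
    q, v = 0.0, 0.0
    hist = np.empty(len(ag))
    for i, a_g in enumerate(ag):
        acc = -gamma * a_g - 2.0 * zeta * omega * v - omega**2 * q
        v += acc * dt          # update velocity first (semi-implicit)
        q += v * dt
        hist[i] = q
    return hist

# one mode: period T = 1 s, 5% damping, step base acceleration of 1 m/s^2
dt = 1e-3
ag = np.ones(20000)            # 20 s record
q = modal_response(2 * np.pi, 0.05, 1.0, ag, dt)
```

The response overshoots and then settles to the static value -gamma*ag/omega**2, the classic behaviour of an underdamped oscillator under a step load.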
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.
2015-06-01
SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It is based on a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature on the methodology: one-dimensional linear equivalent analysis produces acceleration response spectra for shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map set of stratigraphic seismic response at different periods by grid-solving the calibrated Spectra model. In addition, topographic amplification of the spectra is computed by means of a numerical prediction model built to match the results of numerical simulations of isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, on which maps of seismic design response spectra are also defined by means of an enveloping technique.
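The final Levenberg-Marquardt calibration step can be sketched generically: fit the parameters of a smooth spectral shape to a "trainer" acceleration response spectrum. The parametric form below is a hypothetical stand-in, not the paper's actual Spectra model:

```python
import numpy as np
from scipy.optimize import least_squares

def spectrum(p, T):
    """Hypothetical 3-parameter response-spectrum shape (illustrative only)."""
    a, b, c = p
    return a * T * np.exp(-b * T) + c

T = np.linspace(0.05, 3.0, 60)             # periods in seconds
p_true = np.array([2.0, 1.5, 0.1])
rng = np.random.default_rng(1)
trainer = spectrum(p_true, T) + 0.005 * rng.standard_normal(T.size)  # noisy trainer

# Levenberg-Marquardt as the final optimizer on the residuals
res = least_squares(lambda p: spectrum(p, T) - trainer,
                    x0=[1.0, 1.0, 0.0], method='lm')
```

Once calibrated, such a cheap parametric model can be evaluated at every grid cell, which is what makes grid-solving the whole map set tractable.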
The Effect of Boiling on Seismic Properties of Water-Saturated Fractured Rock
NASA Astrophysics Data System (ADS)
Grab, Melchior; Quintal, Beatriz; Caspari, Eva; Deuber, Claudia; Maurer, Hansruedi; Greenhalgh, Stewart
2017-11-01
Seismic campaigns for exploring geothermal systems aim at detecting permeable formations in the subsurface and evaluating the energy state of the pore fluids. High-enthalpy geothermal resources are known to contain fluids ranging from liquid water up to liquid-vapor mixtures in regions where boiling occurs and, ultimately, to vapor-dominated fluids, for instance, if hot parts of the reservoir get depressurized during production. In this study, we implement the properties of single- and two-phase fluids into a numerical poroelastic model to compute frequency-dependent seismic velocities and attenuation factors of a fractured rock as a function of fluid state. Fluid properties are computed while considering that thermodynamic interaction between the fluid phases takes place. This leads to frequency-dependent fluid properties and fluid internal attenuation. As shown in a first example, if the fluid contains very small amounts of vapor, fluid internal attenuation is of similar magnitude as attenuation in fractured rock due to other mechanisms. In a second example, seismic properties of a fractured geothermal reservoir with spatially varying fluid properties are calculated. Using the resulting seismic properties as an input model, the seismic response of the reservoir is then computed while the hydrothermal structure is assumed to vary over time. The resulting seismograms demonstrate that anomalies in the seismic response due to fluid state variability are small compared to variations caused by geological background heterogeneity. However, the hydrothermal structure in the reservoir can be delineated from amplitude anomalies when the variations due to geology can be ruled out such as in time-lapse experiments.
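The frequency dependence of velocity and attenuation that such poroelastic models produce can be illustrated with the simplest viscoelastic analogue, a standard linear solid (Zener) complex modulus. The numbers below are assumed, generic values, not the paper's reservoir properties:

```python
import numpy as np

def zener_modulus(omega, M_r, tau_e, tau_s):
    """Complex modulus of a standard linear solid (Zener) model with relaxed
    modulus M_r and relaxation times tau_e > tau_s."""
    return M_r * (1 + 1j * omega * tau_e) / (1 + 1j * omega * tau_s)

def velocity_and_Qinv(omega, M, rho):
    """Phase velocity and attenuation 1/Q from a complex modulus."""
    c = np.sqrt(M / rho)             # complex velocity
    v = 1.0 / np.real(1.0 / c)       # phase velocity
    Qinv = np.imag(M) / np.real(M)   # attenuation factor 1/Q
    return v, Qinv

rho, M_r = 2500.0, 1.0e10            # density (kg/m^3), relaxed modulus (Pa)
tau_s, tau_e = 1e-3, 1.2e-3
omega = 2 * np.pi * np.logspace(0, 5, 400)   # 1 Hz .. 100 kHz
v, Qinv = velocity_and_Qinv(omega, zener_modulus(omega, M_r, tau_e, tau_s), rho)
```

Velocity increases from its relaxed low-frequency limit to its unrelaxed high-frequency limit, and attenuation peaks at omega = 1/sqrt(tau_e*tau_s), the same qualitative behaviour the paper obtains from fluid-pressure diffusion in fractures.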
DOT National Transportation Integrated Search
2010-02-01
This interdisciplinary project combined seismic data recorded at bridge sites with computer models to identify how highway bridges built on permanently and seasonally frozen ground behave during an earthquake. Two sites, one in Anchorage and one in...
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria
2016-04-01
The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It is based on a hybrid model which combines different approaches and models in a new and non-conventional way, organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature on the methodology: one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of stratigraphic seismic response at different periods by grid-solving the calibrated Emul-spectra model. In addition, topographic amplification of the spectra is computed by means of a validated 3-D numerical prediction model built to match the results of numerical simulations of isolated reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.
NASA Astrophysics Data System (ADS)
Poggi, V.; Burjanek, J.; Michel, C.; Fäh, D.
2017-08-01
The Swiss Seismological Service (SED) has recently finalised the installation of ten new seismological broadband stations in northern Switzerland. The project was led in cooperation with the National Cooperative for the Disposal of Radioactive Waste (Nagra) and Swissnuclear to monitor microseismicity at potential locations of nuclear-waste repositories. To further improve the quality and usability of the seismic recordings, an extensive characterization of the sites surrounding the installation areas was performed following a standardised investigation protocol. State-of-the-art geophysical techniques were used, including advanced active and passive seismic methods. The results of all analyses converged to a set of best-representative 1-D velocity profiles for each site, which are the input for the computation of engineering soil proxies (travel-time averaged velocity and quarter-wavelength parameters) and numerical amplification models. The computed site response is then validated through comparison with empirical site amplification, which is currently available for any station connected to the Swiss seismic networks. With the goal of a high-sensitivity network, most of the Nagra stations have been installed on stiff-soil sites of rather high seismic velocity. Seismic characterization of such sites has always been considered challenging, owing to the lack of a significant velocity contrast and the large wavelengths required to investigate the frequency range of engineering interest. We describe how ambient vibration techniques can successfully be applied in these conditions, providing practical recommendations for best practice in the seismic site characterization of high-velocity sites.
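The travel-time averaged velocity proxy mentioned above is simple to compute from a layered velocity-thickness profile: total depth divided by the vertical S-wave travel time through it. A minimal sketch with an assumed three-layer profile:

```python
def travel_time_avg_velocity(thickness, vs, depth=30.0):
    """Travel-time averaged shear-wave velocity over the top `depth` metres
    (the familiar Vs30 for depth=30): depth / vertical S-wave travel time."""
    t, z = 0.0, 0.0
    for h, v in zip(thickness, vs):
        dz = min(h, depth - z)      # clip the last layer at the target depth
        t += dz / v
        z += dz
        if z >= depth:
            break
    return depth / t

# assumed profile: 5 m at 200 m/s, 10 m at 400 m/s, halfspace at 800 m/s
vs30 = travel_time_avg_velocity([5.0, 10.0, 1e9], [200.0, 400.0, 800.0])
```

Note that the harmonic (travel-time) average is dominated by the slowest layers, which is why shallow soft soil controls the proxy even over a stiff halfspace.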
Stockton, S.L.; Balch, Alfred H.
1978-01-01
The Salt Valley anticline, in the Paradox Basin of southeastern Utah, is under investigation for use as a location for storage of solid nuclear waste. Delineation of thin, nonsalt interbeds within the upper reaches of the salt body is extremely important because the nature and character of any such fluid- or gas-saturated horizons would be critical to the mode of emplacement of wastes into the structure. Analysis of 50 km of conventional seismic-reflection data, in the vicinity of the anticline, indicates that mapping of thin beds at shallow depths may well be possible using a specially designed adaptation of state-of-the-art seismic oil-exploration procedures. Computer ray-trace modeling of thin beds in salt reveals that the frequency and spatial resolution required to map the details of interbeds at shallow depths (less than 750 m) may be on the order of 500 Hz, with surface-spread lengths of less than 350 m. Consideration should be given to the burial of sources and receivers in order to attenuate surface noise and to record the desired high frequencies. Correlation of the seismic-reflection data with available well data and surface geology reveals the complex, structurally initiated diapir, whose upward flow was maintained by rapid contemporaneous deposition of continental clastic sediments on its flanks. Severe collapse faulting near the crests of these structures has distorted the seismic response. Evidence exists, however, that intrasalt thin beds of anhydrite, dolomite, and black shale are mappable on seismic record sections either as short, discontinuous reflected events or as amplitude anomalies that result from focusing of the reflected seismic energy by the thin beds; computer modeling of the folded interbeds confirms both of these as possible causes of seismic response from within the salt diapir. Prediction of the seismic signatures of the interbeds can be made from computer-model studies. 
Petroleum seismic-reflection data are unsatisfactory for mapping the thin beds because of the lack of sufficient resolution to provide direct evidence of the presence of the thin beds. However, indirect evidence, present in these data as discontinuous seismic events, suggests that two geophysical techniques designed for this specific problem would allow direct detection of the interbeds in salt. These techniques are vertical seismic profiling and shallow, short-offset, high-frequency, seismic-reflection recording.
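The frequency requirement quoted above follows from the quarter-wavelength (tuning) criterion: a bed is resolvable when its thickness is about a quarter of the dominant wavelength. A one-line check with an assumed salt velocity:

```python
def required_frequency(velocity, bed_thickness):
    """Dominant frequency needed to resolve a bed at the quarter-wavelength
    (tuning) criterion: thickness = wavelength/4 = velocity / (4*f)."""
    return velocity / (4.0 * bed_thickness)

# a ~2.25 m interbed in salt at an assumed 4500 m/s interval velocity
f = required_frequency(4500.0, 2.25)   # on the order of the 500 Hz cited above
```

The same relation read backwards shows why conventional petroleum-band data (tens of Hz) cannot image metre-scale interbeds directly.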
High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers
NASA Astrophysics Data System (ADS)
Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.
2017-12-01
The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. 
During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.
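The computational core of the mapping service is the massive cross-correlation of station pairs, since the correlation lag carries the inter-station travel-time information. A self-contained sketch of FFT-based cross-correlation recovering a known delay from synthetic noise (generic, not the project's processing chain):

```python
import numpy as np

def cross_correlate(a, b):
    """Linear cross-correlation of two equal-length records via zero-padded
    FFTs; entry k corresponds to lag k - (n-1), i.e. sum_m a[m+k'] * b[m]."""
    n = len(a)
    nfft = 2 * n
    C = np.fft.irfft(np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft)), nfft)
    return np.concatenate((C[-(n - 1):], C[:n]))   # reorder to lags -(n-1)..n-1

rng = np.random.default_rng(2)
a = rng.standard_normal(4096)          # synthetic noise record at station A
shift = 25
b = np.roll(a, shift)                  # station B sees the same noise, delayed
cc = cross_correlate(b, a)             # correlate delayed record against original
lag = int(np.argmax(cc)) - (len(a) - 1)
```

The correlation peak sits at the 25-sample delay; in a real deployment this step runs over every station pair and time window, which is what motivates the GPU supercomputer.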
High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas
2017-04-01
The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, and geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide a manifold increase in computing performance; they therefore represent an efficient platform for implementing a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks, depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps.
The solution architecture includes the following sub-systems: (1) data acquisition, responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks; (2) a high-performance noise source mapping application, responsible for the generation of source maps using cross-correlation of seismic records; (3) back-end infrastructure for the coordination of various tasks and computations; (4) a front-end Web interface providing the service to the end-users; and (5) a data repository. The noise mapping application is composed of four principal modules: (1) pre-processing of raw data, (2) massive cross-correlation, (3) post-processing of correlation data based on computation of the logarithmic energy ratio and (4) generation of source maps from the post-processed data. Implementation of the solution posed various challenges, in particular the selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for the coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.
NASA Astrophysics Data System (ADS)
Rovithis, Emmanouil; Kirtas, Emmanouil; Marini, Eleftheria; Bliziotis, Dimitris; Maltezos, Evangelos; Pitilakis, Dimitris; Makra, Konstantia; Savvaidis, Alexandros
2016-08-01
Airborne LiDAR monitoring integrated with field data is employed to assess the fundamental period and the seismic loading of the structures composing an urban area under prescribed earthquake scenarios. A piecewise workflow is adopted, combining geometrical data of the building stock derived from a LiDAR-based 3D city model, structural data from in-situ inspections of representative city blocks and the results of soil response analyses. The procedure is implemented in the residential area of Kalochori, west of Thessaloniki in northern Greece. Special attention is paid to the in-situ inspection of the building stock, both to discriminate between actual buildings and man-made constructions that do not conform to seismic design codes and to acquire additional building stock data on structural materials, typologies and number of stories, which is not feasible with the LiDAR process. The processed LiDAR and field data are used to compute the fundamental period of each building by means of code-defined formulas. Knowledge of soil conditions in the Kalochori area allows soil response analyses to obtain the free-field response at the ground surface under earthquake scenarios with varying return periods. By combining the computed vibrational characteristics of the structures with the free-field response spectra, the seismic loading imposed on the structures of the urban area under investigation is derived for each of the prescribed seismic motions. Results are presented in a GIS environment in the form of spatially distributed spectral accelerations, with direct implications for seismic vulnerability studies of an urban area.
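The "code-defined formulas" step typically maps a LiDAR-derived building height to a fundamental period via an empirical power law. A sketch using the Eurocode 8 form (the paper does not state which code formula it uses, so the coefficient here is an assumption):

```python
def fundamental_period(height_m, ct=0.075):
    """Empirical code-type fundamental period T = Ct * H**0.75 (EC8 form;
    Ct = 0.075 is the value for moment-resisting RC frames)."""
    return ct * height_m ** 0.75

# a 5-storey building of ~3 m per storey, height taken from the 3D city model
T = fundamental_period(15.0)    # period in seconds
```

Reading the free-field response spectrum at this period then gives the spectral acceleration demand plotted per building in the GIS maps.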
NASA Astrophysics Data System (ADS)
Sudarmaji; Rudianto, Indra; Eka Nurcahya, Budi
2018-04-01
A strong tectonic earthquake with a magnitude of 5.9 on the Richter scale occurred in Yogyakarta and Central Java on May 26, 2006, causing severe damage in Yogyakarta and the southern part of Central Java, Indonesia. Understanding the seismic response that links ground shaking to the level of building damage is important. We present numerical modeling of 3D seismic wave propagation around Yogyakarta and the southern part of Central Java using the spectral-element method on an MPI-GPU (Graphics Processing Unit) computer cluster to observe the seismic response due to the earthquake. A homogeneous, realistic 3D model is generated with a detailed topographic surface. The influences of the free-surface topography and the layer discontinuity of the 3D model on the seismic response are observed. The seismic wave field is discretized using the spectral-element method, solved on a mesh of hexahedral elements that is adapted to the free-surface topography and the internal discontinuity of the model. To increase data-processing capability, the simulation is performed on a GPU cluster with an implementation of MPI (Message Passing Interface).
A probabilistic seismic risk assessment procedure for nuclear power plants: (II) Application
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
This paper presents the procedures and results of intensity- and time-based seismic risk assessments of a sample nuclear power plant (NPP) to demonstrate the risk-assessment methodology proposed in its companion paper. The intensity-based assessments include three sets of sensitivity studies to identify the impact of the following factors on the seismic vulnerability of the sample NPP: (1) the description of fragility curves for primary and secondary components of NPPs, (2) the number of simulations of NPP response required for risk assessment, and (3) the correlation in responses between NPP components. The time-based assessment is performed as a series of intensity-based assessments. The studies illustrate the utility of the response-based fragility curves and of including the correlation in the responses of NPP components directly in the risk computation.
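Performing a time-based assessment as a series of intensity-based assessments amounts to convolving a fragility curve with the seismic hazard curve: the failure probability at each intensity bin is weighted by the annual rate of that bin. A sketch with invented fragility and hazard parameters (illustrative only, not the paper's NPP numbers):

```python
import math

def frag(im, med=0.8, beta=0.4):
    """Lognormal fragility curve with illustrative median and dispersion."""
    return 0.5 * (1.0 + math.erf(math.log(im / med) / (beta * math.sqrt(2.0))))

def annual_failure_rate(ims, haz_rates, fragility):
    """Time-based risk as a series of intensity-based results: weight the
    fragility at each intensity bin by the annual rate of that bin."""
    rate = 0.0
    for i in range(len(ims) - 1):
        d_lambda = haz_rates[i] - haz_rates[i + 1]   # annual rate of bin i
        rate += fragility(0.5 * (ims[i] + ims[i + 1])) * d_lambda
    return rate

ims = [0.1 * k for k in range(1, 21)]                # intensities 0.1 .. 2.0 g
haz = [1e-2 * (0.1 / im) ** 2 for im in ims]         # toy power-law hazard curve
rate = annual_failure_rate(ims, haz, frag)           # mean annual failure rate
```

The resulting annual rate is necessarily below the rate of exceeding the lowest intensity considered, a useful sanity check on any such computation.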
Monitoring the Earthquake source process in North America
Herrmann, Robert B.; Benz, H.; Ammon, C.J.
2011-01-01
With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response system (PAGER), rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can usually determine, with few exceptions, moment tensor solutions for earthquakes with Mw as small as 3.7. The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes of Mw 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the direction of maximum compressive stress and the predominant style of faulting are spatially coherent across large regions of the continent.
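Once the moment tensor is inverted, the moment magnitude follows from the scalar seismic moment via the standard (IASPEI) definition, which is worth having on hand when reading thresholds like Mw 3.7:

```python
import math

def moment_magnitude(M0):
    """Moment magnitude from the scalar seismic moment M0 in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)  (IASPEI standard definition)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

mw = moment_magnitude(4.5e15)   # a moment in the low-magnitude-4 range
```

The logarithmic scaling means the Mw 3.7 detection threshold corresponds to a moment roughly 30 times smaller than that of an Mw 4.7 event.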
Patton, John M.; Guy, Michelle R.; Benz, Harley M.; Buland, Raymond P.; Erickson, Brian K.; Kragness, David S.
2016-08-18
This report provides an overview of the capabilities and design of Hydra, the global seismic monitoring and analysis system used for earthquake response and catalog production at the U.S. Geological Survey National Earthquake Information Center (NEIC). Hydra supports the NEIC’s worldwide earthquake monitoring mission in areas such as seismic event detection, seismic data insertion and storage, seismic data processing and analysis, and seismic data output. The Hydra system automatically identifies seismic phase arrival times and detects the occurrence of earthquakes in near-real time. The system integrates and inserts parametric and waveform seismic data into discrete events in a database for analysis. Hydra computes seismic event parameters, including locations, multiple magnitudes, moment tensors, and depth estimates. Hydra supports the NEIC’s 24/7 analyst staff with a suite of seismic analysis graphical user interfaces. In addition to the NEIC’s monitoring needs, the system supports the processing of aftershock and temporary deployment data, and supports the NEIC’s quality assurance procedures. The Hydra system continues to be developed to expand its seismic analysis and monitoring capabilities.
Zhang, Yang; Toksöz, M Nafi
2012-08-01
The seismic response of saturated porous rocks is studied numerically using microtomographic images of three-dimensional digitized Berea sandstones. A stress-strain calculation is employed to compute the velocities and attenuations of rock samples whose sizes are much smaller than the seismic wavelength of interest. To compensate for the contributions of small cracks lost in the imaging process to the total velocity and attenuation, a hybrid method is developed to recover the crack distribution, in which the differential effective medium theory, the Kuster-Toksöz model, and a modified squirt-flow model are utilized in a two-step Monte Carlo inversion. In the inversion, the velocities of P- and S-waves measured for the dry and water-saturated cases, and the measured attenuation of P-waves for different fluids are used. By using such a hybrid method, both the velocities of saturated porous rocks and the attenuations are predicted accurately when compared to laboratory data. The hybrid method is a practical way to model numerically the seismic properties of saturated porous rocks until very high resolution digital data are available. Cracks lost in the imaging process are critical for accurately predicting velocities and attenuations of saturated porous rocks.
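The stress-strain calculations described above ultimately reduce to reading elastic moduli off the digitized sample and converting them to velocities. The standard conversion, with assumed representative moduli (illustrative values, not the paper's measured ones):

```python
import math

def vp_vs(K, G, rho):
    """P- and S-wave velocities from bulk modulus K and shear modulus G (Pa)
    and density rho (kg/m^3): Vp = sqrt((K + 4G/3)/rho), Vs = sqrt(G/rho)."""
    vp = math.sqrt((K + 4.0 * G / 3.0) / rho)
    vs = math.sqrt(G / rho)
    return vp, vs

# illustrative dry-sandstone moduli and density (assumed numbers)
vp, vs = vp_vs(16e9, 13e9, 2200.0)
```

Crack corrections of the kind recovered by the paper's Monte Carlo inversion act by softening the effective K and G, which is why neglecting sub-resolution cracks biases both velocities high.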
Predicting the seismic performance of typical R/C healthcare facilities: emphasis on hospitals
NASA Astrophysics Data System (ADS)
Bilgin, Huseyin; Frangu, Idlir
2017-09-01
Reinforced concrete (RC) buildings constitute an important part of the current building stock in earthquake-prone countries such as Albania. The seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. In this context, this study evaluates the expected performance of a five-story RC healthcare facility, representative of common practice in Albania, designed according to older codes. The design was based on the code requirements used in this region during the mid-1980s. Non-linear static and dynamic time-history analyses were conducted on the structural model using the Zeus NL computer program. The dynamic time-history analysis was conducted with a set of ground motions from real earthquakes. The building responses were estimated at the global level, and FEMA 356 criteria were used to predict the seismic performance of the building. Structural response measures, such as the capacity curve and the inter-story drift under the set of ground motions, were compared with the pushover analysis results, and a detailed seismic performance assessment was made. The main aim of this study is to demonstrate the application of a methodology for the earthquake performance assessment of existing buildings. The seismic performance of the structural model varied significantly under different ground motions. Results indicate that the case-study building exhibits inadequate seismic performance under different seismic excitations; reasons for the poor performance of the building are also discussed.
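The inter-story drift measure used in FEMA 356-style checks is simply the relative displacement of adjacent floors divided by the story height. A sketch with an assumed peak-displacement profile (invented numbers, not the study's results):

```python
def interstory_drift_ratios(floor_displacements, story_height):
    """Inter-story drift ratio per story: relative displacement of adjacent
    floors divided by the story height; the ground floor is taken as fixed."""
    d = [0.0] + list(floor_displacements)
    return [(d[i + 1] - d[i]) / story_height for i in range(len(d) - 1)]

# assumed peak floor displacements (m) of a 5-story frame with 3 m stories
drifts = interstory_drift_ratios([0.010, 0.024, 0.040, 0.052, 0.060], 3.0)
```

The maximum ratio over the height, here in the middle stories, is the quantity compared against code performance-level limits (e.g. on the order of 1-2% for life safety in RC frames).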
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parra, J.; Collier, H.; Angstman, B.
In low-porosity, low-permeability zones, natural fractures are the primary source of permeability, affecting both production and injection of fluids. The open fractures do not contribute much to porosity, but they provide an increased drainage network to any porosity. An important approach to characterizing the fracture orientation and fracture permeability of reservoir formations is based upon the effects of such conditions on the propagation of acoustic and seismic waves in the rock. We present the feasibility of using seismic measurement techniques to map the fracture zones between wells spaced 2400 ft apart at depths of about 1000 ft. For this purpose we constructed computer models (which include azimuthal anisotropy) using Lodgepole reservoir parameters to predict seismic signatures recorded at the borehole scale, crosswell scale, and 3-D seismic scale. We have integrated well logs with existing 2-D surface seismic data to produce petrophysical and geological cross sections and so determine the reservoir parameters and geometry for the computer models. In particular, the model responses are used to evaluate whether surface seismic and crosswell seismic measurements can capture the anisotropy due to vertical fractures. Preliminary results suggest that seismic waves transmitted between two wells will propagate in carbonate fracture reservoirs, and that the signal can be received above the noise level at a distance of 2400 ft. In addition, the large velocity contrast between the main fracture zone and the underlying unfractured Boundary Ridge Member suggests that borehole reflection imaging may be appropriate for mapping fracture zone thickness variations and fracture distributions in the reservoir.
Seismic and Restoration Assessment of Monumental Masonry Structures
Asteris, Panagiotis G.; Douvika, Maria G.; Apostolopoulou, Maria; Moropoulou, Antonia
2017-01-01
Masonry structures are complex systems that require detailed knowledge and information regarding their response under seismic excitations. Appropriate modelling of a masonry structure is a prerequisite for a reliable earthquake-resistant design and/or assessment. However, modelling a real structure with a robust quantitative (mathematical) representation is a very difficult, complex and computationally-demanding task. The paper herein presents a new stochastic computational framework for earthquake-resistant design of masonry structural systems. The proposed framework is based on the probabilistic behavior of crucial parameters, such as material strength and seismic characteristics, and utilizes fragility analysis based on different failure criteria for the masonry material. The application of the proposed methodology is illustrated in the case of a historical and monumental masonry structure, namely the assessment of the seismic vulnerability of the Kaisariani Monastery, a Byzantine church that was built in Athens, Greece, at the end of the 11th to the beginning of the 12th century. Useful conclusions are drawn regarding the effectiveness of the intervention techniques used for the reduction of the vulnerability of the case-study structure, by means of comparison of the results obtained. PMID:28767073
The Quake Catcher Network: Cyberinfrastructure Bringing Seismology into Schools and Homes
NASA Astrophysics Data System (ADS)
Lawrence, J. F.; Cochran, E. S.
2007-12-01
We propose to implement a high density, low cost strong-motion network for rapid response and early warning by placing sensors in schools, homes, and offices. The Quake Catcher Network (QCN) will employ existing networked laptops and desktops to form the world's largest high-density, distributed computing seismic network. Costs for this network will be minimal because the QCN will use 1) strong motion sensors (accelerometers) already internal to many laptops and 2) nearly identical low-cost universal serial bus (USB) accelerometers for use with desktops. The Berkeley Open Infrastructure for Network Computing (BOINC) provides a free, proven paradigm for involving the public in large-scale computational research projects. As evidenced by the SETI@home program and others, individuals are especially willing to donate their unused computing power to projects that they deem relevant, worthwhile, and educational. The client- and server-side software will rapidly monitor incoming seismic signals, detect the magnitudes and locations of significant earthquakes, and may even provide early warnings to other computers and users before they can feel the earthquake. The software will provide the client-user with a screen-saver displaying seismic data recorded on their laptop, recently detected earthquakes, and general information about earthquakes and the geosciences. Furthermore, this project will install USB sensors in K-12 classrooms as an educational tool for teaching science. Through a variety of interactive experiments students will learn about earthquakes and the hazards earthquakes pose. For example, students can learn how the vibrations of an earthquake decrease with distance by jumping up and down at increasing distances from the sensor and plotting the decreased amplitude of the seismic signal measured on their computer. We hope to include an audio component so that students can hear and better understand the difference between low and high frequency seismic signals.
The QCN will provide a natural way to engage students and the public in earthquake detection and research.
NASA Astrophysics Data System (ADS)
Liu, Xiwu; Guo, Zhiqi; Han, Xu
2018-06-01
A set of parallel vertical fractures embedded in a vertically transverse isotropy (VTI) background leads to orthorhombic anisotropy and corresponding azimuthal seismic responses. We conducted seismic modeling of full waveform amplitude variation versus azimuth (AVAZ) responses of anisotropic shale by integrating a rock physics model and a reflectivity method. The results indicate that the azimuthal variation of P-wave velocity tends to be more complicated for an orthorhombic medium than for the horizontally transverse isotropy (HTI) case, especially at high polar angles. Correspondingly, for the HTI layer in the theoretical model, the short axis of the azimuthal PP amplitudes at the top interface is parallel to the fracture strike, while the long axis at the bottom reflection aligns with the fracture strike. In contrast, the orthorhombic layer in the theoretical model shows distinct AVAZ responses in terms of PP reflections. Nevertheless, the azimuthal signatures of the R- and T-components of the mode-converted PS reflections show similar AVAZ features for the HTI and orthorhombic layers, which may imply that the PS responses are dominated by fractures. For the application to real data, a seismic-well tie based on upscaled data and a reflectivity method illustrates good agreement between the reference layers and the corresponding reflected events. Finally, the full waveform seismic AVAZ responses of the Longmaxi shale formation are computed for the cases of HTI and orthorhombic anisotropy for comparison. For the two cases, the azimuthal features differ mainly in amplitude and only slightly in the phase of the reflected waveforms. Azimuthal variations in the PP reflections from the reference layers show distinct behaviors for the HTI and orthorhombic cases, while the mode-converted PS reflections in terms of the R- and T-components show little difference in azimuthal features. This may suggest that the behavior of the PS waves is dominated by vertically aligned fractures. This work provides further insight into the azimuthal seismic response of orthorhombic shales. The proposed method may help to improve seismic-well ties, seismic interpretation, and inversion results using azimuthally anisotropic datasets.
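The azimuthal PP amplitude signature described above can be illustrated with a toy model: for an HTI layer, at a fixed incidence angle the reflection amplitude varies approximately as A + B cos 2(phi - phi_strike), so the symmetry axes of the fitted pattern indicate the fracture strike. A minimal sketch with invented coefficients, making no attempt to reproduce the reflectivity modeling of the study:

```python
import numpy as np

# synthetic azimuthal amplitudes for a fracture strike of 30 degrees
phi = np.radians(np.arange(0.0, 360.0, 5.0))
phi_strike = np.radians(30.0)
amp = 0.2 + 0.05 * np.cos(2.0 * (phi - phi_strike))   # invented A and B

# recover the orientation by least squares on a cos(2*phi), sin(2*phi) basis
G = np.column_stack([np.ones_like(phi), np.cos(2.0 * phi), np.sin(2.0 * phi)])
a, b, c_ = np.linalg.lstsq(G, amp, rcond=None)[0]
est_strike = 0.5 * np.degrees(np.arctan2(c_, b))      # recovered strike (deg)
```

With noise-free synthetic amplitudes the fit recovers the 30-degree strike exactly; real AVAZ data would of course require noise handling and incidence-angle dependence.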
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique for computing a velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in processing regional and global seismic data, in seismic exploration for prospecting mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high-performance computing systems, such as supercomputers with hybrid architectures that use not only CPUs, but also accelerators and co-processors. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. To solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered.
During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
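The linearized inversion step of the general scheme above (a tomographic matrix connecting model adjustments to travel time residuals, followed by a regularized solve) can be sketched on a toy problem. The matrix, residuals, and damping weight below are all made up; a production code would assemble the sensitivity matrix from eikonal-solver ray paths:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_cells = 40, 25
G = rng.random((n_rays, n_cells))          # toy sensitivity (ray-length) matrix
dm_true = rng.normal(0.0, 0.01, n_cells)   # "true" slowness perturbation
dtime = G @ dm_true                        # travel-time residuals it produces

# Tikhonov-regularized normal equations: (G^T G + lam*I) dm = G^T dtime
lam = 1e-3                                 # damping weight (assumed)
dm = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ dtime)
misfit = np.linalg.norm(G @ dm - dtime) / np.linalg.norm(dtime)
```

The small damping term stabilizes the solve while leaving the noise-free residuals almost fully explained; in practice the regularization weight is tuned against data noise.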
Seismic modeling of complex stratified reservoirs
NASA Astrophysics Data System (ADS)
Lai, Hung-Liang
Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is an important technique widely applied to locate hydrocarbons. Because of these problems, inaccurate estimates of seismic reflection amplitudes may result in misleading interpretations when AVO is applied to turbidite reservoirs. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is crucial to reducing exploration risk. A fast and accurate approach for generating synthetic seismograms for such reservoir models combines wavefront-construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront-construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, which are computed using propagator matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis based on a model with only two homogeneous half spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show important frequency dependence that may be useful for exploration.
Because turbidite channel systems often display complex vertical and lateral heterogeneity that is difficult to measure directly, stochastic modeling is often used to predict the range of possible seismic responses. Though binary models containing mixtures of sands and shales have been proposed in previous work, log measurements show that these are not good representations of real seismic properties. Therefore, I develop a new, more realistic approach for generating stochastic turbidite models (STMs) from a combination of geological interpretation and well log measurements. Calculations of the composite reflection coefficient and synthetic seismograms predict direct hydrocarbon indicators associated with such turbidite sequences. The STMs provide important insights into predicting the seismic responses of complex turbidite reservoirs. The modeled AVO responses predict the presence of gas saturation in the sand beds. For example, as the source frequency increases, the uncertainty in AVO responses for brine and gas sands predicts the possibility of false interpretation in AVO analysis.
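For contrast with the composite (propagator-matrix) coefficients used in the study, the conventional two-half-space AVO analysis it improves upon can be sketched with the familiar two-term Shuey approximation. The interface properties below are invented illustration values, not Ursa field measurements:

```python
import math

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation R(theta) ~ A + B*sin^2(theta) for a
    single interface between two homogeneous half spaces. Illustrative
    only: the study computes exact composite coefficients for a stack."""
    vp, vs, rho = 0.5 * (vp1 + vp2), 0.5 * (vs1 + vs2), 0.5 * (rho1 + rho2)
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)                               # intercept
    B = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)
    return A + B * math.sin(math.radians(theta_deg)) ** 2

# invented shale-over-gas-sand contrast (km/s, km/s, g/cm^3)
r0 = shuey_two_term(2.80, 1.40, 2.25, 2.60, 1.50, 2.05, 0.0)
r30 = shuey_two_term(2.80, 1.40, 2.25, 2.60, 1.50, 2.05, 30.0)
```

For this invented contrast the reflection coefficient is negative at normal incidence and grows more negative with offset, the classic gas-sand (class III) AVO behavior the abstract alludes to.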
Salton Trough Post-seismic Afterslip, Viscoelastic Response, and Contribution to Regional Hazard
NASA Astrophysics Data System (ADS)
Parker, J. W.; Donnellan, A.; Lyzenga, G. A.
2012-12-01
The El Mayor-Cucapah M7.2 earthquake of April 4, 2010 in Baja California may have affected the accumulated hazard to Southern California cities by loading regional faults including the Elsinore, San Jacinto and southern San Andreas, faults which already carry over a century of tectonic loading. We examine changes observed via multiple seismic and geodetic techniques, including microseismicity and proposed seismicity-based indicators of hazard, high-quality fault models, the Plate Boundary Observatory GNSS array (with 174 stations showing post-seismic transients with greater than 1 mm amplitude), and interferometric radar maps from UAVSAR (aircraft) flights, which show a network of aseismic fault slip events at distances up to 60 km from the end of the surface rupture. Finite element modeling is used to compute the expected coseismic motions at GPS stations, with general agreement, including coseismic uplift at sites ~200 km north of the rupture. The postseismic response is also compared, with GNSS and with the CIG software "RELAX." An initial examination of hazard is made by comparing microseismicity-based metrics, fault models, and changes to Coulomb stress on nearby faults using the finite element model. Comparison of seismicity with interferograms and historic earthquakes shows that aseismic slip occurs on fault segments that have had earthquakes in the last 70 years, while other segments show no slip at the surface but do show high triggered seismicity. UAVSAR-based estimates of fault slip can be incorporated into the finite element model to correct the Coulomb stress change.
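The Coulomb stress change mentioned above reduces, in its simplest form, to combining the shear and normal stress changes resolved on a receiver fault. A minimal sketch with an assumed effective friction coefficient and sign convention (positive normal stress change = unclamping); the study's finite element calculations resolve these quantities on real fault geometries:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Change in Coulomb failure stress resolved on a receiver fault.

    d_tau     : shear stress change in the slip direction (MPa; positive
                promotes slip)
    d_sigma_n : normal stress change (MPa; positive = unclamping)
    mu_eff    : effective friction coefficient (0.4 assumed here)
    """
    return d_tau + mu_eff * d_sigma_n

# a small shear load partly offset by slight clamping (invented values)
dcff = coulomb_stress_change(0.1, -0.05)
```

Positive values of the result indicate loading that brings the receiver fault closer to failure under this convention.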
NASA Astrophysics Data System (ADS)
Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.
2015-12-01
Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or a new model) is introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices, or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
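The overall gain computation SIS performs when a configuration changes amounts, at its simplest, to multiplying the sensitivities of the hardware stages in a channel (e.g., a sensor's volts per unit ground motion times a digitizer's counts per volt). A sketch under that assumption; the stage values below are invented and this is not SIS code:

```python
def overall_gain(stage_gains):
    """Overall sensitivity of a data channel as the product of its hardware
    stage gains, in the spirit of how SIS combines the response parameters
    of the underlying components. Illustrative sketch only."""
    g = 1.0
    for stage in stage_gains:
        g *= stage
    return g

# e.g., a hypothetical 1500 V/(m/s) sensor into a 419430 counts/V digitizer
sensitivity = overall_gain([1500.0, 419430.0])   # counts per (m/s)
```

Real response metadata also carries per-stage poles and zeros, so the scalar product above only describes the flat-passband sensitivity.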
NASA Astrophysics Data System (ADS)
Kohler, M. D.; Castillo, J.; Massari, A.; Clayton, R. W.
2017-12-01
Earthquake-induced motions recorded by spatially dense seismic arrays in buildings located in the northern Los Angeles basin suggest the presence of complex, amplified surface wave effects on the seismic demand of mid-rise buildings. Several moderate earthquakes produced large-amplitude seismic energy with slow shear-wave velocities that cannot be explained or accurately modeled by any published 3D seismic velocity models or by Vs30 values. Numerical experiments are conducted to determine whether sedimentary basin features are responsible for these rarely modeled and poorly documented contributions to seismic demand computations. This is accomplished through a physics-based wave propagation examination of the effects of different sedimentary basin geometries on the nonlinear response of a mid-rise structural model based on an existing, instrumented building. Using two-dimensional finite-difference predictive modeling, we show that when an earthquake focal depth is near the vertical edge of an elongated and relatively shallow sedimentary basin, dramatically amplified and complex surface waves are generated as a result of the waveguide effect introduced by this velocity structure. In addition, for certain source-receiver distances and basin geometries, body waves convert to secondary Rayleigh waves that propagate both at the free-surface interface and along the depth interface of the basin, showing up as multiple large-amplitude arrivals. This study is motivated by observations from the spatially dense, high-sample-rate acceleration data recorded by the Community Seismic Network, a community-hosted strong-motion network currently consisting of hundreds of sensors located in the southern California area.
The results provide quantitative insight into the causative relationship between a sedimentary basin shape and the generation of Rayleigh waves at depth, surface waves at the free surface, scattered seismic energy, and the sensitivity of building responses to each of these.
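The kind of predictive finite-difference wave modeling described above can be illustrated in one dimension, with a low-velocity layer standing in for the basin. This is a minimal sketch with made-up grid parameters and an acoustic wave equation, not the 2-D elastic modeling of the study:

```python
import numpy as np

nx, dx, dstep, nt = 400, 10.0, 0.001, 800   # made-up grid and time stepping
c = np.full(nx, 2000.0)                     # fast background (m/s)
c[150:250] = 600.0                          # slow sedimentary "basin" layer
r2 = (c * dstep / dx) ** 2                  # squared Courant number (max 0.2, stable)

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[50] = 1.0                                 # impulsive source outside the basin
for _ in range(nt):                         # second-order explicit update
    u_next = 2.0 * u - u_prev
    u_next[1:-1] += r2[1:-1] * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u_prev, u = u, u_next
```

Even in 1-D, energy entering the slow layer partially reflects at its edges and reverberates inside it, a crude analogue of the waveguide trapping the abstract describes.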
NASA Astrophysics Data System (ADS)
Poursartip, B.
2015-12-01
Seismic hazard assessment to predict the behavior of infrastructure subjected to earthquakes relies on numerical ground motion simulation, because analytical solutions for seismic waves are limited to only a few simple geometries. Recent advances in numerical methods and computer architectures make it ever more practical to reliably and quickly obtain the near-surface response to seismic events. The key motivation stems from the need to assess the performance of sensitive components of the civil infrastructure (nuclear power plants, bridges, lifelines, etc.) when subjected to realistic seismic event scenarios. We discuss an integrated approach that deploys best-practice tools for simulating seismic events in arbitrarily heterogeneous formations, while also accounting for topography. Specifically, we describe an explicit forward wave solver based on a hybrid formulation that couples a single-field formulation for the computational domain with an unsplit mixed-field formulation for Perfectly-Matched-Layers (PMLs and/or M-PMLs) used to limit the computational domain. Due to the material heterogeneity and the contrasting discretization needs it imposes, an adaptive time solver is adopted. We use a Runge-Kutta-Fehlberg time-marching scheme that optimally adjusts the time step so that the local truncation error remains below a predefined tolerance. We use spectral elements for spatial discretization, and the Domain Reduction Method in conjunction with the double-couple method to allow for the efficient prescription of the input seismic motion. Of particular interest to this development is the study of the effects idealized topographic features have on the surface motion when compared against motion results based on a flat-surface assumption.
We discuss the components of the integrated approach we followed, and report the results of parametric studies in two and three dimensions, for various idealized topographic features, which show motion amplification that depends, as expected, on the relation between the topographic feature's characteristics and the dominant wavelength. Lastly, we report results involving three-dimensional simulations.
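The adaptive time-stepping logic described above (adjust the step so the local truncation error stays below a tolerance) can be sketched with the simplest embedded pair, Heun/Euler; the Runge-Kutta-Fehlberg scheme used in the study applies the same error control with a higher-order (4/5) pair:

```python
def adaptive_step(f, t, y, h, tol=1e-6):
    """One accepted step using the embedded Heun/Euler pair (orders 2/1).
    The error estimate is the difference between the two solutions; the
    step shrinks until the estimate falls below the tolerance."""
    while True:
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                   # Euler, order 1
        y_high = y + 0.5 * h * (k1 + k2)     # Heun, order 2
        err = abs(y_high - y_low)            # local truncation error estimate
        if err <= tol or h < 1e-12:
            # grow the next step, but never more than doubling it
            h_next = min(2.0 * h, 0.9 * h * (tol / max(err, 1e-30)) ** 0.5)
            return t + h, y_high, h_next
        h *= 0.5                             # error too large: shrink and retry

# integrate y' = -y, y(0) = 1 over [0, 1]; the exact solution at t = 1 is exp(-1)
t, y, h = 0.0, 1.0, 0.1
while t < 1.0:
    h = min(h, 1.0 - t)
    t, y, h = adaptive_step(lambda s, v: -v, t, y, h)
```

The controller automatically settles on a step size commensurate with the tolerance, which is the behavior that makes adaptive stepping attractive for strongly heterogeneous media.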
Community Seismic Network (CSN)
NASA Astrophysics Data System (ADS)
Clayton, R. W.; Heaton, T. H.; Kohler, M. D.; Chandy, M.; Krause, A.
2010-12-01
In collaboration with computer scientists and earthquake engineers, we are developing a dense network of low-cost accelerometers that send their data via the Internet to a cloud-based center. The goal is to make block-by-block measurements of ground shaking in urban areas, which will provide emergency response information in the case of large earthquakes, and an unprecedented high-frequency seismic array to study structure and the earthquake process with moderate shaking. When deployed in high-rise buildings, the sensors can be used to monitor the state of health of the structure. The sensors are capable of a resolution of approximately 80 micro-g, connect via USB ports to desktop computers, and cost about $100 each. The network will adapt to its environment by using network-wide machine learning to adjust the picking sensitivity. We are also looking into using other motion sensing devices such as cell phones. For a pilot project, we plan to deploy more than 1000 sensors in the greater Pasadena area. The system is easily adaptable to other seismically vulnerable urban areas.
Forecasting induced seismicity rate and Mmax using calibrated numerical models
NASA Astrophysics Data System (ADS)
Dempsey, D.; Suckale, J.
2016-12-01
At Groningen, The Netherlands, several decades of induced seismicity from gas extraction have culminated in a M 3.6 event (mid-2012). From a public safety and commercial perspective, it is desirable to anticipate future seismicity outcomes at Groningen. One way to quantify earthquake risk is Probabilistic Seismic Hazard Analysis (PSHA), which requires an estimate of the future seismicity rate and its magnitude frequency distribution (MFD). This approach is effective at quantifying risk from tectonic events because the seismicity rate, once measured, is almost constant over timescales of interest. In contrast, rates of induced seismicity vary significantly over building lifetimes, largely in response to changes in injection or extraction. Thus, the key to extending PSHA to induced earthquakes is to estimate future changes of the seismicity rate in response to some proposed operating schedule. Numerical models can describe the physical link between fluid pressure, effective stress change, and the earthquake process (triggering and propagation). However, models with predictive potential for individual earthquakes face the difficulty of characterizing specific heterogeneity - stress, strength, roughness, etc. - at locations of interest. Modeling catalogs of earthquakes provides a means of averaging over this uncertainty, focusing instead on the collective features of the seismicity, e.g., its rate and MFD. The model we use incorporates fluid pressure and stress changes to describe nucleation and crack-like propagation of earthquakes on stochastically characterized 1D faults. This enables simulation of synthetic catalogs of induced seismicity from which the seismicity rate, location and MFD are extracted. A probability distribution for Mmax - the largest event in some specified time window - is also computed.
Because the model captures the physics linking seismicity to changes in the reservoir, earthquake observations and operating information can be used to calibrate a model at a specific site (or, ideally, many models). This restricts analysis of future seismicity to likely parameter sets and provides physical justification for linking operational changes to subsequent seismicity. To illustrate these concepts, a recent study of prior and forecast seismicity at Groningen will be presented.
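The notion of deriving a probability distribution for Mmax from synthetic catalogs can be sketched with a Monte Carlo stand-in: draw catalogs with Poisson event counts and Gutenberg-Richter magnitudes and keep each catalog's maximum. The G-R sampler here replaces the physics-based simulator of the study, and the rate, b-value, and magnitude floor are all invented:

```python
import math
import random

def sample_mmax(rate, years, b=1.0, m_min=1.5, n_catalogs=2000, seed=1):
    """Monte Carlo sketch of a distribution for Mmax, the largest event in
    a time window, from synthetic catalogs. Illustrative parameters only."""
    rng = random.Random(seed)
    beta = b * math.log(10.0)        # G-R magnitudes: exponential above m_min
    maxima = []
    for _ in range(n_catalogs):
        # Poisson-distributed number of events in the window (Knuth's method)
        n, p, limit = 0, 1.0, math.exp(-rate * years)
        while True:
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        mags = [m_min + rng.expovariate(beta) for _ in range(n)]
        maxima.append(max(mags) if mags else m_min)
    return sorted(maxima)

maxima = sample_mmax(rate=20.0, years=1.0)
median_mmax = maxima[len(maxima) // 2]   # any quantile of Mmax can be read off
```

Sorting the maxima gives the empirical Mmax distribution directly, which is the quantity a PSHA workflow would consume.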
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in producing 3D, high-resolution images from real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme with multiple streams for GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) are adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization, and use of special function units (SFUs), greatly improved efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging of large-scale seismic data.
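The asynchronous double-buffering idea can be seen in miniature with a bounded queue: while one buffer is being computed on, the next data chunk is staged into the other, so transfer and compute overlap. This is only a conceptual Python stand-in; the paper implements the overlap with CUDA streams on GPUs:

```python
import queue
import threading

def producer(chunks, q):
    """Stage chunks into the buffers (the 'transfer' stage)."""
    for c in chunks:
        q.put(c)
    q.put(None)                       # sentinel: no more data

def run_pipeline(chunks):
    """Overlap staging and processing using two buffers in flight."""
    q = queue.Queue(maxsize=2)        # double buffer: at most two chunks staged
    t = threading.Thread(target=producer, args=(chunks, q))
    t.start()
    out = []
    while (c := q.get()) is not None:
        out.append(sum(c))            # the 'compute' stage on the current buffer
    t.join()
    return out

result = run_pipeline([[1, 2], [3, 4], [5]])
```

With genuinely concurrent transfer and compute engines (as on a GPU), this overlap hides whichever stage is cheaper behind the other.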
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.
2006-02-01
We describe here one way of constructing internal multiples from surface seismic data only. The key feature of our construction of internal multiples is the introduction of the concept of virtual seismic events. Virtual events are events that are not directly recorded in standard seismic data acquisition, but whose existence allows us to construct internal multiples with scattering points at the sea surface; the standard construction of internal multiples does not include any scattering points at the sea surface. The mathematical and computational operations invoked in our construction of virtual events and internal multiples are similar to those encountered in the construction of free-surface multiples based on Kirchhoff or Born scattering theory. For instance, our construction operates on one temporal frequency at a time, just like free-surface demultiple algorithms; other internal multiple constructions tend to require all frequencies for the computation of an internal multiple at a given frequency. It requires neither knowledge of the subsurface nor explicit knowledge of the specific interfaces responsible for generating internal multiples in seismic data. However, our construction requires that the data be divided into two, three or four windows to avoid generating primaries. This segmentation of the data also allows us to select the range of periods of internal multiples that one wishes to construct because, in the context of attenuating internal multiples, it is important to avoid generating short-period internal multiples that may constructively average to form primaries at the seismic scale.
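The traveltime bookkeeping behind virtual events can be sketched with spike traces: crosscorrelating two recorded events subtracts their times (yielding a virtual event not present in the data), and convolving the result with a third event adds times back, placing the constructed multiple at t1 - t2 + t3. All times are invented, and the actual construction operates on full wavefields; the frequency-domain products below mirror its one-frequency-at-a-time character:

```python
import numpy as np

n, dt = 1024, 0.004
def spike(t):
    """A unit spike trace at time t (toy stand-in for a recorded event)."""
    tr = np.zeros(n)
    tr[int(round(t / dt))] = 1.0
    return tr

p1, p2, p3 = spike(1.2), spike(0.8), spike(1.0)       # three primaries (s)

# correlation = conjugate product, convolution = product, per frequency
virtual = np.conj(np.fft.rfft(p2)) * np.fft.rfft(p1)  # virtual event at 1.2 - 0.8
multiple = np.fft.irfft(virtual * np.fft.rfft(p3), n) # multiple at 0.4 + 1.0
predicted_time = np.argmax(multiple) * dt             # 1.4 s
```

The windowing requirement in the text corresponds here to choosing which events may play the roles of p1, p2, and p3, so that the combination never reproduces a primary time.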
NASA Astrophysics Data System (ADS)
Savage, M. K.; Heckels, R.; Townend, J.
2015-12-01
Quantifying seismic velocity changes following large earthquakes can provide insights into the crustal response of the Earth. The use of ambient seismic noise to monitor these changes is becoming increasingly widespread. Cross-correlations of long-duration ambient noise records can be used to obtain stable impulse response functions without the need for repeated seismic events. Temporal velocity changes were detected in the four months following the September 2010 Mw 7.1 Darfield event in the South Island, New Zealand, using temporary seismic networks originally deployed to record aftershocks in the region. The arrays consisted of stations lying on and surrounding the fault, with a maximum inter-station distance of 156 km. The 2010-2011 Canterbury earthquake sequence occurred largely on previously unknown and buried faults. The Darfield earthquake was the first and largest in the sequence of events that hit the region, rupturing the Greendale Fault with a surface rupture of nearly 30 km. The sequence also included the Mw 6.3 February 2011 Christchurch event, which caused widespread damage throughout the city and resulted in almost 200 deaths. Nine-component, day-long Green's functions were computed for frequencies between 0.1 and 1.0 Hz from full waveform seismic data recorded from immediately after the 4 September 2010 earthquake until mid-January 2011. Using the moving-window cross-spectral method, stacks of daily functions covering the study period (reference functions) were compared to consecutive 10-day stacks of cross-correlations to measure the time delays between them. These delays were then inverted for seismic velocity changes with respect to the reference functions. Over the study period an increase in seismic velocity of 0.25% ± 0.02% was determined proximal to the Greendale Fault. These results are similar to studies in other regions, and we attribute the changes to post-seismic relaxation through crack healing of the Greendale Fault and throughout the region.
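The dv/v measurement described above can be sketched with the stretching method, a simpler alternative to the moving-window cross-spectral technique the study uses: a homogeneous relative velocity change dv/v compresses all lag times by 1/(1 + dv/v), so the current correlation function looks like the reference sampled at stretched times. The coda waveform and the imposed 0.25% change below are synthetic:

```python
import numpy as np

def dvv_by_stretching(ref, cur, t, eps_grid):
    """Grid-search estimate of dv/v: find the trial stretch of the reference
    that best matches the current correlation function. (Illustrative
    stand-in for the moving-window cross-spectral method.)"""
    best_eps, best_cc = 0.0, -2.0
    for eps in eps_grid:
        trial = np.interp(t * (1.0 + eps), t, ref)   # reference stretched by eps
        cc = np.corrcoef(cur, trial)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps

# synthetic check: impose a 0.25% velocity increase on a made-up coda
t = np.linspace(0.0, 50.0, 5001)
ref = np.sin(2.0 * np.pi * 0.5 * t) * np.exp(-t / 20.0)
cur = np.interp(t * 1.0025, t, ref)
dvv_est = dvv_by_stretching(ref, cur, t, np.linspace(-0.01, 0.01, 401))
```

Because the imposed change lies on the search grid, the estimator recovers it to within the grid spacing; real data add noise, so dv/v is usually averaged over many station pairs as in the study.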
Effectiveness of damped braces to mitigate seismic torsional response of unsymmetric-plan buildings
NASA Astrophysics Data System (ADS)
Mazza, Fabio; Pedace, Emilia; Favero, Francesco Del
2017-02-01
The seismic retrofitting of unsymmetric-plan reinforced concrete (r.c.) framed buildings can be carried out by the incorporation of damped braces (DBs). Yet most of the proposals to mitigate the seismic response of asymmetric framed buildings by DBs rest on the hypothesis of elastic (linear) structural response. The aim of the present work is to evaluate the effectiveness and reliability of a Displacement-Based Design procedure of hysteretic damped braces (HYDBs) based on the nonlinear behavior of the frame members, which adopts the extended N2 method considered by Eurocode 8 to evaluate the higher mode torsional effects. The Town Hall of Spilinga (Italy), a framed structure with an L-shaped plan built at the beginning of the 1960s, is supposed to be retrofitted with HYDBs to attain performance levels imposed by the Italian seismic code (NTC08) in a high-risk zone. Ten structural solutions are compared by considering two in-plan distributions of the HYDBs, to eliminate (elastic) torsional effects, and different design values of the frame ductility combined with a constant design value of the damper ductility. A computer code for the nonlinear dynamic analysis of r.c. spatial framed structures is adopted to evaluate the critical incident angle of bidirectional earthquakes. Beams and columns are simulated with a lumped plasticity model, including flat surface modeling of the axial load-biaxial bending moment elastic domain at the end sections, while a bilinear law is used to idealize the behavior of the HYDBs. Damage index domains are adopted to estimate the directions of least seismic capacity, considering artificial earthquakes whose response spectra match those adopted by NTC08 at serviceability and ultimate limit states.
Neural Models: An Option to Estimate Seismic Parameters of Accelerograms
NASA Astrophysics Data System (ADS)
Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.
2014-12-01
Seismic instrumentation for recording strong earthquakes in Mexico dates back to the 1960s, owing to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the large earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun their seismic instrumentation program, or it is still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in some cities, such as Puebla and Oaxaca, that were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we therefore present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft computing tool using a multi-layer feed-forward architecture, provide good estimations of the target parameters and also have good predictive capacity for strong ground motion duration and response spectra.
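The kind of feed-forward estimator described can be sketched on synthetic data; everything below (the features, the generating rule, and the random-feature training shortcut used in place of full backpropagation) is an invented stand-in for the actual accelerogram training set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set standing in for the Puebla/Oaxaca records:
# columns are magnitude, epicentral distance (km), focal depth (km), site class.
X = np.column_stack([rng.uniform(4, 8, 300), rng.uniform(20, 300, 300),
                     rng.uniform(5, 80, 300), rng.integers(0, 3, 300)])
# Assumed ground-truth rule generating significant duration (s).
y = 2.0 * X[:, 0] + 0.05 * X[:, 1] - 0.02 * X[:, 2] + 3.0 * X[:, 3]

# One-hidden-layer feed-forward net: fixed random hidden weights and a
# least-squares-trained linear readout (a quick stand-in for backprop).
Xn = (X - X.mean(0)) / X.std(0)
W1 = rng.normal(size=(4, 32)); b1 = rng.normal(size=32)
H = np.tanh(Xn @ W1 + b1)
W2, *_ = np.linalg.lstsq(np.column_stack([H, np.ones(len(H))]), y, rcond=None)

pred = np.column_stack([H, np.ones(len(H))]) @ W2
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(r2 > 0.9)  # the net recovers the synthetic duration rule -> True
```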
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
NASA Astrophysics Data System (ADS)
He, Anhua; Singh, Ramesh P.; Sun, Zhaohua; Ye, Qing; Zhao, Gang
2016-07-01
The earth tide, atmospheric pressure, precipitation and earthquakes all cause fluctuations in water well levels; earthquakes in particular strongly affect them, and anomalous co-seismic changes in groundwater levels have been observed. In this paper, we have used four different models, simple linear regression (SLR), multiple linear regression (MLR), principal component analysis (PCA) and partial least squares (PLS), to compute the atmospheric pressure and earth tidal effects on water level. Furthermore, we have used the Akaike information criterion (AIC) to study the performance of the various models. Based on the lowest AIC and sum of squares for error values, the best estimate of the effects of atmospheric pressure and earth tide on water level is obtained with the MLR model. However, the MLR model does not resolve the multicollinearity between inputs; as a result, its atmospheric pressure and earth tidal response coefficients fail to reflect the mechanisms associated with the groundwater level fluctuations. Among models that address the serious multicollinearity of the inputs, the PLS model shows the minimum AIC value, and its atmospheric pressure and earth tidal response coefficients agree closely with the observations. The atmospheric pressure and earth tidal response coefficients are found to be sensitive to the stress-strain state, based on the observed data for the period 1 April-8 June 2008 at the Chuan 03# well. The transient enhancement of porosity of the rock mass around the Chuan 03# well associated with the Wenchuan earthquake (Mw = 7.9, 12 May 2008), which returned to its original pre-seismic level after 13 days, indicates that the co-seismic sharp rise of the water well level was likely induced by static stress change rather than by the development of new fractures.
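The AIC-based model comparison works as follows; a sketch on synthetic well-level data (the coefficients are invented, and only an SLR-style and an MLR-style fit are compared for brevity):

```python
import numpy as np

def fit_aic(X, y):
    """Least-squares fit plus the Akaike information criterion.

    AIC = n * ln(SSE / n) + 2k, where k counts fitted coefficients;
    lower AIC indicates a better fit penalised for model complexity.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sse = float(resid @ resid)
    n, k = X.shape
    return beta, n * np.log(sse / n + 1e-30) + 2 * k

# Synthetic well level driven by barometric pressure and an earth-tide term.
rng = np.random.default_rng(1)
n = 200
pressure = rng.normal(size=n)
tide = np.sin(np.linspace(0, 20, n))
level = 1.5 - 0.4 * pressure + 0.2 * tide + 0.01 * rng.normal(size=n)

X_slr = np.column_stack([np.ones(n), pressure])        # pressure only
X_mlr = np.column_stack([np.ones(n), pressure, tide])  # pressure + tide
_, aic_slr = fit_aic(X_slr, level)
_, aic_mlr = fit_aic(X_mlr, level)
print(aic_mlr < aic_slr)  # the two-predictor model wins -> True
```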
NASA Astrophysics Data System (ADS)
Piana Agostinetti, Nicola; Giacomuzzi, Genny; Chiarabba, Claudio
2017-01-01
We present high-resolution elastic models and relocated seismicity of a very active segment of the Apennines normal faulting system, computed via transdimensional local earthquake tomography (trans-D LET). Trans-D LET, a fully nonlinear approach to seismic tomography, robustly constrains high-velocity anomalies and inversions of P wave velocity, i.e., decreases of VP with depth, without introducing bias due to, e.g., a starting model, and giving the possibility to investigate the relation between fault structure, seismicity, and fluids. Changes in seismicity rate and recurring seismic swarms are frequent in the Apennines extensional belt. Deep fluids, upwelling from the delaminating continental lithosphere, are thought to be responsible for seismicity clustering in the upper crust and lubrication of normal faults during swarms and large earthquakes. We focus on the tectonic role played by the Alto Tiberina low-angle normal fault (ATF), finding displacements across the fault consistent with long-term accommodation of deformation. Our results show that recent seismic swarms affecting the area occur within a 3 km thick, high VP/VS, densely cracked, and overpressurized evaporitic layer, composed of dolostones and anhydrites. A persistent low VP, low VP/VS volume, present on top of and along the ATF low-angle detachment, traces the location of mantle-derived CO2, the upward flux of which contributes to cracking within the evaporitic layer.
Impact induced response spectrum for the safety evaluation of the high flux isotope reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S.J.
1997-05-01
The dynamic impact to the nearby HFIR reactor vessel caused by a heavy load drop is analyzed. The impact calculation is carried out by applying the ABAQUS computer code. An impact-induced response spectrum is constructed in order to evaluate whether the HFIR vessel and the shutdown mechanism may be disabled. For the frequency range less than 10 Hz, the maximum spectral velocity of impact is approximately equal to that of the HFIR seismic design-basis spectrum. For the frequency range greater than 10 Hz, the impact-induced response spectrum is shown to have no effect on the control rod and the shutdown mechanism. An earlier seismic safety assessment for the HFIR control and shutdown mechanism was made by EQE. Based on the EQE modal solution combined with the impact-induced spectrum, it is concluded that the impact will not cause any damage to the shutdown mechanism, even while the reactor is in operation. The present method suggests a general approach for evaluating impact-induced damage to the reactor by applying the existing finite element modal solution that has been carried out for the seismic evaluation of the reactor.
Coal-seismic, desktop computer programs in BASIC; Part 7, Display and compute shear-pair seismograms
Hasbrouck, W.P.
1983-01-01
Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report discusses and presents five computer programs used to display and compute shear-pair seismograms.
Seismic activity prediction using computational intelligence techniques in northern Pakistan
NASA Astrophysics Data System (ADS)
Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat
2017-10-01
An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology rests on the interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% for northern Pakistan.
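McNemar's test, used above to compare classifiers, needs only the counts of discordant predictions; a minimal sketch (the counts are illustrative, not the study's):

```python
def mcnemar_statistic(b, c):
    """McNemar chi-square with continuity correction.

    b and c count the discordant pairs: cases one classifier got right
    and the other got wrong. Large values (> 3.84 at alpha = 0.05, 1 dof)
    indicate a statistically significant difference between classifiers.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical comparison: classifier A right / B wrong on 30 events,
# A wrong / B right on 10.
print(mcnemar_statistic(30, 10))         # -> 9.025
print(mcnemar_statistic(30, 10) > 3.84)  # significant at 5% -> True
```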
Seismic risk management solution for nuclear power plants
Coleman, Justin; Sabharwall, Piyush
2014-12-01
Nuclear power plants should safely operate during normal operations and maintain core-cooling capabilities during off-normal events, including external hazards (such as flooding and earthquakes). Management of external hazards to acceptable levels of risk is critical to maintaining nuclear facility and nuclear power plant safety. Seismic risk is determined by convolving the seismic hazard with seismic fragilities (capacity of systems, structures, and components). Seismic isolation (SI) is one protective measure showing promise to minimize seismic risk. Current SI designs (used in commercial industry) reduce horizontal earthquake loads and protect critical infrastructure from the potentially destructive effects of large earthquakes. The benefit of SI application in the nuclear industry is being recognized, and SI systems have been proposed in American Society of Civil Engineers Standard 4 (ASCE-4), to be released in the winter of 2014, for light water reactor facilities using commercially available technology. The intent of ASCE-4 is to provide criteria for seismic analysis of safety-related nuclear structures such that the responses to design basis seismic events, computed in accordance with this standard, will have a small likelihood of being exceeded. The U.S. nuclear industry has not implemented SI to date; a seismic isolation gap analysis meeting was convened on August 19, 2014, to determine progress on implementing SI in the U.S. nuclear industry. The meeting focused on the systems and components that could benefit from SI. As a result, this article highlights the gaps identified at this meeting.
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
NASA Astrophysics Data System (ADS)
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
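The structure-preserving idea behind the symplectic time stepping can be illustrated on a single harmonic oscillator (the paper applies it to the full acoustic wave equation; this toy version only shows why symplectic updates stay stable over long runs with sizable time steps):

```python
def symplectic_euler(p, q, omega2, dt, steps):
    """Symplectic (semi-implicit) Euler for q'' = -omega2 * q.

    Momentum is updated first, then position uses the *new* momentum;
    this staggering preserves a discrete energy, so long runs with large
    time steps stay bounded where explicit Euler blows up.
    """
    for _ in range(steps):
        p -= dt * omega2 * q
        q += dt * p
    return p, q

omega2, dt = 1.0, 0.1
p, q = symplectic_euler(0.0, 1.0, omega2, dt, 10_000)
energy = 0.5 * p * p + 0.5 * omega2 * q * q
print(abs(energy - 0.5) < 0.05)  # energy stays near its initial value -> True
```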
Real Time Earthquake Information System in Japan
NASA Astrophysics Data System (ADS)
Doi, K.; Kato, T.
2003-12-01
An early earthquake notification system in Japan had been developed by the Japan Meteorological Agency (JMA) as a governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, locating an earthquake and estimating its magnitude as quickly as possible. Years later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that concerned governmental organizations could decide whether it was necessary for them to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of occurrence of a large earthquake and major seismic intensities caused by the earthquake, in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: - An advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; - Data telemetry networks via landlines and partly via a satellite communication link; - Real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude, the database-driven method for quantitative tsunami estimation; and - Dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines.
JMA operationally monitors earthquake data and analyzes earthquake activities and tsunami occurrence round-the-clock on a real-time basis. In addition to the above, JMA has for a decade been developing a system of Nowcast Earthquake Information, which can notify its users of the occurrence of an earthquake prior to the arrival of strong ground motion. The Earthquake Research Institute, the University of Tokyo, is preparing a demonstrative experiment in collaboration with JMA for better utilization of Nowcast Earthquake Information to apply practical measures to reduce earthquake disasters caused by strong ground motion.
Coal-seismic computer programs in BASIC: Part I; Store, plot, and edit array data
Hasbrouck, Wilfred P.
1979-01-01
Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in an extended BASIC language specially augmented for acceptance by the Tektronix 4051 Graphic System. This report presents five computer programs used to store, plot, and edit array data for the line, cross, and triangle arrays commonly employed in our coal-seismic investigations. * Use of brand names in this report is for descriptive purposes only and does not constitute endorsement by the U.S. Geological Survey.
Impediments to predicting site response: Seismic property estimation and modeling simplifications
Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Guzina, B.B.
2009-01-01
We compare estimates of the empirical transfer function (ETF) to the plane SH-wave theoretical transfer function (TTF) within a laterally constant medium for invasive and noninvasive estimates of the seismic shear-wave slownesses at 13 Kiban-Kyoshin network stations throughout Japan. The difference between the ETF and either of the TTFs is substantially larger than the difference between the two TTFs computed from different estimates of the seismic properties. We show that the plane SH-wave TTF through a laterally homogeneous medium at vertical incidence inadequately models observed amplifications at most sites for both slowness estimates, obtained via downhole measurements and the spectral analysis of surface waves. Strategies to improve the predictions can be separated into two broad categories: improving the measurement of soil properties and improving the theory that maps the 1D soil profile onto spectral amplification. Using an example site where the 1D plane SH-wave formulation poorly predicts the ETF, we find a more satisfactory fit to the ETF by modeling the full wavefield and incorporating spatially correlated variability of the seismic properties. We conclude that our ability to model the observed site response transfer function is limited largely by the assumptions of the theoretical formulation rather than the uncertainty of the soil property estimates.
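The plane SH-wave TTF that the study evaluates can be written down directly for the simplest case of one uniform soil layer over elastic rock at vertical incidence (the layer properties below are illustrative, not values from the Kiban-Kyoshin sites):

```python
import numpy as np

def sh_transfer_function(f, h, vs_soil, rho_soil, vs_rock, rho_rock):
    """Plane SH-wave transfer function for one soil layer over elastic rock.

    Surface amplification at vertical incidence:
        |TF(f)| = 1 / |cos(k h) + 1j * alpha * sin(k h)|,
    with k = 2*pi*f / vs_soil and impedance ratio
    alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock).
    """
    k = 2 * np.pi * f / vs_soil
    alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock)
    return 1.0 / np.abs(np.cos(k * h) + 1j * alpha * np.sin(k * h))

# 30 m of soft soil (Vs = 200 m/s) over rock: fundamental resonance at
# f0 = Vs / (4h) = 1.67 Hz, with peak amplification 1/alpha.
f = np.linspace(0.1, 10, 1000)
amp = sh_transfer_function(f, 30.0, 200.0, 1800.0, 800.0, 2400.0)
print(round(float(f[np.argmax(amp)]), 2))  # -> 1.67
```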
Using block pulse functions for seismic vibration semi-active control of structures with MR dampers
NASA Astrophysics Data System (ADS)
Rahimi Gendeshmin, Saeed; Davarnia, Daniel
2018-03-01
This article applies the idea of block pulse (BP) functions to the semi-active control of structures. BP functions give effective tools to approximate complex problems. The applied control algorithm has a major effect on the performance of the controlled system and the requirements of the control devices. In control problems, it is important to devise an accurate analytical technique with low computational cost. BP functions have proved to be fundamental tools in approximation problems and have been applied in disparate areas over recent decades. This study focuses on the employment of BP functions in the control algorithm to reduce the computational cost. Magneto-rheological (MR) dampers are well-known semi-active devices that can be used to control the response of civil structures during earthquakes. For validation purposes, numerical simulations of a 5-story shear building frame with MR dampers are presented. The results of the suggested method were compared with results obtained by controlling the frame with the optimal control method based on linear quadratic regulator theory. The simulation results show that the suggested method can be helpful in reducing seismic structural responses. Moreover, the method has acceptable accuracy and agrees with the optimal control method at lower computational cost.
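The block pulse approximation itself is simple to sketch: expand a signal in piecewise-constant basis functions and check that refinement improves the fit (this is only the approximation step, not the article's full semi-active control algorithm):

```python
import numpy as np

def bp_coefficients(f, T, m):
    """Block pulse expansion coefficients on [0, T] with m blocks.

    The i-th coefficient is the average of f over the i-th subinterval,
    giving the piecewise-constant least-squares approximation of f.
    """
    edges = np.linspace(0.0, T, m + 1)
    # average via fine midpoint sampling inside each block
    return np.array([f(np.linspace(a, b, 50)).mean()
                     for a, b in zip(edges[:-1], edges[1:])])

def bp_eval(coeffs, T, t):
    """Evaluate the block pulse expansion at times t in [0, T)."""
    m = len(coeffs)
    idx = np.minimum((t / T * m).astype(int), m - 1)
    return coeffs[idx]

f = np.sin
T = 2 * np.pi
t = np.linspace(0, T, 1000, endpoint=False)
err8 = np.abs(bp_eval(bp_coefficients(f, T, 8), T, t) - f(t)).max()
err64 = np.abs(bp_eval(bp_coefficients(f, T, 64), T, t) - f(t)).max()
print(err64 < err8)  # refinement reduces the approximation error -> True
```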
Seismic isolation of nuclear power plants using sliding isolation bearings
NASA Astrophysics Data System (ADS)
Kumar, Manish
Nuclear power plants (NPPs) are designed for earthquake shaking with very long return periods. Seismic isolation is a viable strategy to protect NPPs from extreme earthquake shaking because it filters a significant fraction of earthquake input energy. This study addresses the seismic isolation of NPPs using sliding bearings, with a focus on the single concave Friction Pendulum(TM) (FP) bearing. Friction at the sliding surface of an FP bearing changes continuously during an earthquake as a function of sliding velocity, axial pressure and temperature at the sliding surface. The temperature at the sliding surface, in turn, is a function of the histories of the coefficient of friction, sliding velocity and axial pressure, and the travel path of the slider. A simple model to describe the complex interdependence of the coefficient of friction, axial pressure, sliding velocity and temperature at the sliding surface is proposed, and then verified and validated. Seismic hazard for a seismically isolated nuclear power plant is defined in the United States using a uniform hazard response spectrum (UHRS) at mean annual frequencies of exceedance (MAFE) of 10^-4 and 10^-5. A key design parameter is the clearance to the hard stop (CHS), which is influenced substantially by the definition of the seismic hazard. Four alternate representations of seismic hazard are studied, which incorporate different variabilities and uncertainties. Response-history analyses performed on single-FP-bearing isolation systems using ground motions consistent with the four representations at the two shaking levels indicate that the CHS is influenced primarily by whether the observed difference between the two horizontal components of ground motion in a given set is accounted for. The UHRS at the MAFE of 10^-4 is increased by a design factor (≥ 1) for conventional (fixed-base) nuclear structures to achieve a target annual frequency of unacceptable performance. 
Risk oriented calculations are performed for eight sites across the United States to show that the factor is equal to 1.0 for seismically isolated NPPs, if the risk is dominated by horizontal earthquake shaking. Response-history analyses using different models of seismically isolated NPPs are performed to understand the importance of the choice of friction model, model complexity and vertical ground motion for calculating horizontal displacement response across a wide range of sites and shaking intensities. A friction model for the single concave FP bearing should address heating. The pressure- and velocity-dependencies were not important for the models and sites studied. Isolation-system displacements can be computed using a macro model comprising a single FP bearing.
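A sketch of a velocity- and temperature-dependent friction law of the general kind described above, using the common exponential velocity dependence; the particular coefficients and the linear temperature-degradation term are illustrative assumptions, not the dissertation's calibrated model:

```python
import numpy as np

def fp_friction(v, temp, f_min=0.04, f_max=0.10, a=50.0, t_ref=20.0, k_t=0.002):
    """Illustrative sliding friction for an FP bearing.

    Velocity dependence follows the widely used exponential law
        mu_v = f_max - (f_max - f_min) * exp(-a * |v|),
    and the temperature factor k_t (per deg C above t_ref) is a stand-in
    for frictional heating effects; both coefficients are assumptions.
    """
    mu_v = f_max - (f_max - f_min) * np.exp(-a * np.abs(v))
    return mu_v * max(0.5, 1.0 - k_t * (temp - t_ref))

print(round(fp_friction(0.0, 20.0), 3))                  # -> 0.04 (slow sliding)
print(fp_friction(1.0, 20.0) > fp_friction(0.0, 20.0))   # faster sliding -> higher mu
print(fp_friction(1.0, 200.0) < fp_friction(1.0, 20.0))  # heating lowers mu
```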
Seismic design parameters - A user guide
Leyendecker, E.V.; Frankel, A.D.; Rukstales, K.S.
2001-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings (1997 NEHRP Provisions) introduced a seismic design procedure that is based on the explicit use of spectral response acceleration rather than the traditional peak ground acceleration and/or peak ground velocity or zone factors. The spectral response accelerations are obtained from spectral response acceleration maps accompanying the report. Maps are available for the United States and a number of U.S. territories. Since 1997, additional codes and standards have also adopted seismic design approaches based on the same procedure used in the NEHRP Provisions and the accompanying maps. The design documents using the 1997 NEHRP Provisions procedure may be divided into three categories: (1) Design of New Construction, (2) Design and Evaluation of Existing Construction, and (3) Design of Residential Construction. A CD-ROM has been prepared for use in conjunction with the design documents in each of these three categories. The spectral accelerations obtained using the software on the CD are the same as those that would be obtained by using the maps accompanying the design documents. The software has been prepared to operate on a personal computer using a Windows (Microsoft Corporation) operating environment and a point-and-click type of interface. The user can obtain the spectral acceleration values that would be obtained by use of the maps accompanying the design documents, include site factors appropriate for the Site Class provided by the user, calculate a response spectrum that includes the site factor, and plot a response spectrum. Sites may be located by providing the latitude-longitude or zip code for all areas covered by the maps. All of the maps used in the various documents are also included on the CD-ROM.
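The spectrum built from the mapped values follows the 1997 NEHRP Provisions' general shape; a sketch, where the mapped accelerations and site coefficients are example inputs rather than values read from the CD-ROM:

```python
def design_spectrum(T, Ss, S1, Fa, Fv):
    """Design response spectrum per the 1997 NEHRP Provisions' general shape.

    Ss, S1 are the mapped short-period and 1-s spectral accelerations (g);
    Fa, Fv are the site coefficients for the user's Site Class.
    """
    SDS = (2.0 / 3.0) * Fa * Ss  # design short-period acceleration
    SD1 = (2.0 / 3.0) * Fv * S1  # design 1-s acceleration
    Ts = SD1 / SDS
    T0 = 0.2 * Ts
    if T < T0:
        return SDS * (0.4 + 0.6 * T / T0)  # rising branch
    if T <= Ts:
        return SDS                         # flat plateau
    return SD1 / T                         # descending 1/T branch

# Example: Ss = 1.5 g, S1 = 0.6 g with site coefficients Fa = 1.0, Fv = 1.5.
print(round(design_spectrum(0.0, 1.5, 0.6, 1.0, 1.5), 2))  # -> 0.4 (0.4 * SDS)
print(round(design_spectrum(0.3, 1.5, 0.6, 1.0, 1.5), 2))  # -> 1.0 (plateau SDS)
print(round(design_spectrum(2.0, 1.5, 0.6, 1.0, 1.5), 2))  # -> 0.3 (SD1 / T)
```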
NASA Astrophysics Data System (ADS)
Apostol, Bogdan Felix; Florin Balan, Stefan; Ionescu, Constantin
2017-12-01
The effects of the earthquakes on buildings and the concept of seismic base isolation are investigated by using the model of the vibrating bar embedded at one end. The normal modes and the eigenfrequencies of the bar are highlighted and the amplification of the response due to the excitation of the normal modes (eigenmodes) is computed. The effect is much enhanced at resonance, for oscillating shocks which contain eigenfrequencies of the bar. Also, the response of two linearly joined bars with one end embedded is calculated. It is shown that for very different elastic properties the eigenfrequencies are due mainly to the “softer” bar. The effect of the base isolation in seismic structural engineering is assessed by formulating the model of coupled harmonic oscillators, as a simplified model for the structure building-foundation viewed as two coupled vibrating bars. The coupling decreases the lower eigenfrequencies of the structure and increases the higher ones. Similar amplification factors are derived for coupled oscillators at resonance with an oscillating shock.
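The frequency-splitting effect of coupling can be checked with a two-oscillator sketch; the symmetric off-diagonal coupling term used here is a generic illustrative form, not the paper's exact building-foundation model:

```python
import numpy as np

def coupled_frequencies(w1, w2, c):
    """Eigenfrequencies of two unit-mass oscillators with a symmetric
    coupling term c in the stiffness matrix (an illustrative form):
        K = [[w1**2, -c], [-c, w2**2]].
    Eigenvalue repulsion pushes the lower frequency down and the higher up.
    """
    K = np.array([[w1**2, -c], [-c, w2**2]])
    return np.sqrt(np.linalg.eigvalsh(K))

w1, w2 = 1.0, 1.5
lo, hi = coupled_frequencies(w1, w2, 0.5)
print(lo < w1 < w2 < hi)  # lower decreases, higher increases -> True
```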
NASA Astrophysics Data System (ADS)
Maggi, C.; Frepoli, A.; Cimini, G. B.; Console, R.; Chiappini, M.
2009-01-01
We analyzed the instrumental seismicity of Southern Italy in the area including the Lucanian Apennines and Bradano foredeep, making use of the most recent seismological data base available so far. P- and S-wave arrival times, recorded by the Italian National Seismic Network (RSNC) operated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), were re-picked along with those of the SAPTEX temporary array deployed in the region in the period 2001-2004. For some events located in the upper Val d'Agri, we also used data from the Eni-Agip oil company seismic network. We examined the seismicity that occurred during the period between 2001 and 2006, considering 514 events with magnitudes M ≥ 2.0. We computed the VP/VS ratio, obtaining a value of 1.83, and we carried out an analysis for the one-dimensional (1D) velocity model that approximates the seismic structure of the study area. Earthquakes were relocated and, for well-recorded events, we also computed 108 fault plane solutions. Finally, using the 58 most constrained solutions, we computed the regional stress field in the study area. The earthquake distribution shows three main seismic regions: the westernmost (Lucanian Apennines), characterized by high background seismicity, mostly with shallow hypocenters; the easternmost, below the Bradano foredeep and the Murge, with deeper and more scattered seismicity; and finally the more isolated and sparse seismicity localized in the Sila Range and in the offshore area along the northeastern Calabrian coast. The focal mechanisms computed in this work are in large part normal and strike-slip solutions, and their tensional axes (T-axes) have a generalized NE-SW orientation. The denser station coverage allowed us to improve hypocenter determinations compared to those obtained using only RSNC data, for a better characterization of the crustal and subcrustal seismicity in the study area.
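A VP/VS estimate of this kind is conventionally obtained from a Wadati diagram; a minimal sketch with synthetic arrival times (not the RSNC/SAPTEX picks):

```python
import numpy as np

def vp_vs_wadati(tp, ts):
    """VP/VS from a Wadati diagram.

    For a common origin time, S-minus-P times grow linearly with P travel
    time with slope (VP/VS - 1); a least-squares fit recovers the ratio.
    """
    slope, _ = np.polyfit(tp, ts - tp, 1)
    return 1.0 + slope

# Synthetic P and S travel times for VP/VS = 1.83 over a range of distances.
tp = np.linspace(2.0, 20.0, 30)
ts = 1.83 * tp
print(round(vp_vs_wadati(tp, ts), 2))  # -> 1.83
```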
Pseudospectral reverse time migration based on wavefield decomposition
NASA Astrophysics Data System (ADS)
Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang
2017-05-01
The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using the imaging condition based on the wavefield decomposition technique. The computation complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transformation in the z-axis may be derived directly as one of the intermediate results of the spatial derivative calculation, the computation load of the wavefield decomposition can be reduced, improving the computation efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz for avoiding spatial numerical dispersion, when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
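The pseudospectral derivative that underpins the method's resolution advantage is plain FFT differentiation; a minimal sketch:

```python
import numpy as np

def fourier_derivative(f, dx):
    """Spectral spatial derivative via FFT.

    Exact (to machine precision) for band-limited periodic signals, which
    is the accuracy advantage of the pseudospectral method over finite
    differences and the reason it tolerates higher wavelet peak frequencies
    before spatial dispersion appears.
    """
    n = len(f)
    k = 2j * np.pi * np.fft.fftfreq(n, d=dx)  # wavenumbers
    return np.fft.ifft(k * np.fft.fft(f)).real

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
err = np.abs(fourier_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print(err < 1e-10)  # spectral accuracy -> True
```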
NASA Astrophysics Data System (ADS)
Leslie, A.; Gorman, A. R.
2004-12-01
The interpretation of seismic reflection data in non-sedimentary environments is problematic. In the Macraes Flat region near Dunedin (South Island, New Zealand), ongoing mining of mineralized schist has prompted the development of a seismic interpretation scheme that is capable of imaging a gold-bearing shear zone and associated mineralized structures accurately to the meter scale. The anisotropic and complex structural nature of this geological environment necessitates a cost-effective computer-based modeling technique that can provide information on the physical characteristics of the schist. Such a method has been tested on seismic data acquired in 1993 over a region that has since been excavated and logged. Correlation to measured structural data permits a direct comparison between the seismic data and the actual geology. Synthetic modeling utilizes a 2D visco-elastic finite difference routine to constrain the interpretation of observed seismic characteristics, including the velocity, anisotropy, and contrast, of the shear zone structures. Iterative refinements of the model result in a more representative synthetic model that most closely matches the seismic response. The comparison between the actual and synthetic seismic sections provides promising results that will be tested by new data acquisition over the summer of 2004/2005 to identify structures and zones of potential mineralization. As a downstream benefit, this research could also contribute to earthquake risk assessment analyses at active faults with similar characteristics.
A FORTRAN program for calculating nonlinear seismic ground response
Joyner, William B.
1977-01-01
The program described here was designed for calculating the nonlinear seismic response of a system of horizontal soil layers underlain by a semi-infinite elastic medium representing bedrock. Excitation is a vertically incident shear wave in the underlying medium. The nonlinear hysteretic behavior of the soil is represented by a model consisting of simple linear springs and Coulomb friction elements arranged as shown. A boundary condition is used which takes account of finite rigidity in the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. A brief program description is provided here with instructions for preparing the input and a source listing. A more detailed discussion of the method is presented elsewhere as is the description of a different program employing implicit integration.
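The spring-plus-Coulomb-friction building block of the soil model can be sketched as a single elastic-perfectly-plastic element (the parameters are illustrative; the program assembles many such elements in parallel to shape the hysteresis):

```python
def coulomb_spring_stress(strains, k=1.0, tau_y=0.5):
    """Stress history of a linear spring in series with a Coulomb friction
    element (elastic-perfectly-plastic): the basic unit of the hysteretic
    soil model described above.
    """
    slip = 0.0
    out = []
    for eps in strains:
        stress = k * (eps - slip)
        if stress > tau_y:        # slider yields in the positive direction
            slip = eps - tau_y / k
            stress = tau_y
        elif stress < -tau_y:     # slider yields in the negative direction
            slip = eps + tau_y / k
            stress = -tau_y
        out.append(stress)
    return out

# Load beyond yield, then unload: stress caps at tau_y, unloads elastically.
path = [0.0, 0.4, 0.8, 1.2, 0.8, 0.4]
print([round(s, 3) for s in coulomb_spring_stress(path)])
# -> [0.0, 0.4, 0.5, 0.5, 0.1, -0.3]
```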
Time Analysis of Building Dynamic Response Under Seismic Action. Part 2: Example of Calculation
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The second part of the article illustrates the use of the time analysis method (TAM) through the calculation of a 3-storey building whose design dynamic model (DDM) is adopted in the form of a flat vertical cantilever rod with 3 horizontal degrees of freedom associated with the floor and roof levels. The parameters of natural oscillations (frequencies and modes) are determined, together with the elastic forced oscillations of the building's DDM, presented as oscillograms of the response parameters over the time interval t ∈ [0; 131.25] s. The results are analyzed on the basis of the computed residual of the DDM equation of motion and by comparison with results obtained from the numerical approach (FEM) and from the normative method set out in SP 14.13330.2014 "Construction in Seismic Regions". The analysis testifies to the correctness of the computational model and the high accuracy of the results obtained. In conclusion, it is noted that the use of the TAM in design will improve the strength of buildings and structures subjected to seismic influences.
Stability assessment of structures under earthquake hazard through GRID technology
NASA Astrophysics Data System (ADS)
Prieto Castrillo, F.; Boton Fernandez, M.
2009-04-01
This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The metadata of these records are also stored in the GRID federated database. This metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. 
Then, the corresponding Metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically-based algorithm from the available Metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the on-going initiatives to build a European earthquake record database. In this context, Grid enables collaboration analysis over shared seismic data and results among different institutions.
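The per-accelerogram job described above reduces, in the simplest case, to integrating an oscillator ODE and reporting the peak displacement. The sketch below assumes a single-degree-of-freedom model with illustrative parameters; the framework's actual ODE set and its Java encapsulation are not reproduced here.

```python
import numpy as np

def peak_displacement(ag, dt, freq_hz=1.0, damping=0.05):
    """Integrate u'' + 2*z*w*u' + w^2*u = -ag(t) with a semi-implicit
    Euler scheme and return the maximum absolute displacement.
    (A production code would use e.g. Newmark-beta.)"""
    w = 2.0 * np.pi * freq_hz
    u = np.zeros(len(ag))
    v = 0.0
    for i in range(1, len(ag)):
        a = -ag[i - 1] - 2.0 * damping * w * v - w**2 * u[i - 1]
        v += a * dt
        u[i] = u[i - 1] + v * dt
    return np.max(np.abs(u))

# synthetic accelerogram standing in for a record fetched from the SE
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = np.random.default_rng(4).standard_normal(t.size) * np.exp(-0.2 * t)
print(peak_displacement(ag, dt))   # the value stored with the job's metadata
```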
NASA Astrophysics Data System (ADS)
Nowacki, A.; Walker, A. M.; Wookey, J.; Kendall, J.
2012-12-01
The core-mantle boundary (CMB) region is the site of the largest change in properties in the Earth. Moreover, the lowermost mantle above it (known as D″) shows the largest lateral variations in seismic velocity and in strength of seismic anisotropy below the upper mantle. It is therefore vital to be able to accurately forward model candidate structures in the lowermost mantle with realistic sensitivity to structure and at the same frequencies at which observations are made. We use the spectral finite-element method to produce synthetic seismograms of ScS waves traversing a model of D″ anisotropy derived from mineralogical texture calculations and show that the seismic discontinuity atop the lowermost mantle varies in character laterally purely as a function of the strength and orientation of anisotropy. The lowermost mantle is widely anisotropic, as shown by numerous shear wave splitting studies using waves of dominant frequency ~0.2-1 Hz. Whilst methods exist to model the finite-frequency seismic response of the lowermost mantle, most make the problem computationally efficient by imposing a certain symmetry on the problem, and of those which do not, almost none allow for completely general elasticity. Where low frequencies are simulated to reduce computational cost, it is uncertain whether waves of that frequency have comparable sensitivity to D″ structure as those observed at shorter periods. Currently, therefore, these computational limitations preclude a full interpretation of our observations. We present recent developments in taking a general approach to forward-modelling waves in D″. We use a modified version of SPECFEM3D_GLOBE, which uses the spectral finite-element method to model seismic wave propagation in a fully generally-elastic (i.e., 3D-varying, arbitrarily anisotropic) Earth. The calculations are computationally challenging: to approach the frequency of the observations, up to 10,000 processor cores and up to 2 TB of memory are needed. 
The synthetic seismograms can be directly compared to observations of shear wave splitting or other seismic phenomena and utilise all information from the waveform to accurately interpret D″ structures and elasticity. Using a recent model of mineralogical texture in the lowermost mantle (imposing no symmetry on the type of anisotropy), we model ScS waves traversing D″ in various regions. In this case, no lateral variations in average isotropic velocity exist, though the orientation and strength of anisotropy change over a range of length scales (spherical harmonic degrees ≤128). We note a change in the amplitude (sometimes zero) and polarity (positive to negative) of arrivals which are reflected from the top of D″ (an arrival known as SdS) at ~300 km above the core-mantle boundary, even though no lateral variation exists between the isotropic overlying lower mantle and the anisotropic lowermost mantle. Supported by previous studies, this shows that changes in anisotropy alone could be responsible for observed variations in SdS across the globe. Our approach can potentially be used to further model general elasticity at short wavelengths in any region of the Earth.
Method of migrating seismic records
Ober, Curtis C.; Romero, Louis A.; Ghiglia, Dennis C.
2000-01-01
The present invention provides a method of migrating seismic records that retains the information in the seismic records and allows migration with significant reductions in computing cost. The present invention comprises phase encoding seismic records and combining the encoded seismic records before migration. Phase encoding can minimize the effect of unwanted cross terms while still allowing significant reductions in the cost to migrate a number of seismic records.
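The encoding-and-combining step can be sketched as follows (a schematic illustration with one random phase per record, not necessarily the patent's exact encoding): migrating the single composite record approximates the stack of the individual migrations, while the random phases make the unwanted cross terms tend to cancel on average.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_encode(records):
    # multiply each record by a random unit-magnitude phase factor,
    # then sum into one composite record for a single migration pass
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=len(records)))
    return sum(p * r for p, r in zip(phases, records))

nt = 256
t = np.linspace(0.0, 1.0, nt)
records = [np.sin(2.0 * np.pi * 5.0 * t + k) for k in range(4)]
composite = phase_encode(records)
print(composite.shape, composite.dtype)   # one complex trace instead of four
```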
NASA Astrophysics Data System (ADS)
Masson, Y. J.; Pride, S. R.
2007-03-01
Seismic attenuation and dispersion are numerically determined for computer-generated porous materials that contain arbitrary amounts of mesoscopic-scale heterogeneity in the porous continuum properties. The local equations used to determine the poroelastic response within such materials are those of Biot (1962). Upon applying a step change in stress to samples containing mesoscopic-scale heterogeneity, the poroelastic response is determined using finite difference modeling, and the average strain throughout the sample computed, along with the effective complex and frequency-dependent elastic moduli of the sample. The ratio of the imaginary and real parts of these moduli determines the attenuation as a function of frequency associated with the modes of applied stress (pure compression and pure shear). By having a wide range of heterogeneity present, there exists a wide range of relaxation frequencies in the response with the result that the curves of attenuation as a function of frequency are broader than in existing analytical theories based on a single relaxation frequency. Analytical explanations are given for the various high-frequency and low-frequency asymptotic behavior observed in the numerical simulations. It is also shown that the overall level of attenuation of a given sample is proportional to the square of the incompressibility contrasts locally present.
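The attenuation recipe in the abstract, taking the ratio of the imaginary and real parts of the complex modulus, can be illustrated on the single-relaxation (Zener) model against which the broadband numerical curves are contrasted. Parameter values below are illustrative, not from the paper.

```python
import numpy as np

M_relaxed, M_unrelaxed = 9.0e9, 10.0e9      # relaxed/unrelaxed moduli (Pa)
tau = 1.0e-3                                 # relaxation time (s)
w = 2.0 * np.pi * np.logspace(0, 6, 601)     # angular frequencies (rad/s)

# complex, frequency-dependent modulus of a single-relaxation solid
M = M_relaxed + (M_unrelaxed - M_relaxed) * (1j * w * tau) / (1.0 + 1j * w * tau)

inv_Q = M.imag / M.real                      # attenuation 1/Q(w)
w_peak = w[np.argmax(inv_Q)]
print(w_peak * tau, inv_Q.max())             # peak near w*tau ~ 1
```

A sample with a broad distribution of mesoscale heterogeneity superposes many such relaxation times, which is why its attenuation curve is broader than this single-peak model.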
Forecasting volcanic unrest using seismicity: The good, the bad and the time consuming
NASA Astrophysics Data System (ADS)
Salvage, Rebecca; Neuberg, Jurgen W.
2013-04-01
Volcanic eruptions are inherently unpredictable, with scientists struggling to forecast the type and timing of events, particularly in real-time scenarios. Current understanding suggests that statistical patterns within precursory datasets of seismicity prior to eruptive events could potentially be used as real-time forecasting tools. They allow us to determine times of clear deviation in the data, which might be indicative of volcanic unrest. The identification of low-frequency seismic swarms and the acceleration of this seismicity prior to observed volcanic unrest may be key in developing forecasting tools. The development of real-time forecasting models that can be implemented at volcano observatories is of particular importance, since the identification of early warning signals allows danger to the proximal population to be minimized. We concentrate on understanding the significance and development of these seismic swarms as unrest develops at the volcano. In particular, analysis of accelerations in event rate, amplitude and energy release rate of seismicity prior to eruption suggests that these are important indicators of developing unrest. Real-time analysis of these parameters simultaneously allows possible improvements to forecasting models. Although more time-consuming and computationally intensive, cross-correlation techniques applied to continuous seismicity prior to volcanic unrest allow all significant seismic events to be analysed, rather than only those which can be detected by an automated identification system. This may allow a more accurate forecast, since all precursory seismicity can be taken into account. In addition, the classification of seismic events based on spectral characteristics may allow us to isolate individual types of signals which are responsible for certain types of unrest. 
In this way, we may be able to better forecast the type of eruption that may ensue, or at least some of its prevailing characteristics.
A Novel Approach to Constrain Near-Surface Seismic Wave Speed Based on Polarization Analysis
NASA Astrophysics Data System (ADS)
Park, S.; Ishii, M.
2016-12-01
Understanding the seismic responses of cities around the world is essential for the risk assessment of earthquake hazards. One of the important parameters is the elastic structure of the sites, in particular the near-surface seismic wave speed, which influences the level of ground shaking. Many methods have been developed to constrain the elastic structure of populated sites or urban basins, and here we introduce a new technique based on analyzing the polarization content, or three-dimensional particle motion, of seismic phases arriving at the sites. Polarization analysis of three-component seismic data was widely used up to about two decades ago to detect signals and identify different types of seismic arrivals. Today, we have a good understanding of the expected polarization direction and ray parameter for seismic wave arrivals, calculated from a reference seismic model. The polarization of a given phase is also strongly sensitive to the elastic wave speed immediately beneath the station. This allows us to compare the observed and predicted polarization directions of incoming body waves and infer the near-surface wave speed. This approach is applied to the High-Sensitivity Seismograph Network in Japan, where we benchmark the results against the well-log data that are available at most stations. There is good agreement between our estimates of seismic wave speeds and those from well logs, confirming the efficacy of the new method. In most urban environments, where well logging is not a practical option for measuring seismic wave speeds, this method can provide a reliable, non-invasive, and computationally inexpensive estimate of near-surface elastic properties.
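The core of the polarization measurement can be sketched with synthetic data (the values below are assumed, not records from the Japanese network): the dominant eigenvector of the three-component covariance matrix gives the particle-motion direction, whose apparent incidence angle is then compared with a reference-model prediction to infer the near-surface wave speed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
incidence = np.deg2rad(25.0)                # true apparent incidence angle
signal = np.sin(np.linspace(0.0, 6.0 * np.pi, n))

# synthetic P arrival: rectilinear motion in the vertical-radial plane
Z = np.cos(incidence) * signal + 0.02 * rng.standard_normal(n)
R = np.sin(incidence) * signal + 0.02 * rng.standard_normal(n)
T = 0.02 * rng.standard_normal(n)

cov = np.cov(np.vstack([Z, R, T]))          # 3x3 covariance of components
w, v = np.linalg.eigh(cov)
p = v[:, np.argmax(w)]                      # dominant polarization vector
est = np.degrees(np.arctan2(abs(p[1]), abs(p[0])))
print(est)                                  # close to the 25-degree input
```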
NASA Astrophysics Data System (ADS)
Contreras Zazueta, M. A.; Perton, M.; Sanchez-Sesma, F. J.; Sánchez-Alvaro, E.
2013-12-01
The seismic hazard assessment of extended developments, such as a dam, a bridge or a pipeline, requires strong ground motion simulation that takes into account the effects of surface geology. In many cases the incoming wave field can be obtained from attenuation relations or from simulations for layered media using the Discrete Wave Number (DWN) method. Sometimes the seismic source must be included in the simulations as well. A number of methods have been developed to solve these problems. Among them, the Finite Element and Finite Difference Methods (FEM and FDM) are generally preferred because of their ease of use. Nevertheless, the analysis of realistic dynamic loading induced by earthquakes requires a finer mesh of the entire domain to capture high frequencies, which may imply a high computational cost. The Indirect Boundary Element Method (IBEM) can also be employed. Here it is used to study the response of a site to historical seismic activity. This method is particularly suited to model wave propagation through wide areas as it requires only the meshing of boundaries. Moreover, it is well suited to finely represent the diffraction that can occur on a fault. However, the IBEM has been applied mainly to simple geometrical configurations. In this communication significant refinements of the formulation are presented. Using the IBEM we can simulate wave propagation in complex geometrical configurations such as a stratified medium crossed by thin faults or having a complex topography. Two main developments are described here: one integrates the DWN method inside the IBEM in order to represent the Green's functions of stratified media with relatively low computational cost, but assuming unbounded parallel flat layers; the other extends the IBEM to deal with multiple regions in contact, which allows more versatility at a computational cost higher than the first approach but still lower than that of an equivalent FEM formulation. 
The two approaches are fully described here and their results compared within the hazard studies of CFE-Las Cruces, Nayarit, Mexico, hydroelectrical project. ACKNOWLEDGEMENTS. This study is partially supported by DGAPA-UNAM under Project IN104712.
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
Modernization of the Slovenian National Seismic Network
NASA Astrophysics Data System (ADS)
Vidrih, R.; Godec, M.; Gosar, A.; Sincic, P.; Tasic, I.; Zivcic, M.
2003-04-01
The Environmental Agency of the Republic of Slovenia, Seismology Office, is responsible for fast and reliable information about earthquakes originating in the area of Slovenia and nearby. In the year 2000 the project Modernization of the Slovenian National Seismic Network started. The purpose of the modernized seismic network is to enable fast and accurate automatic location of earthquakes, to determine earthquake parameters and to collect data on local, regional and global earthquakes. The modernized network will be finished in the year 2004 and will consist of 25 seismic station subsystems based on Q730 remote broadband data loggers, transmitting data in real time to the Data Center in Ljubljana, where the Seismology Office is located. The remote broadband station subsystems include 16 surface broadband seismometers CMG-40T, 5 broadband seismometers CMG-40T with strong-motion accelerographs EpiSensor, and 4 borehole broadband seismometers CMG-40T, all with accurate timing provided by GPS receivers. The seismic network will cover the entire Slovenian territory, an area of 20,256 km2. The network is planned so that more seismic stations are located around bigger urban centres and in regions with greater vulnerability (NW Slovenia, Krsko-Brezice region). By the end of the year 2002, three old seismic stations had been modernized and ten new seismic stations built. All seismic stations transmit data to UNIX-based computers running Antelope system software. The data are transmitted in real time using TCP/IP protocols over the Government Wide Area Network. Real-time data are also exchanged with seismic networks in the neighbouring countries, where data are collected from the seismic stations close to the Slovenian border. A typical seismic station consists of the seismic shaft with the sensor and the data acquisition system, and the service shaft with communication equipment (modem, router) and a power supply with a battery box. 
The battery box provides energy in case of mains failure. The data acquisition systems record continuous time series sampled at 200 sps, 20 sps and 1 sps.
Seismic instrumentation plan for the Hawaiian Volcano Observatory
Thelen, Weston A.
2014-01-01
The installation of new seismic stations is only the first part of building a volcanic early warning capability for seismicity in the State of Hawaii. Additional personnel will likely be required to study the volcanic processes at work under each volcano, analyze the current seismic activity at a level sufficient for early warning, build new tools for monitoring, maintain seismic computing resources, and maintain the new seismic stations.
Seismic hazard study for selected sites in New Mexico and Nevada
NASA Astrophysics Data System (ADS)
Johnston, J. C.
1983-12-01
Seismic hazard evaluations were conducted for specific sites in New Mexico and Nevada. For New Mexico, a model of seismicity was developed from historical accounts of medium to large shocks and the current microactivity record from local networks. Ninety percent confidence levels at Albuquerque and Roswell were computed to be 56 gals for a 10-year period and 77 gals for a 20-year period. Values of ground motion for Clovis were below these values. Peak velocity and displacement were also computed for each site. Deterministic spectra based on the estimated maximum credible earthquake for the zones which the sites occupy were also computed. For the sites in Nevada, the regionalizations used in Battis (1982) for the uniform seismicity model were slightly modified. For 10- and 20-year time periods, peak acceleration values for Indian Springs were computed to be 94 gals and 123 gals and for Hawthorne 206 gals and 268 gals. Deterministic spectra were also computed. The input parameters were well determined for the analysis for the Nevada sites because of the abundance of data. The values computed for New Mexico, however, are likely upper limits. As more data are collected from the area of the Rio Grande rift zone, the pattern of seismicity will become better understood. At this time a more detailed, and thus more accurate, model may emerge.
Methods to enhance seismic faults and construct fault surfaces
NASA Astrophysics Data System (ADS)
Wu, Xinming; Zhu, Zhihui
2017-10-01
Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions within a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and to stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering response at every sample in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the corresponding strike and dip angles that yield the maximum responses are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, so that a 2D smoothing filter yields a maximum response when the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, with each sample oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing or noisy samples, we further propose a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
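The scan over filter orientations can be sketched in 2D (dip only), keeping at each pixel the maximum smoothing response and the angle that produced it. The paper's filters are 2D exponential filters scanned over both strike and dip in 3D; this numpy sketch substitutes simple line averages on a synthetic attribute image.

```python
import numpy as np

def oriented_max(attr, angles_deg, half_len=4):
    """Smooth attr along each candidate dip direction; return the
    per-pixel maximum response and the dip angle that produced it."""
    best = np.full(attr.shape, -np.inf)
    best_angle = np.zeros(attr.shape)
    for ang in angles_deg:
        d = np.deg2rad(ang)
        resp = np.zeros_like(attr)
        for k in range(-half_len, half_len + 1):
            di = int(round(k * np.cos(d)))
            dj = int(round(k * np.sin(d)))
            # average along a short line segment oriented at `ang`
            resp += np.roll(np.roll(attr, di, axis=0), dj, axis=1)
        resp /= (2 * half_len + 1)
        mask = resp > best
        best[mask] = resp[mask]
        best_angle[mask] = ang
    return best, best_angle

# synthetic attribute image: a 45-degree "fault" line plus weak noise
n = 64
img = 0.1 * np.random.default_rng(2).random((n, n))
idx = np.arange(10, 50)
img[idx, idx] = 1.0
best, ang = oriented_max(img, angles_deg=range(0, 180, 15))
print(ang[30, 30])   # the on-fault pixel prefers the 45-degree filter
```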
Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.
1983-01-01
The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.
Post-seismic relaxation theory on laterally heterogeneous viscoelastic model
Pollitz, F.F.
2003-01-01
Investigation was carried out into the problem of relaxation of a laterally heterogeneous viscoelastic Earth following an impulsive moment release event. The formal solution utilizes a semi-analytic solution for post-seismic deformation on a laterally homogeneous Earth constructed from viscoelastic normal modes, followed by application of mode coupling theory to derive the response on the aspherical Earth. The solution is constructed in the Laplace transform domain using the correspondence principle and is valid for any linear constitutive relationship between stress and strain. The specific implementation described in this paper is a semi-analytic discretization method which assumes isotropic elastic structure and a Maxwell constitutive relation. It accounts for viscoelastic-gravitational coupling under lateral variations in elastic parameters and viscosity. For a given viscoelastic structure and minimum wavelength scale, the computational effort involved with the numerical algorithm is proportional to the volume of the laterally heterogeneous region. Examples are presented of the calculation of post-seismic relaxation with a shallow, laterally heterogeneous volume following synthetic impulsive seismic events, and they illustrate the potentially large effect of regional 3-D heterogeneities on regional deformation patterns.
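The correspondence principle underlying the Laplace-domain solution can be illustrated with the Maxwell rheology the paper assumes: in the Laplace domain the elastic shear modulus mu0 is replaced by mu(s) = mu0 * s / (s + mu0/eta), so a step in strain relaxes deviatoric stress as exp(-t/tau) with Maxwell time tau = eta/mu0. The parameter values below are illustrative, not from the paper.

```python
import numpy as np

mu0 = 30.0e9        # elastic shear modulus (Pa), assumed
eta = 1.0e19        # viscosity (Pa s), assumed
tau = eta / mu0     # Maxwell relaxation time (s)

# stress relaxation after a step in strain: the inverse Laplace
# transform of mu(s)/s is mu0 * exp(-t/tau)
t = np.linspace(0.0, 5.0 * tau, 6)
stress_ratio = np.exp(-t / tau)     # stress normalized by its elastic value

years = tau / 3.156e7
print(years)        # Maxwell time of roughly a decade for these values
```

Lateral variations in mu0 and eta make tau, and hence the relaxation spectrum, spatially variable, which is what the mode coupling treatment captures.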
Validation Data and Model Development for Fuel Assembly Response to Seismic Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardet, Philippe; Ricciardi, Guillaume
2016-01-31
Vibrations are inherently present in nuclear reactors, especially in cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and on wear and tear in the reactor, and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics, multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.
Revision of IRIS/IDA Seismic Station Metadata
NASA Astrophysics Data System (ADS)
Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.
2017-12-01
Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, including the zero-frequency sensitivity of both the seismometer and the data logger as well as other, frequency-dependent elements. In this work, we focus on studying the variation with time of the seismometer sensitivity of IRIS/IDA seismic recording systems, with the goal of improving the metadata accuracy for the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for stations in service. An effective practice recently developed is to collocate a reference seismometer nearby to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric to reflect the two sensors' gain ratio in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven to be effective. In this work, we compare and analyze the results from the three methods and conclude that the collocated-sensor method is the most stable and reliable, with consistently the smallest uncertainties. However, for epochs without either a collocated sensor or a secondary seismometer, we rely on the results of the tidal method. For the data since 1992 at IDA stations, we computed over 600 revised seismometer sensitivities covering all the IRIS/IDA network calibration epochs. Further revision procedures should help to guarantee that the metadata of these stations accurately reflect the data.
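The collocated-sensor comparison can be sketched as a spectral-ratio computation on synthetic data. The 5% gain error, band limits and sampling below are assumed; the actual MUSTANG transfer-function metric operates on real station pairs.

```python
import numpy as np

fs, n = 10.0, 4096
rng = np.random.default_rng(3)
ground = rng.standard_normal(n)      # common ground motion seen by both sensors
primary = 1.05 * ground              # primary sensor with a 5% gain error
reference = 1.00 * ground            # collocated reference sensor

f = np.fft.rfftfreq(n, d=1.0 / fs)
band = (f > 0.05) & (f < 0.3)        # microseism frequency band
P = np.abs(np.fft.rfft(primary))[band]
R = np.abs(np.fft.rfft(reference))[band]
gain_ratio = np.median(P / R)        # robust estimate of the relative gain
print(gain_ratio)                    # recovers the 1.05 gain factor
```

With real data the two sensors also differ in self-noise and response shape, so the ratio is computed only in the band where the coherent microseism signal dominates.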
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
NASA Astrophysics Data System (ADS)
Viens, L.; Denolle, M.; Hirata, N.
2017-12-01
Strong ground motion can induce dynamic strains large enough for the shallow subsurface to respond non-linearly and cause permanent velocity changes during earthquakes. We investigate the behavior of the near surface in the Tokyo metropolitan area during the 2011 Mw 9.0 Tohoku-Oki earthquake using continuous records from 234 seismometers of the Metropolitan Seismic Observation network (MeSO-net). This network, deployed in shallow 20-m-deep boreholes, recorded horizontal accelerations up to 236 cm/s2 during the mainshock. For each MeSO-net station, we compute the near-surface response using the single-station cross-correlation technique between vertical and horizontal components, every 6 hours for 2.5 months around the main event. Comparing each near-surface response against the pre-event reference, we find seismic velocity drops of up to 10% in the near surface of the Tokyo metropolitan area during the mainshock. The amplitude of the coseismic velocity drop increases with increasing ground shaking and decreasing VS30, the S-wave velocity in the first 30 m of the ground. Furthermore, the waveforms experience a loss of coherence that recovers exponentially over time. This recovery rate also increases with the acceleration level. While most of the velocity changes and waveform coherence recover within a few days, we also find permanent changes at stations that experienced liquefaction and the strongest ground motions. The ambient seismic field captures the coseismic velocity changes in the shallow structure and the subsequent healing process, and may be used to detect permanent damage.
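The velocity-change measurement can be illustrated with the stretching technique on synthetic waveforms. This is a common way to measure dv/v between a current correlation function and a pre-event reference; the paper's exact processing may differ, and all parameters below are assumed. A velocity change dv/v stretches the lapse-time axis by a factor 1 + dv/v.

```python
import numpy as np

def stretch_dvv(ref, cur, t, trials, t_max=9.0):
    """Grid search over stretch factors: resample cur on a time axis
    compressed by (1 + eps) and keep the eps with the highest
    correlation against the reference. The window t <= t_max avoids
    edge effects from the resampling."""
    win = t <= t_max
    best_cc, best_eps = -np.inf, 0.0
    for eps in trials:
        resampled = np.interp(t / (1.0 + eps), t, cur)
        cc = np.corrcoef(ref[win], resampled[win])[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return best_eps

t = np.linspace(0.0, 10.0, 2001)
ref = np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-0.2 * t)   # reference waveform
true_dvv = -0.03                                          # 3% velocity drop
cur = np.interp(t * (1.0 + true_dvv), t, ref)             # stretched version
est = stretch_dvv(ref, cur, t, np.linspace(-0.05, 0.05, 101))
print(est)   # recovers about -0.03
```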
On-line Data Transmission, as Part of the Seismic Evaluation Process in the Buildings Field
NASA Astrophysics Data System (ADS)
Sorin Dragomir, Claudiu; Dobre, Daniela; Craifaleanu, Iolanda; Georgescu, Emil-Sever
2017-12-01
The thorough analytical modelling of seismic actions, of the structural system and of the foundation soil is essential for a proper dynamic analysis of a building. However, the models used should be validated, whenever possible, against results obtained from experimental investigations, building instrumentation and monitoring of vibrations generated by various seismic or non-seismic sources. In Romania, the permanent seismic instrumentation/monitoring of buildings is part of a special follow-up activity, performed in accordance with the P130/1999 code for the time monitoring of building behaviour and with the seismic design code, P100-2013. Using state-of-the-art equipment (GeoSIG and Kinemetrics digital accelerographs) in the seismic network of the National Institute for Research and Development URBAN-INCERC, the instrumented buildings can be monitored remotely, with recorded data sent to authorities or to research institutes in the field by a real-time data transmission system. The obtained records are processed, computing the Fourier amplitude spectra and the response spectra, and the modal parameters of the buildings are determined. The paper presents some of the most important results of the institute in the field of building monitoring, focusing on significant instrumented buildings located in different parts of the country. In addition, maps with data received from seismic stations after two recent Vrancea (Romania) earthquakes, showing the spatial distribution of ground accelerations, are presented, together with a comparative analysis performed with reference to previous studies in the literature.
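The response-spectrum processing mentioned above can be sketched with a damped single-degree-of-freedom oscillator integrated by Newmark's average-acceleration method. The harmonic input record, 5% damping and unit amplitude are illustrative assumptions, not the URBAN-INCERC processing chain.

```python
import math

def peak_pseudo_accel(ag, dt, period, zeta=0.05):
    """Peak pseudo-acceleration Sa = w^2 * |u|max of a unit-mass damped SDOF
    oscillator driven by ground acceleration ag, via Newmark average acceleration."""
    w = 2.0 * math.pi / period          # natural circular frequency
    c, k = 2.0 * zeta * w, w * w        # unit-mass damping and stiffness
    u = v = 0.0
    a = -ag[0] - c * v - k * u
    peak = 0.0
    for g in ag[1:]:
        # Newmark average acceleration (gamma=1/2, beta=1/4), unconditionally stable
        up = u + dt * v + dt * dt / 4.0 * a
        vp = v + dt / 2.0 * a
        a1 = (-g - c * vp - k * up) / (1.0 + c * dt / 2.0 + k * dt * dt / 4.0)
        u = up + dt * dt / 4.0 * a1
        v = vp + dt / 2.0 * a1
        a = a1
        peak = max(peak, abs(u))
    return k * peak

dt = 0.005
record = [math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(2000)]  # 2 Hz, 10 s
spectrum = {T: peak_pseudo_accel(record, dt, T) for T in (0.1, 0.5, 2.0)}
```

For the 2 Hz input, the ordinate at the resonant period T = 0.5 s dominates, as expected for a lightly damped oscillator.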
NASA Astrophysics Data System (ADS)
Ji, Kun; Bouaanani, Najib; Wen, Ruizhi; Ren, Yefei
2018-05-01
This paper aims at implementing and introducing the use of conditional mean spectrum (CMS) and conditional spectrum (CS) as the main input parameters in the practice of seismic safety evaluation (SSE) in China, instead of the currently used uniform hazard spectrum (UHS). For this purpose, a procedure for M-R-epsilon seismic hazard deaggregation in China was first developed. For illustration purposes, two different typical sites in China, with one to two dominant seismic zones, were considered as examples to carry out seismic hazard deaggregation and illustrate the construction of CMS/CS. Two types of correlation coefficients were used to generate CMS and the results were compared over a vibration period range of interest. Ground motion records were selected from the NSMONS (2007-2015) and PEER NGA-West2 databases to correspond to the target CMS and CS. Hazard consistency of the spectral accelerations of the selected ground motion records was evaluated and validated by computing the annual exceedance probability rate of the response spectra and comparing the results to the hazard curve corresponding to each site of concern at different periods. The tools developed in this work and their illustrative application to specific case studies in China are a first step towards the adoption of CMS and CS into the practice of seismic safety evaluation in this country.
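The core of CMS construction is conditioning the median spectrum on the deaggregation epsilon at the target period through period-to-period correlation. A minimal sketch follows, with hypothetical GMPE medians and sigmas and a deliberately simplified correlation model (not the correlation coefficients compared in the paper):

```python
import math

# Hypothetical median and sigma of ln Sa from a GMPE at a few periods (illustrative)
periods = [0.1, 0.2, 0.5, 1.0, 2.0]
mu_ln  = {0.1: -1.0, 0.2: -0.8, 0.5: -1.1, 1.0: -1.6, 2.0: -2.3}
sig_ln = {0.1: 0.60, 0.2: 0.62, 0.5: 0.65, 1.0: 0.68, 2.0: 0.72}

def rho(t1, t2):
    """Toy inter-period correlation model (illustrative only); rho(T*, T*) = 1."""
    return 1.0 - 0.33 * abs(math.log(t1 / t2))

T_star, eps_star = 0.5, 1.5   # conditioning period and epsilon from deaggregation

# CMS(T) = exp( mu_lnSa(T) + rho(T, T*) * eps* * sigma_lnSa(T) )
cms = {t: math.exp(mu_ln[t] + rho(t, T_star) * eps_star * sig_ln[t]) for t in periods}
```

At T = T* the full epsilon is applied; away from T* the correlation decays, so the CMS falls below a constant-epsilon (UHS-like) spectrum there.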
Frozen Gaussian approximation for 3D seismic tomography
NASA Astrophysics Data System (ADS)
Chai, Lihui; Tong, Ping; Yang, Xu
2018-05-01
Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full-waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields so that FGA can be applied conveniently; with this reformulation, one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on a local fast Fourier transform, which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.
The Shock and Vibration Digest. Volume 14, Number 11
1982-11-01
A computer program for predicting the behavior of a high-temperature gas-cooled reactor (HTGR) core under seismic excitation has been developed (N82-18644, in French). Key words: computer programs, modal analysis, beams, undamped structures. Dale and Cohen [22] extended the method of McMunn and Plunkett [20] to continuous systems.
Hasbrouck, W.P.
1983-01-01
Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
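One of the routines described here, the normal-moveout correction, can be sketched compactly (rendered in Python rather than the Tektronix extended BASIC of the report; the velocity and geometry are illustrative assumptions):

```python
import math

def nmo_time(t0, offset, v_rms):
    """Two-way reflection time at a given offset for a flat reflector:
    t(x) = sqrt(t0^2 + (x / v_rms)^2)."""
    return math.sqrt(t0 * t0 + (offset / v_rms) ** 2)

t0, v_rms = 0.200, 1800.0            # zero-offset time (s), rms velocity (m/s)
offsets = [0.0, 60.0, 120.0, 180.0]  # trace offsets (m)

# Time shifts that flatten the hyperbolic moveout back to t0 on each trace
corrections = [nmo_time(t0, x, v_rms) - t0 for x in offsets]
```

The zero-offset trace needs no correction, while far offsets are shifted by progressively larger amounts.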
Payne, Thomas G.
1982-01-01
REGIONAL MAPPER is a menu-driven system in the BASIC language for computing and plotting (1) time, depth, and average velocity to geologic horizons, (2) interval time, thickness, and interval velocity of stratigraphic intervals, and (3) subcropping and onlapping intervals at unconformities. The system consists of three programs: FILER, TRAVERSER, and PLOTTER. A control point is a shot point with velocity analysis or a shot point at or near a well with a velocity check-shot survey. Reflection times to, and code numbers of, seismic horizons are filed by digitizing tablet from record sections. TRAVERSER starts at a point of geologic control and, in traversing to another, parallels seismic events, records loss of horizons by onlap and truncation, and stores reflection time for geologic horizons at traversed shot points. TRAVERSER is basically a phantoming procedure. Permafrost thickness and velocity variations, buried canyons with low-velocity fill, and error in seismically derived velocity cause velocity anomalies that complicate depth mapping. Two depths to the top of the pebble shale are computed for each control point. One depth, designated Zs, is based on seismically derived velocity. The other (Zw) is based on interval velocity interpolated linearly between wells and multiplied by interval time (isochron) to give interval thickness; Zw is computed for all geologic horizons by downward summation of interval thickness. Unknown true depth (Z) to the pebble shale may be expressed as Z = Zs + es and Z = Zw + ew, where the e terms represent error. Equating the two expressions gives the depth difference D = Zs - Zw = ew - es. A plot of D for the top of the pebble shale is readily contourable, but smoothing is required to produce a reasonably simple surface. Seismically derived velocity used in computing Zs includes the effect of velocity anomalies but is subject to some large, randomly distributed errors, resulting in depth errors (es). 
Well-derived velocity used in computing Zw does not include the effect of velocity anomalies, but the error (ew) should reflect these anomalies and should be contourable (non-random). The D surface as contoured with smoothing is assumed to represent ew, that is, the depth effect of variations in permafrost thickness and velocity and buried canyon depth. Estimated depth (Zest) to each geologic horizon is the sum of Zw for that horizon and a constant ew as contoured for the pebble shale, which is the first highly continuous seismic horizon below the zone of anomalous velocity. Results of this 'depthing' procedure are compared with those of Tetra Tech, Inc., the subcontractor responsible for geologic and geophysical interpretation and mapping.
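The depth bookkeeping above reduces to simple arithmetic at each control point. A numeric sketch, with all depths hypothetical (not the survey's data):

```python
# Two independent depth estimates to the pebble shale at one control point
Zs = 2450.0   # depth (m) from seismically derived velocity
Zw = 2390.0   # depth (m) from well-interpolated interval velocities

# Since Z = Zs + es = Zw + ew, the mappable difference is D = Zs - Zw = ew - es
D = Zs - Zw

# After contouring D with smoothing, the surface is taken to represent ew
ew = 55.0                 # smoothed value of D at this point (hypothetical)
Z_est = Zw + ew           # estimated depth: Zw plus the contoured ew correction
```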
Seismic data are rich in information about subsurface formations and fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farfour, Mohammed; Yoon, Wang Jung; Kim, Dongshin
2016-06-08
Seismic attributes are defined as any measured or computed information derived from seismic data. Throughout the last decades, extensive work has been done on developing a variety of mathematical approaches to extract maximum information from seismic data. Nevertheless, geoscientists have found that seismic data remain rich in information. In this paper a new seismic attribute is introduced. The instantaneous energy seismic attribute is an amplitude-based attribute that has the potential to emphasize anomalous amplitudes associated with hydrocarbons. Promising results have been obtained from applying the attribute to a seismic section traversing hydrocarbon-filled sands in Alberta, Canada.
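One common way to realize an amplitude-based energy attribute is a sliding-window sum of squared amplitudes along each trace; the sketch below uses that definition on a synthetic trace with an embedded high-amplitude "bright spot" (this is an illustrative stand-in, not the attribute's exact formulation in the paper):

```python
import math

def instantaneous_energy(trace, half_window):
    """Sliding-window sum of squared amplitudes along a trace."""
    n = len(trace)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        out.append(sum(trace[j] * trace[j] for j in range(lo, hi)))
    return out

# Synthetic trace: low-amplitude background with a strong burst (bright spot)
trace = [0.1 * math.sin(0.3 * i) for i in range(200)]
for i in range(90, 110):
    trace[i] += 1.5 * math.sin(0.8 * (i - 90))

energy = instantaneous_energy(trace, 5)
peak_index = max(range(len(energy)), key=energy.__getitem__)
```

The energy attribute peaks inside the anomalous-amplitude zone, which is exactly the behavior used to flag hydrocarbon-related amplitude anomalies.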
NASA Astrophysics Data System (ADS)
Takagi, Y.; Okubo, S.
2016-12-01
Internal co- and post-seismic deformation fields such as strain and stress changes have been modelled in order to study their effects on subsequent earthquake and/or volcanic activity around the epicentre. When modelling strain or stress changes caused by great earthquakes (M>9.0), we should use a realistic earth model including the earth's curvature and stratification; according to Toda et al.'s (2011) result, the stress changes caused by the 2011 Tohoku-oki earthquake (Mw=9.0) exceed 0.1 bar (0.01 MPa) even at epicentral distances over 400 km. Although many studies have computed co- and post-seismic surface deformation fields using a spherically stratified viscoelastic earth (e.g. Piersanti et al. 1995; Pollitz 1996, 1997; Tanaka et al. 2006), less attention has been paid to `internal' deformation fields. Tanaka et al. (2006) succeeded in computing post-seismic surface displacements in a continuously stratified compressible viscoelastic earth by evaluating the inverse Laplace integration numerically. However, their method cannot calculate internal deformation because it uses Okubo's (1993) reciprocity theorem. We found that Okubo's (1993) reciprocity theorem can be extended to the computation of internal deformation fields. In this presentation, we show a method of computing internal co- and post-seismic deformation fields and discuss the effects of the earth's curvature and stratification on them.
Evaluation of Ground Vibrations Induced by Military Noise Sources
2006-08-01
The report's tasks include: Task 2, determining the acoustic-to-seismic coupling coefficients C1 and C2; and Task 3, computational modeling of acoustically induced ground motion, using a simple model of blast sound interaction with the ground under various ground conditions.
Precision Seismic Monitoring of Volcanic Eruptions at Axial Seamount
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Wilcock, W. S. D.; Tolstoy, M.; Baillard, C.; Tan, Y. J.; Schaff, D. P.
2017-12-01
Seven permanent ocean bottom seismometers of the Ocean Observatories Initiative's real-time cabled observatory at Axial Seamount, off the coast of the western United States, have recorded seismic activity since 2014. The array captured the April 2015 eruption, shedding light on the detailed structure and dynamics of the volcano and the Juan de Fuca mid-ocean ridge system (Wilcock et al., 2016). After a period of continuously increasing seismic activity primarily associated with the reactivation of caldera ring faults, and the subsequent seismic crisis on April 24, 2015 with 7000 recorded events that day, seismicity rates steadily declined and the array currently records an average of 5 events per day. Here we present results from ongoing efforts to automatically detect and precisely locate seismic events at Axial in real time, providing the computational framework and fundamental data that will allow rapid characterization and analysis of spatio-temporal changes in seismogenic properties. We combine a kurtosis-based P- and S-phase onset picker and time-domain cross-correlation detection and phase delay timing algorithms together with single-event and double-difference location methods to rapidly and precisely (tens of meters) compute the locations and magnitudes of new events with respect to a 2-year-long, high-resolution background catalog that includes nearly 100,000 events within a 5×5 km region. We extend the real-time double-difference location software DD-RT to efficiently handle the anticipated high-rate and high-density earthquake activity during future eruptions. The modular monitoring framework will allow real-time tracking of other seismic events such as tremors and sea-floor lava explosions, enabling the timing and location of lava flows and thus guiding response research cruises to the most interesting sites. 
Finally, rapid detection of eruption precursors and initiation will allow for adaptive sampling by the OOI instruments for optimal recording of future eruptions. With a higher eruption recurrence rate than land-based volcanoes, the Axial OOI observatory offers the opportunity to monitor and study volcanic eruptions throughout multiple cycles.
Hasbrouck, W.P.
1983-01-01
Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language used by the Tektronix 4051 Graphic System. This report presents computer programs to perform X-square/T-square analyses and to plot normal moveout lines on a seismogram overlay.
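The X²–T² analysis described here rests on linearizing the reflection hyperbola: T² = T0² + X²/V², so a straight-line fit of T² against X² yields the stacking velocity from the slope and the zero-offset time from the intercept. A minimal sketch on synthetic picks (illustrative values, rendered in Python rather than the report's extended BASIC):

```python
import math

# Synthetic reflection picks obeying T^2 = T0^2 + X^2 / V^2
v_true, t0 = 2000.0, 0.400
offsets = [100.0 * i for i in range(1, 13)]                     # m
times = [math.sqrt(t0 * t0 + (x / v_true) ** 2) for x in offsets]  # s

# Least-squares line through (X^2, T^2): slope = 1/V^2, intercept = T0^2
X2 = [x * x for x in offsets]
T2 = [t * t for t in times]
n = len(X2)
mx, my = sum(X2) / n, sum(T2) / n
slope = sum((a - mx) * (b - my) for a, b in zip(X2, T2)) / sum((a - mx) ** 2 for a in X2)
intercept = my - slope * mx

v_est = 1.0 / math.sqrt(slope)
t0_est = math.sqrt(intercept)
```

Because the synthetic picks are noise-free, the regression recovers V and T0 essentially exactly.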
NASA Astrophysics Data System (ADS)
Shiuly, Amit; Kumar, Vinay; Narayan, Jay
2014-06-01
This paper presents the ground motion amplification scenario, along with the fundamental frequency (F0) of the sedimentary deposit, for the seismic microzonation of Kolkata City, situated on the world's largest delta island with a very soft soil deposit. A 4th-order accurate SH-wave viscoelastic finite-difference algorithm is used to compute the response of a 1D model for each borehole location. Different maps, such as for F0, amplification at F0, and average spectral amplification (ASA) in different frequency bandwidths of earthquake engineering interest, are developed for a variety of end-user communities. The obtained ASA of the order of 3-6 at most of the borehole locations in a frequency range of 0.25-10.0 Hz reveals that Kolkata City may suffer severe damage even during a moderate earthquake. Further, unexpectedly severe damage, up to the collapse of multi-storey buildings, may occur in localities near the Hoogly River and the Salt Lake area due to double resonance effects during distant large earthquakes.
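For a single soft layer over bedrock, the fundamental frequency follows the quarter-wavelength relation F0 = Vs / (4H). A minimal worked example with illustrative soft-soil values (not the Kolkata borehole data):

```python
# Quarter-wavelength estimate of a deposit's fundamental frequency: F0 = Vs / (4 H)
Vs = 180.0    # average shear-wave velocity of the deposit (m/s), hypothetical
H = 60.0      # deposit thickness (m), hypothetical

F0 = Vs / (4.0 * H)        # fundamental frequency (Hz)
T0 = 1.0 / F0              # corresponding site period (s)
```

Here F0 = 0.75 Hz; double resonance arises when a building's natural period falls near this site period, which is why tall structures on deep soft deposits are at particular risk from distant large earthquakes.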
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
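The first strategy, an RBF surrogate standing in for expensive time history analyses, can be sketched in miniature. The three training points below play the role of THA results and are purely illustrative; a real surrogate would be trained on many structural analyses in higher dimension.

```python
import math

# 1-D Gaussian RBF surrogate fitted to three "expensive" analyses (hypothetical data)
centers = [0.0, 1.0, 2.0]       # design variable samples
values  = [0.0, 1.0, 0.0]       # e.g. peak responses from three time history analyses
eps = 1.0
phi = lambda r: math.exp(-(eps * r) ** 2)

# Build and solve the 3x3 interpolation system A w = values (Cramer's rule)
A = [[phi(abs(c1 - c2)) for c2 in centers] for c1 in centers]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

D = det3(A)
w = []
for j in range(3):
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = values[i]
    w.append(det3(Aj) / D)

# Cheap surrogate evaluation used inside the optimizer instead of a full THA
surrogate = lambda x: sum(wi * phi(abs(x - c)) for wi, c in zip(w, centers))
```

An optimizer (BPSO in the paper) would then query `surrogate` thousands of times at negligible cost, reserving exact analyses for verification.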
NASA Astrophysics Data System (ADS)
Mascandola, Claudia; Massa, Marco; Barani, Simone; Lovati, Sara; Santulin, Marco
2016-04-01
This work deals with the problem of long-period seismic site amplification that might affect large and deep alluvial basins during strong earthquakes. In particular, we present a case study in the Po Plain (Northern Italy), one of the largest and deepest sedimentary basins worldwide. Even though the studied area shows a low annual seismicity rate with rare strong events (Mw>6.0) and is characterized by low-to-medium seismic hazard, the seismic risk is significant because of the high density of civil and strategic infrastructures (i.e. a high degree of exposure) and the unfavourable geological conditions. The aim of this work is to provide general considerations about the seismic site response of the Po Plain, with particular attention to deep discontinuities (i.e. the geological bedrock), in terms of potential low-frequency amplification and its influence on PSHA. The current results were obtained through active and passive geophysical investigations performed near Castelleone, where a seismic station of the INGV (National Institute for Geophysics and Volcanology) National Seismic Network has been installed since 2009. The active analyses consisted of a MASW and a refraction survey, whereas the passive ones consisted of seismic ambient noise acquisitions with single stations and arrays of increasing aperture. The results in terms of noise HVSR indicate two main peaks, the first around 0.17 Hz and the second, as already noted in the recent literature, around 0.7 Hz. In order to correlate the amplified frequencies with geological discontinuities, the array acquisitions were processed to obtain a shear-wave velocity profile, computed with a joint inversion considering the experimental dispersion curves and the HVSR results. The obtained velocity profile shows two main discontinuities: the shallower at ~165 m depth, which can be correlated with the seismic bedrock (i.e. 
Vs > 800 m/s) and the deeper at ~1350 m depth, properly associated with the geological bedrock, considering the transition between the Pliocene loose sediments and the Miocene marls observable in the available stratigraphy. Numerical 1D analyses, computed to obtain the theoretical transfer function at the site, support the correlation between the experimental amplification peak around 0.17 Hz and the hypothesized geological bedrock. In terms of site-specific SHA, the UHS expressed in displacement (MRP: 475 years) shows a significant increase if the seismic input is located at the geological bedrock (~1350 m) instead of the seismic bedrock (~165 m). Even though this increase is not relevant for the studied site, since the seismic hazard there is low, it could be significant in other parts of the Po Plain, where the seismic hazard is medium-high. According to the HVSR results obtained for other available Po Plain broadband stations, the considerations of this work could represent a warning for future seismic hazard investigations in other areas of the basin.
Seismic Response Control Of Structures Using Semi-Active and Passive Variable Stiffness Devices
NASA Astrophysics Data System (ADS)
Salem, Mohamed M. A.
Controllable devices such as Magneto-Rheological Fluid Dampers, Electro-Rheological Dampers, and controllable friction devices have been studied extensively with limited implementation in real structures. Such devices have shown great potential in reducing seismic demands, either as smart base isolation systems, or as smart devices for multistory structures. Although variable stiffness devices can be used for seismic control of structures, the vast majority of research effort has been given to the control of damping. The primary focus of this dissertation is to evaluate the seismic control of structures using semi-active and passive variable stiffness characteristics. Smart base isolation systems employing variable stiffness devices have been studied, and two semi-active control strategies are proposed. The control algorithms were designed to reduce the superstructure and base accelerations of seismically isolated structures subject to near-fault and far-field ground motions. Computational simulations of the proposed control algorithms on the benchmark structure have shown that excessive base displacements associated with the near-fault ground motions may be better mitigated with the use of variable stiffness devices. However, the device properties must be controllable to produce a wide range of stiffness changes for an effective control of the base displacements. The potential of controllable stiffness devices to limit the base displacement due to near-fault excitation without compromising the performance of conventionally isolated structures is illustrated. The application of passive variable stiffness devices for seismic response mitigation of multistory structures is also investigated. A stiffening bracing system (SBS) is proposed to replace the conventional bracing systems of braced frames. An optimization process for the SBS parameters has been developed. 
The main objective of the design process is to maintain a uniform inter-story drift angle over the building's height, which in turn would evenly distribute the seismic demand over the building. This behavior is particularly essential so that any possible damage is not concentrated in a single story. Furthermore, the proposed design ensures that additional damping devices distributed over the building's height work efficiently with their maximum design capacity, leading to a cost efficient design. An integrated and comprehensive design procedure that can be readily adopted by the current seismic design codes is proposed. An equivalent lateral force distribution is developed that shows a good agreement with the response history analyses in terms of seismic performance and demand prediction. This lateral force pattern explicitly accounts for the higher mode effect, the dynamic characteristics of the structure, the supplemental damping, and the site specific seismic hazard. Therefore, the proposed design procedure is considered as a standalone method for the design of SBS equipped buildings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Critical infrastructures around the world are at constant risk from earthquakes. Most of these critical structures are designed using archaic seismic simulation methods built for the early digital computers of the 1970s. Idaho National Laboratory's Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.
NASA Astrophysics Data System (ADS)
Cochran, E. S.; Lawrence, J. F.; Christensen, C. M.; Chung, A. I.; Neighbors, C.; Saltzman, J.
2010-12-01
The Quake-Catcher Network (QCN) involves the community in strong-motion data collection by utilizing volunteer computing techniques and low-cost MEMS accelerometers. Volunteer computing provides a mechanism to expand strong-motion seismology with minimal infrastructure costs, while promoting community participation in science. Micro-Electro-Mechanical Systems (MEMS) triaxial accelerometers can be attached to a desktop computer via USB and are internal to many laptops. Preliminary shake table tests show that the MEMS accelerometers can record high-quality seismic data with instrument response similar to research-grade strong-motion sensors. QCN began distributing sensors and software to K-12 schools and the general public in April 2008 and has grown to roughly 1500 stations worldwide. We also recently tested whether sensors could be quickly deployed as part of a Rapid Aftershock Mobilization Program (RAMP) following the 2010 M8.8 Maule, Chile earthquake. Volunteers are recruited through media reports, web-based sensor request forms, and social networking sites. Using data collected to date, we examine whether a distributed sensing network can provide valuable seismic data for earthquake detection and characterization while promoting community participation in earthquake science. We utilize client-side triggering algorithms to determine when significant ground shaking occurs, and these trigger metadata are sent to the main QCN server. On average, trigger metadata are received within 1-10 seconds of the observation of a trigger; the larger data latencies are correlated with greater server-station distances. When triggers are detected, we determine whether they correlate with others in the network using spatial and temporal clustering of incoming trigger information. If a minimum number of triggers is detected, a QCN event is declared and an initial earthquake location and magnitude are estimated. 
Initial analysis suggests that the estimated locations and magnitudes are similar to those reported in regional and global catalogs. As the network expands, it will become increasingly important to provide volunteers access to the data they collect, both to encourage continued participation in the network and to improve community engagement in scientific discourse related to seismic hazard. In the future, we hope to provide access to both images and raw data from seismograms in formats accessible to the general public through existing seismic data archives (e.g. IRIS, SCSN) and/or through the QCN project website. While encouraging community participation in seismic data collection, we can extend the capabilities of existing seismic networks to rapidly detect and characterize strong motion events. In addition, the dense waveform observations may provide high-resolution ground shaking information to improve source imaging and seismic risk assessment.
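The event-declaration step described above, clustering incoming triggers in space and time before declaring a QCN event, might be sketched as follows. The thresholds, coordinates and flat-earth distance approximation are illustrative assumptions, not the QCN server's actual parameters.

```python
import math

def km_between(p, q):
    """Approximate distance (km) between two (lat, lon) points, flat-earth."""
    dlat = (p[0] - q[0]) * 111.0
    dlon = (p[1] - q[1]) * 111.0 * math.cos(math.radians((p[0] + q[0]) / 2.0))
    return math.hypot(dlat, dlon)

def declare_event(triggers, max_dt=10.0, max_km=100.0, min_triggers=4):
    """Return True if enough triggers cluster within max_dt seconds and max_km km."""
    for t0, lat0, lon0 in triggers:
        cluster = [trg for trg in triggers
                   if abs(trg[0] - t0) <= max_dt
                   and km_between((trg[1], trg[2]), (lat0, lon0)) <= max_km]
        if len(cluster) >= min_triggers:
            return True
    return False

# (time s, lat, lon): four nearby triggers within seconds, plus one distant outlier
triggers = [(0.0, 34.00, -118.00), (1.2, 34.05, -118.10),
            (2.5, 33.95, -117.95), (3.1, 34.10, -118.05),
            (2.0, 40.00, -100.00)]
```

The distant outlier is excluded by the spatial gate, so only the four co-located, near-simultaneous triggers count toward an event declaration.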
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often obtains sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously, but its efficiency must be improved to suit large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method for solving the sub-problem is utilized. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.
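The L1 norm that drives sparsity here has a closed-form proximal operator, soft thresholding, which is the basic building block behind gradient-projection-style L1 solvers. The sketch below shows that building block only, not the authors' trust-region algorithm, with illustrative numbers:

```python
import math

def soft(x, tau):
    """Proximal operator of tau*|.|: shrink x toward zero by tau."""
    return math.copysign(max(abs(x) - tau, 0.0), x)

# For min 0.5*||x - b||^2 + lam*||x||_1 the minimizer is soft(b, lam), componentwise
b = [3.0, 0.1, -2.0]
lam = 0.5
x_star = [soft(bi, lam) for bi in b]   # small coefficients are zeroed -> sparsity
```

Coefficients smaller than the threshold are set exactly to zero, which is how L1 regularization produces sparse transform-domain solutions.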
NASA Astrophysics Data System (ADS)
Huang, Yin-Nan
Nuclear power plants (NPPs) and spent nuclear fuel (SNF) are required by code and regulations to be designed for a family of extreme events, including very rare earthquake shaking, loss of coolant accidents, and tornado-borne missile impacts. Blast loading due to malevolent attack became a design consideration for NPPs and SNF after the terrorist attacks of September 11, 2001. The studies presented in this dissertation assess the performance of sample conventional and base isolated NPP reactor buildings subjected to seismic effects and blast loadings. The response of the sample reactor building to tornado-borne missile impacts and internal events (e.g., loss of coolant accidents) will not change if the building is base isolated and so these hazards were not considered. The sample NPP reactor building studied in this dissertation is composed of containment and internal structures with a total weight of approximately 75,000 tons. Four configurations of the reactor building are studied, including one conventional fixed-base reactor building and three base-isolated reactor buildings using Friction Pendulum(TM), lead rubber and low damping rubber bearings. The seismic assessment of the sample reactor building is performed using a new procedure proposed in this dissertation that builds on the methodology presented in the draft ATC-58 Guidelines and the widely used Zion method, which uses fragility curves defined in terms of ground-motion parameters for NPP seismic probabilistic risk assessment. The new procedure improves the Zion method by using fragility curves that are defined in terms of structural response parameters since damage and failure of NPP components are more closely tied to structural response parameters than to ground motion parameters. Alternate ground motion scaling methods are studied to help establish an optimal procedure for scaling ground motions for the purpose of seismic performance assessment. 
The proposed performance assessment procedure is used to evaluate the vulnerability of the conventional and base-isolated NPP reactor buildings. The seismic performance assessment confirms the utility of seismic isolation at reducing spectral demands on secondary systems. Procedures to reduce the construction cost of secondary systems in isolated reactor buildings are presented. A blast assessment of the sample reactor building is performed for an assumed threat of 2000 kg of TNT explosive detonated on the surface with a closest distance to the reactor building of 10 m. The air and ground shock waves produced by the design threat are generated and used for performance assessment. The air blast loading to the sample reactor building is computed using a Computational Fluid Dynamics code Air3D and the ground shock time series is generated using an attenuation model for soil/rock response. Response-history analysis of the sample conventional and base isolated reactor buildings to external blast loadings is performed using the hydrocode LS-DYNA. The spectral demands on the secondary systems in the isolated reactor building due to air blast loading are greater than those for the conventional reactor building but much smaller than those spectral demands associated with Safe Shutdown Earthquake shaking. The isolators are extremely effective at filtering out high acceleration, high frequency ground shock loading.
Using Ambient Noise for Investigating Cultural Heritage Sites and Evaluating Seismic Site Response
NASA Astrophysics Data System (ADS)
D'Amico, S.; Farrugia, D.; Galea, P. M.; Ruben, B. P., Sr.
2016-12-01
Recordings of ambient noise, together with the HVSR technique, are a common tool for evaluating seismic site response. In this study we applied such techniques to several cultural heritage sites located on the Maltese archipelago (Central Mediterranean). In particular, two of the Maltese watchtowers, built by the Knights of St. John between 1637 and 1659, were investigated together with the megalithic temple site of Mnajdra. Array data were acquired using the Micromed SoilSpy Rosina™ equipped with 4.5 Hz vertical geophones, setting the array in an L-shaped configuration. The Extended Spatial Autocorrelation (ESAC) technique was used to extract Rayleigh-wave dispersion curves. Moreover, single-station data close to the array were collected using a Tromino 3-component seismograph (www.tromino.eu), and the H/V curves were extracted. The dispersion curves and the H/V curves were jointly inverted using a Genetic Algorithm (GA) to obtain the shear-wave velocity profile. A fixed number of layers was used in the inversion, and ranges for the layer thickness, P-wave and S-wave velocity, and density were specified. The obtained velocity profiles were used to compute the amplification function for each site based on the square root of the effective seismic impedance, also known as the quarter-wavelength approximation. This was used in the simulation of ground motion parameters at the site for various earthquakes using the stochastic one-dimensional site response analysis algorithm, Extended Source Simulation (EXSIM). In addition, the fundamental periods and damping ratios of the watchtowers were obtained by recording ambient vibrations. At the megalithic temples we were also able to evaluate the coverage of the soil deposits within the structure, comparing our results with a previous study that used different geophysical techniques. 
In conclusion, this study enables us to map the seismic amplification hazard and provides primary data on the seismic risk assessment of each cultural heritage site.
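The single-station H/V computation itself is compact. The following is a minimal sketch, assuming windowed FFT amplitude spectra averaged over non-overlapping Hamming-tapered windows and a quadratic mean of the two horizontal components; the window length is an illustrative choice, not a parameter from the study.

```python
import numpy as np

def hvsr(north, east, vertical, fs, nfft=1024):
    """Horizontal-to-vertical spectral ratio from 3-component noise.

    Averages amplitude spectra over non-overlapping Hamming-tapered
    windows and combines the horizontals as a quadratic mean.
    """
    def mean_amp_spectrum(x):
        nwin = len(x) // nfft
        segs = x[:nwin * nfft].reshape(nwin, nfft)
        segs = segs * np.hamming(nfft)          # taper each window
        return np.mean(np.abs(np.fft.rfft(segs, axis=1)), axis=0)

    h = np.sqrt(0.5 * (mean_amp_spectrum(north) ** 2 +
                       mean_amp_spectrum(east) ** 2))
    v = mean_amp_spectrum(vertical)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, h / np.maximum(v, 1e-20)
```

The frequency of the H/V peak is commonly read off as the fundamental resonance frequency of the site.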
Asymptotic co- and post-seismic displacements in a homogeneous Maxwell sphere
NASA Astrophysics Data System (ADS)
Tang, He; Sun, Wenke
2018-07-01
The deformations of the Earth caused by internal and external forces are usually expressed through Green's functions or the superposition of normal modes, that is, via numerical methods, which are applicable for computing both co- and post-seismic deformations. It is difficult to express these deformations in an analytical form, even for a uniform viscoelastic sphere. In this study, we present a set of asymptotic solutions for computing co- and post-seismic displacements; these solutions can be further applied to solving co- and post-seismic geoid, gravity and strain changes. Expressions are derived for a uniform Maxwell Earth by combining the reciprocity theorem, which links earthquake, tidal, shear and loading deformations, with the asymptotic solutions of these three external forces (tidal, shear and loading) and analytical inverse Laplace transformation formulae. Since the asymptotic solutions are given in a purely analytical form without series summations or extra convergence skills, they can be practically applied in an efficient way, especially when computing post-seismic deformations and glacial isostatic adjustment of the Earth over long timescales.
Asymptotic Co- and Post-seismic displacements in a homogeneous Maxwell sphere
NASA Astrophysics Data System (ADS)
Tang, He; Sun, Wenke
2018-05-01
The deformations of the Earth caused by internal and external forces are usually expressed through Green's functions or the superposition of normal modes, i.e. via numerical methods, which are applicable for computing both co- and post-seismic deformations. It is difficult to express these deformations in an analytical form, even for a uniform viscoelastic sphere. In this study, we present a set of asymptotic solutions for computing co- and post-seismic displacements; these solutions can be further applied to solving co- and post-seismic geoid, gravity, and strain changes. Expressions are derived for a uniform Maxwell Earth by combining the reciprocity theorem, which links earthquake, tidal, shear and loading deformations, with the asymptotic solutions of these three external forces (tidal, shear and loading) and analytical inverse Laplace transformation formulae. Since the asymptotic solutions are given in a purely analytical form without series summations or extra convergence skills, they can be practically applied in an efficient way, especially when computing post-seismic deformations and glacial isostatic adjustment of the Earth over long timescales.
Seismpol, a Visual-Basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual-Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used either for interactive earthquake analysis or for automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition method (CMD), so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation derived from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by splitting a selected signal window into a series of narrow frequency bands. Significant results, supported by well-defined polarizations and source azimuth estimates for P and S phases, are also obtained for short-period seismic events (local microearthquakes).
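The covariance-based polarization analysis at the heart of such programs can be sketched as follows, assuming the classic eigen-decomposition of the 3-component covariance matrix over one time window; the rectilinearity definition (1 - λ2/λ1) is one common convention among several, not necessarily the program's exact choice.

```python
import numpy as np

def polarization(z, n, e):
    """Covariance-matrix polarization analysis of one 3-component window.

    Returns rectilinearity (0..1) and source azimuth in degrees
    (measured from north toward east, with a 180-degree ambiguity),
    from the eigen-decomposition of the covariance matrix.
    """
    data = np.vstack([z, n, e])
    cov = np.cov(data)
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    l1, l2 = evals[2], evals[1]
    rectilinearity = 1.0 - l2 / l1
    v = evecs[:, 2]                          # principal eigenvector (z, n, e)
    azimuth = np.degrees(np.arctan2(v[2], v[1])) % 360.0
    return rectilinearity, azimuth
```

Running this on successive short windows gives the continuous-in-time polarization attributes used for phase identification.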
Introducing Seismic Tomography with Computational Modeling
NASA Astrophysics Data System (ADS)
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge while focusing on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
Object Classification Based on Analysis of Spectral Characteristics of Seismic Signal Envelopes
NASA Astrophysics Data System (ADS)
Morozov, Yu. V.; Spektor, A. A.
2017-11-01
A method is proposed for classifying moving objects that have a seismic effect on the ground surface, based on statistical analysis of the envelopes of received signals. The values of the components of the amplitude spectrum of the envelopes, obtained by applying the Hilbert and Fourier transforms, are used as classification criteria. Examples illustrating the statistical properties of the spectra and the operation of the seismic classifier are given for an ensemble of objects of four classes (person, group of people, large animal, vehicle). It is shown that the computational procedures for processing the seismic signals are quite simple and can therefore be used in real-time systems with modest requirements for computational resources.
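The feature-extraction step described above can be sketched as follows, assuming an FFT-based Hilbert transform for the envelope and the first bins of the envelope's Fourier amplitude spectrum as classification features; the bin count and normalization are illustrative choices, not the paper's.

```python
import numpy as np

def envelope_spectrum(x, fs, nfreq=16):
    """Spectral components of the signal envelope (Hilbert + Fourier).

    Returns (freqs, amplitudes) for the first `nfreq` bins of the
    envelope's amplitude spectrum, usable as classification features.
    """
    n = len(x)
    # Analytic signal via the frequency-domain Hilbert construction
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    envelope = np.abs(np.fft.ifft(X * h))
    envelope = envelope - envelope.mean()    # remove DC before the FFT
    spec = np.abs(np.fft.rfft(envelope)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs[:nfreq], spec[:nfreq]
```

For an amplitude-modulated signal, the envelope spectrum peaks at the modulation frequency, which is exactly the kind of slow gait- or engine-rate information that discriminates the four object classes.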
Python Waveform Cross-Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Templeton, Dennise
PyWCC is a tool to compute seismic waveform cross-correlation coefficients on single-component or multiple-component seismic data across a network of seismic sensors. PyWCC compares waveform data templates with continuous seismic data, associates the resulting detections, identifies the template with the highest cross-correlation coefficient, and outputs a catalog of detections above a user-defined absolute cross-correlation threshold value.
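The core matched-filter operation can be sketched as a normalized sliding cross-correlation of a template against continuous data. This is a generic single-channel illustration, not PyWCC's actual implementation; the multi-component stacking, association, and cataloging stages are omitted.

```python
import numpy as np

def xcorr_detect(template, data, threshold):
    """Normalized sliding cross-correlation detector (matched filter).

    Returns (sample_offsets, cc_values) at offsets where the absolute
    correlation coefficient meets or exceeds the threshold.
    """
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    cc = np.empty(len(data) - m + 1)
    for i in range(len(cc)):
        w = data[i:i + m]
        s = w.std()
        # Pearson correlation between template and window
        cc[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s)
    hits = np.flatnonzero(np.abs(cc) >= threshold)
    return hits, cc[hits]
```

An embedded copy of the template yields a coefficient of 1.0; in practice the loop would be replaced by an FFT-based correlation for speed across a network of sensors.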
Salzer, Jacqueline T.; Thelen, Weston A.; James, Mike R.; Walter, Thomas R.; Moran, Seth C.; Denlinger, Roger P.
2016-01-01
The surface deformation field measured at volcanic domes provides insights into the effects of magmatic processes, gravity- and gas-driven processes, and the development and distribution of internal dome structures. Here we study short-term dome deformation associated with earthquakes at Mount St. Helens, recorded by a permanent optical camera and seismic monitoring network. We use Digital Image Correlation (DIC) to compute the displacement field between successive images and compare the results to the occurrence and characteristics of seismic events during a 6 week period of dome growth in 2006. The results reveal that dome growth at Mount St. Helens was repeatedly interrupted by short-term meter-scale downward displacements at the dome surface, which were associated in time with low-frequency, large-magnitude seismic events followed by a tremor-like signal. The tremor was only recorded by the seismic stations closest to the dome. We find a correlation between the magnitudes of the camera-derived displacements and the spectral amplitudes of the associated tremor. We use the DIC results from two cameras and a high-resolution topographic model to derive full 3-D displacement maps, which reveal internal dome structures and the effect of the seismic activity on daily surface velocities. We postulate that the tremor is recording the gravity-driven response of the upper dome due to mechanical collapse or depressurization and fault-controlled slumping. Our results highlight the different scales and structural expressions during growth and disintegration of lava domes and the relationships between seismic and deformation signals.
Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing
NASA Astrophysics Data System (ADS)
Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.
2014-12-01
After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has grown enormous with the progress of high-accuracy observation networks, and many parameters (e.g., positional information, origin time, magnitude, etc.) must be handled to display seismic information efficiently. Therefore, high-speed processing of data and image information is necessary to handle enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, the GPU has an advantage for visualization of processed data, because it was originally designed as an architecture for graphics processing. In GPU computing, the processed data are always stored in the video memory. Therefore, we can directly write drawing information to the VRAM on the video card by combining CUDA and a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU-based high-speed visualization and the feasibility of a high-speed visualization system for hypocenter data.
Towards Coupling of Macroseismic Intensity with Structural Damage Indicators
NASA Astrophysics Data System (ADS)
Kouteva, Mihaela; Boshnakov, Krasimir
2016-04-01
Knowledge of ground motion acceleration time histories during earthquakes is essential to understanding the earthquake-resistant behaviour of structures. Peak and integral ground motion parameters such as peak ground motion values (acceleration, velocity and displacement), measures of the frequency content of ground motion, duration of strong shaking and various intensity measures play important roles in the seismic evaluation of existing facilities and the design of new systems. Macroseismic intensity is an earthquake measure related to seismic hazard and seismic risk description. A detailed understanding of the correlations between earthquake damage potential and macroseismic intensity is an important issue in engineering seismology and earthquake engineering. Reliable earthquake hazard estimation is the major prerequisite for successful disaster risk management. The use of advanced earthquake engineering approaches for structural response modelling is essential for reliable evaluation of the damage accumulated in existing buildings and structures due to the history of seismic actions occurring during their lifetime. Full nonlinear analysis taking into account a single event or a series of earthquakes, together with the large set of elaborated damage indices, is a suitable contemporary tool for this task. This paper presents some results on the correlation between observed damage states, ground motion parameters and selected analytical damage indices. The damage indices are computed on the basis of nonlinear time-history analysis of a test reinforced structure, characteristic of the building stock of the Mediterranean region designed according to the earthquake-resistant requirements of the mid-20th century.
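As one concrete example of the kind of analytical damage index discussed above, the widely used Park-Ang index combines peak-displacement and hysteretic-energy demand. The abstract does not specify which indices were computed, so this sketch is purely illustrative.

```python
def park_ang_damage_index(delta_max, delta_ult, beta, hysteretic_energy, fy):
    """Park-Ang damage index:

        DI = delta_max / delta_ult + beta * E_h / (F_y * delta_ult)

    delta_max: peak displacement demand from the nonlinear time-history
    analysis; delta_ult: ultimate displacement capacity under monotonic
    loading; beta: calibration constant; hysteretic_energy: cumulative
    dissipated energy E_h; fy: yield strength F_y.
    DI < 0.4 is commonly read as repairable damage, DI >= 1.0 as collapse.
    """
    return delta_max / delta_ult + beta * hysteretic_energy / (fy * delta_ult)
```

Because the energy term accumulates over the whole loading history, the index naturally captures damage from a series of earthquakes, which is exactly the lifetime-accumulation effect the abstract emphasizes.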
NASA Astrophysics Data System (ADS)
Veeraian, Parthasarathi; Gandhi, Uma; Mangalanathan, Umapathy
2018-04-01
Seismic transducers are widely used for the measurement of displacement, velocity, and acceleration. This paper presents the design of seismic transducers in the fractional domain for the measurement of displacement and acceleration. The fractional-order transfer functions for seismic displacement and acceleration transducers are derived using the Grünwald-Letnikov derivative. Frequency response analyses of the fractional order seismic displacement transducer (FOSDT) and the fractional order seismic acceleration transducer (FOSAT) are carried out for different damping ratios and fractional orders, and the maximum dynamic measurement range is identified. The results demonstrate that fractional order seismic transducers have an increased dynamic measurement range and less phase distortion than conventional seismic transducers, even with a lower damping ratio. The time responses of the FOSDT and FOSAT are derived analytically in terms of the Mittag-Leffler function, and the effect of fractional behavior in the time domain is evaluated from the impulse and step responses. The fractional order system is found to have significantly reduced overshoot compared to the conventional transducer. The fractional order seismic transducer design proposed in this paper is illustrated with a design example for the FOSDT and FOSAT. Finally, an electrical equivalent of the FOSDT and FOSAT is considered, and its frequency response is found to be in close agreement with the proposed fractional order seismic transducer.
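The frequency response analysis can be sketched directly from the transfer function. The sketch below assumes the conventional second-order displacement-transducer form with s replaced by s^alpha, evaluated on the imaginary axis as (jw)^alpha; this is one common fractional-domain convention and may differ in detail from the paper's Grünwald-Letnikov formulation.

```python
import numpy as np

def fosdt_response(w, wn=1.0, zeta=0.6, alpha=0.9):
    """|H(jw)| of an assumed fractional-order displacement transducer.

    Conventional form H(s) = s^2 / (s^2 + 2*zeta*wn*s + wn^2);
    the fractional variant substitutes s -> s^alpha. With alpha = 1
    this reduces exactly to the classical second-order transducer.
    """
    s = (1j * np.asarray(w, dtype=float)) ** alpha
    H = s ** 2 / (s ** 2 + 2.0 * zeta * wn * s + wn ** 2)
    return np.abs(H)
```

Sweeping w over several decades and comparing alpha = 1 with alpha < 1 reproduces the qualitative claim of the abstract: the flat (|H| close to 1) measurement band changes shape with the fractional order.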
NASA Astrophysics Data System (ADS)
Zollo, Aldo
2016-04-01
RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and the software then performs phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software is then able to automatically provide three different magnitude estimates.
First, the local magnitude (Ml) is computed using the peak-to-peak amplitude of the equivalent Wood-Anderson displacement recordings. The moment magnitude (Mw) is then estimated from the inversion of displacement spectra. The duration magnitude (Md) is rapidly computed, based on a simple and automatic measurement of the seismic wave coda duration. Starting from the magnitude estimates, other relevant pieces of information are also computed, such as the corner frequency, the seismic moment, the source radius and the seismic energy. Ground-shaking maps on a Google map are produced for peak ground acceleration (PGA), peak ground velocity (PGV) and instrumental intensity (in SHAKEMAP® format), or a plot of the measured peak ground values. Furthermore, based on a specific decisional scheme, the automatic discrimination between local earthquakes occurring within the network and regional/teleseismic events occurring outside the network is performed. Finally, for the largest events, if a sufficient number of P-wave polarity readings are available, the focal mechanism is also computed. For each event, all of the available pieces of information are stored in a local database, and the results of the automatic analyses are published on an interactive web page. "The Bulletin" shows a map with the event location and stations, as well as a table listing all the events with the associated parameters. The catalogue fields are the event ID, the origin date and time, latitude, longitude, depth, Ml, Mw, Md, the number of triggered stations, the S-displacement spectra, and shaking maps. Some of these entries also provide additional information, such as the focal mechanism (when available). The picked traces are uploaded to the database, and from the web interface of the Bulletin the traces can be downloaded for more specific analysis.
This innovative software represents a smart solution, with a friendly and interactive interface, for high-level seismic data analysis, and it may be a relevant tool not only for seismologists but also for non-expert external users interested in seismological data. The software is a valid tool for the automatic analysis of background seismicity at different time scales and can be relevant for the monitoring of both natural and induced seismicity.
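As an illustration of the Ml step described above, a minimal amplitude-plus-distance-correction computation is sketched below. The Hutton and Boore (1987) attenuation term is used as a stand-in; the calibration actually used for ISNet is not specified in the text.

```python
import numpy as np

def local_magnitude(peak_amp_mm, hypo_dist_km):
    """Ml from the peak Wood-Anderson amplitude (half peak-to-peak, mm).

    Uses the Hutton & Boore (1987) distance correction, anchored so
    that a 1 mm amplitude at 100 km hypocentral distance gives Ml 3.0.
    Real networks calibrate their own attenuation coefficients.
    """
    r = hypo_dist_km
    return (np.log10(peak_amp_mm)
            + 1.110 * np.log10(r / 100.0)
            + 0.00189 * (r - 100.0)
            + 3.0)
```

In an automated pipeline this would be evaluated per station on the synthesized Wood-Anderson trace and then combined (typically as a median) into the network Ml.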
Aerospace technology can be applied to exploration 'back on earth' [offshore petroleum resources]
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1977-01-01
Applications of aerospace technology to petroleum exploration are described. Attention is given to seismic reflection techniques, sea-floor mapping, remote geochemical sensing, improved drilling methods and down-hole acoustic concepts, such as down-hole seismic tomography. The seismic reflection techniques include monitoring of swept-frequency explosive or solid-propellant seismic sources, as well as aerial seismic surveys. Telemetry and processing of seismic data may also be performed through use of aerospace technology. Sea-floor sonar imaging and a computer-aided system of geologic analogies for petroleum exploration are also considered.
MSNoise: a Python Package for Monitoring Seismic Velocity Changes using Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Lecocq, T.; Caudron, C.; Brenguier, F.
2013-12-01
Earthquakes occur every day all around the world and are recorded by thousands of seismic stations. In between earthquakes, stations record "noise". In the last 10 years, the understanding of this noise and its potential uses have increased rapidly. The method, called "seismic interferometry", uses the principle that seismic waves travel between two recorders and are multiply scattered in the medium. By cross-correlating the two records, one obtains information on the medium below and between the stations. The cross-correlation function (CCF) is a proxy for the Green's function of the medium. Recent developments of the technique have shown that these CCFs can be used to image the earth at depth (3D seismic tomography) or to study changes in the medium with time. We present MSNoise, a complete software suite to compute relative seismic velocity changes under a seismic network using ambient seismic noise. The whole suite is written in Python, from the monitoring of data archives to the production of high-quality figures. All steps have been optimized to compute only what is necessary and to use 'job'-based processing. We present a validation of the software on a dataset acquired during the UnderVolc[1] project on the Piton de la Fournaise Volcano, La Réunion Island, France, for which precursory relative changes of seismic velocity are visible for three eruptions between 2009 and 2011.
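A common way to measure such relative velocity changes from CCFs is the stretching method, sketched below: the current CCF is resampled on a stretched time axis, and the stretch factor maximizing the correlation with a reference CCF gives dv/v. This is a generic illustration, not MSNoise's exact code (MSNoise's primary estimator is moving-window cross-spectral analysis).

```python
import numpy as np

def stretching_dvv(ref, cur, dt, trials=np.linspace(-0.03, 0.03, 121)):
    """Estimate relative velocity change dv/v by the stretching method.

    The current CCF is resampled as cur(t * (1 + eps)); the eps that
    maximizes correlation with the reference gives dv/v = -eps
    (a velocity increase compresses travel times).
    """
    t = np.arange(len(ref)) * dt
    best_eps, best_cc = 0.0, -np.inf
    for eps in trials:
        stretched = np.interp(t * (1.0 + eps), t, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return -best_eps, best_cc
```

The grid search is deliberately simple; the trial range and step control the resolution of the recovered dv/v.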
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.
2014-12-01
Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of the three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We were then able to use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, with a performance of 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use the time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms decrease as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain seismic structure at the basin scale, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
Continuous Seismic Threshold Monitoring
1992-05-31
Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic
NASA Astrophysics Data System (ADS)
Huerta, F. V.; Granados, I.; Aguirre, J.; Carrera, R. Á.
2017-12-01
Nowadays, in the hydrocarbon industry, there is a need to optimize and reduce exploration costs for the different types of reservoirs, motivating the specialized community to search for and develop alternative exploration geophysical methods. This study shows the reflection response obtained from a shale gas/oil deposit through the method of seismic interferometry of ambient vibrations in combination with Wavelet analysis and conventional seismic reflection techniques (CMP & NMO). The method generates seismic responses from virtual sources through the cross-correlation of records of Ambient Seismic Vibrations (ASV) collected at different receivers. The seismic response obtained is interpreted as the response that would be measured at one of the receivers if a virtual source acted at the other. The acquisition of ASV records was performed in northern Mexico through semi-rectangular arrays of multi-component geophones with a 10 Hz instrumental response. The in-line distance between geophones was 40 m, the cross-line distance was 280 m, the sampling interval during data collection was 2 ms, and the total duration of the records was 6 hours. The results show the reflection response of two lines in the in-line direction and two in the cross-line direction, for which the continuity of coherent events has been identified and interpreted as reflectors. There is confidence that the identified events correspond to reflections because the time-frequency analysis performed with the Wavelet Transform allowed us to identify the frequency band in which body waves are present. In addition, the CMP and NMO techniques allowed us to emphasize and correct the reflection response obtained during the correlation processes in the frequency band of interest.
The processing and analysis of ASV records through the seismic interferometry method thus yielded encouraging results from the cross-correlation process in combination with Wavelet analysis and conventional seismic reflection techniques. It was possible to recover the seismic response for each analyzed source-receiver pair, giving the reflection response of each analyzed seismic line.
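The NMO step mentioned above can be sketched as follows, assuming a simple nearest-sample hyperbolic correction with a single constant velocity and no stretch muting; production processing would use a velocity field and interpolation.

```python
import numpy as np

def nmo_correct(gather, offsets, dt, v):
    """Normal moveout correction of a CMP gather (nearest-sample sketch).

    gather: (ntrace, nsamp) array; offsets in m; dt in s; v: NMO
    velocity in m/s. Each output sample at zero-offset time t0 is
    pulled from the hyperbolic time t = sqrt(t0^2 + (x/v)^2).
    """
    ntrace, nsamp = gather.shape
    out = np.zeros_like(gather)
    t0 = np.arange(nsamp) * dt
    for i, x in enumerate(offsets):
        t = np.sqrt(t0 ** 2 + (x / v) ** 2)
        idx = np.rint(t / dt).astype(int)
        ok = idx < nsamp                 # discard times past the trace end
        out[i, ok] = gather[i, idx[ok]]
    return out
```

After this correction, a reflection lying on the hyperbola aligns at its zero-offset time on all traces, so stacking across offsets reinforces it.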
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable for industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
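The principle of PCA-based stacking can be sketched with a full singular value decomposition; the paper's contribution is a fast algorithm that avoids exactly this cost, so the sketch below illustrates only the underlying idea, not the proposed method.

```python
import numpy as np

def pca_stack(gather):
    """PCA-based stack of an (ntrace, nsamp) gather.

    The stacked trace is the rank-1 SVD reconstruction averaged over
    traces: s0 * V[0] scaled by the mean per-trace loading. The sign
    ambiguity of the SVD cancels in the product, so polarity is kept.
    """
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    loadings = U[:, 0]                      # per-trace weights
    return s[0] * Vt[0] * loadings.mean()   # averaged rank-1 model
```

When every trace carries the same signal plus incoherent noise, the first principal component captures the signal subspace, which is why this stack degrades more gracefully than a plain average as noise grows.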
Automatic Seismic-Event Classification with Convolutional Neural Networks.
NASA Astrophysics Data System (ADS)
Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.
2017-12-01
Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms of seven different seismic event types registered at "Volcán de Fuego" (Colima, Mexico) during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit the grid structure of the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution of the signal in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4 s length with 2.5 s overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forests and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep neural networks (91%) but outperforming SVMs and random forests. These results empirically show that CNNs are powerful models for classifying a wide range of volcano-seismic signals and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time.
Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately determine the detection and location of seismic events.
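The spectrogram front end described above (4-s Hamming windows, 2.5-s overlap, 128-point FFT) can be sketched as follows. Note that `np.fft.rfft` zero-pads or truncates each window to the FFT length, so sampling rates implying windows longer than 128 samples lose information in this simplified form; the study's exact preprocessing may differ.

```python
import numpy as np

def spectrogram(x, fs, win_s=4.0, overlap_s=2.5, nfft=128):
    """Amplitude spectrogram with Hamming windows.

    Defaults mirror the abstract's stated parameters: 4-s windows,
    2.5-s overlap, 128-point FFT. Returns (times, freqs, S) with S
    shaped (nframes, nfft // 2 + 1).
    """
    win = int(win_s * fs)
    hop = win - int(overlap_s * fs)
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * np.hamming(win)
        frames.append(np.abs(np.fft.rfft(seg, n=nfft)))  # pads/truncates
    S = np.array(frames)
    times = np.arange(S.shape[0]) * hop / fs + win_s / 2.0
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return times, freqs, S
```

Each row of S is one time frame of the time-frequency grid that the CNN consumes as an image-like input.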
Regional comparisons of Vs30 and Spectral Ratio Methods
NASA Astrophysics Data System (ADS)
McNamara, D. E.; Gee, L. S.; Stephenson, W. J.; Odum, J. K.; Williams, R. A.; Hartzell, S.
2013-12-01
Earthquake damage is often increased by local ground-motion amplification in soft soils and thick basin sediments, compounded by factors such as topographic effects and water saturation. Seismic hazard assessments depend on detailed information on local site response, and many different methods have been developed to estimate it. Based on numerous empirical studies, the average shear-wave velocity in the upper 30 m (Vs30) has become the most common means of classifying site conditions and has been adopted in the NEHRP design provisions for new buildings. In general, higher Vs30 values are associated with firm, dense rock and lower levels of ground shaking, while lower Vs30 values are associated with softer soils and high site amplification. Vs30 is commonly computed by measuring the time it takes for shear waves to travel from 30 m depth to the surface, using either active sources such as explosions or passive ambient-noise microtremor sources. Since this approach is limited to locations where active measurements are undertaken, recent methods have sought to approximate Vs30 regionally, such as using topographic slope as a proxy. In this presentation, we compute a standard site response, the horizontal-to-vertical spectral ratio (HVSR), using long-term power spectral density statistics of both ambient noise and earthquake signals at permanent and temporary seismic stations. We compare the HVSR results to surface observations of Vs30 and to approximations using topographic slope in several different regions, including the Eastern United States, St. Louis and the Los Angeles basin. In our comparison of the HVSR results to Vs30, we find that the HVSR peak frequency can be used as a proxy for Vs30. Relationships between surface-measured Vs30 and HVSR show less scatter than those with Vs30 estimated from topographic approximations. In general, higher Vs30 is associated with higher HVSR peak frequency, with variations in slope for different regions.
We use these regional relationships to estimate NEHRP soil class at over 200 seismic stations in the US.
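The Vs30 definition used throughout is the time-averaged (travel-time) velocity over the top 30 m. A minimal sketch follows, together with the NEHRP Vs30 class boundaries; note the class-E threshold here uses velocity alone and ignores the provisions' additional soft-clay criteria.

```python
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear-wave velocity over the top 30 m:

        Vs30 = 30 / sum(h_i / v_i)

    The profile is truncated at 30 m depth; if it is shallower than
    30 m, the deepest layer is extended downward.
    """
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        use = min(h, 30.0 - depth)
        if use <= 0.0:
            break
        travel_time += use / v
        depth += use
    if depth < 30.0:
        travel_time += (30.0 - depth) / velocities_ms[-1]
    return 30.0 / travel_time

def nehrp_class(v):
    """NEHRP site class from Vs30 in m/s (velocity criterion only)."""
    if v > 1500.0: return "A"
    if v > 760.0:  return "B"
    if v > 360.0:  return "C"
    if v > 180.0:  return "D"
    return "E"
```

Because the average is harmonic (travel-time weighted), a thin slow surface layer pulls Vs30 down far more than an arithmetic mean would, which is exactly why soft sites classify as D or E.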
FORTRAN programs for calculating nonlinear seismic ground response in two dimensions
Joyner, W.B.
1978-01-01
The programs described here were designed for calculating the nonlinear seismic response of a two-dimensional configuration of soil underlain by a semi-infinite elastic medium representing bedrock. There are two programs. One is for plane strain motions, that is, motions in the plane perpendicular to the long axis of the structure, and the other is for antiplane strain motions, that is, motions parallel to the axis. The seismic input is provided by specifying what the motion of the rock-soil boundary would be if the soil were absent and the boundary were a free surface. This may be done by supplying a magnetic tape containing the values of particle velocity for every boundary point at every instant of time. Alternatively, a punch-card deck may be supplied giving acceleration values at every instant of time. In the plane strain program it is assumed that the acceleration values apply simultaneously to every point on the boundary; in the antiplane strain program it is assumed that the acceleration values characterize a plane shear wave propagating upward in the underlying elastic medium at a specified angle with the vertical. The nonlinear hysteretic behavior of the soil is represented by a three-dimensional rheological model. A boundary condition is used which takes account of the finite rigidity of the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. Computations are done in terms of stress departures from an unspecified initial state. Source listings are provided here along with instructions for preparing the input. A more detailed discussion of the method is presented elsewhere.
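A much-reduced sketch of such an explicit step-by-step scheme, in Python rather than FORTRAN: linear soil and a prescribed rigid-base motion are assumed here in place of the report's hysteretic rheology and finite-rigidity (transmitting) rock boundary.

```python
import numpy as np

def sh_soil_column(base_disp, dt, dz, vs):
    """Explicit finite-difference solution of the 1-D antiplane (SH) wave
    equation u_tt = vs**2 * u_zz for a soil column.

    Node 0 is the free surface; the last node is driven by `base_disp`.
    Stability requires max(vs) * dt / dz <= 1. Returns the surface history.
    """
    vs = np.asarray(vs, dtype=float)
    n = len(vs) - 1                       # index of the driven base node
    c2 = (vs * dt / dz) ** 2              # squared Courant numbers per node
    u_prev = np.zeros(n + 1)
    u = np.zeros(n + 1)
    surface = []
    for ub in base_disp:
        lap = np.zeros(n + 1)
        lap[1:n] = u[2:n + 1] - 2.0 * u[1:n] + u[0:n - 1]
        lap[0] = 2.0 * (u[1] - u[0])      # free surface via ghost node u[-1] = u[1]
        u_next = 2.0 * u - u_prev + c2 * lap
        u_next[n] = ub                    # prescribed base motion (simplification)
        u_prev, u = u, u_next
        surface.append(u[0])
    return np.array(surface)
```

A pulse fed in at the base arrives at the surface after the column travel time H/vs with the familiar free-surface doubling, which is a convenient sanity check on the scheme.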
Monitoring the tidal response of a sea levee with ambient seismic noise
NASA Astrophysics Data System (ADS)
Planès, Thomas; Rittgers, Justin B.; Mooney, Michael A.; Kanning, Wim; Draganov, Deyan
2017-03-01
Internal erosion, a major cause of failure of earthen dams and levees, is often difficult to detect at early stages using traditional visual inspection. The passive seismic-interferometry technique could enable the early detection of internal changes taking place within these structures. We test this technique on a portion of the sea levee of Colijnsplaat, Netherlands, which presents signs of concentrated seepage in the form of sandboils. Applying seismic interferometry to ambient noise collected over a 12-hour period, we retrieve surface waves propagating along the levee. We identify the contribution of two dominant ambient seismic noise sources: the traffic on the Zeeland bridge and a nearby wind turbine. Here, the sea-wave action does not constitute a suitable noise source for seismic interferometry. Using the retrieved surface waves, we compute time-lapse variations of the surface-wave group velocities during the 12-hour tidal cycle for different frequency bands, i.e., for different depth ranges. The estimated group-velocity variations correlate with variations in on-site pore-water pressure measurements that respond to tidal loading. We present lateral profiles of these group-velocity variations along a 180-meter section of the levee, at four different depth ranges (0-40 m). On these profiles, we observe some spatially localized relative group-velocity variations of up to 5% that might be related to concentrated seepage.
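The retrieval step of such an interferometry workflow can be sketched as a noise cross-correlation; this is a minimal illustration with illustrative names, and real processing adds spectral whitening, temporal normalization and stacking over many correlation windows.

```python
import numpy as np

def noise_crosscorrelation(a, b, fs, max_lag):
    """Cross-correlate ambient noise from two sensors to estimate an
    empirical Green's function (surface wave travelling between them)."""
    n = len(a)
    spec = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    cc = np.fft.irfft(spec, n)
    cc = np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])  # lags -max..+max
    lags = np.arange(-max_lag, max_lag + 1) / fs
    return lags, cc

def group_velocity(lags, cc, distance):
    """Group velocity from the travel time of the correlation peak."""
    t = abs(lags[np.argmax(np.abs(cc))])
    return distance / t
```

Repeating the measurement through a tidal cycle and differencing the velocities gives the time-lapse variations described in the abstract.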
NASA Astrophysics Data System (ADS)
Zeng, Zhi-Ping; Zhao, Yan-Gang; Xu, Wen-Tao; Yu, Zhi-Wu; Chen, Ling-Kun; Lou, Ping
2015-04-01
The frequent use of bridges in high-speed railway lines greatly increases the probability that trains are running on bridges when earthquakes occur. This paper investigates the random vibrations of a high-speed train traversing a slab track on a continuous girder bridge subjected to track irregularities and traveling seismic waves by the pseudo-excitation method (PEM). To derive the equations of motion of the train-slab track-bridge interaction system, the multibody dynamics and finite element method models are used for the train and the track and bridge, respectively. By assuming track irregularities to be fully coherent random excitations with time lags between different wheels and seismic accelerations to be uniformly modulated, non-stationary random excitations with time lags between different foundations, the random load vectors of the equations of motion are transformed into a series of deterministic pseudo-excitations based on PEM and the wheel-rail contact relationship. A computer code is developed to obtain the time-dependent random responses of the entire system. As a case study, the random vibration characteristics of an ICE-3 high-speed train traversing a seven-span continuous girder bridge simultaneously excited by track irregularities and traveling seismic waves are analyzed. The influence of train speed and seismic wave propagation velocity on the random vibration characteristics of the bridge and train is discussed.
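The pseudo-excitation idea can be illustrated on a single-degree-of-freedom oscillator rather than the paper's full train-slab track-bridge system: the random load with PSD S(ω) is replaced, frequency by frequency, by the deterministic harmonic excitation √S(ω)·e^{iωt}, and the response PSD is the squared modulus of the resulting harmonic response.

```python
import numpy as np

def psd_response_pem(omega, s_ff, wn, zeta):
    """Pseudo-excitation method for the SDOF oscillator
    x'' + 2*zeta*wn*x' + wn**2*x = f(t), with force PSD s_ff(omega)."""
    pseudo = np.sqrt(s_ff)                                  # pseudo-excitation amplitude
    h = 1.0 / (wn**2 - omega**2 + 2j * zeta * wn * omega)   # receptance FRF
    y = h * pseudo                                          # deterministic pseudo-response
    return np.abs(y) ** 2                                   # S_xx = |H|**2 * S_ff
```

Integrating the PEM response PSD over frequency reproduces the classical white-noise variance πS₀/(2ζωₙ³), a quick check that the transformation preserves the random-vibration statistics.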
Inelastic seismic response of precast concrete frames with constructed plastic hinges
NASA Astrophysics Data System (ADS)
Sucuoglu, H.
1995-07-01
A modified seismic design concept is introduced for precast concrete frames in which beam plastic hinges with reduced yield capacities are constructed away from the precast beam-column connections arranged at the column faces. Plastic hinge location and yield capacity are employed as the basic parameters of an analytical survey in which the inelastic dynamic responses of a conventional precast frame and its modified counterparts are calculated and compared under two earthquake excitations by using a general purpose computer program for dynamic analysis of inelastic frames [1, 2]. An optimum design is obtained by providing plastic hinges on precast beams located one beam depth away from the beam ends, in which the primary (negative) bending-moment yield capacities are reduced to between one-quarter and one-third of the beam design end moments. With such plastic hinge configurations, precast beam-column connections at the column faces can be designed to remain elastic under strong earthquake excitations.
Impact of Topography on Seismic Amplification During the 2005 Kashmir Earthquake
NASA Astrophysics Data System (ADS)
Khan, S.; van der Meijde, M.; van der Werff, H.; Shafique, M.
2016-12-01
This study assesses topographic amplification of seismic response during the 2005 Kashmir Earthquake in northern Pakistan. Topography scatters seismic waves, which causes variation in seismic response at the surface of the earth. During the Kashmir earthquake, topography-induced amplification was suspected to have had a major influence on the damage to infrastructure. We performed a 3-dimensional simulation of the event using the SPECFEM3D software. We first analyzed the impact of data resolution (mesh and Digital Elevation Model) on the derived seismic response. ASTER GDEM elevation data were used to build a 3D finite element mesh, and the source parameters (latitude, longitude, depth, moment tensor) of the Kashmir earthquake were used in simulating the event. Our results show amplification of seismic response on ridges and de-amplification in valleys. We also found that slopes facing away from the source receive an amplified seismic response compared to slopes facing towards the source. The PGD generally falls within the range 0.23-5.8 meters; topographic amplification causes local changes in the range of -2.50 to +3.50 meters, causing the PGD to fall in the range of 0.36-7.85 meters.
Temporal Delineation and Quantification of Short Term Clustered Mining Seismicity
NASA Astrophysics Data System (ADS)
Woodward, Kyle; Wesseloo, Johan; Potvin, Yves
2017-07-01
The assessment of the temporal characteristics of seismicity is fundamental to understanding and quantifying the seismic hazard associated with mining, the effectiveness of strategies and tactics used to manage seismic hazard, and the relationship between seismicity and changes to the mining environment. This article aims to improve the accuracy and precision with which the temporal dimension of seismic responses can be quantified and delineated. We present a review and discussion on the occurrence of time-dependent mining seismicity with a specific focus on temporal modelling and the modified Omori law (MOL). This forms the basis for the development of a simple weighted metric that allows for the consistent temporal delineation and quantification of a seismic response. The optimisation of this metric allows for the selection of the most appropriate modelling interval given the temporal attributes of time-dependent mining seismicity. We evaluate the performance of the weighted metric for the modelling of a synthetic seismic dataset. This assessment shows that seismic responses can be quantified and delineated by the MOL, with reasonable accuracy and precision, when the modelling is optimised by evaluating the weighted maximum likelihood estimation (MLE) metric. Furthermore, this assessment highlights that decreased weighted MLE metric performance can be expected if there is a lack of contrast between the temporal characteristics of events associated with different processes.
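Fitting the modified Omori law by maximum likelihood can be sketched as below; the profile likelihood, grid search and parameter ranges are illustrative, dependency-free choices, not the authors' weighted-metric implementation.

```python
import numpy as np

def mol_profile_loglik(times, T, c, p):
    """Log-likelihood of event times in [0, T] under the modified Omori law
    rate K / (t + c)**p, with K profiled out at its analytic MLE."""
    if abs(p - 1.0) < 1e-9:
        integral = np.log((T + c) / c)
    else:
        integral = ((T + c)**(1 - p) - c**(1 - p)) / (1 - p)
    K = len(times) / integral                       # analytic MLE for K
    loglik = len(times) * np.log(K) - p * np.sum(np.log(times + c)) - K * integral
    return loglik, K

def fit_mol(times, T, c_grid, p_grid):
    """Coarse grid-search MLE over (c, p); easily replaced by an optimiser."""
    best = (-np.inf, None)
    for c in c_grid:
        for p in p_grid:
            ll, K = mol_profile_loglik(times, T, c, p)
            if ll > best[0]:
                best = (ll, (K, c, p))
    return best[1]
```

Simulating a nonhomogeneous Poisson process with a known (K, c, p) and checking that the fit recovers p is a simple way to exercise the estimator, in the spirit of the synthetic-dataset assessment above.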
The effect of yield strength and ductility to fatigue damage
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1973-01-01
The cumulative damage of aluminium alloys with different yield strengths and ductilities due to seismic loads was studied. The responses of an idealized beam with a centered mass at one end, fixed at the other end, to the El Centro and Taft earthquakes are computed by assuming that the alloys are perfectly elastoplastic materials and by using a numerical technique. Consequently, the corresponding residual plastic strain can be obtained from the stress-strain relationship. The revised Palmgren-Miner cumulative damage theory is utilized to calculate the fatigue damage. The numerical results show that in certain cases, high-ductility materials are more resistant to seismic loads than high-yield-strength materials. The results also show that if a structure collapses during an earthquake, the collapse always occurs at a very early stage.
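The damage bookkeeping reduces to the Palmgren-Miner sum D = Σ nᵢ/Nᵢ with failure predicted at D ≥ 1. A sketch follows, with an illustrative Coffin-Manson life curve whose coefficients are placeholders, not the study's values.

```python
from collections import Counter

def coffin_manson_life(de_p, ef=0.5, c=-0.6):
    """Cycles to failure N from the Coffin-Manson relation de_p/2 = ef*(2N)**c.
    ef and c are illustrative ductility coefficients."""
    return 0.5 * (de_p / (2.0 * ef)) ** (1.0 / c)

def miner_damage(strain_ranges, life=coffin_manson_life):
    """Palmgren-Miner cumulative damage D = sum(n_i / N_i) over the counted
    plastic strain ranges; D >= 1 predicts fatigue failure."""
    counts = Counter(strain_ranges)
    return sum(n / life(de) for de, n in counts.items())
```

Feeding in the residual plastic strain ranges extracted from the computed hysteresis loops gives the damage index compared across alloys in the study.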
Multi-hazards risk assessment at different levels
NASA Astrophysics Data System (ADS)
Frolova, N.; Larionov, V.; Bonnin, J.
2012-04-01
Natural and technological disasters are becoming more frequent and devastating. Social and economic losses due to such events increase annually, which is clearly related to the evolution of society. Identification and analysis of natural hazards, as well as natural risk assessment taking into account secondary technological accidents, are the first steps in a prevention strategy aimed at saving lives and protecting property against future events. The paper addresses methodological issues of integrated natural and technological risk assessment and mapping at different levels [1, 2]. At the country level, the most hazardous natural processes that may result in fatalities, injuries and economic loss in the Russian Federation are considered: earthquakes, landslides, mud flows, floods, storms and avalanches. A special GIS environment was developed for the country's territory, which includes information about hazard levels and recurrence, impact databases for the last 20 years, and models for estimating damage and casualties caused by these hazards. Federal maps of individual and collective seismic risk, as well as multi-hazard natural risk maps, are presented. Examples of regional seismic risk assessment taking into account secondary accidents at fire-, explosion- and chemically hazardous facilities, and of regional integrated risk assessment, are given for the earthquake-prone areas of the Russian Federation. The paper also gives examples of loss computations for scenario earthquakes taking into account accidents triggered by strong events at critical facilities, including fire- and chemically hazardous facilities and oil pipeline routes located in earthquake-prone areas. The estimates of individual seismic risk obtained are used by EMERCOM of the Russian Federation, as well as by other federal and local authorities, for planning and implementing preventive measures aimed at saving lives and protecting property against future disastrous events.
The results also support the development of effective emergency response plans that take into account possible scenario events. Given the extent of the oil pipeline systems located in highly active seismic zones, the results of the seismic risk computations are used by TRANSNEFT JSC.
A normal mode treatment of semi-diurnal body tides on an aspherical, rotating and anelastic Earth
NASA Astrophysics Data System (ADS)
Lau, Harriet C. P.; Yang, Hsin-Ying; Tromp, Jeroen; Mitrovica, Jerry X.; Latychev, Konstantin; Al-Attar, David
2015-08-01
Normal mode treatments of the Earth's body tide response were developed in the 1980s to account for the effects of Earth rotation, ellipticity, anelasticity and resonant excitation within the diurnal band. Recent space-geodetic measurements of the Earth's crustal displacement in response to luni-solar tidal forcings have revealed geographical variations that are indicative of aspherical deep mantle structure, thus providing a novel data set for constraining deep mantle elastic and density structure. In light of this, we make use of advances in the seismic free-oscillation literature to develop a new, generalized normal mode theory for the tidal response within the semi-diurnal and long-period tidal band. Our theory involves a perturbation method that permits an efficient calculation of the impact of aspherical structure on the tidal response. In addition, we introduce a normal mode treatment of anelasticity that is distinct from both earlier work in body tides and the approach adopted in free oscillation seismology. We present several simple numerical applications of the new theory. First, we compute the tidal response of a spherically symmetric, non-rotating, elastic and isotropic Earth model and demonstrate that our predictions match those based on standard Love number theory. Second, we compute perturbations to this response associated with mantle anelasticity and demonstrate that the usual set of seismic modes adopted for this purpose must be augmented by a family of relaxation modes to accurately capture the full effect of anelasticity on the body tide response. Finally, we explore aspherical effects including rotation and we benchmark results from several illustrative case studies of aspherical Earth structure against independent finite-volume numerical calculations of the semi-diurnal body tide response. These tests confirm the accuracy of the normal mode methodology to at least the level of numerical error in the finite-volume predictions.
They also demonstrate that full coupling of normal modes, rather than group coupling, is necessary for accurate predictions of the body tide response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kot, C.A.; Srinivasan, M.G.; Hsieh, B.J.
As part of the Phase II testing at the HDR Test Facility in Kahl/Main, FRG, two series of high-level seismic/vibrational experiments were performed. In the first of these (SHAG), a coast-down shaker, mounted on the reactor operating floor and capable of generating 1000 tonnes of force, was used to investigate full-scale structural response, soil-structure interaction (SSI), and piping/equipment response at load levels equivalent to those of a design basis earthquake. The HDR soil/structure system was tested to incipient failure, exhibiting highly nonlinear response. In the load transmission from structure to piping/equipment, significant response amplifications and shifts to higher frequencies occurred. The performance of various pipe support configurations was evaluated. This latter effort was continued in the second series of tests (SHAM), in which an in-plant piping system was investigated at simulated seismic loads (generated by two servo-hydraulic actuators, each capable of generating 40 tonnes of force) that exceeded design levels manifold and resulted in considerable pipe plastification and failure of some supports (snubbers). The evaluation of six different support configurations demonstrated that proper system design (for a given spectrum), rather than the number of supports or system stiffness, is essential to limiting pipe stresses. Pipe strains at loads exceeding the design level eightfold were still tolerable, indicating that pipe failure even under extreme seismic loads is unlikely in spite of multiple support failures. Conservatively, an excess capacity (margin) of at least four was estimated for the piping system, and the pipe damping was found to be 4%. Comparisons of linear and nonlinear computational results with measurements showed that analytical predictions have wide scatter and do not necessarily yield conservative responses, underpredicting, in particular, peak support forces.
NASA Astrophysics Data System (ADS)
Langston, C. A.
2017-12-01
The seismic wave gradient tensor can be derived from a variety of field observations including measurements of the wavefield by a dense seismic array, strain meters, and rotation meters. Coupled with models of wave propagation, wave gradients along with the original wavefield can give estimates of wave attributes that can be used to infer wave propagation directions, apparent velocities, spatial amplitude behavior, and wave type. Compact geodetic arrays with apertures of 0.1 wavelength or less can be deployed to provide wavefield information at a localized spot similar to larger phased arrays with apertures of many wavelengths. Large N, spatially distributed arrays can provide detailed information over an area to detect structure changes. Key to accurate computation of spatial gradients from arrays of seismic instruments is knowledge of relative instrument responses, particularly component sensitivities and gains, along with relative sensor orientations. Array calibration has been successfully performed for the 14-element Pinyon Flat, California, broadband array using long-period teleseisms to achieve relative precisions as small as 0.2% in amplitude and 0.35° in orientation. Calibration has allowed successful comparison of horizontal seismic strains from local and regional seismic events with the Plate Boundary Observatory (PBO) borehole strainmeter located at the facility. Strains from the borehole strainmeter in conjunction with ground velocity from a co-located seismometer are used as a "point" array in estimating wave attributes for the P-SV components of the wavefield. An effort is underway to verify the calibration of PBO strainmeters in southern California and their co-located borehole seismic sensors to create an array of point arrays for use in studies of regional wave propagation and seismic sources.
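One simple way to compute spatial gradients from a compact array is a per-sample least-squares plane fit, valid under the small-aperture assumption (aperture much smaller than a wavelength); the names below are illustrative, not tied to any particular array.

```python
import numpy as np

def wavefield_gradients(coords, records):
    """Estimate u0(t), du/dx(t), du/dy(t) from a compact array by fitting
    u(x, y, t) ~ u0(t) + ux(t)*x + uy(t)*y at every time sample.

    coords:  (nsta, 2) station offsets from the array centre.
    records: (nsta, nt) calibrated, orientation-corrected traces.
    """
    G = np.column_stack([np.ones(len(coords)), coords])  # [1, x, y] design matrix
    m, *_ = np.linalg.lstsq(G, records, rcond=None)
    return m[0], m[1], m[2]
```

For a plane wave u = f(t − s·x) the fitted gradient equals −s times the time derivative of u0, so slowness (hence apparent velocity and back azimuth) follows from regressing the gradient channels on du0/dt; this is also why the relative calibration emphasized above matters, since gain errors map directly into spurious gradients.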
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.
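The derivative-free sampling at the heart of the unscented Kalman filter is the scaled unscented transform, which propagates a mean and covariance through a nonlinear map using sigma points instead of response sensitivities. The sketch below shows only that building block, not the authors' full joint parameter/input estimator.

```python
import numpy as np

def unscented_transform(fx, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented transform: propagate N(mean, cov) through fx using
    2n+1 deterministically chosen sigma points (no derivatives of fx)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([fx(s) for s in sigma])            # propagate each point
    y_mean = wm @ ys
    y_cov = (wc[:, None] * (ys - y_mean)).T @ (ys - y_mean)
    return y_mean, y_cov
```

For a linear map the transform is exact, which makes a convenient correctness check; in the filter itself, fx would be a nonlinear FE response prediction.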
Reyes, Juan C.; Kalkan, Erol
2012-01-01
In the United States, regulatory seismic codes (for example, the California Building Code) require at least two sets of horizontal ground-motion components for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 kilometers (3.1 miles) of an active fault, these records should be rotated to the fault-normal and fault-parallel (FN/FP) directions, and two RHAs should be performed separately: first with the FN and then with the FP direction aligned with the transverse axis of the building. This approach is assumed to lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. The validity of this assumption is examined here using 3D computer models of single-story structures having symmetric (torsionally stiff) and asymmetric (torsionally flexible) layouts subjected to an ensemble of near-fault ground motions with and without apparent velocity pulses. In this parametric study, the elastic vibration period is varied from 0.2 to 5 seconds, and yield-strength reduction factors, R, are varied from a value that leads to linear-elastic design to 3 and 5. Further validations are performed using 3D computer models of 9-story structures having symmetric and asymmetric layouts subjected to the same ground-motion set. The influence of the ground-motion rotation angle on several engineering demand parameters (EDPs) is examined in both the linear-elastic and nonlinear-inelastic domains to form benchmarks for evaluating the use of the FN/FP directions and also the maximum direction (MD). The MD ground motion is a new definition of horizontal ground motion for use in site-specific ground-motion procedures for seismic design according to the provisions of the American Society of Civil Engineers/Structural Engineering Institute (ASCE/SEI) 7-10.
The results of this study have important implications for current practice, suggesting that ground motions rotated to the MD or FN/FP directions do not necessarily provide the most critical EDPs in the nonlinear-inelastic domain; however, they tend to produce larger EDPs than as-recorded (arbitrarily oriented) motions.
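Sweeping a pair of as-recorded horizontal components over all non-redundant rotation angles can be sketched as below; the peak of the rotated trace is used here as a simple stand-in for the spectral EDPs of the study.

```python
import numpy as np

def rotate_components(h1, h2, theta_deg):
    """Project a pair of as-recorded horizontal components onto the axis
    rotated theta degrees from component 1."""
    th = np.radians(theta_deg)
    return h1 * np.cos(th) + h2 * np.sin(th)

def maximum_direction_peak(h1, h2, dtheta=1.0):
    """Peak response over all non-redundant rotation angles (0-180 deg);
    the maximizing angle defines the maximum direction (MD) for this metric."""
    angles = np.arange(0.0, 180.0, dtheta)
    peaks = [np.max(np.abs(rotate_components(h1, h2, a))) for a in angles]
    i = int(np.argmax(peaks))
    return angles[i], peaks[i]
```

Replacing the peak-amplitude metric with an oscillator response at a given period turns this sweep into the spectral MD definition referenced in ASCE/SEI 7-10.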
NASA Astrophysics Data System (ADS)
Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.
2013-12-01
Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. 
Following an earthquake, the newly discovered faults can be paired with infrastructure overlays, allowing emergency response teams to identify sites that may have been exposed to damage. The faults will also be incorporated into a database for future integration into fault models and earthquake simulations, improving future earthquake hazard assessment. As new faults are mapped, they will further understanding of the complex fault systems and earthquake hazards within the seismically dynamic state of California.
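A gradient-magnitude ("edge-detection") pass over an unwrapped interferogram can be sketched with a plain Sobel operator, since faults appear as linear discontinuities of high gradient; this is an illustration of the technique, not the E-DECIDER algorithm itself.

```python
import numpy as np

def sobel_edges(image, threshold):
    """Sobel gradient-magnitude edge detector: pixels whose gradient
    magnitude exceeds `threshold` are flagged as candidate discontinuities."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    pad = np.pad(image, 1, mode="edge")
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    for i in range(3):                     # correlate with both 3x3 kernels
        for j in range(3):
            sub = pad[i:i + image.shape[0], j:j + image.shape[1]]
            gx += kx[i, j] * sub
            gy += ky[i, j] * sub
    mag = np.hypot(gx, gy)                 # gradient magnitude
    return mag > threshold
```

On real interferograms the flagged pixels would then be linked into lineaments and compared against known fault databases, as the abstract describes.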
A Software Toolbox for Systematic Evaluation of Seismometer-Digitizer System Responses
2011-09-01
characteristics (e.g., borehole vs. surface installation) instead of the actual seismic noise characteristics. Their results suggest that our best... Measurement of the absolute amplitudes of a seismic signal requires accurate knowledge of... estimates seismic noise power spectral densities, and NOISETRAN, which generates a pseudo-amplitude response (PAR) for a seismic station, based on
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
NASA Astrophysics Data System (ADS)
Zhou, Bing; Greenhalgh, S. A.
2011-10-01
2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.
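The perturbation method discussed above amounts, in its simplest form, to differencing forward simulations with respect to each independent model parameter. A generic finite-difference sketch follows, in which `forward` stands for any solver (an assumption for illustration, not the paper's analytic kernels, which avoid this extra cost).

```python
import numpy as np

def frechet_derivatives(forward, model, delta=1e-4):
    """Finite-difference approximation to the Frechet derivatives: perturb
    each independent model parameter in turn and difference the synthetics.

    forward(model) must return the synthetic data vector for a model vector.
    Returns the (ndata, nparams) sensitivity (Jacobian) matrix."""
    model = np.asarray(model, dtype=float)
    u0 = forward(model)
    J = np.empty((len(u0), len(model)))
    for k in range(len(model)):
        m = model.copy()
        m[k] += delta * abs(model[k]) if model[k] != 0 else delta  # relative step
        step = m[k] - model[k]
        J[:, k] = (forward(m) - u0) / step
    return J
```

This brute-force variant needs one extra forward solve per parameter, which is exactly the expense the explicit kernel expressions of the paper are designed to avoid.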
Pratt, Thomas L.; Meagher, Karen L.; Brocher, Thomas M.; Yelin, Thomas; Norris, Robert; Hultgrien, Lynn; Barnett, Elizabeth; Weaver, Craig S.
2003-01-01
This report describes seismic data obtained during the fourth Seismic Hazard Investigation of Puget Sound (SHIPS) experiment, termed Seattle SHIPS. The experiment was designed to study the influence of the Seattle sedimentary basin on ground shaking during earthquakes. To accomplish this, we deployed seismometers over the basin to record local earthquakes, quarry blasts, and teleseisms during the period of January 26 to May 27, 2002. We plan to analyze the recordings to compute spectral amplitudes at each site, to determine the variability of ground motions over the basin. During the Seattle SHIPS experiment, seismometers were deployed at 87 sites in a 110-km-long east-west line, three north-south lines, and a grid throughout the Seattle urban area (Figure 1). At each of these sites, an L-22, 2-Hz velocity transducer was installed and connected to a REF TEK Digital Acquisition System (DAS), both provided by the Program for Array Seismic Studies of the Continental Lithosphere (PASSCAL) of the Incorporated Research Institutions for Seismology (IRIS). The instruments were installed on January 26 and 27, and were retrieved gradually between April 18 and May 27. All instruments continuously sampled all three components of motion (velocity) at a sample rate of 50 samples/sec. To ensure accurate computations of amplitude, we calibrated the geophones in situ to obtain the instrument responses. In this report, we discuss the acquisition of these data, we describe the processing and merging of these data into 1-hour long traces and into windowed events, we discuss the geophone calibration process and its results, and we display some of the earthquake recordings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Dreger, Douglas S.; Pitarka, Arben
We performed three-dimensional (3D) anelastic ground motion simulations of the South Napa earthquake to investigate the performance of different finite rupture models and the effects of 3D structure on the observed wavefield. We considered rupture models reported by Dreger et al. (2015), Ji et al. (2015), Wei et al. (2015) and Melgar et al. (2015). We used the SW4 anelastic finite difference code developed at Lawrence Livermore National Laboratory (Petersson and Sjogreen, 2013) and distributed by the Computational Infrastructure for Geodynamics. This code can compute the seismic response for fully 3D sub-surface models, including surface topography and linear anelasticity. We use the 3D geologic/seismic model of the San Francisco Bay Area developed by the United States Geological Survey (Aagaard et al., 2008, 2010). Evaluation of earlier versions of this model indicated that the structure can reproduce main features of observed waveforms from moderate earthquakes (Rodgers et al., 2008; Kim et al., 2010). Simulations were performed for a domain covering local distances (< 25 km) and resolution providing simulated ground motions valid to 1 Hz.
Assessment of seismic design response factors of concrete wall buildings
NASA Astrophysics Data System (ADS)
Mwafy, Aman
2011-03-01
To verify the seismic design response factors of high-rise buildings, five reference structures, varying in height from 20 to 60 stories, were selected and designed according to modern design codes to represent a wide range of concrete wall structures. Verified fiber-based analytical models for inelastic simulation were developed, considering the geometric nonlinearity and material inelasticity of the structural members. The ground motion uncertainty was accounted for by employing 20 earthquake records representing two seismic scenarios, consistent with the latest understanding of the tectonic setting and seismicity of the selected reference region (UAE). A large number of Inelastic Pushover Analyses (IPAs) and Incremental Dynamic Collapse Analyses (IDCAs) were deployed for the reference structures to estimate the seismic design response factors. It is concluded that the factors adopted by the design code are adequately conservative. The results of this systematic assessment of seismic design response factors apply to a wide variety of contemporary concrete wall buildings with various characteristics.
A Bayesian Approach to Real-Time Earthquake Phase Association
NASA Astrophysics Data System (ADS)
Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.
2014-12-01
Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel Bayesian association algorithm, which treats the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one-to-many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on the estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.
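As a point of contrast, the classical time-space stacking scheme described above can be sketched in a few lines: back-project each pick to a candidate origin time for every trial source location, and declare an association where the back-projected origin times agree. The grid, homogeneous velocity, and tolerance below are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def associate(station_xy, picks, grid_xy, v=6.0, tol=0.5):
    """Classical time-space stacking: for each trial source location,
    back-project every pick to a candidate origin time; picks whose
    origin times agree (within tol) are associated to one event."""
    best = None
    for gx, gy in grid_xy:
        origins = []
        for (sx, sy), t_arr in zip(station_xy, picks):
            tt = np.hypot(gx - sx, gy - sy) / v   # homogeneous travel time
            origins.append(t_arr - tt)
        origins = np.array(origins)
        t0 = np.median(origins)
        n_assoc = int(np.sum(np.abs(origins - t0) < tol))
        if best is None or n_assoc > best[0]:
            best = (n_assoc, (gx, gy), t0)
    return best  # (number of associated picks, location, origin time)

# synthetic test: event at (10, 10) km with origin time 5 s
stations = [(0, 0), (20, 0), (0, 20), (20, 20)]
true_xy, t0 = (10.0, 10.0), 5.0
picks = [t0 + np.hypot(true_xy[0] - x, true_xy[1] - y) / 6.0 for x, y in stations]
grid = [(x, y) for x in range(0, 21, 5) for y in range(0, 21, 5)]
count, loc, t_est = associate(stations, picks, grid)
```

On this toy network all four picks associate at the correct grid node; it is exactly this exhaustive per-node scan that degrades as station density varies, motivating the Bayesian alternative.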
Matching pursuit parallel decomposition of seismic data
NASA Astrophysics Data System (ADS)
Li, Chuanhui; Zhang, Fanchang
2017-07-01
In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. In every iteration, we pick a fixed number of envelope peaks from the current signal according to the number of compute nodes and distribute them evenly among the nodes, which then search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the algorithm exploits parallel computing to significantly improve the speed of matching pursuit decomposition, and it also scales well. Moreover, having each compute node search for a single optimal Morlet wavelet per iteration proves to be the most efficient implementation.
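A serial sketch of the greedy matching pursuit loop helps fix ideas; the paper's contribution is distributing the Morlet-wavelet search across MPI compute nodes, which is only indicated by a comment here. The dictionary parameters and the test signal are illustrative assumptions.

```python
import numpy as np

def morlet(t, t0, f, sigma):
    """Real Morlet-type atom centered at t0 with dominant frequency f,
    normalized to unit energy."""
    g = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * (t - t0))
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy decomposition: at each iteration pick the atom with the
    largest inner product with the residual and subtract its projection.
    (In the paper, the search over candidate envelope peaks is split
    across MPI compute nodes; here the search is serial.)"""
    residual = signal.copy()
    atoms = []
    for _ in range(n_iter):
        scores = [abs(np.dot(residual, a)) for a in dictionary]
        k = int(np.argmax(scores))
        coeff = np.dot(residual, dictionary[k])
        residual = residual - coeff * dictionary[k]
        atoms.append((k, coeff))
    return atoms, residual

t = np.linspace(0, 1, 500)
dictionary = [morlet(t, t0, f, 0.05) for t0 in (0.3, 0.7) for f in (10, 25)]
signal = 2.0 * dictionary[0] + 0.5 * dictionary[3]   # two known atoms
atoms, res = matching_pursuit(signal, dictionary, n_iter=2)
```

With well-separated atoms the two iterations recover the correct indices and coefficients and leave a near-zero residual; each iteration's `argmax` over candidate atoms is the step that parallelizes naturally.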
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for the dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations, all assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to the associated anchor-point-motion static problems are not included.
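The quantities listed in such benchmark solutions combine in a standard way. Below is a minimal sketch of the Response Spectrum Method's modal combination step, using SRSS as one common combination rule (the report itself does not prescribe a rule here); the 2-DOF mode shapes, participation factors, frequencies and spectral ordinates are made-up illustrative values.

```python
import numpy as np

def srss_response(phi, gamma, omega, sa):
    """Peak nodal displacements via the Response Spectrum Method:
    per-mode peak u_i = Gamma_i * phi_i * Sa(omega_i) / omega_i**2,
    combined across modes by SRSS (square root of sum of squares)."""
    per_mode = np.array([g * p * s / w**2
                         for p, g, w, s in zip(phi.T, gamma, omega, sa)])
    return np.sqrt(np.sum(per_mode**2, axis=0))

# illustrative 2-DOF system: columns of phi are mode shapes,
# omega in rad/s, sa are spectral accelerations -- not benchmark values
phi = np.array([[1.0, 1.0],
                [2.0, -1.0]])
gamma = [1.2, -0.2]
omega = [5.0, 15.0]
sa = [2.0, 3.0]
u_peak = srss_response(phi, gamma, omega, sa)
```

SRSS is adequate for well-separated modes like these; closely spaced modes would call for CQC or similar rules.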
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busbey, A.B.
Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.
An efficient repeating signal detector to investigate earthquake swarms
NASA Astrophysics Data System (ADS)
Skoumal, Robert J.; Brudzinski, Michael R.; Currie, Brian S.
2016-08-01
Repetitive earthquake swarms have been recognized as key signatures in fluid-injection-induced seismicity, precursors to volcanic eruptions, and slow slip events preceding megathrust earthquakes. We investigate earthquake swarms by developing a Repeating Signal Detector (RSD), a computationally efficient algorithm utilizing agglomerative clustering to identify similar waveforms buried in years of seismic recordings using a single seismometer. Instead of relying on existing catalogs of larger earthquakes, RSD rapidly flags signals of interest above a low signal-to-noise ratio and then groups them based on spectral and time-domain characteristics, resulting in dramatically shorter processing time than more exhaustive autocorrelation approaches. We investigate seismicity in four regions using RSD: (1) volcanic seismicity at Mammoth Mountain, California, (2) subduction-related seismicity in Oaxaca, Mexico, (3) induced seismicity in Central Alberta, Canada, and (4) induced seismicity in Harrison County, Ohio. In each case, RSD detects a similar or larger number of earthquakes than existing catalogs created using more time-intensive methods. In Harrison County, RSD identifies 18 seismic sequences that correlate temporally and spatially to separate hydraulic fracturing operations, 15 of which were previously unreported. RSD utilizes a single seismometer for earthquake detection, which enables seismicity to be quickly identified in poorly instrumented regions, at the expense of relying on another method to locate the new detections. Due to its smaller computational overhead and success at distances up to ~50 km, RSD is well suited for real-time detection of low-magnitude earthquake swarms with permanent regional networks.
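The grouping stage can be illustrated with a toy version of RSD's clustering step: normalized amplitude spectra fed to average-linkage agglomerative clustering. The distance metric, threshold, and synthetic waveform families below are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_detections(waveforms, max_dist=0.3):
    """Group detected signals by spectral similarity (a simplification
    of RSD's spectral + time-domain grouping): normalized amplitude
    spectra, then average-linkage agglomerative clustering on
    correlation distance."""
    specs = np.abs(np.fft.rfft(waveforms, axis=1))
    specs /= np.linalg.norm(specs, axis=1, keepdims=True)
    Z = linkage(specs, method="average", metric="correlation")
    return fcluster(Z, t=max_dist, criterion="distance")

# two synthetic "repeating families": 8 Hz and 30 Hz signals in noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
fam_a = [np.sin(2*np.pi*8*t) + 0.1*rng.standard_normal(256) for _ in range(3)]
fam_b = [np.sin(2*np.pi*30*t) + 0.1*rng.standard_normal(256) for _ in range(3)]
labels = cluster_detections(np.array(fam_a + fam_b))
```

The six detections separate cleanly into two clusters; agglomerative clustering scales far better than all-pairs waveform autocorrelation because it works on compact spectral features.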
Long Duration of Ground Motion in the Paradigmatic Valley of Mexico
NASA Astrophysics Data System (ADS)
Cruz-Atienza, V. M.; Tago, J.; Sanabria-Gómez, J. D.; Chaljub, E.; Etienne, V.; Virieux, J.; Quintanar, L.
2016-12-01
Built on top of ancient lake deposits, Mexico City experiences some of the largest seismic site effects worldwide. Besides the extreme amplification of seismic waves, the duration of intense ground motion from large subduction earthquakes exceeds three minutes in the lake-bed zone of the basin, where hundreds of buildings collapsed or were seriously damaged during the magnitude 8.0 Michoacán earthquake in 1985. Different mechanisms contribute to the long-lasting motions, such as the regional dispersion and multiple scattering of the incoming wavefield from the coast, more than 300 km away from the city. By means of high-performance computational modeling we show that, despite the highly dissipative basin deposits, seismic energy can propagate long distances in the deep structure of the valley, also promoting a large elongation of motion. Our simulations reveal that the seismic response of the basin is dominated by surface-wave overtones, and that this mechanism increases the duration of ground motion by more than 170% and 290% of the incoming wavefield duration at 0.5 and 0.3 Hz, respectively, the two frequencies with the largest observed amplification. This conclusion contradicts what has been previously stated from observational and modeling investigations, in which the basin itself has been discarded as a preponderant factor promoting long and devastating shaking in Mexico City.
Acoustic/seismic signal propagation and sensor performance modeling
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Marlin, David H.; Mackay, Sean
2007-04-01
Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).
Observing Drought-Induced Groundwater Depletion in California with Seismic Noise
NASA Astrophysics Data System (ADS)
Clements, T.; Denolle, M.
2017-12-01
While heavy rainfall replenished reservoirs and snowpack recovered in winter 2016/2017, groundwater levels across much of California are still at or near all-time lows following one of the worst droughts in the state's history. Groundwater depletion in California has been studied extensively using GPS, InSAR, and GRACE. Here, we propose to monitor groundwater levels across California by measuring the temporal variation in seismic velocity (dv/v) at a regional scale. In the last decade, dv/v has emerged as a technique to investigate near-surface and surficial processes such as landslides, volcanic eruptions, and earthquakes. Toward predicting groundwater levels through real-time monitoring with seismic noise, we investigate the relations between the dv/v time series and observed groundwater levels. Twelve years (January 2006 - July 2017) of noise cross-correlation functions (CCFs) are computed from continuous vertical-component seismic data recorded at 100+ sites across California. Velocity changes (dv/v) are obtained by inverting all daily CCFs to produce a dv/v time series for each station pair. Our preliminary results show a seasonal variation in dv/v along with a gradual increase in dv/v throughout the drought. We interpret the increase in dv/v as a response to declining groundwater levels.
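One standard way to turn daily CCFs into a dv/v measurement is the stretching method; the abstract does not state which estimator the authors use, so this is an assumed illustration. A homogeneous velocity change stretches the coda uniformly, and a grid search over stretch factors recovers it:

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_max=0.01, n=201):
    """Stretching method: find the stretch factor eps that best aligns
    the current CCF with the reference; then dv/v = -eps."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in np.linspace(-eps_max, eps_max, n):
        stretched = np.interp(t * (1.0 + eps), t, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

# synthetic coda: a 0.5% velocity drop (dv/v = -0.005) delays arrivals
t = np.linspace(1.0, 10.0, 2000)
ref = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
true_dvv = -0.005
cur = np.interp(t * (1.0 + true_dvv), t, ref)
dvv, cc = stretching_dvv(ref, cur, t)
```

The grid search recovers the imposed 0.5% drop to within the grid spacing; in practice the measurement is made on the coda of stacked CCFs, where the long propagation paths amplify the sensitivity to small velocity changes.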
A neural network based methodology to predict site-specific spectral acceleration values
NASA Astrophysics Data System (ADS)
Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.
2010-12-01
A general neural network based methodology that has the potential to replace the computationally intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed-forward back-propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt, using the necessary geological as well as geotechnical data. Surface-level ground motions and the corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as the target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.
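The basic framework, a one-hidden-layer feed-forward network trained by plain back-propagation, can be sketched as below. The synthetic attenuation-style relation, network size, and learning rate are illustrative assumptions; the paper's actual inputs are site-specific ground motions simulated for Delhi.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(X, y, n_hidden=8, lr=0.05, epochs=8000):
    """One hidden tanh layer, linear output, batch gradient descent on
    mean-squared error (plain back-propagation)."""
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden layer
        out = (h @ W2 + b2).ravel()               # linear output
        err = out - y
        gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
        dh = (err[:, None] @ W2.T) * (1 - h**2)   # back-propagated error
        gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
        W2 -= lr*gW2; b2 -= lr*gb2; W1 -= lr*gW1; b1 -= lr*gb1
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()

# synthetic "ground-motion" relation: log Sa = a*M - b*log10(R) + c
M = rng.uniform(5, 8, 200); R = rng.uniform(10, 200, 200)
X = np.column_stack([(M - 6.5) / 1.5, (np.log10(R) - 1.8) / 0.6])
y = 0.9 * M - 1.5 * np.log10(R) - 1.0
y_n = (y - y.mean()) / y.std()                    # normalized target
model = train(X, y_n)
rmse = np.sqrt(np.mean((model(X) - y_n) ** 2))
```

Normalizing inputs and targets, as done here, is what keeps this plain gradient-descent training stable; once trained, evaluating the network is orders of magnitude cheaper than rerunning the site-response simulation.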
Modelling strong seismic ground motion: three-dimensional loading path versus wavefield polarization
NASA Astrophysics Data System (ADS)
Santisi d'Avila, Maria Paola; Lenti, Luca; Semblat, Jean-François
2012-09-01
Seismic waves due to strong earthquakes propagating in surficial soil layers may both reduce soil stiffness and increase the energy dissipation into the soil. To investigate seismic wave amplification in such cases, past studies have been devoted to one-directional shear wave propagation in a soil column (1D-propagation) considering one motion component only (1C-polarization). Three independent purely 1C computations may be performed ('1D-1C' approach) and directly superimposed in the case of weak motions (linear behaviour). This research aims at studying local site effects by considering seismic wave propagation in a 1-D soil profile accounting for the influence of the 3-D loading path and the non-linear hysteretic behaviour of the soil. In the proposed '1D-3C' approach, the three components (3C-polarization) of the incident wave are simultaneously propagated into a horizontally multilayered soil. A 3-D non-linear constitutive relation for the soil is implemented in the framework of the Finite Element Method in the time domain. The complex rheology of soils is modelled by means of a multisurface cyclic plasticity model of the Masing-Prandtl-Ishlinskii-Iwan type. The great advantage of this choice is that the only input needed to describe the model is the modulus reduction curve. A parametric study is carried out to characterize the changes in the seismic motion of the surficial layers due to both incident wavefield properties and soil non-linearities. The numerical simulations show a seismic response depending on several parameters, such as the polarization of seismic waves and the material elastic and dynamic properties, as well as on the impedance contrast between layers and the frequency content and oscillatory character of the input motion. The 3-D loading path due to the 3C-polarization leads to multi-axial stress interaction that reduces soil strength and increases non-linear effects.
The non-linear behaviour of the soil may have beneficial or detrimental effects on the seismic response at the free surface, depending on the energy dissipation rate. Free surface time histories, stress-strain hysteresis loops and in-depth profiles of octahedral stress and strain are estimated for each soil column. The combination of three separate 1D-1C non-linear analyses is compared to the proposed 1D-3C approach, evidencing the influence of the 3C-polarization and the 3-D loading path on strong seismic motions.
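For reference, the linear 1D-1C baseline that this work generalizes has a closed form for a single soil layer over an elastic half-space under vertically incident SH waves; the layer thickness, velocities, and densities below are illustrative values, not the paper's soil columns.

```python
import numpy as np

def sh_transfer(f, h=30.0, vs_soil=200.0, rho_soil=1800.0,
                vs_rock=1000.0, rho_rock=2400.0):
    """Linear 1D-1C amplification of a single soil layer over an elastic
    half-space for vertically incident SH waves (surface motion relative
    to outcrop motion). |H| peaks at f0 = vs_soil / (4 h), with peak
    amplitude 1/alpha, where alpha is the impedance ratio."""
    alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock)  # impedance ratio
    k = 2 * np.pi * f * h / vs_soil                      # dimensionless phase
    return 1.0 / np.abs(np.cos(k) + 1j * alpha * np.sin(k))
```

With these values the fundamental frequency is f0 = 200/(4·30) ≈ 1.67 Hz with amplification 1/0.15 ≈ 6.7. The non-linear 1D-3C analyses in the paper reduce to this behaviour only in the weak-motion limit; stiffness degradation and hysteretic damping shift and flatten the peak.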
Seismic modeling of Earth's 3D structure: Recent advancements
NASA Astrophysics Data System (ADS)
Ritsema, J.
2008-12-01
Global models of Earth's seismic structure continue to improve due to the growth of seismic data sets, the implementation of advanced wave propagation theories, and increased computational power. In my presentation, I will summarize seismic tomography results from the past 5-10 years. I will compare the most recent P and S velocity models, discuss model resolution and model interpretation, and present an, admittedly biased, list of research directions required to develop the next generation of 3D models.
Seismic expression of Red Fork channels in Major and Kay Counties, Oklahoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanoch, C.A.
1987-08-01
This paper investigates the application of regional seismic data to the exploration and development of Red Fork sands of the Cherokee Group in Major and Kay Counties, Oklahoma. A computer-aided exploration (CAEX) system was used to reconcile the subtle seismic expressions with the geological interpretation. Modeling shows that the low-velocity shales are the anomalous rock in the Cherokee package, which is mostly represented by siltstone and thin sands. Because the Red Fork channel sands were incised into or deposited with laterally time-equivalent siltstones, no strong reflection coefficient is associated with the top of the sands. The objective sands become a seismic anomaly only when they cut into and replace a low-velocity shale. This knowledge allows mapping the channel thickness by interpreting the shale thickness from seismic data. A group shoot line in Major County, Oklahoma, has been tied to the geologic control, and the channel thicknesses have been interpreted assuming a detectable vertical resolution of 10 ft. A personal computer-based geophysical workstation is used to construct velocity logs representative of the geology, to produce forward-modeled synthetic seismic sections, and to display, in color, the seismic trace attributes. These synthetic sections are used as tools to compare with and interpret the seismic line and to evaluate the interpretative value of lowest-cost, lesser-quality data versus reprocessing or new data acquisition.
NASA Astrophysics Data System (ADS)
Hsieh, M.; Zhao, L.; Ma, K.
2010-12-01
The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables, such as traveltime and amplitude, to perturbations of model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model, in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and the quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs, and they also exhibit interesting finite-frequency characteristics. Examples of quality factor and boundary topography kernels will be shown for a realistic model of the Taiwan region with three-dimensional velocity variation as well as surface and Moho discontinuity topography.
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFT's) for both P- and S- waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P- wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation, but rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
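The per-channel detect-and-measure stage can be mimicked with a short-term/long-term average trigger followed by an FFT of the triggered window. The window lengths, threshold, and synthetic trace below are illustrative assumptions, not the processor's actual settings.

```python
import numpy as np

def sta_lta(x, fs, sta_win=0.5, lta_win=5.0):
    """Trailing short-term/long-term average ratio on the squared trace;
    a detection is declared where the ratio exceeds a threshold."""
    e = x.astype(float) ** 2
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    c = np.concatenate([[0.0], np.cumsum(e)])     # prefix sums of energy
    i = np.arange(nl, len(x) + 1)
    sta = (c[i] - c[i - ns]) / ns
    lta = (c[i] - c[i - nl]) / nl
    return i - 1, sta / (lta + 1e-12)

# synthetic channel: background noise plus a tapered 5 Hz burst at t = 20 s
fs = 100.0
rng = np.random.default_rng(2)
t = np.arange(0.0, 30.0, 1.0 / fs)
x = 0.1 * rng.standard_normal(t.size)
burst = (t >= 20.0) & (t < 22.0)
x[burst] += np.sin(2 * np.pi * 5.0 * t[burst]) * np.hanning(burst.sum())

idx, ratio = sta_lta(x, fs)
trig_time = idx[ratio > 3.0][0] / fs              # first trigger time (s)
win = x[int(trig_time * fs): int(trig_time * fs) + 256]
freqs = np.fft.rfftfreq(win.size, 1.0 / fs)
dominant = freqs[np.argmax(np.abs(np.fft.rfft(win)))]  # dominant frequency
```

Running such a trigger and spectral measurement independently on every channel, then cross-checking the per-channel parameters centrally, mirrors the patent's division of labor between the channel microprocessors and the controlling processor.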
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-31
... Staff Guidance on Ensuring Hazard-Consistent Seismic Input for Site Response and Soil Structure...-Consistent Seismic Input for Site Response and Soil Structure Interaction Analyses,'' (Agencywide Documents... Soil Structure Interaction Analyses,'' (ADAMS Accession No. ML092230455) to solicit public and industry...
75 FR 36715 - Advisory Committee on Reactor Safeguards; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
... Seismic Input for Site Response and Soil Structure Interaction Analyses'' (Open)--The Committee will hold... Seismic Input for Site Response and Soil Structure Interaction Analyses.'' 9:30 a.m.-10:30 a.m.: Interim Staff Guidance (ISG) DC/COL-ISG-020, ``Implementation of Seismic Margin Analysis for New Reactors Based...
Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides
NASA Astrophysics Data System (ADS)
Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.
2016-12-01
Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by the acceleration/deceleration of the mass sliding over the topography. They constitute a unique and powerful tool to detect, characterize and quantify landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the force source and the bulk dynamics. The source time functions are obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains the source mass of the second landslide rather well. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the resulting computed seismic source function. We test numerous friction laws, such as the Coulomb friction law and a velocity-weakening friction law. Our results show that how well the force waveform fits the observed data depends strongly on these choices.
NASA Astrophysics Data System (ADS)
Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino
2017-04-01
The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institution for Civil Engineering. This 2016 version of the hazard assessment for Germany as the target area was based on a comprehensive inclusion of all accessible uncertainties in models and parameters, and on the provision of a rational framework for treating these uncertainties in a transparent way. The developed seismic hazard model represents significant improvements: it is based on updated and extended databases, comprehensive ranges of models, robust methods and a selection of a set of ground motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs, as suggested by two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m/s for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years), in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range of 0.01-3 s, and seismic hazard maps of spectral response accelerations for different spectral periods or of macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Mean, median and 84th-percentile load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region with a low-to-moderate level of seismicity. The regional variations of these uncertainties (e.g., ratios between the mean and median hazard estimates) were analyzed and discussed.
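Extracting the mean, median, and 84th percentile from a weighted logic tree reduces to weighted statistics over branch results. The four branch PGA values and weights below are hypothetical; a real application would repeat this per site and per spectral period over all 4040 end branches.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of branch results under logic-tree weights, by linear
    interpolation of the cumulative weight distribution."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cw = np.cumsum(w) / np.sum(w)
    return float(np.interp(q, cw, v))

# hypothetical PGA estimates (g) from four logic-tree end branches
pga = [0.08, 0.10, 0.12, 0.20]
wts = [0.2, 0.3, 0.3, 0.2]
mean = float(np.dot(pga, wts) / sum(wts))
median = weighted_quantile(pga, wts, 0.5)
p84 = weighted_quantile(pga, wts, 0.84)
```

Note how one heavy-tailed branch pulls the mean (0.122 g) above the median (0.10 g); the mean/median ratio is exactly the kind of uncertainty indicator the abstract says was mapped regionally.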
Are seismic hazard assessment errors and earthquake surprises unavoidable?
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir
2013-04-01
Why do earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time span of physically reliable seismic history is still a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes premature any kind of reliable probabilistic statement about narrowly localized seismic hazard. Moreover, the seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when tests of statistical significance are applied. Seismic events, including mega-earthquakes, cluster, displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal and definitely far from uniform, even in a single segment of a fault zone. Such a situation contradicts the generally accepted assumptions used for analytically tractable or computer simulations and complicates the design of reliable methodologies for realistic earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can MISLEAD TO SCIENTIFICALLY GROUNDLESS APPLICATIONS, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes, unfortunately, discloses the gross inadequacy of this "probabilistic" product, which appears UNACCEPTABLE FOR ANY KIND OF RESPONSIBLE SEISMIC RISK EVALUATION AND KNOWLEDGEABLE DISASTER PREVENTION.
The self-evident shortcomings and failures of GSHAP are an appeal to all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from first principles, including the background methodologies involved, so that there is: (a) a demonstrated and sufficient justification of hazard assessment protocols; (b) a more complete understanding of the actual range of earthquake hazards to local communities and populations; and (c) a more ethically responsible control over how seismic hazard and seismic risk assessment is implemented to protect public safety. It follows that the international project GEM is on the wrong track if it continues to base seismic risk estimates on the standard method of assessing seismic hazard. The situation is not hopeless and could be improved dramatically thanks to the available geological, geomorphological, seismic, and tectonic evidence and data, combined with deterministic pattern recognition methodologies, specifically when intending to PREDICT THE PREDICTABLE, though not the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks-and-faults has already led to methodologies of neo-deterministic seismic hazard analysis and intermediate-term middle- to narrow-range earthquake prediction algorithms tested in real-time applications over the last decades. This proves that contemporary science can do a better job of disclosing natural hazards, assessing risks, and delivering such information in advance of extreme catastrophes, which are LOW-PROBABILITY EVENTS THAT HAPPEN WITH CERTAINTY. Geoscientists must initiate a shift in the community's mindset from pessimistic disbelief to the optimistic and challenging issues of neo-deterministic hazard predictability.
A first step to compare geodynamical models and seismic observations of the inner core
NASA Astrophysics Data System (ADS)
Lasbleis, M.; Waszek, L.; Day, E. A.
2016-12-01
Seismic observations have revealed a complex inner core, with lateral and radial heterogeneities at all observable scales. The dominant feature is the east-west hemispherical dichotomy in seismic velocity and attenuation. Several geodynamical models have been proposed to explain the observed structure: convective instabilities, external forces, crystallisation processes or the influence of outer core convection. However, interpreting such geodynamical models in terms of the seismic observations is difficult, and has been performed only for very specific models (Geballe 2013, Lincot 2014, 2016). Here, we propose a common framework to make such comparisons. We have developed a Python code that propagates seismic ray paths through kinematic geodynamical models for the inner core, computing a synthetic seismic data set that can be compared to seismic observations. Following the method of Geballe 2013, we start with the simple model of translation. For this, the seismic velocity is proposed to be a function of the age or initial growth rate of the material (since there is no deformation included in our models); the assumption is reasonable when considering translation, growth and super-rotation of the inner core. Using both artificial (random) seismic ray data sets and a real inner core data set (from Waszek et al. 2011), we compare these different models. Our goal is to determine the model which best matches the seismic observations. Preliminary results show that super-rotation successfully creates an eastward shift in properties with depth, as has been observed seismically. Neither the growth rate of inner core material nor the relationship between crystal size and seismic velocity is well constrained. Consequently, our method does not directly compute the seismic travel times.
Instead, we use age, growth rate and other parameters as proxies for the seismic properties, which represents a good first step in comparing geodynamical models and seismic observations. Ultimately we aim to release our codes to the broader scientific community, allowing researchers from all disciplines to test their models of inner core growth against seismic observations, or to create a kinematic model for the evolution of the inner core which matches new geophysical observations.
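As a rough illustration of the age-as-proxy idea described in the abstract above, the sketch below assigns each point along the translation axis an age under steady inner-core translation and integrates straight-ray travel times through the resulting velocity field. All parameters (`V_REF`, `dlnv_per_gyr`, the linear age-to-velocity mapping) are illustrative assumptions, not values from the study.

```python
import numpy as np

R_IC = 1221.5   # inner-core radius, km
V_REF = 11.0    # reference P-wave velocity, km/s (illustrative)

def age_translation(x, v_trans):
    """Age (Myr) of material at coordinate x (km) along the translation axis,
    for steady translation at v_trans (km/Myr): material crystallises at
    x = -R_IC and melts at x = +R_IC."""
    return (x + R_IC) / v_trans

def velocity_model(x, v_trans, dlnv_per_gyr=0.001):
    # hypothetical linear mapping from age to a velocity perturbation
    return V_REF * (1.0 + dlnv_per_gyr * age_translation(x, v_trans) / 1000.0)

def travel_time(x_in, x_out, v_trans, n=200):
    """Straight-ray travel time along the axis (x_out > x_in), midpoint rule."""
    edges = np.linspace(x_in, x_out, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    ds = (x_out - x_in) / n
    return np.sum(ds / velocity_model(mid, v_trans))

# older (eastern) material is faster, so the east chord is traversed quicker
t_west = travel_time(-R_IC, 0.0, v_trans=1.0)
t_east = travel_time(0.0, R_IC, v_trans=1.0)
```

The east-west travel-time asymmetry produced by this toy model mirrors the hemispherical dichotomy the abstract seeks to test against data.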
NASA Astrophysics Data System (ADS)
Murru, M.; Falcone, G.; Taroni, M.; Console, R.
2017-12-01
In 2015, the Italian Department of Civil Protection started a project to upgrade the official Italian seismic hazard map (MPS04), inviting the Italian scientific community to participate in a joint effort for its realization. We participated by providing spatially variable time-independent (Poisson) long-term annual occurrence rates of seismic events over the entire Italian territory, on cells of 0.1° x 0.1°, from M 4.5 up to M 8.1 in magnitude bins of 0.1 units. Our final model was an ensemble of two different models merged with equal weights: the first was realized with a smoothed seismicity approach, the second using the seismogenic faults. The spatially smoothed seismicity was obtained using the smoothing method introduced by Frankel (1995) applied to the historical and instrumental seismicity. In this approach we adopted a tapered Gutenberg-Richter relation with a b-value fixed to 1 and a corner magnitude estimated from the largest events in the catalogs. For each seismogenic fault provided by the Database of Individual Seismogenic Sources (DISS), we computed the annual rate (for each 0.1° x 0.1° cell) in magnitude bins of 0.1 units, assuming that the seismic moments of the earthquakes generated by each fault are distributed according to the same tapered Gutenberg-Richter relation as in the smoothed seismicity model. The annual rate for the final model was determined as follows: if a cell falls within one of the seismic sources, we average, with equal weights, the rate determined from the seismic moments of the earthquakes generated by the fault and the rate from the smoothed seismicity model; if instead a cell falls outside any seismic source, we use the rate obtained from the spatially smoothed seismicity. Here we present the final results of our study, to be used for the new Italian seismic hazard map.
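A minimal sketch of the two ingredients named above: Frankel (1995)-style Gaussian smoothing of gridded event counts, and annual rates from a tapered Gutenberg-Richter relation with b = 1 and a corner magnitude of 8.1. The grid spacing, correlation distance and reference rate are illustrative assumptions.

```python
import numpy as np

def frankel_smooth(counts, cell_km=11.1, c_km=50.0):
    """Gaussian smoothing of gridded event counts (Frankel, 1995 style):
    each cell gets a weighted average of all cells, weight exp(-d^2/c^2)."""
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    smoothed = np.zeros_like(counts, dtype=float)
    for i in range(ny):
        for j in range(nx):
            d2 = ((yy - i) ** 2 + (xx - j) ** 2) * cell_km ** 2
            w = np.exp(-d2 / c_km ** 2)
            smoothed[i, j] = (counts * w).sum() / w.sum()
    return smoothed

def moment(m):
    """Seismic moment (N m) from moment magnitude."""
    return 10.0 ** (1.5 * m + 9.1)

def tapered_gr_rate(m, rate_min, m_min=4.5, b=1.0, m_corner=8.1):
    """Annual rate of events >= m under a tapered Gutenberg-Richter law,
    expressed in moment space with beta = (2/3) b and an exponential taper."""
    beta = (2.0 / 3.0) * b
    return rate_min * (moment(m) / moment(m_min)) ** (-beta) \
        * np.exp((moment(m_min) - moment(m)) / moment(m_corner))

uniform = frankel_smooth(np.ones((4, 4)))   # smoothing preserves a flat field
```

The taper leaves the rate essentially Gutenberg-Richter below the corner magnitude and rolls it off above, as in the abstract's model.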
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, J.J.
Some of the same space-age signal technology being used to track events 200 miles above the earth is helping petroleum explorationists track down oil and natural gas two miles and more down into the earth. The breakthroughs, which have come in a technique called three-dimensional seismic work, could change the complexion of exploration for oil and natural gas. Thanks to this 3-D seismic approach, explorationists can make dynamic maps of sites miles beneath the surface. Then explorationists can throw these maps onto space-age computer systems and manipulate them every which way - homing in sharply on salt domes, faults, sands and traps associated with oil and natural gas. "The 3-D seismic scene has exploded within the last two years," says Peiter Tackenberg, a Marathon technical consultant who deals with both domestic and international exploration. The 3-D technique has been around for more than a decade, he notes, but recent achievements in space-age computer hardware and software have unlocked its full potential.
Advanced computational tools for 3-D seismic analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, J.; Glover, C.W.; Protopopescu, V.A.
1996-06-01
The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and to test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovation techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.
Intelligent seismic risk mitigation system on structure building
NASA Astrophysics Data System (ADS)
Suryanita, R.; Maizir, H.; Yuniorto, E.; Jingga, H.
2018-01-01
Indonesia, located on the Pacific Ring of Fire, is one of the highest-risk seismic zones in the world. Strong ground motion can cause catastrophic collapse of buildings, leading to casualties and property damage. It is therefore imperative to properly design the structural response of buildings against seismic hazard. The seismic-resistant building design process requires structural analysis to be performed to obtain the necessary building responses; however, the structural analysis can be very difficult and time consuming. This study aims to predict the structural responses, including displacement, velocity, and acceleration, of multi-storey buildings with a fixed floor plan using the Artificial Neural Network (ANN) method, based on the 2010 Indonesian seismic hazard map. By varying the building height, soil condition, and seismic location across 47 cities in Indonesia, 6345 data sets were obtained and fed into the ANN model for the learning process. The trained ANN can predict the displacement, velocity, and acceleration responses with a prediction rate of up to 96%. The trained ANN architecture and weight factors were later used to build a simple tool in the Visual Basic program that predicts the structural responses mentioned above.
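The forward pass of such a network can be sketched as below; the weights here are random placeholders standing in for the trained values, and the three input features are assumptions chosen to mirror the abstract (height, soil condition, hazard level).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ann_predict(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network mapping building/site
    features to a response quantity (e.g. roof displacement)."""
    h = sigmoid(features @ w_hidden + b_hidden)
    return h @ w_out + b_out

# illustrative, untrained weights: a real model would learn these from
# the 6345 (height, soil, location) -> response data sets
rng = np.random.default_rng(0)
w_h = rng.normal(size=(3, 8))
b_h = np.zeros(8)
w_o = rng.normal(size=(8, 1))
b_o = np.zeros(1)

x = np.array([[40.0, 2.0, 0.3]])   # height (m), soil class, PGA (g) - assumed
disp = ann_predict(x, w_h, b_h, w_o, b_o)
```

After training, the weight matrices are exactly what a lightweight front-end (such as the Visual Basic tool mentioned) needs to embed for instant prediction.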
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools for investigating a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images, to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g. AMD graphics cards, ARM-based processors and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming for all computational kernels of forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Seismic wave simulations are thus now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
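The source-to-source idea can be illustrated by emitting the same toy kernel for both targets from one template. This is a hand-rolled sketch of the concept, not BOAST itself.

```python
KERNEL_BODY = """
  int i = {index_expr};
  if (i < n) {{
    out[i] = a[i] + b[i];
  }}
"""

TARGETS = {
    "cuda": {
        "qualifier": "__global__",
        "index_expr": "blockIdx.x * blockDim.x + threadIdx.x",
    },
    "opencl": {
        "qualifier": "__kernel",
        "index_expr": "get_global_id(0)",
    },
}

def generate_kernel(target, name="vec_add"):
    """Emit the same computational kernel as CUDA or OpenCL C source."""
    cfg = TARGETS[target]
    args = "float *out, const float *a, const float *b, int n"
    if target == "opencl":
        # OpenCL buffer arguments need an address-space qualifier
        args = args.replace("float *", "__global float *")
    body = KERNEL_BODY.format(index_expr=cfg["index_expr"])
    return f'{cfg["qualifier"]} void {name}({args}) {{{body}}}'
```

One template, two back-ends: only the thread-index expression, function qualifier and address-space qualifiers differ, which is precisely what makes kernel meta-programming attractive.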
Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation
NASA Astrophysics Data System (ADS)
Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco
2017-11-01
Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveform modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with an input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the needs of producing realistic ground shaking maps and ground shaking scenarios at different scale levels, exploiting the most significant pertinent progress in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
Earthquake Monitoring: SeisComp3 at the Swiss National Seismic Network
NASA Astrophysics Data System (ADS)
Clinton, J. F.; Diehl, T.; Cauzzi, C.; Kaestli, P.
2011-12-01
The Swiss Seismological Service (SED) has an ongoing responsibility to improve the seismicity monitoring capability for Switzerland. This is a crucial issue for a country with low background seismicity but where a large M6+ earthquake is expected in the next decades. With over 30 stations at a spacing of ~25 km, the SED operates one of the densest broadband networks in the world, which is complemented by ~50 real-time strong motion stations. The strong motion network is expected to grow by an additional ~80 stations over the next few years. Furthermore, the backbone of the network is complemented by broadband data from surrounding countries and temporary sub-networks for local monitoring of microseismicity (e.g. at geothermal sites). The variety of seismic monitoring responsibilities as well as the anticipated densification of our network demands highly flexible processing software. We are transitioning all software to the SeisComP3 (SC3) framework. SC3 is a fully featured automated real-time earthquake monitoring software package developed by GeoForschungsZentrum Potsdam in collaboration with its commercial partner, gempa GmbH. It is open source at its core, and is becoming a community-standard software for earthquake detection and waveform processing for regional and global networks across the globe. SC3 was originally developed for regional and global rapid monitoring of potentially tsunamigenic earthquakes. In order to fulfill the requirements of a local network recording moderate seismicity, SED has tuned configurations and added several modules. In this contribution, we present our SC3 implementation strategy, focusing on the detection and identification of seismicity on different scales. We operate several parallel processing "pipelines" to detect and locate local, regional and global seismicity. Additional pipelines with lower detection thresholds can be defined to monitor seismicity within dense subnets of the network.
To be consistent with existing processing procedures, the NonLinLoc algorithm was implemented for manual and automatic locations using 1D and 3D velocity models; plugins for improved automatic phase picking and Ml computation were developed; and the graphical user interface for manual review was extended (including pick uncertainty definition, first-motion focal mechanisms, interactive review of station magnitude waveforms, and full inclusion of strong motion data). SC3 locations are fully compatible with those derived from the existing in-house processing tools and are stored in a database derived from the QuakeML data model. The database is shared with the SED alerting software, which merges origins from both SC3 and external sources in realtime and handles the alerting procedure. With the monitoring software being transitioned to SeisComP3, acquisition, archival and dissemination of SED waveform data now conform to the seedlink and ArcLink protocols, and continuous archives can be accessed via the SED and all EIDA (European Integrated Data Archives) web-sites. Further, a SC3 module for waveform parameterisation has been developed, allowing rapid computation of peak values of ground motion and other engineering parameters within minutes of a new event. An output of this module is USGS ShakeMap XML.
Seismic risk assessment of architectural heritages in Gyeongju considering local site effects
NASA Astrophysics Data System (ADS)
Park, H.-J.; Kim, D.-S.; Kim, D.-M.
2013-02-01
A seismic risk assessment is conducted for cultural heritage sites in Gyeongju, the capital of Korea's ancient Silla Kingdom. Gyeongju, home to UNESCO World Heritage sites, contains remarkable artifacts of Korean Buddhist art. An extensive geotechnical survey including a series of in situ tests is presented, providing pertinent soil profiles for site response analyses at thirty cultural heritage sites. After the shear wave velocity profiles and dynamic material properties were obtained, site response analyses were carried out at each historical site, and the amplification characteristics, site period, and response spectrum of each site were determined for earthquake levels with 2400 yr and 1000 yr return periods based on the Korean seismic hazard map. The response spectra and corresponding site coefficients obtained from site response analyses considering geologic conditions differ significantly from the current Korean seismic code. This study confirms the importance of site-specific ground response analyses considering local geological conditions. Results are given in the form of the spatial distribution of bedrock depth, site period, and site amplification coefficients, which are particularly valuable in the context of a seismic vulnerability study. This study presents potential amplification hazard maps and provides primary data on the seismic risk assessment of each cultural heritage site.
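Site response spectra like those described above are built from the peak response of damped single-degree-of-freedom oscillators. A minimal sketch using the Newmark average-acceleration method (unit mass, 5% damping; not the site-response software used in the study):

```python
import numpy as np

def response_spectrum(ag, dt, periods, damping=0.05):
    """Pseudo-spectral acceleration for a ground-acceleration record ag
    (m/s^2, sampled at dt), via Newmark average acceleration (unit mass)."""
    beta, gamma = 0.25, 0.5
    out = []
    for T in periods:
        wn = 2.0 * np.pi / T
        c, k = 2.0 * damping * wn, wn ** 2
        a1 = 1.0 / (beta * dt**2) + gamma * c / (beta * dt)
        a2 = 1.0 / (beta * dt) + (gamma / beta - 1.0) * c
        a3 = (1.0 / (2.0 * beta) - 1.0) + dt * (gamma / (2.0 * beta) - 1.0) * c
        keff = k + a1
        u = v = acc = 0.0
        umax = 0.0
        for p in -np.asarray(ag):   # effective force is -m*ag with m = 1
            un = (p + a1 * u + a2 * v + a3 * acc) / keff
            vn = (gamma / (beta * dt)) * (un - u) \
                + (1.0 - gamma / beta) * v \
                + dt * (1.0 - gamma / (2.0 * beta)) * acc
            accn = (un - u) / (beta * dt**2) - v / (beta * dt) \
                - (1.0 / (2.0 * beta) - 1.0) * acc
            u, v, acc = un, vn, accn
            umax = max(umax, abs(u))
        out.append(wn**2 * umax)    # Sa = wn^2 * Sd
    return np.array(out)

# sanity check: a unit step in ground acceleration should give Sa near the
# damped step-response peak, about 1.85 m/s^2 for 5% damping
sa_step = response_spectrum(np.ones(2000), 0.005, [0.5])
```

Running this over a suite of periods for surface and bedrock motions gives exactly the spectra whose ratio defines the site amplification coefficients.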
Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.
2014-05-01
The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analyses, however, have shown that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, standard PSHA estimates are wholly unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility of efficiently computing synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of ground motion scenarios can be defined at both national and local scales, the latter considering the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer.
At the scenario scale, quick parametric studies can easily be performed to understand the influence of the model characteristics on the computed ground shaking scenarios. For massive parametric tests, or for the repeated generation of large-scale hazard maps, the methodology can take advantage of more advanced computational platforms, ranging from GRID computing infrastructures to dedicated HPC clusters and Cloud computing. In such a way, scientists can deal efficiently with the variety and complexity of the potential earthquake sources, and perform parametric studies to characterize the related uncertainties. NDSHA provides realistic time series of expected ground motion readily applicable for seismic engineering analysis and other mitigation actions. The methodology has been successfully applied to strategic buildings, lifelines and cultural heritage sites, and for the purpose of seismic microzoning in several urban areas worldwide. A web application is currently being developed that facilitates access to the NDSHA methodology and the related outputs by end-users interested in reliable territorial planning and in the design and construction of buildings and infrastructures in seismic areas. At the same time, the web application is also shaping up as an advanced educational tool to explore interactively how seismic waves are generated at the source, propagate inside structural models, and build up ground shaking scenarios. We illustrate the preliminary results obtained from a multiscale application of the NDSHA approach to the territory of India, zooming from large-scale hazard maps of ground shaking at bedrock to the definition of local-scale earthquake scenarios for selected sites in the state of Gujarat (NW India). The study aims to provide the community (e.g. authorities and engineers) with advanced information for earthquake risk mitigation, which is particularly relevant to Gujarat in view of the rapid development and urbanization of the region.
American Marten Respond to Seismic Lines in Northern Canada at Two Spatial Scales
Tigner, Jesse; Bayne, Erin M.; Boutin, Stan
2015-01-01
Development of hydrocarbon resources across northwest Canada has spurred economic prosperity and generated concerns over impacts to biodiversity. To balance these interests, numerous jurisdictions have adopted management thresholds that allow for limited energy development but minimize undesirable impacts to wildlife. Used for exploration, seismic lines are the most abundant linear feature in the boreal forest and exist at a variety of widths and recovery states. We used American marten (Martes americana) as a model species to measure how line attributes influence the species' response to seismic lines, and asked whether responses to individual lines trigger population impacts. Marten response to seismic lines was strongly influenced by line width and recovery state. Compared to forest interiors, marten used open seismic lines ≥ 3 m wide less often, but used open lines ≤ 2 m wide and partially recovered lines ≥ 6 m wide similarly. Marten response to individual line types appeared to trigger population impacts. The probability of occurrence at the home range scale declined with increasing seismic line density, and the inclusion of behavioral responses in line-density calculations improved model fit. In our top-performing model, we excluded seismic lines ≤ 2 m from our calculation of line density, and the probability of occurrence declined > 80% between home ranges with the lowest and highest line densities. Models that excluded seismic lines did not strongly explain occurrence. We show how wildlife-derived metrics can inform regulatory guidelines to increase the likelihood that those guidelines meet intended management objectives. With respect to marten, not all seismic lines constitute disturbances, but avoidance of certain line types scales to population impacts. This approach provides the ecological context required to understand cause-and-effect relationships among socio-economic and ecological conservation goals. PMID:25768848
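The scaling of occurrence probability with line density can be illustrated with a simple logistic model; the coefficients below are placeholders chosen only to reproduce a > 80% decline between low and high densities, not the fitted values from the study.

```python
import math

def occurrence_prob(line_density, b0=1.5, b1=-0.9):
    """Logistic model of home-range occurrence probability as a function of
    seismic-line density (km/km^2); coefficients are illustrative only."""
    eta = b0 + b1 * line_density
    return 1.0 / (1.0 + math.exp(-eta))

# relative decline in occurrence between the lowest and highest densities
decline = 1.0 - occurrence_prob(5.0) / occurrence_prob(0.0)
```

Refitting such a model with densities that exclude narrow (≤ 2 m) lines is exactly the kind of behavior-informed covariate choice the abstract reports improved model fit.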
NASA Astrophysics Data System (ADS)
Shiuly, Amit; Sahu, R. B.; Mandal, Saroj
2017-06-01
This paper presents a site-specific seismic hazard analysis of Kolkata city, former capital of India and present capital of the state of West Bengal, situated on the world's largest delta, the Bengal basin. For this purpose, the peak ground acceleration (PGA) for a maximum considered earthquake (MCE) at bedrock level has been estimated using an artificial neural network (ANN) based attenuation relationship developed on the basis of synthetic ground motion data for the region. Using the PGA corresponding to the MCE, a spectrum-compatible acceleration time history at bedrock level has been generated using a wavelet-based computer program, WAVEGEN. This spectrum-compatible time history at bedrock level has been converted to the corresponding surface-level history using SHAKE2000 for 144 borehole locations in the study region. Using the predicted values of PGA and peak ground velocity (PGV) at the surface, corresponding contours for the region have been drawn. For the MCE, the PGA at bedrock level of Kolkata city has been obtained as 0.184 g, while that at the surface level varies from 0.22 g to 0.37 g. Finally, Kolkata has been subdivided into eight seismic subzones, and for each subzone a response spectrum equation has been derived using polynomial regression analysis. This will be very helpful for structural and geotechnical engineers in designing safe and economical earthquake-resistant structures.
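Deriving a response-spectrum equation by polynomial regression, as in the final step above, can be sketched with `numpy.polyfit`; the spectral ordinates below are illustrative, not the Kolkata subzone values.

```python
import numpy as np

# illustrative spectral ordinates for one subzone: period (s) vs Sa (g)
periods = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.75, 1.0, 1.5, 2.0])
sa = np.array([0.30, 0.55, 0.60, 0.58, 0.45, 0.33, 0.25, 0.16, 0.11])

# fit a low-order polynomial Sa(T) usable as a design-spectrum equation
coeffs = np.polyfit(periods, sa, deg=4)
spectrum = np.poly1d(coeffs)
```

The resulting `spectrum(T)` is a closed-form expression an engineer can evaluate directly, which is the practical appeal of the regression step.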
Array seismological investigation of the South Atlantic 'Superplume'
NASA Astrophysics Data System (ADS)
Hempel, Stefanie; Gassmöller, Rene; Thomas, Christine
2015-04-01
We apply the axisymmetric, spherical Earth spectral-element code AxiSEM to model seismic compressional waves which sample complex 'superplume' structures in the lower mantle. High-resolution array seismological stacking techniques are evaluated regarding their capability to resolve large-scale high-density low-velocity bodies, including interior structure such as inner upwellings, high-density lenses, ultra-low velocity zones (ULVZs), neighboring remnant slabs and adjacent small-scale uprisings. Synthetic seismograms are also computed and processed for models of the Earth resulting from geodynamic modelling of the South Atlantic mantle, including plate reconstruction. We discuss the interference and suppression of the resulting seismic signals and the implications for a seismic data study in terms of visibility of the South Atlantic 'superplume' structure. This knowledge is used to process, invert and interpret our data set of seismic sources from the Andes and the South Sandwich Islands detected at seismic arrays spanning from Ethiopia over Cameroon to South Africa, mapping the South Atlantic 'superplume' structure including its interior structure. In order to present the model of the South Atlantic 'superplume' structure that best fits the seismic data set, we iteratively compute synthetic seismograms while adjusting the model according to the dependencies found in the parameter study.
A Response Function Approach for Rapid Far-Field Tsunami Forecasting
NASA Astrophysics Data System (ADS)
Tolkova, Elena; Nicolsky, Dmitry; Wang, Dailin
2017-08-01
Predicting tsunami impacts at remote coasts largely relies on tsunami en-route measurements in an open ocean. In this work, these measurements are used to generate instant tsunami predictions in deep water and near the coast. The predictions are generated as a response or a combination of responses to one or more tsunameters, with each response obtained as a convolution of real-time tsunameter measurements and a pre-computed pulse response function (PRF). Practical implementation of this method requires tables of PRFs in a 3D parameter space: earthquake location-tsunameter-forecasted site. Examples of hindcasting the 2010 Chilean and the 2011 Tohoku-Oki tsunamis along the US West Coast and beyond demonstrated high accuracy of the suggested technology in application to trans-Pacific seismically generated tsunamis.
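The core of the PRF method is a discrete convolution of the tsunameter record with the pre-computed pulse response function; a minimal sketch (with a synthetic PRF) is:

```python
import numpy as np

def forecast(eta, prf, dt):
    """Tsunami forecast at a remote site: convolution of the tsunameter
    time series eta (sampled every dt) with the site's pre-computed
    pulse response function (PRF), truncated to the record length."""
    return np.convolve(eta, prf)[: len(eta)] * dt

# a unit impulse at the tsunameter should reproduce the PRF itself
impulse = np.zeros(6)
impulse[0] = 1.0
prf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, 0.0])   # synthetic PRF
out = forecast(impulse, prf, dt=1.0)
```

Because the convolution is instant once the PRF table is pre-computed, predictions update in real time as new tsunameter samples arrive, which is what makes the method suitable for rapid far-field forecasting.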
Hydraulic fracturing volume is associated with induced earthquake productivity in the Duvernay play
NASA Astrophysics Data System (ADS)
Schultz, R.; Atkinson, G.; Eaton, D. W.; Gu, Y. J.; Kao, H.
2018-01-01
A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 in response to hydraulic fracturing. Using a hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We show that induced earthquakes are associated with completions that used larger injection volumes (104 to 105 cubic meters) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have an insignificant association with seismic response. Further findings suggest that geological factors play a prominent role in seismic productivity, as evidenced by spatial correlations. Together, volume and geological factors account for ~96% of the variability in the induced earthquake rate near Fox Creek. This result is quantified by a seismogenic index–modified frequency-magnitude distribution, providing a framework to forecast induced seismicity.
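The linear scaling of event count with injected volume is captured by the seismogenic-index model; a one-line sketch with an illustrative index value (not the Duvernay estimate):

```python
def expected_event_count(volume_m3, mag, sigma=-2.0, b=1.0):
    """Expected number of induced events with magnitude >= mag for a total
    injected volume, N = V * 10**(sigma - b*mag); sigma here is an
    illustrative seismogenic-index value, not a fitted Duvernay value."""
    return volume_m3 * 10.0 ** (sigma - b * mag)
```

Doubling the injected volume doubles the expected count, while each unit of magnitude costs a factor of ten (for b = 1), mirroring the linear volume scaling and frequency-magnitude behavior described above.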
NASA Astrophysics Data System (ADS)
Ishizawa, O. A.; Clouteau, D.
2007-12-01
The long duration, amplification, and spatial variability of the response in the seismic records registered in Mexico City during the September 1985 earthquake cannot be explained by the soil velocity model alone. We try to explain these phenomena by studying the extent of the effect of the wave fields diffracted by buildings during an earthquake. The main question is whether the presence of a large number of buildings can significantly modify the seismic wave field. We are interested in the interaction between the incident wave field propagating in a stratified half-space and a large number of structures at the free surface, i.e., the coupled city-site effect. We study and characterize the seismic wave propagation regimes in a city using the theory of wave propagation in random media. In the coupled city-site system, the buildings are modeled as resonant scatterers uniformly distributed at the surface of a deterministic, horizontally layered elastic half-space representing the soil. Based on the mean-field and field-correlation equations, we build a theoretical model which takes into account the multiple scattering of seismic waves and allows us to describe the behavior of the coupled city-site system in a simple and rapid way. The results obtained for the configurationally averaged field quantities are validated by means of 3D results for the seismic response of a deterministic model. The numerical simulations of this model are computed with the MISS3D code, based on classical Soil-Structure Interaction techniques and on a variational coupling between Boundary Integral Equations for a layered soil and a modal Finite Element approach for the buildings. This work proposes a detailed numerical and theoretical analysis of the city-site interaction (CSI) in the Mexico City area.
The principal parameters in the study of the CSI are the buildings' resonant-frequency distribution, the soil characteristics of the site, the urban density and position of the buildings in the city, as well as the type of incident wave. The main results of the theoretical and numerical models allow us to characterize the seismic movement in urban areas.
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.
2017-12-01
Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. 
These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
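The dispatch idea behind a unified query interface can be sketched as below: regional 3-D models are tried in registration order, with a 1-D background model as fallback. The class, bounds and material values are hypothetical, not the actual UCVM API.

```python
from dataclasses import dataclass

@dataclass
class Material:
    vp: float       # P-wave velocity, m/s
    vs: float       # S-wave velocity, m/s
    density: float  # kg/m^3

class UnifiedVelocityModel:
    """Dispatch point queries to registered regional 3-D models, falling
    back to a 1-D background model (a simplified sketch of the UCVM idea)."""

    def __init__(self, background):
        self.background = background
        self.regional = []  # list of (contains_fn, query_fn) pairs

    def register(self, contains, query):
        self.regional.append((contains, query))

    def query(self, lon, lat, depth):
        for contains, q in self.regional:
            if contains(lon, lat, depth):
                return q(lon, lat, depth)
        return self.background(lon, lat, depth)

def background_1d(lon, lat, depth):
    vs = 1000.0 + 2.0 * depth          # toy 1-D gradient model
    return Material(vp=1.8 * vs, vs=vs, density=2500.0)

ucvm = UnifiedVelocityModel(background_1d)
ucvm.register(
    contains=lambda lon, lat, depth: -119.0 < lon < -117.0 and 33.0 < lat < 35.0,
    query=lambda lon, lat, depth: Material(vp=5000.0, vs=2800.0, density=2600.0),
)
```

Because every registered model answers through the same `query` signature, downstream mesh generators never need to know which model, format or projection supplied a given point, which is the essence of the framework described above.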
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications, such as risk mitigation, disaster management, post-disaster recovery planning, and catastrophe loss estimation and risk management. Due to the lack of proper knowledge of the factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain subject to high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides a practical facility for better capturing spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors in seismic hazard modelling; examples of such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology improves several aspects of the standard analytical tools currently used for the assessment and mapping of regional seismic hazard, makes the best use of recent advances in computer software and hardware, and is well structured for implementation using conventional GIS tools.
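A cell-based hazard computation of the kind described can be sketched as a sum over raster cells of activity rate times exceedance probability from an attenuation relation; the attenuation form and all numbers below are illustrative assumptions.

```python
import math

def cell_hazard(site, cells, rates, depths, pga_threshold=0.2):
    """Annual rate of PGA > pga_threshold (g) at a site, summing each
    raster cell's activity rate times the exceedance probability from a
    hypothetical log-normal attenuation relation."""
    total = 0.0
    for (x, y), rate, h in zip(cells, rates, depths):
        # hypocentral distance: cell centre (km) plus cell focal depth (km)
        r = math.sqrt((site[0] - x) ** 2 + (site[1] - y) ** 2 + h ** 2)
        median_pga = 0.8 * math.exp(-r / 40.0)   # hypothetical attenuation, g
        sigma = 0.6                              # log-normal scatter (assumed)
        z = (math.log(pga_threshold) - math.log(median_pga)) / sigma
        total += rate * 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    return total

# one cell at the origin, 10 km deep, 0.1 events/yr (illustrative)
near = cell_hazard((0.0, 0.0), [(0.0, 0.0)], [0.1], [10.0])
far = cell_hazard((100.0, 0.0), [(0.0, 0.0)], [0.1], [10.0])
```

In a raster implementation, each per-cell attribute (rate, depth, attenuation choice) would itself be a GIS layer, which is what lets the cell-based model carry geographically referenced seismotectonic factors.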
NASA Astrophysics Data System (ADS)
Setiawan, Jody; Nakazawa, Shoji
2017-10-01
This paper compares the seismic response behavior, seismic performance and seismic loss functions of a conventional special moment frame steel structure (SMF) and a special moment frame steel structure with base isolation (BI-SMF). The proposed simplified method for estimating the maximum deformation of the base isolation system by the equivalent linearization method, and the design shear force of the superstructure, are validated against results of nonlinear dynamic response analysis. In recent years, steel office buildings with seismic isolation systems have been constructed even in Indonesia, where earthquake risk is high. Although a design code for seismically isolated structures has been proposed, there is no actual construction example of a special moment frame steel structure with base isolation. Therefore, in this research, the SMF and BI-SMF buildings are designed to the Indonesian Building Code and assumed to be built in Padang City, Indonesia. The base isolation devices are high-damping rubber bearings. Dynamic eigenvalue analysis and nonlinear dynamic response analysis are carried out to characterize the dynamic properties and seismic performance. In addition, the seismic loss function is obtained from damage state probabilities and repair costs. For the response analyses, simulated ground accelerations that have the phases of recorded seismic waves (El Centro NS, El Centro EW, Kobe NS and Kobe EW) and are matched to the response spectrum prescribed by the Indonesian design code are used.
NASA Astrophysics Data System (ADS)
Nayak, Avinash; Dreger, Douglas S.
2018-05-01
The formation of a large sinkhole at the Napoleonville salt dome (NSD), Assumption Parish, Louisiana, caused by the collapse of a brine cavern, was accompanied by an intense and complex sequence of seismic events. We implement a grid-search approach to compute centroid locations and point-source moment tensor (MT) solutions of these seismic events using ~0.1-0.3 Hz displacement waveforms and synthetic Green's functions computed using a 3D velocity model of the western edge of the NSD. The 3D model incorporates the currently known approximate geometry of the salt dome and the overlying anhydrite-gypsum cap rock, and features a large velocity contrast between the high velocity salt dome and low velocity sediments overlying and surrounding it. For each possible location on the source grid, Green's functions (GFs) to each station were computed using source-receiver reciprocity and the finite-difference seismic wave propagation software SW4. We also establish an empirical method to rigorously assess uncertainties in the centroid location, MW and source type of these events under evolving network geometry, using the results of synthetic tests with hypothetical events and real seismic noise. We apply the methods on the entire duration of data (~6 months) recorded by the temporary US Geological Survey network. During an energetic phase of the sequence from 24-31 July 2012 when 4 stations were operational, the events with the best waveform fits are primarily located at the western edge of the salt dome at most probable depths of ~0.3-0.85 km, close to the horizontal positions of the cavern and the future sinkhole. The data are fit nearly equally well by opening crack MTs in the high velocity salt medium or by isotropic volume-increase MTs in the low velocity sediment layers. 
We find that data recorded by 6 stations during 1-2 August 2012, right before the appearance of the sinkhole, indicate that some events are likely located in the lower velocity media just outside the salt dome at slightly shallower depths of ~0.35-0.65 km, with preferred isotropic volume-increase MT solutions. GFs computed using the 3D velocity model generally fit the data better than GFs computed using 1D velocity models, especially for the smaller amplitude tangential and vertical components, and yield better resolution of event locations. The dominant seismicity during 24-30 July 2012 is characterized by the steady occurrence of seismic events with similar locations and MT solutions at a near-characteristic inter-event time. The steady activity is sometimes interrupted by tremor-like sequences of multiple events in rapid succession, followed by quiet periods of little or no seismic activity, in turn followed by the resumption of seismicity at a reduced seismic moment-release rate. The dominant volume-increase MT solutions and the steady features of the seismicity indicate a crack-valve-type source mechanism, possibly driven by pressurized natural gas.
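A grid search of this kind ranks candidate locations and moment tensors by how well their synthetics fit the observed displacement waveforms. A common misfit measure for such searches is the variance reduction; the sketch below is purely illustrative (not the authors' code) and uses a synthetic sine-wave trace as stand-in data.

```python
import numpy as np

def variance_reduction(data, synthetic):
    """Waveform fit measure commonly used to rank grid-search
    moment-tensor solutions: 1.0 means a perfect fit."""
    data = np.asarray(data, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    return 1.0 - np.sum((data - synthetic) ** 2) / np.sum(data ** 2)

# Toy example: a candidate synthetic that matches the data up to amplitude.
t = np.linspace(0.0, 10.0, 501)
data = np.sin(2.0 * np.pi * 0.2 * t)   # ~0.2 Hz displacement trace (toy)
synthetic = 0.9 * data                 # imperfect candidate solution
print(round(variance_reduction(data, synthetic), 3))   # -> 0.99
```

In a real search, the grid point and MT whose synthetics maximize this measure across all stations would be retained as the preferred solution.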
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential source zone contributing most to the hazard at the site is identified from the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are derived from the main faults and historical earthquakes of that potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The scenario-earthquake response spectrum is lower than the probability-consistent response spectrum obtained by the PSHA method. The analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods; it is readily accepted in practice and provides a basis for the seismic design of hydraulic engineering works.
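The final step, evaluating a ground-motion prediction equation for the scenario magnitude and distance at several spectral periods, can be sketched as follows. The coefficients and functional form below are invented for illustration only; they are NOT an actual NGA model, whose forms are far more elaborate.

```python
import math

def toy_gmpe_sa(magnitude, distance_km, period_s):
    """Hypothetical attenuation relation of the generic form
    ln(SA) = c0 + c1*M - c2*ln(R + c3) - c4*T  (NOT an actual NGA model)."""
    c0, c1, c2, c3, c4 = -2.0, 1.0, 1.2, 10.0, 0.5
    ln_sa = c0 + c1 * magnitude - c2 * math.log(distance_km + c3) - c4 * period_s
    return math.exp(ln_sa)  # spectral acceleration, toy units

# Scenario earthquake: M 6.5 at 20 km epicentral distance;
# evaluate the response spectrum at a few engineering periods.
spectrum = {T: toy_gmpe_sa(6.5, 20.0, T) for T in (0.2, 0.5, 1.0, 2.0)}
```

Real NGA relations also depend on site conditions, fault mechanism and rupture geometry; the sketch only conveys the magnitude-scaling and distance-attenuation structure.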
Global Seismic Monitoring: Past, Present, and Future
NASA Astrophysics Data System (ADS)
Zoback, M.; Benz, H.; Oppenheimer, D.
2007-12-01
Global seismological observations began in April 1889, when an earthquake in Tokyo, Japan was accurately recorded in Germany on two different horizontal pendulum instruments. However, modern global observational seismology really began 46 years ago, when the 120-station World Wide Standard Seismograph Network was installed by the US to monitor underground nuclear tests and earthquakes using well-calibrated short- and long-period stations. At the same time, rapid advances in computing technology enabled researchers to begin sophisticated analysis of the growing volume of seismic data, which led to a better understanding of earthquake source properties and their use in establishing plate tectonics. Today, global seismic networks are operated by Germany (GEOFON), France (Geoscope), the United States (Global Seismograph Network) and the International Monitoring System. Presently, the Federation of Digital Seismograph Networks registers more than 1,000 broadband stations worldwide, a small percentage of the total number of digital seismic stations around the world. Following the devastating Kobe, Japan and Northridge, California earthquakes, Japan and the US have led the world in integrating existing seismic sensor systems (weak and strong motion) into near-real-time, post-earthquake response products like ShakeMap, which details the spatial distribution of strong shaking. Future challenges include expanding real-time integration of both seismic and geodetic sensor systems to produce early warning of strong shaking, rapid source determination, and near-real-time post-earthquake damage assessment. Seismic network data, hydro-acoustic arrays, deep-water tide gauges and satellite imagery of wave propagation should be integrated in real time to provide input for hydrodynamic modeling yielding the distribution, timing and size of tsunami runup, which would then be available instantly on the web, e.g. in a Google Earth format. 
Dense arrays of strong motion sensors together with deployment of MEMS-type accelerometers in buildings and equipment routinely connected to the Web could potentially provide thousands of measurements of damaging strong ground motion. This technology could ultimately become part of smart building design enabling critical facilities to change their structural response to imminent strong shaking. Looking further forward, it is likely that a continuously observing spaceborne system could image the occurrence of "silent" or "slow" earthquakes as well as the propagation of ground displacement by surface waves at scales of continents.
Seismic hazard, risk, and design for South America
Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison
2018-01-01
We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental-scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground-motion models. The resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 years. Ground-shaking soil amplification at each site is calculated either by considering the uniform soil applied in modern building codes or by applying site-specific factors based on VS30 shear-wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. The resulting hazard and associated risk are high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best available science.
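The exceedance-probability levels quoted in hazard maps like these map directly onto return periods under the standard assumption of Poissonian (memoryless) earthquake occurrence. A minimal sketch of that conversion, assuming nothing beyond the Poisson model:

```python
import math

def poe_to_return_period(poe, t_years):
    """Return period implied by a probability of exceedance over t years,
    assuming Poissonian occurrence: poe = 1 - exp(-rate * t)."""
    annual_rate = -math.log(1.0 - poe) / t_years
    return 1.0 / annual_rate

# The design levels commonly quoted in hazard maps:
print(round(poe_to_return_period(0.02, 50)))   # 2% in 50 yr  -> 2475
print(round(poe_to_return_period(0.10, 50)))   # 10% in 50 yr -> 475
```

These are the familiar ~2,475-year and ~475-year return periods underlying the 2% and 10% in 50-year hazard maps.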
Seismic hazard assessment for Guam and the Northern Mariana Islands
Mueller, Charles S.; Haller, Kathleen M.; Luco, Nicholas; Petersen, Mark D.; Frankel, Arthur D.
2012-01-01
We present the results of a new probabilistic seismic hazard assessment for Guam and the Northern Mariana Islands. The Mariana island arc has formed in response to northwestward subduction of the Pacific plate beneath the Philippine Sea plate, and this process controls seismic activity in the region. Historical seismicity, the Mariana megathrust, and two crustal faults on Guam were modeled as seismic sources, and ground motions were estimated by using published relations for a firm-rock site condition. Maps of peak ground acceleration, 0.2-second spectral acceleration for 5 percent critical damping, and 1.0-second spectral acceleration for 5 percent critical damping were computed for exceedance probabilities of 2 percent and 10 percent in 50 years. For 2 percent probability of exceedance in 50 years, probabilistic peak ground acceleration is 0.94 gravitational acceleration at Guam and 0.57 gravitational acceleration at Saipan, 0.2-second spectral acceleration is 2.86 gravitational acceleration at Guam and 1.75 gravitational acceleration at Saipan, and 1.0-second spectral acceleration is 0.61 gravitational acceleration at Guam and 0.37 gravitational acceleration at Saipan. For 10 percent probability of exceedance in 50 years, probabilistic peak ground acceleration is 0.49 gravitational acceleration at Guam and 0.29 gravitational acceleration at Saipan, 0.2-second spectral acceleration is 1.43 gravitational acceleration at Guam and 0.83 gravitational acceleration at Saipan, and 1.0-second spectral acceleration is 0.30 gravitational acceleration at Guam and 0.18 gravitational acceleration at Saipan. The dominant hazard source at the islands is upper Benioff-zone seismicity (depth 40–160 kilometers). The large probabilistic ground motions reflect the strong concentrations of this activity below the arc, especially near Guam.
Code for Calculating Regional Seismic Travel Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Sanford; Hipp, James; Barker, Glenn
The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals; the model obtained in this way is assumed to be better than the starting model if it yields lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the event location with the goal of minimizing the difference between the observed and predicted travel times. This second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
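The residual computation at the heart of both applications can be sketched in a few lines. The pick and prediction values below are hypothetical; in practice the predictions would come from a forward calculator such as RSTT.

```python
def travel_time_residuals(observed, predicted):
    """Observed-minus-predicted travel times: the quantity a tomographic
    inversion (or a location code) tries to drive toward zero."""
    return [o - p for o, p in zip(observed, predicted)]

def rms(values):
    """Root-mean-square, a common scalar summary of the residual vector."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

observed = [61.2, 75.8, 90.1]    # seconds, hypothetical phase picks
predicted = [60.5, 76.4, 89.0]   # from a forward travel-time calculator
res = travel_time_residuals(observed, predicted)
print([round(r, 1) for r in res])   # -> [0.7, -0.6, 1.1]
```

An inversion would perturb the velocity model, recompute `predicted`, and accept the perturbation if the RMS of the residuals decreases.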
The Community Seismic Network: Enabling Observations Through Citizen Science Participation
NASA Astrophysics Data System (ADS)
Kohler, M. D.; Clayton, R. W.; Heaton, T. H.; Bunn, J.; Guy, R.; Massari, A.; Chandy, K. M.
2017-12-01
The Community Seismic Network is a dense accelerometer array deployed in the greater Los Angeles area and represents the future of densely instrumented urban cities, where localized vibration measurements are collected continuously throughout the free field and built environment. The hardware takes advantage of developments in the semiconductor industry in the form of inexpensive MEMS accelerometers, each coupled with a single-board computer. The data processing and archival architecture borrows from developments in cloud computing and network connectedness. The ability to deploy densely in the free field and in upper stories of mid- and high-rise buildings is enabled by community hosts for sensor locations. To this end, CSN has partnered with the Los Angeles Unified School District (LAUSD), the NASA Jet Propulsion Laboratory (JPL), and commercial and civic building owners to host sensors. At these sites, site amplification estimates from RMS noise measurements illustrate lateral variations in amplification over length scales of 100 m or less that correlate with gradients in the local geology, such as sedimentary basins that abut crystalline-rock foothills. This is complemented by high-resolution, shallow seismic velocity models obtained using an H/V method. In addition, noise statistics are used to determine the reliability of sites for ShakeMap and earthquake early warning data. The LAUSD and JPL deployments are examples of how situational awareness and centralized warning products such as ShakeMap and ShakeCast are enabled by citizen science participation. Several buildings have been instrumented with at least one triaxial accelerometer per floor, providing measurements for real-time structural health monitoring through local, customized displays. 
For real-time and post-event evaluation, the free-field and built-environment CSN data and products illustrate the feasibility of order-of-magnitude higher spatial resolution mapping than is currently possible with traditional regional seismic networks. The JPL experiment in particular represents a miniature prototype for city-wide earthquake monitoring that combines free-field measurements of ground shaking intensities with mid-rise building response through advanced fragility curve computations.
NASA Astrophysics Data System (ADS)
Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina
2017-11-01
This paper describes the model implementation and presents the results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents challenges not typically faced in standard PSHA, broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano, and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original computational tools, such as a new ground-motion prediction equation and a magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations were carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time-dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited eastern flank of Etna, and the resulting change in expected ground motion is discussed. These results do not account for M > 6 regional seismogenic sources, which control the hazard at long return periods. 
However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered in this study, we present a different viewpoint that, in our opinion, is relevant for retrofitting existing buildings and for guiding forthcoming risk-reduction interventions.
NASA Astrophysics Data System (ADS)
La Mura, Cristina; Gholami, Vahid; Panza, Giuliano F.
2013-04-01
In order to enable realistic and reliable earthquake hazard assessment and reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered. The propagation of seismic waves in complex, laterally varying 3D layered structures is a complicated process, and analytical solutions of the elastodynamic equations for such media are not known. The most common approaches to the formal description of seismic wavefields in such complex structures are methods based on direct numerical solutions of the elastodynamic equations, e.g. the finite-difference and finite-element methods, and approximate asymptotic methods. In this work, we present an innovative methodology for computing synthetic seismograms, complete with the main direct, refracted and converted phases and surface waves, in three-dimensional anelastic models, based on combining the Modal Summation technique with Asymptotic Ray Theory in the framework of the WKBJ approximation. The three-dimensional models are constructed from a set of vertically heterogeneous sections (1D structures) juxtaposed on a regular grid. These sections are distributed in the grid so as to fulfill the requirement of weak lateral inhomogeneity, which is the condition of applicability of the WKBJ approximation: the lateral gradient of the parameters characterizing the 1D structure must be small with respect to the prevailing wavelength. The new method has been validated by comparing synthetic seismograms with available records of three earthquakes in three different regions: the Kanto basin (Japan), shaken by the 1990 Odawara earthquake (Mw = 5.1); Romanian territory, shaken by the 30 May 1990 Vrancea intermediate-depth earthquake (Mw = 6.9); and Iranian territory, affected by the 26 December 2003 Bam earthquake (Mw = 6.6). 
Besides being a useful tool for seismic hazard assessment and seismic risk reduction, the method is highly efficient: once the study region is identified and the 3D model is constructed, computing the three components of the synthetic signal (displacement, velocity, and acceleration) at each station takes less than 3 hours on a 2 GHz CPU.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, A. J.
In our Exascale Computing Project (ECP) we seek to simulate earthquake ground motions at much higher frequency than is currently possible. Previous simulations in the SFBA were limited to 0.5-1 Hz or lower (Aagaard et al. 2008, 2010), while we have recently simulated the response up to 5 Hz. In order to improve confidence in simulated ground motions, we must accurately represent the three-dimensional (3D) sub-surface material properties that govern seismic wave propagation over a broad region. We are currently focusing on the San Francisco Bay Area (SFBA) with a Cartesian domain of size 120 x 80 x 35 km, but this area will be expanded to cover a larger domain. Currently, the United States Geological Survey (USGS) has a 3D model of the SFBA for seismic simulations. However, this model suffers from two serious shortcomings relative to our application: 1) it does not fit most of the available low frequency (< 1 Hz) seismic waveforms from moderate (magnitude M 3.5-5.0) earthquakes; and 2) it is represented with much lower resolution than necessary for the high frequency simulations (> 5 Hz) we seek to perform. The current model will serve as a starting model for full waveform tomography based on 3D sensitivity kernels. This report serves as the deliverable for our ECP FY2017 Quarter 4 milestone, "Computational approach to developing model updates". We summarize the current state of 3D seismic simulations in the SFBA and demonstrate the performance of the USGS 3D model for a few selected paths. We show the available open-source waveform data sets for model updates, based on moderate earthquakes recorded in the region. We present a plan for improving the 3D model utilizing the available data and further development of our SW4 application. We project how the model could be improved and present options for further improvements focused on the shallow geotechnical layers using dense passive recordings of ambient and human-induced noise.
US Geological Survey begins seismic ground response experiments in Washington State
Tarr, A.C.; King, K.W.
1988-01-01
This article briefly describes the experimental monitoring of minor seismic signals caused by distant nuclear explosions, mining blasts, and rhythmic human pushing against wooden homes. Some means of predicting ground response in Washington State are outlined, some effects of seismic amplification by weak clayey sediments are described, and the results of several experiments are presented. -A.Scarth
NASA Astrophysics Data System (ADS)
Matrullo, Emanuela; Lengliné, Olivier; Schmittbuhl, Jean; Karabulut, Hayrullah; Bouchon, Michel
2017-04-01
The Main Marmara Fault (MMF) is a 150 km unruptured segment of the North Anatolian Fault located below the Marmara Sea. It poses a significant hazard for the large cities surrounding the region, and in particular for the megalopolis of Istanbul. The seismic activity has been continuously monitored since 2007 by various seismic networks, making the region an extraordinary natural laboratory for studying the seismicity in detail, bringing insights into the geometry of the fault systems at depth and their mechanical properties at various space-time scales. Waveform similarity-based analysis is performed on the continuous recordings to construct a refined catalog of earthquakes from 2009 to 2014. High-resolution relocation was performed with the double-difference algorithm, using cross-correlation differential travel-time data. Seismic moment magnitudes (Mw) were computed by combining inversion of earthquake S-wave displacement spectra for the larger events with estimation of the relative sizes of multiplets using singular value decomposition (SVD), exploiting the highly coherent waveforms. The resulting catalog includes more than 15,000 events. The seismicity varies strongly along strike and with depth, exhibiting a complex structure that confirms the segmentation of the fault into sections with different mechanical behavior (Schmittbuhl et al., GGG, 2016). In the central part of the Marmara Sea, seismicity is sparse and scattered. To the east, in the Cinarcik basin, along the MMF, the seismicity is mainly located at 8-15 km depth, except at both ends of the basin, where it extends vertically up to the surface. In the Yalova and Gemlik regions (to the east, not on the MMF) the seismicity is distributed over a wide range of depths (from the surface to 15 km) and is characterized by several vertically elongated clusters. 
The spatio-temporal evolution of earthquake sequences, which repeatedly occur in specific sub-areas, and of the seismic moment release reveals two main kinds of seismicity dynamics: swarm-like episodes and mainshock-aftershock sequences. Similar features in the seismicity distribution are observed to the west, in the Tekirdag and Central basins. This preliminary evidence, combined with recent analysis of several long-lasting strike-slip seismic repeaters occurring below the Central Basin (Schmittbuhl et al., GRL, 2016), indicates the presence of both locked and creeping portions of the MMF. In the light of these accurate and extensive observations, several open questions emerge: What are the mechanisms responsible for these repeating earthquakes and for the earthquake swarms? What are the influence and role of fluids in the generation of seismicity?
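Once a seismic moment M0 has been estimated, whether from spectral inversion or from relative sizing of multiplets, the moment magnitude follows from the standard Hanks-Kanamori relation. The sketch below shows only that final conversion, not the spectral-inversion or SVD steps the study actually performs.

```python
import math

def moment_magnitude(m0_newton_meters):
    """Standard moment magnitude relation (Hanks & Kanamori):
    Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# A seismic moment of 1e17 N*m corresponds to roughly Mw 5.3:
print(round(moment_magnitude(1.0e17), 2))   # -> 5.27
```

Because the relation is logarithmic, a factor-of-32 (10^1.5) increase in moment raises Mw by exactly one unit.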
Tools for educational access to seismic data
NASA Astrophysics Data System (ADS)
Taber, J. J.; Welti, R.; Bravo, T. K.; Hubenthal, M.; Frechette, K.
2017-12-01
Student engagement can be increased both by providing easy access to real data and by addressing newsworthy events such as recent large earthquakes. IRIS EPO has a suite of access and visualization tools that can be used for such engagement, including a set of three tools that allow students to explore global seismicity, use seismic data to determine Earth structure, and view and analyze near-real-time ground motion data in the classroom. These tools are linked to online lessons designed for use in middle school through introductory undergraduate classes. The IRIS Earthquake Browser (IEB) allows discovery of key aspects of plate tectonics, earthquake locations (in pseudo 3D), and seismicity rates and patterns. IEB quickly displays up to 20,000 seismic events over up to 30 years, making it one of the most responsive, practical ways to visualize historical seismicity in a browser. Maps are bookmarkable and preserve state, meaning IEB map links can be shared or worked into a lesson plan. The Global Seismogram Plotter automatically creates visually clear seismic record sections from selected large earthquakes; these are tablet-friendly and can also be printed for use in a classroom without computers. The plots are designed to be usable with no parameters to set, but users can also modify them, for example by including a recording station near a chosen location. A guided exercise is provided in which students use the record section to discover the diameter of Earth's outer core; students can pick and compare phase arrival times onscreen, which is key to performing the exercise. A companion station map shows station locations and further information and is linked to the record section. jAmaSeis displays seismic data in real time from a local instrument and/or from remote seismic stations that stream data using standard seismic data protocols, and can be used in the classroom or as a public display. 
Users can filter data, fit a seismogram to travel time curves, triangulate event epicenters on a globe, estimate event magnitudes, and generate images showing seismograms and corresponding calculations. All three tools access seismic databases curated by IRIS Data Services. In addition, jAmaSeis can also access data from non-IRIS sources.
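The classic classroom triangulation exercise mentioned above rests on a simple relation: the S-minus-P arrival-time difference at a station fixes its distance from the epicenter, and circles of those radii from three stations intersect at the epicenter. A minimal sketch, where the constant crustal velocities are assumptions (real exercises use travel-time curves):

```python
def sp_distance_km(sp_time_s, vp=6.0, vs=3.5):
    """Epicentral distance from the S-minus-P time difference, assuming
    constant crustal P and S velocities (km/s): d = dt * vp*vs / (vp - vs)."""
    return sp_time_s * (vp * vs) / (vp - vs)

# With these velocities, each second of S-P time is 8.4 km of distance,
# close to the familiar "~8 km per second" classroom rule of thumb.
print(sp_distance_km(10.0))   # 10 s of S-P time -> 84 km
```

With three such distances, a student draws three circles on the map and reads the epicenter off their intersection, exactly the workflow the triangulation tool automates.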
NASA Astrophysics Data System (ADS)
Tang, H.; Sun, W.
2016-12-01
The theoretical computation of dislocation theory in a given earth model is necessary to explain observations of the co- and post-seismic deformation of earthquakes. For this purpose, computation theories based on layered or pure half-space models [Okada, 1985; Okubo, 1992; Wang et al., 2006] and on a spherically symmetric earth [Piersanti et al., 1995; Pollitz, 1997; Sabadini & Vermeersen, 1997; Wang, 1999] have been proposed. The compressibility, curvature and continuous variation of the radial structure of the Earth should be taken into account simultaneously for modern high-precision displacement-based observations such as GPS. Therefore, Tanaka et al. [2006; 2007] computed global displacement and gravity variations by combining the reciprocity theorem (RPT) [Okubo, 1993] with numerical inverse Laplace integration (NIL) instead of the normal mode method [Peltier, 1974]. Without using RPT, we follow the straightforward numerical integration of co-seismic deformation given by Sun et al. [1996] to present a straightforward numerical inverse Laplace integration method (SNIL). This method is used to compute the co- and post-seismic displacement of point dislocations buried in a spherically symmetric, self-gravitating, viscoelastic and multilayered earth model and is easily extended to geoid and gravity applications. Compared with pre-existing methods, it is more straightforward and time-saving, mainly because we sum the associated Legendre polynomials and dislocation Love numbers before using the Riemann-Mellin formula to implement the SNIL.
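The summation the abstract alludes to, weighting Legendre polynomials by degree-dependent coefficients, can be sketched with the Bonnet recurrence. The weights here are purely illustrative; in the dislocation problem the coefficients would combine dislocation Love numbers and source terms, and associated Legendre functions of nonzero order also enter.

```python
def legendre_sum(coeffs, x):
    """Evaluate sum_n c_n * P_n(x) via the Bonnet recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),
    accumulating the weighted sum term by term."""
    p_prev, p_curr = 1.0, x          # P_0(x) and P_1(x)
    total = coeffs[0] * p_prev
    if len(coeffs) > 1:
        total += coeffs[1] * p_curr
    for n in range(1, len(coeffs) - 1):
        p_next = ((2 * n + 1) * x * p_curr - n * p_prev) / (n + 1)
        total += coeffs[n + 1] * p_next
        p_prev, p_curr = p_curr, p_next
    return total

# P_0 + P_1 + P_2 evaluated at x = 1 is 3, since P_n(1) = 1 for all n.
print(legendre_sum([1.0, 1.0, 1.0], 1.0))   # -> 3.0
```

Precomputing such sums once, before the numerical inverse Laplace integration over many time samples, is what makes the approach time-saving.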
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, R.L.; Gross, D.; Pearson, D.C.
In an attempt to better understand the impact that large mining shots will have on verifying compliance with the international Comprehensive Test Ban Treaty (CTBT, no nuclear explosion tests), a series of seismic and videographic experiments has been conducted during the past two years at the Black Thunder Coal Mine. Personnel from the mine and Los Alamos National Laboratory have cooperated closely to design and perform experiments producing results of mutual benefit to both organizations. This paper summarizes the activities, highlighting the unique results of each. Topics covered in these experiments include: (1) synthesis of seismic, videographic, acoustic, and computer modeling data to improve understanding of shot performance and phenomenology; (2) development of computer-generated visualizations of observed blasting techniques; (3) documentation of azimuthal variations in the radiation of seismic energy from overburden casting shots; (4) identification of, as yet unexplained, out-of-sequence, simultaneous detonations in some shots using seismic and videographic techniques; (5) comparison of local (0.1 to 15 kilometer range) and regional (100 to 2,000 kilometer range) seismic measurements leading to determination of the relationship between local and regional seismic amplitude and explosive yield for overburden cast, coal bulking and single-fired explosions; and (6) determination of the types of mining shots triggering the prototype International Monitoring System for the CTBT.
NASA Astrophysics Data System (ADS)
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. A standard mesh for the three-dimensional soil column is suggested that satisfies the specified maximum frequency. Where soil properties differed significantly, the layered soil column was divided into multiple sub-layers, each with its own viscous damping matrix according to its shear-wave velocity. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.
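The mesh criterion mentioned above is commonly expressed as a limit on element size derived from the shortest shear wavelength to be propagated. A minimal sketch, assuming the usual 8-10 nodes-per-wavelength rule of thumb (the exact criterion used in the study is not stated in the abstract):

```python
def max_element_size(vs_min, f_max, points_per_wavelength=10):
    """Upper bound on element dimension so that the shortest shear
    wavelength, lambda_min = vs_min / f_max, is sampled by roughly
    `points_per_wavelength` nodes."""
    return vs_min / (points_per_wavelength * f_max)

# soft layer with Vs = 150 m/s, frequency content resolved up to 15 Hz
h_max = max_element_size(150.0, 15.0)   # 1.0 m element limit
```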
Development of a Web Based Simulating System for Earthquake Modeling on the Grid
NASA Astrophysics Data System (ADS)
Seber, D.; Youn, C.; Kaiser, T.
2007-12-01
Existing cyberinfrastructure-based information, data and computational networks now allow development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze the EarthScope USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment, and it is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, including geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with its dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities.
Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and speed up the scientific discovery process.
Seismic Wavefield Imaging of Long-Period Ground Motion in the Tokyo Metropolitan Area, Japan
NASA Astrophysics Data System (ADS)
Nagao, H.; Kano, M.; Nagata, K.; Ito, S. I.; Sakai, S.; Nakagawa, S.; Hori, M.; Hirata, N.
2017-12-01
Long-period ground motions due to large earthquakes can cause devastating disasters, especially in urbanized areas located on sedimentary basins. To assess and mitigate such damage, it is essential to rapidly evaluate seismic hazards for infrastructures, which can be simulated by seismic response analyses that use waveforms at the base of each infrastructure as an input ground motion. The present study reconstructs the seismic wavefield in the Tokyo metropolitan area, located on the Kanto sedimentary basin, Japan, from seismograms of the Metropolitan Seismic Observation network (MeSO-net). The obtained wavefield fully explains the observed waveforms in the frequency band of 0.10-0.20 Hz. This is attributed to the seismic wavefield imaging technique proposed by Kano et al. (2017), which implements the replica exchange Monte Carlo method to simultaneously estimate model parameters related to the subsurface structure and source information. Further investigation shows that the reconstructed seismic wavefield below 0.30 Hz is of high quality in terms of variance reduction (VR), which quantifies the misfit in waveforms, but that the VR rapidly worsens at higher frequencies. Meanwhile, the velocity response spectra show good agreement with observations up to 0.90 Hz in terms of the combined goodness of fit (CGOF), a measure of misfit in the velocity response spectra. By inputting the reconstructed wavefield into seismic response analyses, we can rapidly assess the overall damage to infrastructures immediately after a large earthquake.
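Replica exchange (parallel tempering) Monte Carlo, the sampler named above, runs several chains at different temperatures and occasionally swaps their states so the cold chain can escape local optima. Below is a minimal one-parameter sketch on a toy bimodal posterior; the function, step size, and temperatures are illustrative choices, not the configuration of Kano et al. (2017).

```python
import numpy as np

def parallel_tempering(log_post, n_iter=5000, temps=(1.0, 4.0), step=1.0, seed=0):
    """Minimal replica exchange MCMC: one random-walk Metropolis chain per
    temperature, with periodic swap proposals between neighbouring
    temperatures. Returns the trace of the cold (T=temps[0]) chain."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(temps))                      # one state per replica
    cold_trace = []
    for it in range(n_iter):
        for j, T in enumerate(temps):             # within-chain Metropolis step
            prop = x[j] + step * rng.normal()
            if np.log(rng.uniform()) < (log_post(prop) - log_post(x[j])) / T:
                x[j] = prop
        if it % 10 == 0:                          # swap attempt between neighbours
            j = rng.integers(len(temps) - 1)
            d = (1.0/temps[j] - 1.0/temps[j+1]) * (log_post(x[j+1]) - log_post(x[j]))
            if np.log(rng.uniform()) < d:
                x[j], x[j+1] = x[j+1], x[j]
        cold_trace.append(x[0])
    return np.array(cold_trace)

# toy bimodal posterior with well-separated modes at -3 and +3
logp = lambda v: np.logaddexp(-0.5*(v - 3.0)**2, -0.5*(v + 3.0)**2)
trace = parallel_tempering(logp)
```

The hot chain flattens the posterior, and the swap moves transfer its wide excursions to the cold chain, which is what makes simultaneous structure-plus-source estimation tractable.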
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Vallenilla Ferrara, A. M.
2009-04-01
The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively-distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at a depth of several kilometers, it is primarily used by the hydrocarbon industry and hardly ever for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands extensive experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for such studies. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated.
The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is replaced by remote access to high-performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven, highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow publishing already accomplished projects, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.
Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
NASA Astrophysics Data System (ADS)
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-08-01
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
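The error-bounded contract described here can be imitated with a toy compressor: uniform scalar quantization with bin width 2*tol guarantees an absolute reconstruction error of at most tol. This is only an illustrative stand-in for the far more effective floating-point compressor integrated in the workflow.

```python
import numpy as np

def compress(field, tol):
    """Toy error-bounded lossy compressor: quantize to integer multiples
    of 2*tol, so that |x - decompress(compress(x))| <= tol pointwise."""
    return np.round(field / (2.0 * tol)).astype(np.int32)

def decompress(codes, tol):
    """Reconstruct the field from the integer quantization codes."""
    return codes.astype(np.float64) * (2.0 * tol)

rng = np.random.default_rng(1)
strain = rng.normal(scale=1e-6, size=(64, 64))   # synthetic strain snapshot
tol = 1e-8                                       # user-specified error bound
restored = decompress(compress(strain, tol), tol)
```

A real codec adds entropy coding and block transforms on top of quantization to reach the order-of-magnitude storage reductions reported above; the tolerance-bound behaviour is the same.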
NASA Astrophysics Data System (ADS)
Toprak, A. Emre; Gülay, F. Gülten; Ruge, Peter
2008-07-01
Determination of the seismic performance of existing buildings has become one of the key topics in structural analysis after recent earthquakes (e.g. the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995 and the Northridge earthquake in 1994). Considering the need for precise assessment tools to determine the seismic performance level, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study of the code-based seismic assessment of RC buildings with linear static methods of analysis is performed on an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building that was exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, and it does not have any significant structural irregularities. The rectangular plan dimensions are 16.40 m × 7.80 m = 127.92 m2, with five spans in the x and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake, and a retrofitting process with added shear walls was suggested by the authorities. The computations show that linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure.
The base shear value computed according to Eurocode 8 is much higher than that required by the Turkish Earthquake Code, even though the selected ground conditions represent the same characteristics. The main reason is that the ordinate of the horizontal elastic response spectrum in Eurocode 8 is increased by the soil factor. In the force-based linear assessment of TEC'07, the seismic demands at cross-sections are checked against residual moment capacities, whereas the chord rotations of primary ductile elements must be checked for the Eurocode safety verifications. On the other hand, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.
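The soil-factor effect noted above can be seen directly in the shape of the Eurocode 8 Type 1 horizontal elastic spectrum, a sketch of which follows. The corner periods are the code's tabulated values for ground type C, and ag = 0.28 g matches the case-study motion; the case-study building's actual spectral demand depends on parameters not given in the abstract.

```python
def ec8_elastic_spectrum(T, ag, S, TB, TC, TD, eta=1.0):
    """Eurocode 8 horizontal elastic response spectrum Se(T) (EN 1998-1).
    The soil factor S scales the whole spectrum, which is why the EC8
    base shear exceeds the TEC'07 demand for the same ground type."""
    if T <= TB:                                   # rising branch
        return ag * S * (1.0 + T / TB * (eta * 2.5 - 1.0))
    if T <= TC:                                   # constant-acceleration plateau
        return ag * S * eta * 2.5
    if T <= TD:                                   # constant-velocity branch
        return ag * S * eta * 2.5 * TC / T
    return ag * S * eta * 2.5 * TC * TD / T**2    # constant-displacement branch

# Type 1 spectrum, ground type C (S = 1.15, TB = 0.20 s, TC = 0.60 s, TD = 2.0 s)
Se = ec8_elastic_spectrum(0.4, 0.28, 1.15, 0.20, 0.60, 2.0)   # plateau: 0.805 g
```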
NASA Astrophysics Data System (ADS)
Nakashima, Yoshito; Komatsubara, Junko
Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.
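The multifractal-versus-monofractal distinction used here is usually quantified through the generalized (Renyi) dimensions D_q estimated by box counting. A minimal sketch follows (q = 1 is excluded for simplicity, since it needs the entropy limit); a homogeneous image yields D_q close to 2 for every q, the monofractal limit that the fully mixed strata approach.

```python
import numpy as np

def generalized_dimensions(image, qs=(-2, 0, 2), box_sizes=(2, 4, 8, 16)):
    """Box-counting estimate of the Renyi generalized dimensions D_q
    (q != 1) of a 2-D measure. A multifractal shows D_q varying with q;
    a fully mixed, near-homogeneous image collapses to D_q ~ 2 for all q."""
    p_total = image.sum()
    L = image.shape[0]
    dims = {}
    for q in qs:
        log_eps, log_Z = [], []
        for b in box_sizes:
            n = L // b
            # probability mass in each b-by-b box
            boxes = image[:n*b, :n*b].reshape(n, b, n, b).sum(axis=(1, 3)) / p_total
            boxes = boxes[boxes > 0]
            log_eps.append(np.log(b / L))
            log_Z.append(np.log((boxes ** q).sum()))
        tau = np.polyfit(log_eps, log_Z, 1)[0]   # partition-function exponent tau(q)
        dims[q] = tau / (q - 1)                  # D_q = tau(q) / (q - 1)
    return dims

# fully homogeneous 64x64 "image": every D_q should be ~2 (monofractal limit)
d = generalized_dimensions(np.ones((64, 64)))
```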
Long Duration of Ground Motion in the Paradigmatic Valley of Mexico
Cruz-Atienza, V. M.; Tago, J.; Sanabria-Gómez, J. D.; Chaljub, E.; Etienne, V.; Virieux, J.; Quintanar, L.
2016-01-01
Built on top of ancient lake deposits, Mexico City experiences some of the largest seismic site effects worldwide. Besides the extreme amplification of seismic waves, the duration of intense ground motion from large subduction earthquakes exceeds three minutes in the lake-bed zone of the basin, where hundreds of buildings collapsed or were seriously damaged during the magnitude 8.0 Michoacán earthquake in 1985. Different mechanisms contribute to the long-lasting motions, such as the regional dispersion and multiple scattering of the incoming wavefield from the coast, more than 300 km away from the city. By means of high-performance computational modeling we show that, despite the highly dissipative basin deposits, seismic energy can propagate long distances in the deep structure of the valley, also promoting a large elongation of motion. Our simulations reveal that the seismic response of the basin is dominated by surface-wave overtones, and that this mechanism increases the duration of ground motion by more than 170% and 290% of the incoming wavefield duration at 0.5 and 0.3 Hz, respectively, the two frequencies with the largest observed amplification. This conclusion contradicts what has previously been stated in observational and modeling investigations, where the basin itself has been discarded as a preponderant factor promoting long and devastating shaking in Mexico City. PMID:27934934
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Rennolet, S.; Thompson, E.; Yeck, W.; McNamara, D. E.; Herrmann, R. B.; Powers, P.; Hoover, S. M.
2016-12-01
Recent efforts to characterize the seismic hazard resulting from increased seismicity rates in Oklahoma and Kansas highlight the need for a regionalized ground motion characterization. To support these efforts, we measure and compile strong ground motions and compare the average ground motion intensity measures (IMs) with existing ground motion prediction equations (GMPEs). IMs are computed for available broadband and strong-motion records from M≥3 earthquakes occurring January 2009-April 2016, using standard strong-motion processing guidelines. We verified our methods by comparing results from specific earthquakes with other standard procedures such as the USGS ShakeMap system. The large number of records required an automated processing scheme, which was complicated by the extremely high rate of small-magnitude earthquakes during 2014-2016. Orientation-independent IMs include peak ground motions (acceleration and velocity) and pseudo-spectral accelerations (5 percent damping, 0.1-10 s period). Metadata for the records include relocated event hypocenters. The database includes more than 160,000 records from about 3,200 earthquakes. Estimates of the mean and standard deviation of the IMs are computed by distance binning at intervals of 2 km. Mean IMs exhibit a clear break in geometrical attenuation at epicentral distances of about 50-70 km, which is consistent with previous studies in the CEUS. Comparisons of these ground motions with modern GMPEs provide some insight into the IMs of induced earthquakes in Oklahoma and Kansas relative to those in the western U.S. and the central and eastern U.S. The site response of these stations is uncertain because very little is known about shallow seismic velocity in the region, and we make no attempt to correct observed IMs to a reference site condition. At close distances, the observed IMs are lower than the predictions of the seed GMPEs of the NGA-East project (and roughly consistent with NGA-West2 ground motions).
This ground motion database may be used to inform future seismic hazard forecast models and in the development of regionally appropriate GMPEs.
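The 2-km distance binning of IMs described above can be sketched as follows; the synthetic 1/R decay and scatter level are purely illustrative, not properties of the Oklahoma-Kansas database.

```python
import numpy as np

def binned_im_stats(r_km, log_im, bin_width=2.0):
    """Bin log intensity measures by epicentral distance and return the
    bin centres with the mean and standard deviation in each bin."""
    edges = np.arange(0.0, r_km.max() + bin_width, bin_width)
    idx = np.digitize(r_km, edges) - 1
    centres, means, stds = [], [], []
    for i in range(len(edges) - 1):
        sel = idx == i
        if sel.sum() < 2:
            continue                      # skip sparsely populated bins
        centres.append(0.5 * (edges[i] + edges[i + 1]))
        means.append(log_im[sel].mean())
        stds.append(log_im[sel].std(ddof=1))
    return np.array(centres), np.array(means), np.array(stds)

# synthetic records: 1/R geometrical spreading plus lognormal scatter
rng = np.random.default_rng(2)
r = rng.uniform(2.0, 60.0, 5000)
log_pga = -np.log10(r) + 0.1 * rng.normal(size=r.size)
c, m, s = binned_im_stats(r, log_pga)
```

Plotting the binned means against distance is what reveals breaks in geometrical attenuation such as the 50-70 km transition noted above.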
Applying matching pursuit decomposition time-frequency processing to UGS footstep classification
NASA Astrophysics Data System (ADS)
Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.
2013-06-01
The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
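The core MPD loop, greedily matching dictionary atoms to the residual, can be sketched in a few lines. The Gabor-like atoms below are an illustrative stand-in for the cadence-matched dictionary built from real seismic measurements.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Plain matching pursuit: at each step pick the dictionary atom with
    the largest correlation against the residual, record its coefficient,
    and subtract its contribution. Atoms are the (unit-norm) columns of
    `dictionary`."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, corr[k]))
        residual -= corr[k] * dictionary[:, k]
    return picks, residual

# toy dictionary: Gaussian-windowed cosines at a few frequencies and shifts
N = 128
t = np.arange(N)
atoms = []
for f in (4, 8, 16):
    for c in (32, 64, 96):
        a = np.exp(-0.5 * ((t - c) / 10.0) ** 2) * np.cos(2 * np.pi * f * t / N)
        atoms.append(a / np.linalg.norm(a))
D = np.column_stack(atoms)

sig = 2.0 * D[:, 1] + 0.5 * D[:, 7]        # 2-sparse synthetic "footstep"
picks, res = matching_pursuit(sig, D, n_iter=5)
```

The sequence of selected atoms and coefficients is the parsimonious time-frequency feature vector that downstream Bayesian classification consumes.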
Modelling of Seismic and Resistivity Responses during the Injection of CO2 in Sandstone Reservoir
NASA Astrophysics Data System (ADS)
Omar, Muhamad Nizarul Idhafi Bin; Almanna Lubis, Luluan; Nur Arif Zanuri, Muhammad; Ghosh, Deva P.; Irawan, Sonny; Regassa Jufar, Shiferaw
2016-07-01
Enhanced oil recovery (EOR) plays a vital role in the production phase of a producing oil field. Initially, in many cases, hydrocarbons flow naturally to the well under reservoir pressure. Over time, however, the flow decreases as the pressure declines, requiring a recovery method, enhanced oil recovery, to restore it. Generally, EOR works by injecting substances such as carbon dioxide (CO2) to create a pressure difference that establishes a constant productive flow of hydrocarbons to the production well. Monitoring CO2 performance is crucial to ensuring that the right trajectory and pressure differences are established so that the technique succeeds in restoring hydrocarbon flow. In this paper, we use a computer simulation method to monitor CO2 performance with seismic and resistivity models, enabling geoscientists and reservoir engineers to monitor production behaviour with respect to CO2 injection.
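A common first-order way to model the seismic effect of CO2 replacing brine (not necessarily the method of this paper) is Gassmann fluid substitution: lowering the pore-fluid bulk modulus lowers the saturated P-velocity, which is the signal that time-lapse seismic monitoring looks for. All numerical values below are illustrative assumptions.

```python
import numpy as np

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Gassmann's equation for the saturated bulk modulus, the standard
    first-order tool for predicting the seismic response of a sandstone
    to a change of pore fluid (e.g. brine partially replaced by CO2)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def vp(k_sat, mu, rho):
    """P-wave velocity in km/s for moduli in GPa and density in g/cc."""
    return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)

# illustrative sandstone numbers (GPa, g/cc); not from the paper
k_dry, k_min, mu, phi = 12.0, 37.0, 10.0, 0.25
vp_brine = vp(gassmann_ksat(k_dry, k_min, 2.8, phi), mu, 2.25)   # brine-filled
vp_co2   = vp(gassmann_ksat(k_dry, k_min, 0.08, phi), mu, 2.15)  # CO2-filled
```

The resistivity side of the monitoring is complementary: CO2 is resistive relative to brine, so resistivity rises where velocity falls.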
NASA Astrophysics Data System (ADS)
Rougier, E.; Knight, E. E.
2015-12-01
The Source Physics Experiments (SPE) is a project funded by the U.S. Department of Energy at the Nevada National Security Site. The project consists of a series of underground explosive tests designed to gain more insight into the generation and propagation of seismic energy from underground explosions in hard-rock media (granite). To date, four tests (SPE-1, SPE-2, SPE-3 and SPE-4Prime) with yields ranging from 87 kg to 1000 kg have been conducted in the same borehole. The generation and propagation of seismic waves is heavily influenced by the different damage mechanisms occurring at different ranges from the explosive source. These damage mechanisms include pore crushing, compressive (shear) damage, joint damage, spallation, and fracture and fragmentation. Understanding these mechanisms and how they interact with each other is essential to the interpretation of the characteristics of close-in seismic observables. Recent observations demonstrate that, for relatively small and shallow chemical explosions in granite, such as SPE-1, -2 and -3, the formation of a cavity around the working point is not the main mechanism responsible for the release of seismic moment. Shear dilatancy (bulking occurring as a consequence of compressive damage) of the medium around the source has been proposed as an alternative damage mechanism that explains the seismic moment release observed in the experiments. In this work, the interaction between cavity formation and bulking is investigated via a series of computer simulations of the SPE-2 event. The simulations are conducted using a newly developed material model called AZ_Frac, a continuum-based, visco-plastic, strain-rate-dependent material model. One of its key features is its ability to describe continuum fracture processes while properly handling anisotropic material characteristics.
The implications of the near source numerical results on the close-in seismic quantities, such as reduced displacement potentials and source spectra are presented.
Evaluation of Seismic Risk of Siberia Territory
NASA Astrophysics Data System (ADS)
Seleznev, V. S.; Soloviev, V. M.; Emanov, A. F.
The outcomes of modern geophysical research by the Geophysical Survey SB RAS, aimed at studying the geodynamic situation in large industrial and civil centers on the territory of Siberia with the purpose of evaluating the seismic risk of these territories and predicting the origin of extreme situations of natural and man-made character, are presented in the paper. First of all, this concerns the testing and updating of a geoinformation system developed by the Russian Emergency Ministry and designed for calculations regarding seismic hazard and response to destructive earthquakes. The GIS database contains catalogues of earthquakes and faults, seismic zonation maps, vectorized city maps, information on industrial and housing stock, data on the character of buildings and population in inhabited places, etc. The geoinformation system allows the following problems to be solved on the basis of probabilistic approaches: estimating the earthquake impact and the forces, facilities and supplies required for life support of the injured population; determining the consequences of failures at chemical and explosion-hazardous facilities; and optimizing the technology for conducting salvage operations. Using this computer program, maps of earthquake risk have been constructed for several seismically dangerous regions of Siberia. These maps display data on the probable number of injured people and the relative economic damage from an earthquake that could occur at various sites in the territory according to the seismic zonation map. The obtained maps have made it possible to determine places where detailed seismological observations should be arranged.
Alongside this work, wide-ranging investigations are being carried out on the territory of Siberia: the use of new methods for evaluating the physical state of industrial and civil structures (buildings, hydroelectric power stations, bridges, dams, etc.), high-performance detailed electromagnetic surveys of the ground conditions of city territories, roads, runways, etc., and studies of seismic conditions in large industrial and civil centers, among others.
Effect of strong elastic contrasts on the propagation of seismic wave in hard-rock environments
NASA Astrophysics Data System (ADS)
Saleh, R.; Zheng, L.; Liu, Q.; Milkereit, B.
2013-12-01
Understanding the propagation of seismic waves in the presence of strong elastic contrasts, such as topography, tunnels and ore bodies, is still a challenge. Safety in mining is a major concern, and seismic monitoring is the main tool here. For engineering purposes, the amplitudes (peak particle velocity/acceleration) and travel times of seismic events (mostly blasts or microseismic events) are critical parameters that have to be determined at various locations in a mine. These parameters are useful in preparing risk maps or in better understanding the process of spatial and temporal stress distributions in a mine. The simple constant-velocity models used for monitoring studies in mining cannot explain the observed complexities in scattered seismic waves. In hard-rock environments, modeling of the elastic seismic wavefield requires detailed 3D petrophysical, infrastructure and topographical data to simulate the propagation of seismic waves with frequencies up to a few kilohertz. With the development of efficient numerical techniques and parallel computation facilities, a solution to such a problem is achievable. In this study, the effects of strong elastic contrasts such as ore bodies, rough topography and tunnels are illustrated using 3D modeling methods. The main tools here are the finite-difference code SOFI3D [1], which has been benchmarked for engineering studies, and the spectral element code SPECFEM [2], which was developed for global seismology problems. The modeling results show locally enhanced peak particle velocity due to the presence of strong elastic contrasts and topography in the models. [1] Bohlen, T. Parallel 3-D viscoelastic finite difference seismic modeling. Computers & Geosciences 28 (2002) 887-899. [2] Komatitsch, D., and J. Tromp, Introduction to the spectral-element method for 3-D seismic wave propagation, Geophys. J. Int., 139, 806-822, 1999.
NASA Astrophysics Data System (ADS)
Fortin, W.; Holbrook, W. S.; Mallick, S.; Everson, E. D.; Tobin, H. J.; Keranen, K. M.
2014-12-01
Understanding the geologic composition of the Cascadia Subduction Zone (CSZ) is critically important in assessing seismic hazards in the Pacific Northwest. Despite being a potential earthquake and tsunami threat to millions of people, key details of the structure and fault mechanisms remain poorly understood in the CSZ. In particular, the position and character of the subduction interface remain elusive due to its relative aseismicity and low seismic reflectivity, making imaging difficult for both passive and active source methods. Modern active-source reflection seismic data acquired as part of the COAST project in 2012 provide an opportunity to study the transition from the Cascadia basin, across the deformation front, and into the accretionary prism. Coupled with advances in seismic inversion methods, these new data allow us to produce detailed velocity models of the CSZ and accurate pre-stack depth migrations for studying geologic structure. While still computationally expensive, current computing clusters can perform seismic inversions at resolutions that match that of the seismic image itself. Here we present pre-stack full waveform inversions of the central seismic line of the COAST survey offshore Washington State. The resultant velocity model is produced by inversion at every CMP location, 6.25 m laterally, with a vertical resolution of 0.2 times the dominant seismic wavelength. We report a good average correlation value, above 0.8 across the entire seismic line, determined by comparing synthetic gathers with the real pre-stack gathers. These detailed velocity models, both Vp and Vs, along with the density model, are a necessary step toward a detailed porosity cross-section to be used to determine the role of fluids in the CSZ. Additionally, the P-velocity model is used to produce a pre-stack depth migration image of the CSZ.
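The quoted average correlation above 0.8 is a scalar waveform-fit measure. A minimal sketch of one plausible definition, the zero-lag normalized correlation averaged over traces, follows (the survey's exact metric may differ):

```python
import numpy as np

def trace_correlation(observed, synthetic):
    """Zero-lag normalized correlation between an observed and a synthetic
    gather (traces along axis 0, time samples along axis 1), averaged
    over traces."""
    obs = observed - observed.mean(axis=1, keepdims=True)
    syn = synthetic - synthetic.mean(axis=1, keepdims=True)
    num = (obs * syn).sum(axis=1)
    den = np.sqrt((obs ** 2).sum(axis=1) * (syn ** 2).sum(axis=1))
    return (num / den).mean()

# synthetic check: a gather compared with itself and with a noisy copy
rng = np.random.default_rng(3)
gather = rng.normal(size=(24, 500))              # 24 traces, 500 samples
noisy = gather + 0.3 * rng.normal(size=gather.shape)
self_corr = trace_correlation(gather, gather)    # exactly 1 by construction
avg_corr = trace_correlation(gather, noisy)      # degraded by the added noise
```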
NASA Astrophysics Data System (ADS)
Chu, A.
2016-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementation from theory to computing practice. Up-to-date regional seismic data from seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of being strongly typed and of easy control over memory resources, while R has the advantage of numerous available functions for statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
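As a minimal sketch of the likelihood machinery involved (not the author's software; this uses the simplest temporal ETAS form with Omori-law triggering, and the names are illustrative):

```python
import math

def etas_loglik(times, T, mu, K, c, p):
    """Log-likelihood of a temporal ETAS model on [0, T] with intensity
    lambda(t) = mu + sum_{t_i < t} K / (t - t_i + c)**p  (times sorted)."""
    loglik = 0.0
    for j, t in enumerate(times):
        lam = mu + sum(K / (t - ti + c) ** p for ti in times[:j])
        loglik += math.log(lam)
    # Closed-form integral of the intensity over [0, T]
    integral = mu * T
    for ti in times:
        if p == 1.0:
            integral += K * (math.log(T - ti + c) - math.log(c))
        else:
            integral += K / (1.0 - p) * ((T - ti + c) ** (1.0 - p) - c ** (1.0 - p))
    return loglik - integral
```

An MLE routine would maximize this function over (mu, K, c, p); the work described above additionally estimates spatial triggering parameters and uses an EM formulation for stability.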
Dynamic characterization of the Chamousset rock column before its fall
NASA Astrophysics Data System (ADS)
Levy, C.; Baillet, L.; Jongmans, D.
2009-04-01
The rockfall of Chamousset (volume of 21,000 m3) occurred on November 10, 2007, affecting the 300 m high Urgonian cliff of the southern Vercors massif, French Alps. This event took place when the Vercors plateau was covered by snow. The unstable column had previously been detected through observations of the development of a 30 m long fracture behind it on the plateau. Two aerial Lidar scans of the cliff were acquired before and after the failure, allowing the geometry of the column and of the failure plane to be determined. A temporary seismic array, along with two extensometers, was installed from July to November 2007. The seismic array consisted of 7 short-period seismometers (one three-component and six vertical-component). One vertical seismometer was installed on the column while the other six were deployed on the plateau with an array aperture of about 70 m. During the last two months of recording, the short-period seismometers were replaced by 4.5 Hz geophones. The monitoring system recorded in continuous mode (1000 Hz sampling frequency) but stopped working two weeks before the fall, after the solar panels were covered by snow. During the operating period, the seismic array recorded hundreds of local seismic events, from short (less than 0.5 s) impulsive signals to events with a long duration (a few tens of seconds). Our study first focused on the dynamic response of the column and on the seismic noise frequency content. Fourier spectra of the seismic noise signals recorded on the column and the corresponding spectral ratios showed the presence of several resonance frequencies of the column. The first resonance frequency was measured at 3.6 Hz in July 2007 and decreased steadily with time, reaching 2.6 Hz two weeks before the fall. In parallel, extensometer measurements showed that the fracture aperture increased with time during the same period. The dynamic response of a block separating from a rock mass was modelled numerically in 2D.
Finite element computations showed that progressive block decoupling, resulting from crack propagation inside the mass, generates a decrease in the natural frequency, as measured at the site. These results highlight the value of studying the dynamic response of an unstable column for hazard assessment purposes. In a second phase, we studied the recorded impulsive signals, in which we were able to identify P and S waves. Seismic experiments were performed in September 2008 on the plateau in order to constrain the ground velocity structure. Preliminary event locations show that the signal sources were located along the failure plane and probably result from micro-cracks along rock bridges.
NASA Astrophysics Data System (ADS)
Zhu, T.; Ajo Franklin, J. B.; Daley, T. M.
2015-12-01
Continuous active source seismic measurements (CASSM) were collected in the crosswell geometry during scCO2 injection at the Frio-II brine pilot (Liberty, TX). Previous studies (Daley et al., 2007, 2011) have demonstrated that spatial-temporal changes in the picked first arrival time after CO2 injection constrain the movement of the CO2 plume in the storage interval. To improve the quantitative constraints on plume saturation using this dataset, we investigate spatial-temporal changes in the seismic attenuation of the first arrivals. The attenuation changes over the injection period (~60 h) are estimated from the amount of the centroid frequency shift computed by local time-frequency analysis. Our observations include the following: at receivers above the packer, seismic attenuation shows no physical trend; at receivers below the packer, attenuation increases sharply as the CO2 plume grows during the first few hours and peaks at times that vary among the distributed receivers, consistent with observations from the time delays of first arrivals. Attenuation then decreases over the injection time as the CO2 plume continues to grow. This bell-shaped attenuation response as a function of time is consistent with White's patchy saturation model, which predicts an attenuation peak at intermediate CO2 saturations. Our analysis suggests that spatial-temporal attenuation change is an indicator of the movement/saturation of the CO2 plume at high saturations, a regime to which seismic measurements are typically only weakly sensitive.
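The centroid-frequency measurement at the core of this analysis can be sketched as follows (a plain FFT over a windowed first arrival; the authors use a local time-frequency transform instead):

```python
import numpy as np

def centroid_frequency(signal, dt):
    """Power-weighted mean frequency of a windowed arrival; attenuation
    along the path shifts this centroid toward lower frequencies."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return np.sum(freqs * power) / np.sum(power)
```

Tracking this value trace by trace over the ~60 h injection period yields the centroid frequency shift used as an attenuation proxy.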
NASA Astrophysics Data System (ADS)
Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki
2017-04-01
For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSNs) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSNs. In particular, SHM systems that install massive numbers of WSN nodes require efficient data transmission owing to their restricted communication capability. The dominant frequency band of seismic acceleration lies within 100 Hz. In addition, the response motions on upper floors of a structure are excited at a natural frequency, resulting in shaking concentrated in a narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We implement a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm applies the discrete Fourier transform to move to the frequency domain and band-pass filtering to compress the transmitted data. The compressed data are transmitted through computer networks, and the receiving node restores the data by the inverse Fourier transform. This paper evaluates the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32 of its original size. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique effectively reduces the amount of data while maintaining a small average error.
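A minimal sketch of such a scheme, assuming the sensing node transmits only the DFT coefficients inside the pass band and the receiving node rebuilds the record by the inverse transform (function names and band edges are illustrative, not from the paper):

```python
import numpy as np

def compress_band(accel, dt, f_lo, f_hi):
    """Sensing node: keep only DFT coefficients within [f_lo, f_hi] Hz;
    the returned coefficients and bin indices are what gets transmitted."""
    spec = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(len(accel), d=dt)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[keep], np.nonzero(keep)[0]

def restore(coeffs, idx, n):
    """Receiving node: re-insert the coefficients and inverse-transform."""
    spec = np.zeros(n // 2 + 1, dtype=complex)
    spec[idx] = coeffs
    return np.fft.irfft(spec, n)
```

Any in-band structural response survives the round trip; only out-of-band content contributes to the average error.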
NASA Astrophysics Data System (ADS)
Zhu, Tieyuan; Ajo-Franklin, Jonathan B.; Daley, Thomas M.
2017-09-01
A continuous active source seismic monitoring data set was collected with crosswell geometry during CO2 injection at the Frio-II brine pilot, near Liberty, TX. Previous studies have shown that spatiotemporal changes in the P wave first arrival time reveal the movement of the injected CO2 plume in the storage zone. To further constrain the CO2 saturation, particularly at higher saturation levels, we investigate spatial-temporal changes in the seismic attenuation of the first arrivals. The attenuation changes over the injection period are estimated by the amount of the centroid frequency shift computed by local time-frequency analysis. We observe that (1) at receivers above the injection zone seismic attenuation does not change in a physical trend; (2) at receivers in the injection zone attenuation sharply increases following injection and peaks at specific points varying with distributed receivers, which is consistent with observations from time delays of first arrivals; then, (3) attenuation decreases over the injection time. The attenuation change exhibits a bell-shaped pattern during CO2 injection. Under Frio-II field reservoir conditions, White's patchy saturation model can quantitatively explain both the P wave velocity and attenuation response observed. We have combined the velocity and attenuation change data in a crossplot format that is useful for model-data comparison and determining patch size. Our analysis suggests that spatial-temporal attenuation change is not only an indicator of the movement and saturation of CO2 plumes, even at large saturations, but also can quantitatively constrain CO2 plume saturation when used jointly with seismic velocity.
NASA Astrophysics Data System (ADS)
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the framework of research aimed at developing efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results confirmed that seismic interferometry provided improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings lining the tunnel excavation. In this study we show how tunnel construction activities have been characterized as a source of seismic signal and used in our research as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for use in the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wavefield generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one for each receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflection wavefield was obtained by cross-correlation of the transmitted wave data.
We applied the relationship between the transmission response and the reflection response for a 1D multilayer structure, and then its 3D extension (Wapenaar et al., 2004). As a result of this seismic interferometry experiment, the 3D reflectivity model (with its frequency and resolution ranges) was obtained. We also showed that the seismic interferometry approach can be applied in asynchronous seismic auscultation. The reflections detected in the virtual seismic sections are in agreement with the geological features encountered during the excavation of the tunnel and also with the petrophysical properties and parameters measured in previous geophysical borehole logging. References: Claerbout, J.F., 1968. Synthesis of a layered medium from its acoustic transmission response. Geophysics, 33, 264-269. Poletto, F., P. Corubolo, and P. Comelli, 2010. Drill-bit seismic interferometry with and without pilot signals. Geophysical Prospecting, 58, 257-265. Wapenaar, K., J. Thorbecke, and D. Draganov, 2004. Relations between reflection and transmission responses of three-dimensional inhomogeneous media. Geophysical Journal International, 156, 179-194.
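The core interferometric step, cross-correlating two receivers' recordings of an uncontrolled source so that one receiver acts as a virtual source for the other, can be sketched as follows (a schematic illustration, not the authors' processing flow):

```python
import numpy as np

def virtual_trace(rec_a, rec_b):
    """Cross-correlate two recordings of the same uncontrolled source.
    The causal lags approximate a trace recorded at B from a virtual
    source at A, with the unknown source signature cancelled."""
    n = len(rec_a)
    xc = np.correlate(rec_b, rec_a, mode="full")  # lags -(n-1) .. n-1
    return xc[n - 1:]                             # keep the causal part
```

For a common noise source, the correlation peak sits at the differential travel time between the two receivers, which is what a virtual shot gather is built from.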
1999-01-01
This report assesses the status, needs, and associated costs of seismic monitoring in the United States. It sets down the requirement for an effective, national seismic monitoring strategy and an advanced system linking national, regional, and urban monitoring networks. Modernized seismic monitoring can provide alerts of imminent strong earthquake shaking; rapid assessment of distribution and severity of earthquake shaking (for use in emergency response); warnings of a possible tsunami from an offshore earthquake; warnings of volcanic eruptions; information for correctly characterizing earthquake hazards and for improving building codes; and data on response of buildings and structures during earthquakes, for safe, cost-effective design, engineering, and construction practices in earthquake-prone regions.
Seismic waveform modeling over cloud
NASA Astrophysics Data System (ADS)
Luo, Cong; Friederich, Wolfgang
2016-04-01
With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved great success. Obtaining synthetic waveforms through numerical simulation is receiving increasing attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data processing skills. Training users to operate the numerical packages and to access and utilize computational resources correctly is a troublesome task. In addition, access to HPC is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while the HPC resources and a dedicated pipeline form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By giving users professional access to the computational code through its interfaces and delivering our computational resources over the cloud, the platform lets users customize simulations at an expert level and submit and run jobs directly.
Crowd-Sourcing Seismic Data for Education and Research Opportunities with the Quake-Catcher Network
NASA Astrophysics Data System (ADS)
Sumy, D. F.; DeGroot, R. M.; Benthien, M. L.; Cochran, E. S.; Taber, J. J.
2016-12-01
The Quake Catcher Network (QCN; quakecatcher.net) uses low cost micro-electro-mechanical system (MEMS) sensors hosted by volunteers to collect seismic data. Volunteers use accelerometers internal to laptop computers, phones, tablets or small (the size of a matchbox) MEMS sensors plugged into desktop computers using a USB connector to collect scientifically useful data. Data are collected and sent to a central server using the Berkeley Open Infrastructure for Network Computing (BOINC) distributed computing software. Since 2008, sensors installed in museums, schools, offices, and residences have collected thousands of earthquake records, including the 2010 M8.8 Maule, Chile, the 2010 M7.1 Darfield, New Zealand, and 2015 M7.8 Gorkha, Nepal earthquakes. In 2016, the QCN in the United States transitioned to the Incorporated Research Institutions for Seismology (IRIS) Consortium and the Southern California Earthquake Center (SCEC), which are facilities funded through the National Science Foundation and the United States Geological Survey, respectively. The transition has allowed for an influx of new ideas and new education related efforts, which include focused installations in several school districts in southern California, on Native American reservations in North Dakota, and in the most seismically active state in the contiguous U.S. - Oklahoma. We present and describe these recent educational opportunities, and highlight how QCN has engaged a wide sector of the public in scientific data collection, particularly through the QCN-EPIcenter Network and NASA Mars InSight teacher programs. QCN provides the public with information and insight into how seismic data are collected, and how researchers use these data to better understand and characterize seismic activity. Lastly, we describe how students use data recorded by QCN sensors installed in their classrooms to explore and investigate felt earthquakes, and look towards the bright future of the network.
Computational Modeling of Seismic Wave Propagation Velocity-Saturation Effects in Porous Rocks
NASA Astrophysics Data System (ADS)
Deeks, J.; Lumley, D. E.
2011-12-01
Compressional and shear velocities of seismic waves propagating in porous rocks vary as a function of the fluid mixture and its distribution in pore space. Although it has been possible to place theoretical upper and lower bounds on the velocity variation with fluid saturation, predicting the actual velocity response of a given rock with fluid type and saturation remains an unsolved problem. In particular, we are interested in predicting the velocity-saturation response to various mixtures of fluids with pressure and temperature, as a function of the spatial distribution of the fluid mixture and the seismic wavelength. This effect is often termed "patchy saturation" in the rock physics community. The ability to accurately predict seismic velocities for various fluid mixtures and spatial distributions in the pore space of a rock is useful for fluid detection, hydrocarbon exploration and recovery, CO2 sequestration, and the monitoring of many subsurface fluid-flow processes. We create digital rock models with various fluid mixtures, saturations and spatial distributions. We use finite difference modeling to propagate elastic waves of varying frequency content through these digital rock and fluid models to simulate a given lab or field experiment. The resulting waveforms can be analyzed to determine seismic traveltimes, velocities, amplitudes, attenuation and other wave phenomena for variable rock models of fluid saturation and spatial fluid distribution, and variable wavefield spectral content. We show that we can reproduce most of the published effects of velocity-saturation variation, including validating the Voigt and Reuss theoretical bounds, as well as the Hill "patchy saturation" curve. We also reproduce what has previously been identified as Biot dispersion, but which in our models is often seen to be wave multipathing and broadband spectral effects.
Furthermore, we find that in addition to the dominant seismic wavelength and average fluid patch size, the smoothness of the fluid patches is a critical factor in determining the velocity-saturation response; this is a result that we have not seen discussed in the literature. Most importantly, we can reproduce all of these effects using full elastic wavefield scattering, without the need to resort to more complicated squirt-flow or poroelastic models. This is important because the physical properties and parameters we need to model full elastic wave scattering, and predict a velocity-saturation curve, are often readily available for the projects we undertake; this is not the case for poroelastic or squirt-flow models. We can predict the velocity-saturation curve for a specific rock type, fluid mixture distribution and wavefield spectrum.
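For reference, the Voigt and Reuss bounds and the Hill average validated above can be written down in a few lines (the fluid moduli in the usage note are illustrative, not the authors' models):

```python
def voigt_reuss_hill(moduli, fractions):
    """Voigt (iso-strain, upper bound), Reuss (iso-stress, lower bound)
    and Voigt-Reuss-Hill average of an effective modulus of a mixture."""
    voigt = sum(f * m for m, f in zip(moduli, fractions))
    reuss = 1.0 / sum(f / m for m, f in zip(moduli, fractions))
    return voigt, reuss, 0.5 * (voigt + reuss)
```

For a 50/50 water-gas pore fill (bulk moduli of roughly 2.25 and 0.01 GPa), the Reuss bound collapses toward the gas modulus while the Voigt bound stays near the arithmetic mean, which is why fine-scale versus patchy fluid distributions produce such different velocities.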
Signal Quality and the Reliability of Seismic Observations
NASA Astrophysics Data System (ADS)
Zeiler, C. P.; Velasco, A. A.; Pingitore, N. E.
2009-12-01
The ability to detect, time and measure seismic phases depends on the location, size, and quality of the recorded signals. Additional constraints are an analyst's familiarity with a seismogenic zone and with the seismic stations that record the energy. An analyst's ability to detect, time and measure seismic signals has not been quantified or fully assessed. The fundamental measurement for computing the accuracy of a seismic measurement is the signal quality. Several methods have been proposed to measure signal quality; however, the signal-to-noise ratio (SNR), computed as a short-term average over a long-term average, has been widely adopted. While the standard SNR is easy and computationally inexpensive to compute, its overall statistical significance for seismic measurement analysis has not been established. The prospect of canonizing the process of cataloging seismic arrivals hinges on the ability to repeat measurements made by different methods and analysts. The first step in canonizing phase measurements was taken by the IASPEI, which established a reference for accepted practices in naming seismic phases. The New Manual of Seismological Observatory Practice (NMSOP, 2002) outlines key observations for seismic phases recorded at different distances and proposes to quantify timing uncertainty with a user-specified windowing technique. However, this added measurement would not completely remove the bias introduced by the different techniques analysts use to time seismic arrivals. The general guideline for timing a seismic arrival is to record the time at which a noticeable change in frequency and/or amplitude begins. This is generally achieved by enhancing the arrivals through filtering or beam forming. However, these enhancements can alter the characteristics of the arrival and how the arrival will be measured.
Furthermore, each enhancement has user-specified parameters that can vary between analysts, which reduces the ability to repeat measurements between analysts. The SPEAR project (Zeiler and Velasco, 2009) has started to explore the effects of comparing measurements from the same seismograms. Initial results showed that experience and signal quality are the leading contributors to pick differences. However, the traditional SNR method of measuring signal quality was replaced by a Wide-band Spectral Ratio (WSR) due to a decrease in scatter. This observation raises an important question: what is the best way to measure signal quality? We compare various methods that have been proposed to measure signal quality (traditional SNR, WSR, power spectral density plots, Allan variance) and discuss which provides the best tool for comparing arrival time uncertainty.
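The conventional measure in question, a short-term average of signal power divided by a long-term average, can be sketched as follows (window lengths are illustrative):

```python
import numpy as np

def sta_lta(signal, nsta, nlta):
    """STA/LTA ratio on squared amplitudes, evaluated wherever both
    windows fit; both windows end at the same sample."""
    sq = np.asarray(signal, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    m = min(len(sta), len(lta))
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)
```

Element i of the result corresponds to windows ending at sample i + nlta - 1; an arrival is flagged when the ratio exceeds a tuned threshold, which is exactly the kind of user-specified parameter the abstract argues against relying on.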
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2012-12-01
Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, form the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. 
On a global scale, we are currently building the computational framework for double-difference processing the combined parametric and waveform archives of the ISC, NEIC, and IRIS with over three million recorded earthquakes worldwide. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of such archives may thus become a routine procedure to continuously improve resolution in existing global earthquake catalogs. Results from subduction zones and aftershock sequences of recent great earthquakes demonstrate the considerable social and economic impact that high-resolution images of active faults, when available in real-time, will have in the prompt evaluation and mitigation of seismic hazards. These results also highlight the need for consistent long-term seismic monitoring and archiving of records.
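The idea behind such a correlation detector, sliding a normalized template over continuous data and flagging windows whose correlation coefficient exceeds a threshold, can be sketched as follows (a plain implementation, not the scaled detector described above):

```python
import numpy as np

def cc_detect(template, stream, threshold=0.8):
    """Normalized cross-correlation of a template event against
    continuous data; returns candidate offsets and the full cc trace."""
    t = np.asarray(template, dtype=float)
    t = t - t.mean()
    t /= np.linalg.norm(t)
    n = len(t)
    cc = np.empty(len(stream) - n + 1)
    for i in range(len(cc)):
        w = stream[i:i + n] - stream[i:i + n].mean()
        norm = np.linalg.norm(w)
        cc[i] = np.dot(t, w) / norm if norm > 0 else 0.0
    return np.nonzero(cc >= threshold)[0], cc
```

Because the correlation coefficient is insensitive to amplitude, repeats of a known event can be detected well below the noise floor that defeats an STA/LTA trigger, which is how the magnitude-unit gain cited above arises.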
Nonlinear Response Of MSSS Bridges Under Earthquake Ground Motions: Case Studies
DOT National Transportation Integrated Search
1999-10-01
This report presents the results of the second phase of a comprehensive analytical study on the seismic response of highway bridges in New Jersey. The overall objective of this phase of the study was to evaluate the nonlinear seismic response of actu...
NASA Astrophysics Data System (ADS)
Norris, J. Q.
2016-12-01
Published over 60 years ago, the Gutenberg-Richter law provides a universal frequency-magnitude distribution for natural and induced seismicity. The GR law is a two-parameter power law, with the b-value specifying the relative frequency of small and large events. For large catalogs of natural seismicity, the observed b-values are near one, while fracking-associated seismicity has observed b-values near two, indicating relatively fewer large events. We have developed a computationally inexpensive percolation model for fracking that allows us to generate large catalogs of fracking-associated seismicity. Using these catalogs, we show that different power-law fitting procedures produce different b-values for the same data set. This shows that care must be taken when determining and comparing b-values for fracking-associated seismicity.
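For context, the most widely used fitting procedure is the Aki (1965) maximum-likelihood estimator, with Utsu's correction for binned magnitudes; the paper's point is precisely that alternatives to it can return different b-values for the same catalog. A sketch:

```python
import math

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for events at or above the
    completeness magnitude m_c; dm is the magnitude bin width (Utsu
    correction), with dm=0 for continuous magnitudes."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

Least-squares fits to the cumulative or incremental frequency-magnitude curve are the usual alternatives, and they weight the rare large events differently, which is one source of the discrepancies the abstract reports.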
Locating Microseism Sources Using Spurious Arrivals in Intercontinental Noise Correlations
NASA Astrophysics Data System (ADS)
Retailleau, Lise; Boué, Pierre; Stehly, Laurent; Campillo, Michel
2017-10-01
The accuracy of Green's functions retrieved from seismic noise correlations in the microseism frequency band is limited by the uneven distribution of microseism sources at the surface of the Earth. As a result, correlation functions are often biased as compared to the expected Green's functions, and they can include spurious arrivals. These spurious arrivals are seismic arrivals that are visible on the correlation and do not belong to the theoretical impulse response. In this article, we propose to use Rayleigh wave spurious arrivals detected on correlation functions computed between European and United States seismic stations to locate microseism sources in the Atlantic Ocean. We perform a slant stack on a time distance gather of correlations obtained from an array of stations that comprises a regional deployment and a distant station. The arrival times and the apparent slowness of the spurious arrivals lead to the location of their source, which is obtained through a grid search procedure. We discuss improvements in the location through this methodology as compared to classical back projection of microseism energy. This method is interesting because it only requires an array and a distant station on each side of an ocean, conditions that can be met relatively easily.
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge of 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as the initial model, these refinements are contributed by external collaborators and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed human and computing power of the community, [2] make consistent use of prior knowledge, and [3] combine the different tomographic techniques needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
Loss modeling for pricing catastrophic bonds.
DOT National Transportation Integrated Search
2008-12-01
In the research, a loss estimation framework is presented that directly relates seismic : hazard to seismic response to damage and hence to losses. A Performance-Based Earthquake : Engineering (PBEE) approach towards assessing the seismic vulnerabili...
Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework
NASA Astrophysics Data System (ADS)
Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.
2015-12-01
Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to take a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link, enabling full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV.
The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.
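The frequency-domain modelling described above can be illustrated, in drastically simplified 1-D scalar form, by the sketch below. Everything here (grid spacing, velocity, frequency, dense direct solve) is an invented toy example, not Zephyr's actual 2.5D viscoacoustic discretization.

```python
import numpy as np

def helmholtz_1d(c, dx, freq, src_index):
    """Solve the 1-D Helmholtz equation (d2/dx2 + (omega/c)^2) u = -s for a
    point source, using a second-order finite-difference stencil and a dense
    direct solve. Attenuation (the 'visco' part) could be mimicked by making
    the velocity complex."""
    n = len(c)
    omega = 2.0 * np.pi * freq
    A = np.zeros((n, n), dtype=complex)
    for i in range(n):
        A[i, i] = -2.0 / dx**2 + (omega / c[i]) ** 2
        if i > 0:
            A[i, i - 1] = 1.0 / dx**2
        if i < n - 1:
            A[i, i + 1] = 1.0 / dx**2
    b = np.zeros(n, dtype=complex)
    b[src_index] = 1.0  # point source
    return np.linalg.solve(A, b)

# Forward solution for a homogeneous 1 km toy model at 5 Hz
u = helmholtz_1d(np.full(101, 1500.0), 10.0, 5.0, 50)
```

In a multi-frequency FWI loop, a solve of this kind is repeated per frequency and per source, which is where distributed workers become valuable.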
A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.
2012-04-01
Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source, state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We replaced demanding user-side hardware and software requirements with remote access to high-performance grid-computing facilities. As a result, data processing can be done in quasi-real time, controlled ubiquitously via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time, such as manual picking in prestack volumes or velocity spectra.
Due to its high level of automation, CRS stacking can benefit greatly from the hardware parallelism provided by the cloud deployment. The resulting outputs (post-stack section, coherence, and NMO-velocity panels) are used to generate a smooth migration-velocity model. Residual static corrections are calculated as a by-product of the stack and can be applied iteratively. As a final step, a time-migrated subsurface image is obtained by a parallelized Kirchhoff time migration scheme. Processing can be done step by step or using a graphical workflow editor that can launch a series of pipelined tasks. The status of the submitted jobs is monitored by a dedicated service. All results are stored in project directories, where they can be downloaded or viewed directly in the browser. Currently, the portal has access to three research clusters with a total of 70 nodes of 4 cores each. They are shared with four other cloud-computing applications bundled within the GRIDA3 project. To demonstrate the functionality of our "seismic cloud lab", we will present results obtained for three different types of data, all taken from hydrogeophysical studies: (1) a seismic reflection data set, made of compressional waves from explosive sources, recorded in Muravera, Sardinia; (2) a shear-wave data set from Sardinia; (3) a multi-offset Ground-Penetrating-Radar data set from Larreule, France. The presented work was funded by the government of the Autonomous Region of Sardinia and by the Italian Ministry of Research and Education.
Real-time seismic monitoring of instrumented hospital buildings
Kalkan, Erol; Fletcher, Jon Peter B.; Leith, William S.; McCarthy, William S.; Banga, Krishna
2012-01-01
In collaboration with the Department of Veterans Affairs (VA), the U.S. Geological Survey's National Strong Motion Project has recently installed sophisticated seismic monitoring systems to monitor the structural health of two hospital buildings at the Memphis VA Medical Center in Tennessee. The monitoring systems in the Bed Tower and Spinal Cord Injury buildings combine sensing technologies with an on-site computer to capture and analyze seismic performance of buildings in near-real time.
Propagation of seismic waves in tall buildings
Safak, E.
1998-01-01
A discrete-time wave propagation formulation of the seismic response of tall buildings is introduced. The building is modeled as a layered medium, similar to a layered soil medium, and is subjected to vertically propagating seismic shear waves. Soil layers and the bedrock under the foundation are incorporated in the formulation as additional layers. Seismic response is expressed in terms of the wave travel times between the layers, and the wave reflection and transmission coefficients at the layer interfaces. The equations account for the frequency-dependent filtering effects of the foundation and floor masses. The calculation of seismic response is reduced to a pair of simple finite-difference equations for each layer, which can be solved recursively starting from the bedrock. Compared to the commonly used vibration formulation, the wave propagation formulation provides several advantages, including simplified calculations, better representation of damping, ability to account for the effects of the soil layers under the foundation, and better tools for identification and damage detection from seismic records. Examples presented show the versatility of the method. © 1998 John Wiley & Sons, Ltd.
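The layered-medium picture above rests on reflection and transmission coefficients at layer interfaces. A minimal sketch of the classical normal-incidence coefficients (displacement convention) that such a recursion builds on; the impedance values are illustrative, not taken from the paper:

```python
def interface_coefficients(z_upper, z_lower):
    """Reflection and transmission coefficients (displacement convention)
    for a downgoing wave crossing from impedance z_upper into z_lower."""
    r = (z_upper - z_lower) / (z_upper + z_lower)
    t = 1.0 + r  # equivalently 2*z_upper / (z_upper + z_lower)
    return r, t

# Storeys modeled as "layers" over a stiffer soil layer (arbitrary units)
impedances = [1.0, 1.3, 1.8, 5.0]
coeffs = [interface_coefficients(a, b) for a, b in zip(impedances, impedances[1:])]
```

The paper's recursive finite-difference equations propagate waves layer by layer using such coefficients together with the inter-layer travel times.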
DOT National Transportation Integrated Search
2009-11-01
The Oregon Department of Transportation and Portland State University evaluated the seismic : vulnerability of state highway bridges in western Oregon. The study used a computer program : called REDARS2 that simulated the damage to bridges within a t...
Experimental investigation of the seismic response of bridge bearings.
DOT National Transportation Integrated Search
2013-05-01
The Illinois Department of Transportation (IDOT) commonly uses elastomeric bearings to accommodate thermal : deformations in bridges. These bearings also present an opportunity to achieve a structural response similar to isolation : during seismic ev...
CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.
2013-12-01
As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region.
For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
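The combination step sketched in the abstract (per-rupture seismogram statistics plus rupture-forecast probabilities yielding a site hazard curve) can be written in a few lines. The function name, the use of annual rupture rates, and the toy numbers below are assumptions for illustration, not CyberShake's actual code path.

```python
import numpy as np

def hazard_curve(im_levels, rupture_rates, simulated_ims):
    """Annual exceedance rate at each intensity-measure (IM) level.
    simulated_ims[i] holds the IM values computed from the synthetic
    seismograms of rupture i; rupture_rates[i] is its annual rate."""
    lam = np.zeros(len(im_levels))
    for rate, ims in zip(rupture_rates, simulated_ims):
        ims = np.asarray(ims)
        # fraction of this rupture's simulated motions exceeding each level
        frac_exceed = np.array([(ims > x).mean() for x in im_levels])
        lam += rate * frac_exceed
    return lam

# Two toy ruptures with a handful of simulated IM values each
curve = hazard_curve([0.1, 0.2, 0.4], [0.01, 0.002], [[0.15, 0.3], [0.25, 0.5]])
```

The exceedance rate is, by construction, non-increasing in the intensity level, which is the familiar downward-sloping hazard curve.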
H-fractal seismic metamaterial with broadband low-frequency bandgaps
NASA Astrophysics Data System (ADS)
Du, Qiujiao; Zeng, Yi; Xu, Yang; Yang, Hongwu; Zeng, Zuoxun
2018-03-01
The application of metamaterials in civil engineering to isolate a building by controlling the propagation of seismic waves is a substantial challenge because seismic waves, a superposition of longitudinal and shear waves, are more complex than electromagnetic and acoustic waves. In this paper, we design a broadband seismic metamaterial based on H-shaped fractal pillars and report numerical simulations of band structures for propagating seismic surface waves. A comparative study of the band structures of H-fractal seismic metamaterials at different fractal levels shows that each new level of fractal structure creates new band gaps, widens the total band gap, and shifts existing band gaps towards lower frequencies. Moreover, the vibration modes of the H-fractal seismic metamaterials are computed and analyzed to clarify the mechanism of band-gap widening. A numerical investigation of seismic surface wave propagation across a 2D array of fractal unit cells on the surface of a semi-infinite substrate demonstrates the efficiency of earthquake shielding within multiple complete band gaps.
Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs
NASA Astrophysics Data System (ADS)
Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.
2010-12-01
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mcpred(d) = ad^b+c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing a Mc map for the period 1994-2010.
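The two BMC ingredients named above can be sketched in a few lines: the distance-based prior Mc_pred(d) = a*d**b + c and an inverse-variance (Bayesian) merge of prior and locally observed Mc. The coefficients a, b, c below are placeholders, not the values calibrated for the Taiwan network.

```python
import math

def mc_prior(d_km, a=0.5, b=0.6, c=1.0):
    """Prior completeness magnitude from the distance d (km) to the
    5th nearest seismic station: Mc_pred(d) = a*d**b + c.
    Coefficients here are placeholders for illustration."""
    return a * d_km**b + c

def merge_mc(mc_pri, sigma_pri, mc_obs, sigma_obs):
    """Uncertainty-weighted (inverse-variance) merge of the prior Mc and a
    locally observed Mc; returns the posterior value and its uncertainty."""
    w_pri, w_obs = 1.0 / sigma_pri**2, 1.0 / sigma_obs**2
    mc_post = (w_pri * mc_pri + w_obs * mc_obs) / (w_pri + w_obs)
    sigma_post = math.sqrt(1.0 / (w_pri + w_obs))
    return mc_post, sigma_post
```

With equal uncertainties the merge is a plain average; a tightly constrained local estimate dominates a loose prior, and vice versa, which is what removes the gaps and outliers of purely local Mc mapping.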
Can We Estimate Injected Carbon Dioxide Prior to the Repeat Survey in 4D Seismic Monitoring Scheme?
NASA Astrophysics Data System (ADS)
Sakai, A.
2005-12-01
To mitigate global climate change, geologic sequestration by injecting carbon dioxide into aquifers and other formations is one of the most promising scenarios. Monitoring is required to verify the long-term safe storage of carbon dioxide in the subsurface. As evidenced in the oil industry, monitoring by time-lapse 3D seismic survey is the most effective way to spatially detect fluid movements and changes of pore pressure. We have conducted a 3D seismic survey onshore Japan around the RITE/METI Iwanohara carbon dioxide injection test site. The target aquifer zone is at 1100 m depth in the Pleistocene layer, 60 m thick, and the most permeable zone is approximately 12 m thick. A baseline 3D seismic survey was conducted in July-August 2003 and a monitor 3D seismic survey in July-August 2005, using a vibrating source with a 10-120 Hz sweep frequency band. Prior to the monitor survey, we evaluated the seismic data by integrating wireline logging data. As the target carbon dioxide injection layer is thin, high-resolution seismic data are required to estimate the potential spreading of injected carbon dioxide. To increase seismic resolution, a spectral enhancement method was used. The procedure consists of smoothing the seismic amplitude spectrum, computing the well-log spectrum, and constructing a matching filter between the seismic and well spectra. The filter was then applied to all seismic traces after evaluation on test traces. Synthetic seismograms from logging data were computed by extracting optimal wavelets. The fit between spectrally enhanced seismic traces and synthetic seismograms was excellent, even for deviated monitor wells. Acoustic impedance was estimated by inversion of these 3D seismic traces. In analyzing sonic, density, CMR, and other logging data, the elastic wave velocity was reconstructed by a rock physics approach after estimating compositions. Based on these models, velocity changes due to carbon dioxide injection were evaluated.
The correlation of acoustic impedance with porosity and logarithmic permeability was good; relying on this relation, together with geological constraints and inversion techniques, porosity and permeability were estimated in a 3D volume. If the carbon dioxide movement were controlled solely by permeability, the estimated permeability volume might predict the time-lapse seismic response prior to a repeat survey. We compare the estimate with the actual 4D changes and discuss related variations.
Seismic intrusion detector system
Hawk, Hervey L.; Hawley, James G.; Portlock, John M.; Scheibner, James E.
1976-01-01
A system for monitoring man-associated seismic movements within a control area including a geophone for generating an electrical signal in response to seismic movement, a bandpass amplifier and threshold detector for eliminating unwanted signals, pulse counting system for counting and storing the number of seismic movements within the area, and a monitoring system operable on command having a variable frequency oscillator generating an audio frequency signal proportional to the number of said seismic movements.
Opto-mechanical lab-on-fibre seismic sensors detected the Norcia earthquake.
Pisco, Marco; Bruno, Francesco Antonio; Galluzzo, Danilo; Nardone, Lucia; Gruca, Grzegorz; Rijnveld, Niek; Bianco, Francesca; Cutolo, Antonello; Cusano, Andrea
2018-04-27
We have designed and developed lab-on-fibre seismic sensors containing a micro-opto-mechanical cavity on the fibre tip. The mechanical cavity is designed as a double cantilever suspended on the fibre end facet and connected to a proof mass to tune its response. Ground acceleration changes the cavity length, which in turn can be detected remotely using an interferometric interrogation technique. After sensor characterization, an experimental validation was conducted at the Italian National Institute of Geophysics and Volcanology (INGV), which is responsible for seismic surveillance of the Italian territory. The fabricated sensors were used continuously for long periods to demonstrate their effectiveness as seismic accelerometers. During the tests, the fibre optic seismic accelerometers clearly detected the seismic sequence that culminated in the severe Mw 6.5 Norcia earthquake that struck central Italy on October 30, 2016. The seismic data provided by the optical sensors were analysed by specialists at the INGV. The wave traces were compared with those of state-of-the-art traditional sensors typically incorporated into the INGV seismic networks. The comparison verifies the high fidelity of the optical sensors in seismic wave detection, indicating their suitability as a novel class of seismic sensors for practical scenarios.
Lattice Boltzmann Simulation of Seismic Mobilization of Residual Oil in Sandstone
NASA Astrophysics Data System (ADS)
Guo, R.; Jiang, F.; Deng, W.
2017-12-01
Seismic stimulation is a promising technology for enhanced oil recovery. However, current mechanism studies have mainly considered single constricted tubes or idealized porous media; no study has been conducted in real reservoir porous media. We have developed a numerical simulation that uses the lattice Boltzmann method to directly calculate the characteristics of residual oil clusters and quantify seismic mobilization of residual oil in real Berea sandstone at a scale of 400 μm x 400 μm x 400 μm. The residual oil clusters will first be obtained by applying a water flooding scheme to the oil-saturated sandstone. Then, we will apply the seismic stimulation to the sandstone by converting the seismic effect into an oscillatory inertial force added to the pore fluids. This oscillatory inertial force mobilizes residual oil by overcoming the capillary force. The response of water and oil to the seismic stimulation will be observed in our simulations. Two seismic oil mobilization mechanisms will be investigated: (1) the passive response of residual oil clusters to the seismic stimulation, and (2) the resonance of oil clusters subject to low-frequency seismic stimulation. We will then discuss which mechanism should be dominant for seismic-stimulation oil recovery in practical applications.
Racking Response of Reinforced Concrete Cut and Cover Tunnel
DOT National Transportation Integrated Search
2016-01-01
Currently, the knowledge base and quantitative data sets concerning cut and cover tunnel seismic response are scarce. In this report, a large-scale experimental program is conducted to assess: i) stiffness, capacity, and potential seismically-induced...
NASA Astrophysics Data System (ADS)
Krkošková, Katarína; Papán, Daniel; Papánová, Zuzana
2017-10-01
Technical seismicity negatively affects the environment, buildings and structures. It denotes seismic shaking of unnatural origin caused by force impulses and random processes. In the Slovak Republic, the influence of vibration on buildings is evaluated according to Eurocode 8, while the Slovak Technical Standard STN 73 0036 addresses technical seismicity directly. This standard also places bridges in group "U", the group of structures significant with respect to technical seismicity. Because the standard treats this issue only briefly, case-study analysis by FEM simulation and comparison with measurement is necessary. In this article, dynamic parameters of two real bridges determined by experimental measurement and by a numerical method are compared. The first bridge (D201-00) is a scaffold bridge on road I/11 leading to the city of Čadca and is situated in the city of Žilina; it is an eleven-span concrete road bridge spanning a railway. The second bridge (M5973 Brodno), situated in part of Žilina City on road I/11, is a three-span concrete road bridge built as a box girder. The computational part includes 3D models of both bridges. The first bridge (D201-00) was modelled in the IDA Nexis software as a slab-wall model; the model outputs are natural frequencies and natural vibration modes. The second bridge (M5973 Brodno) was modelled in the VisualFEA software; the technical seismicity was represented by a force impulse applied to the model, and the model outputs are vibration displacements, velocities and accelerations. The aim of the experiments was to measure the vibration acceleration time records of the bridges, which required systematic placement of the accelerometers.
For the first bridge (D201-00), the vibration acceleration time record during a train crossing under the bridge is of interest; for the second bridge (M5973 Brodno), the time record serves to deduce the force impulse applied under the bridge. The analysis was done in the Sigview software. For the first bridge (D201-00), the analysis outputs were power spectral density values and their associated frequencies; these frequencies were compared with the natural frequency values from the computational model, revealing the influence of technical seismicity on the bridge's natural frequencies. For the second bridge (M5973 Brodno), the recorded vibration velocity time history displayed in Sigview was compared with the final vibration velocity time history from the computational model, and the two results were found to coincide.
NASA Astrophysics Data System (ADS)
Schultz, R.; Atkinson, G. M.; Eaton, D. W. S.; Gu, Y. J.; Kao, H.
2017-12-01
A sharp increase in the frequency of earthquakes near Fox Creek, Alberta began in December 2013 as a result of hydraulic fracturing completions in the Duvernay Formation. Using a newly compiled hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We find that induced earthquakes are associated with pad completions that used larger injection volumes (10^4-10^5 m^3) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have limited or insignificant correlation with the seismic response. Further findings suggest that geological susceptibilities play a prominent role in seismic productivity, as evidenced by spatial correlations in the seismicity patterns. Together, volume and geological susceptibilities account for 96% of the variability in the induced earthquake rate near Fox Creek. We suggest this result is fit by a modified Gutenberg-Richter earthquake frequency-magnitude distribution which provides a conceptual framework with which to forecast induced seismicity hazard.
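A linear scaling of induced-event counts with injected volume is often written in a seismogenic-index-style form, sketched below. This particular parameterization, and the sigma and b values, are assumptions for illustration; they are not necessarily the authors' modified Gutenberg-Richter model.

```python
def expected_events(volume_m3, mag, sigma=-2.0, b=1.0):
    """Expected number of induced events with magnitude >= mag for a given
    injected volume: N = V * 10**(sigma - b*mag). Doubling the volume
    doubles the count, i.e. productivity scales linearly with volume.
    sigma (seismogenic-index-like constant) and b are placeholder values."""
    return volume_m3 * 10.0 ** (sigma - b * mag)

# Toy example: expected M>=2 count for a 1e5 m^3 pad completion
n_m2 = expected_events(1e5, 2.0)
```

Geological susceptibility would enter such a model by letting the constant term vary spatially, consistent with the spatial correlations reported above.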
Rapid determination of the energy magnitude Me
NASA Astrophysics Data System (ADS)
di Giacomo, D.; Parolai, S.; Bormann, P.; Saul, J.; Grosser, H.; Wang, R.; Zschau, J.
2009-04-01
The magnitude of an earthquake is one of the most widely used parameters to evaluate an earthquake's damage potential. However, the many magnitude scales developed over the past years have different meanings. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well-defined physical parameter of the seismic source, namely the radiated seismic energy ES (e.g. Bormann et al., 2002): Me = 2/3(log10 ES - 4.4). Me is more suitable than the moment magnitude Mw in describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. The calculation of ES requires the integration over frequency of the squared P-wave velocity spectrum corrected for the energy loss experienced by the seismic waves along the path from the source to the receivers. To account for the frequency-dependent energy loss, we computed spectral amplitude decay functions for different frequencies by using synthetic Green's functions (Wang, 1999) based on the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions the correction of the recorded P-wave velocity spectra for the various propagation effects is performed in a rapid and robust way, and the calculation of ES, and hence of Me, can be carried out at a single station. We analyse teleseismic broadband P-wave signals in the distance range 20°-98°. We show that our procedure is suitable for implementation in rapid response systems since it can provide stable Me determinations within 10-15 minutes after the earthquake's origin time.
Indeed, we use time-variable cumulative energy windows starting 4 s after the first P-wave arrival in order to include the earthquake rupture duration, which is calculated according to Bormann and Saul (2008). We tested our procedure on a large dataset composed of about 750 globally distributed earthquakes in the Mw range 5.5-9.3 recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks. Me and Mw express two different aspects of the seismic source, and a combined use of these two magnitude scales would allow a better assessment of the tsunami and shaking potential of an earthquake. References: Bormann, P., Baumbach, M., Bock, G., Grosser, H., Choy, G. L., and Boatwright, J. (2002). Seismic sources and source parameters, in IASPEI New Manual of Seismological Observatory Practice, P. Bormann (Editor), Vol. 1, GeoForschungsZentrum, Potsdam, Chapter 3, 1-94. Bormann, P., and Saul, J. (2008). The new IASPEI standard broadband magnitude mB. Seism. Res. Lett., 79(5), 699-705. Choy, G. L., and Kirby, S. (2004). Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones. Geophys. J. Int., 159, 991-1012. Kennett, B. L. N., Engdahl, E. R., and Buland, R. (1995). Constraints on seismic velocities in the Earth from traveltimes. Geophys. J. Int., 122, 108-124. Montagner, J.-P., and Kennett, B. L. N. (1996). How to reconcile body-wave and normal-mode reference Earth models? Geophys. J. Int., 125, 229-248. Wang, R. (1999). A simple orthonormalization method for stable and efficient computation of Green's functions. Bull. Seism. Soc. Am., 89(3), 733-741.
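The Me definition quoted above is a one-liner once ES is known; a minimal sketch, taking ES in joules as in the formula given in the abstract:

```python
import math

def energy_magnitude(es_joules):
    """Energy magnitude Me = 2/3 * (log10(ES) - 4.4), with the radiated
    seismic energy ES given in joules, as quoted above."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# An event radiating 10**14.9 J of seismic energy has Me = 7.0
me = energy_magnitude(10 ** 14.9)
```

The hard part, as the abstract explains, is estimating ES itself: integrating the squared, propagation-corrected P-wave velocity spectrum over frequency.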
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are carried out by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and makes it possible to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
Tectonic styles of future earthquakes in Italy as input data for seismic hazard
NASA Astrophysics Data System (ADS)
Pondrelli, S.; Meletti, C.; Rovida, A.; Visini, F.; D'Amico, V.; Pace, B.
2017-12-01
In a recent elaboration of a new seismogenic zonation and hazard model for Italy, we tried to understand how much indication we have of the tectonic style of future earthquakes/ruptures. Using all available or recomputed seismic moment tensors for relevant seismic events (Mw from 4.5) of the last 100 yrs, first-arrival focal mechanisms for less recent earthquakes, and geological data on past activated faults, we compiled a database gathering thousands of data points over the Italian peninsula and the regions around it. After several summations of seismic moment tensors over regular grids of different dimensions and different thicknesses of the seismogenic layer, we applied the same procedure to each of the 50 area sources designed in the seismogenic zonation. The results for several seismic zones are very stable; e.g., along the southern Apennines we expect future earthquakes to be mostly extensional, although in the outer part of the chain strike-slip events are possible. In the northern part of the Apennines we also expect different, opposite tectonic styles at different hypocentral depths. In several zones characterized by a low seismic moment release, defined for the study region using 1000 yrs of catalog, the tectonic style of future earthquakes is less clear. It is worth noting that for some zones the largest possible earthquake may not be represented in the available observations. We also add to our analysis the computation of the seismic release rate, computed using a distributed completeness identified for single great events of the historical seismic catalog for Italy. All these information layers, overlapped and compared, may be used to characterize each new seismogenic zone.
Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data
NASA Astrophysics Data System (ADS)
Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan
2016-09-01
Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies, including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches, are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity models and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for the prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic data, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative, practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we are confident that significant and practical advances in this direction have been made.
Seismic velocity change and slip rate during the 2006 Guerrero (Mexico) slow slip event
NASA Astrophysics Data System (ADS)
Rivet, Diane; Radiguet, Mathilde; Campillo, Michel; Cotton, Fabrice; Shapiro, Nikolai; Krishna Singh, Shri; Kostoglodov, Vladimir
2010-05-01
We measure the temporal change of seismic velocity in the crust below the Guerrero region during the 2006 slow slip event (SSE). We use repeated cross-correlations of ambient seismic noise recorded at 26 broad-band stations of the MesoAmerica Seismic Experiment (MASE). The cross-correlations are computed over 90 days with a moving window of 10 days from January 2005 to July 2007. To ensure measurements independent of noise source variations, we only take into account the travel time change within the coda. For periods of 8 to 20 s, we observe a decrease in velocity starting in April 2006 with a maximum change of -0.3% of the initial velocity in June 2006. At these periods, the Rayleigh waves are sensitive to velocity changes down to the lower crust. On the other hand, we compute the deformation rate below the MASE array from a slip propagation model of the SSE, obtained by means of the displacement time-series of 15 continuous GPS stations. Slip initiates in the western part of the Guerrero Gap and propagates southeastward. The propagation velocity is of the order of 1 km/day. We then compare the seismic velocity change measured from continuous seismological data with the deformation rate inferred from geodetic measurements below the MASE array. We obtain a good agreement between the time of maximal seismic velocity change (July 2006) and the time of maximum deformation associated with the SSE (July to August 2006). This result shows that the long-term velocity change associated with the SSE can be detected using continuous seismic recordings. Since the SSE does not emit seismic waves that could interact with the superficial layers, the result indicates that the velocity change is due to deformation at depth.
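One common way to quantify such coda travel-time changes is the stretching technique, in which the current correlation function is resampled on a stretched time axis and compared with a reference trace; the relative stretch maximizing the correlation estimates dv/v. The abstract does not state the authors' exact measurement method, so the sketch below is a generic illustration (function name and parameters are ours):

```python
import numpy as np

def dv_over_v_stretching(reference, current, dt, eps_max=0.01, n_eps=201):
    """Estimate relative velocity change dv/v with the stretching method:
    a homogeneous velocity change stretches the coda by a factor (1 + eps).
    A grid search returns the eps that maximizes the correlation between
    the stretched current trace and the reference trace."""
    t = np.arange(len(reference)) * dt
    best_eps, best_cc = 0.0, -np.inf
    for eps in np.linspace(-eps_max, eps_max, n_eps):
        # resample the current trace onto the stretched time axis t*(1+eps)
        stretched = np.interp(t, t * (1.0 + eps), current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return best_eps, best_cc
```

Applied to 10-day stacked cross-correlations against a long-term reference stack, such a measurement yields one dv/v value per moving window.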
Anbazhagan, P; SivakumarBabu, G L; Lakshmikanthan, P; VivekAnand, K S
2016-03-01
Seismic design of landfills requires an understanding of the dynamic properties of municipal solid waste (MSW) and the dynamic site response of landfill waste during seismic events. The dynamic response of the Mavallipura landfill situated in Bangalore, India, is investigated using field measurements, laboratory studies and recorded ground motions from the intraplate region. The dynamic shear modulus values for the MSW were established on the basis of field measurements of shear wave velocities. Cyclic triaxial testing was performed on reconstituted MSW samples, and the shear modulus reduction and damping characteristics of MSW were studied. Ten ground motions were selected based on regional seismicity, and site response parameters were obtained using one-dimensional non-linear analysis in the DEEPSOIL program. The surface spectral response varied from 0.6 to 2 g and persisted only for a period of 1 s for most of the ground motions. The maximum peak ground acceleration (PGA) obtained was 0.5 g, and the minimum and maximum amplifications were 1.35 and 4.05. Amplification of the base acceleration was observed at the top surface of the landfill, underlain by a composite soil layer and bedrock, for all ground motions. Dynamic seismic properties with amplification and site response parameters for the MSW landfill in Bangalore, India, are presented in this paper. This study shows that MSW has low shear stiffness and high amplification due to loose filling and damping, which need to be accounted for in the seismic design of MSW landfills in India. © The Author(s) 2016.
Hydraulic fracturing volume is associated with induced earthquake productivity in the Duvernay play.
Schultz, R; Atkinson, G; Eaton, D W; Gu, Y J; Kao, H
2018-01-19
A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 in response to hydraulic fracturing. Using a hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We show that induced earthquakes are associated with completions that used larger injection volumes (10^4 to 10^5 cubic meters) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have an insignificant association with seismic response. Further findings suggest that geological factors play a prominent role in seismic productivity, as evidenced by spatial correlations. Together, volume and geological factors account for ~96% of the variability in the induced earthquake rate near Fox Creek. This result is quantified by a seismogenic index-modified frequency-magnitude distribution, providing a framework to forecast induced seismicity. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
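The seismogenic-index framework referenced above predicts the expected number of induced events above a magnitude threshold from the cumulative injected volume: N(≥M) = V · 10^(Σ − bM), where Σ is the seismogenic index and b the Gutenberg-Richter slope. A minimal sketch follows; the parameter values in the usage example are purely illustrative, not the paper's estimates:

```python
def expected_event_count(volume_m3, sigma, b, mag_threshold):
    """Seismogenic-index forecast of induced seismicity: expected number of
    events with magnitude >= mag_threshold for a cumulative injected volume
    (m^3). N = V * 10**(sigma - b*M), so the count scales linearly with
    volume, matching the observed linear productivity scaling."""
    return volume_m3 * 10.0 ** (sigma - b * mag_threshold)
```

With illustrative values Σ = -2 and b = 1, doubling the injected volume doubles the forecast event count, while lowering the magnitude threshold by one unit raises it tenfold.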
Seismic microzoning in the metropolitan area of Port-au-Prince - complexity of the subsoil
NASA Astrophysics Data System (ADS)
Gilles, R.; Bertil, D.; Belvaux, M.; Roulle, A.; Noury, G.; Prepetit, C.; Jean-Philippe, J.
2013-12-01
The magnitude 7.3 earthquake that struck Haiti on January 12, 2010 caused extensive damage in the areas surrounding the epicenter. This damage was aggravated by a lack of knowledge of the Haitian subsoil. To overcome this problem, the LNBTP, the BME and the BRGM agreed to implement a seismic microzonation project for the metropolitan area of Port-au-Prince, financed by the fund for the reconstruction of the country. Seismic microzonation is an important tool for the knowledge of seismic risk. It is based on the collection of geological, geotechnical and geophysical data and on measurement and site-reconnaissance campaigns at numerous sites. It describes classes of specific soils with their associated spectral responses. The objective of the microzoning is to identify and map zones that are homogeneous with respect to lithology, topography, liquefaction and ground movements. The zoning for lithological site effects identifies and maps areas with consistent geological and geomechanical properties and a homogeneous seismic response; the objective is to provide, in each area, seismic motions adapted to the ground. This zoning is done in about five steps: 1- cross-analysis of geological, geotechnical and geophysical information; 2- compilation of such information, comprising both the existing data collected and the data acquired during the project; 3- identification of homogeneous areas; 4- definition of one or more representative soil columns associated with each zone; 5- possible consolidation of areas to obtain the final seismic zoning. After analysis of all geological, geotechnical and geophysical data, 27 zone types were defined for the study of site effects. For example, for the Delmas formation, there are 5 areas with soil classes ranging from D to C. The soil columns described in the metropolitan area of Port-au-Prince are processed with the CyberQuake software, developed at the BRGM by Modaressi et al. in 1997, to calculate their response to seismic excitation at the rock level.
The seismic motion is determined from 4 accelerograms (2 real and 2 modified real records) having response spectra close to the rock acceleration spectrum. In sum, the seismic microzoning offers a better perspective for the preparation of the seismic risk prevention plan (PPRS) and for the establishment of seismic rules in the metropolitan area of Port-au-Prince.
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.
2015-12-01
The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. 
We will explain how we modified CyberShake software components, including GPU implementations and migrating from file-based communication to MPI messaging, to greatly reduce the I/O demands and node-hour requirements of CyberShake. We will also present performance metrics from CyberShake Study 15.4, and discuss challenges that producers of Big Data on open-science HPC resources face moving forward.
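At the core of a CyberShake-style calculation, the synthetic seismograms for each rupture are reduced to intensity measures and combined with rupture occurrence rates into a hazard curve. The sketch below is a schematic illustration of that final combination step, not SCEC's actual implementation (names and discretization are ours):

```python
import numpy as np

def hazard_curve(rupture_rates, rupture_ims, im_levels):
    """Probabilistic hazard curve from simulated ground motions:
    the annual exceedance rate at each intensity level is the sum over
    ruptures of (annual rupture rate) x (fraction of that rupture's
    simulated seismograms whose intensity measure exceeds the level)."""
    im_levels = np.asarray(im_levels, dtype=float)
    rates = np.zeros_like(im_levels)
    for rate, ims in zip(rupture_rates, rupture_ims):
        ims = np.asarray(ims, dtype=float)
        frac_exceed = (ims[:, None] > im_levels[None, :]).mean(axis=0)
        rates += rate * frac_exceed
    return rates
```

Evaluating such curves at hundreds of sites, each fed by hundreds of thousands of seismograms, is what makes the workflow-managed, multi-system execution described above necessary.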
NASA Astrophysics Data System (ADS)
Romanelli, Fabio; Vaccari, Franco; Altin, Giorgio; Panza, Giuliano
2016-04-01
The procedure we developed, and applied to a few relevant cases, leads to the seismic verification of a building by: a) use of a scenario-based neo-deterministic approach (NDSHA) for the calculation of the seismic input, and b) control of the numerical model of an existing building using free-vibration measurements of the real structure. The key point of this approach is the strict collaboration between the seismologist and the civil engineer, from the definition of the seismic input to the monitoring of the building's response in the calculation phase. The vibrometry study allows the engineer to adjust the computational model in the direction suggested by the experimental result of a physical measurement. Once the model has been calibrated by vibrometric analysis, one can select in the design spectrum the proper range of periods of interest for the structure. Then, the realistic values of spectral acceleration, which include the appropriate amplification obtained through the modeling of a "scenario" input applied to the final model, can be selected. Generally, but not necessarily, the "scenario" spectra lead to higher accelerations than those deduced from the national codes (i.e. NTC 2008, for Italy). The task of the verifying engineer is to ensure that the verification is both conservative and realistic. We show some examples of the application of the procedure to relevant buildings (e.g. schools) of the Trieste Province. The adoption of the scenario input has in most cases increased the number of critical elements that have to be taken into account in the design of reinforcements. However, the higher cost associated with the increased number of elements to reinforce is reasonable, especially considering the important reduction of the risk level.
Seismic waves in 3-D: from mantle asymmetries to reliable seismic hazard assessment
NASA Astrophysics Data System (ADS)
Panza, Giuliano F.; Romanelli, Fabio
2014-10-01
A global cross-section of the Earth parallel to the tectonic equator (TE) path, the great circle representing the equator of net lithosphere rotation, shows a difference in shear wave velocities between the western and eastern flanks of the three major oceanic rift basins. The low-velocity layer in the upper asthenosphere, at a depth range of 120 to 200 km, is assumed to represent the decoupling between the lithosphere and the underlying mantle. Along the TE-perturbed (TE-pert) path, a ubiquitous LVZ, about 1,000 km wide and 100 km thick, occurs in the asthenosphere. The existence of the TE-pert is a necessary prerequisite for the existence of a continuous global flow within the Earth. Ground-shaking scenarios were constructed using a scenario-based method for seismic hazard analysis (NDSHA), using realistic and duly validated synthetic time series, and generating a data bank of several thousand seismograms that account for source, propagation, and site effects. Consistent with basic self-organized criticality concepts, NDSHA permits the integration of available information provided by the most updated seismological, geological, geophysical, and geotechnical databases for the site of interest, as well as advanced physical modeling techniques, to provide a reliable and robust background for the development of a design basis for cultural heritage and civil infrastructures. Estimates of seismic hazard obtained using NDSHA and standard probabilistic approaches are compared for the Italian territory, and a case study is discussed. In order to enable a reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered, resulting in a new, very efficient, analytical procedure for computing the broadband seismic wave-field in a 3-D anelastic Earth model.
OpenFIRE - A Web GIS Service for Distributing the Finnish Reflection Experiment Datasets
NASA Astrophysics Data System (ADS)
Väkevä, Sakari; Aalto, Aleksi; Heinonen, Aku; Heikkinen, Pekka; Korja, Annakaisa
2017-04-01
The Finnish Reflection Experiment (FIRE) is a land-based deep seismic reflection survey conducted between 2001 and 2003 by a research consortium of the Universities of Helsinki and Oulu, the Geological Survey of Finland, and the Russian state-owned enterprise SpetsGeofysika. The dataset consists of 2100 kilometers of high-resolution profiles across the Archaean and Proterozoic nuclei of the Fennoscandian Shield. Although FIRE data have been available on request since 2009, the data have remained underused outside the original research consortium. The original FIRE data have now been quality-controlled: the shot gathers have been cross-checked and comprehensive errata have been compiled. The brute stacks provided by the Russian seismic contractor have been reprocessed into seismic sections and replotted. Complete documentation of the intermediate processing steps is provided, together with guidelines for setting up a computing environment and plotting the data. An open-access web service, "OpenFIRE", for visualizing and downloading the FIRE data has been created. The service includes a mobile-responsive map application capable of enriching seismic sections with data from other sources, such as open data from the National Land Survey and the Geological Survey of Finland. The AVAA team of the Finnish Open Science and Research Initiative has provided a tailored Liferay portal with the necessary web components, such as an API (Application Programming Interface) for download requests. INSPIRE (Infrastructure for Spatial Information in Europe)-compliant discovery metadata have been produced, and geospatial data will be exposed as Open Geospatial Consortium standard services. The technical guidelines of the European Plate Observing System have been followed, and the service can be considered a reference application for sharing reflection seismic data. The OpenFIRE web service is available at www.seismo.helsinki.fi/openfire
Inverting seismic data for rock physical properties; Mathematical background and application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farfour, Mohammed; Yoon, Wang Jung; Kim, Jinmo
2016-06-08
The basic concept behind seismic inversion is that mathematical relationships can be established between seismic responses and the geological formation properties that caused them. In this presentation we address some widely used seismic inversion methods for hydrocarbon reservoir identification and characterization. A successful application of inversion to a real example from a gas sand reservoir in the Boonsville field, North Central Texas, is presented. The seismic data alone were not an unambiguous indicator of reservoir facies distribution. The use of inversion removed the ambiguity and revealed clear information about the target.
Surface Wave Tomography of South China Sea from Ambient Seismic Noise and Two-station Measurements
NASA Astrophysics Data System (ADS)
Liang, W.-T.; Gung, Y.-C.
2012-04-01
We apply the ambient seismic noise cross-correlation technique as well as the two-station method to analyze the velocity structure of the South China Sea region. The dataset used in this study includes broadband waveforms recorded by the Taiwan BATS (Broadband Array in Taiwan for Seismology), Japan OHP (Ocean Hemisphere Project), Malaysian and Vietnamese seismic networks. We remove the instrument response from the daily data and filter the waveforms in various frequency bands according to the interstation distance of each station pair. Then we apply the commonly used 1-bit normalization to minimize the effect of earthquakes, instrumental irregularities, and non-stationary noise sources near the stations. With the derived daily cross-correlation function (CCF), we are able to examine the timing quality for each station pair. We then obtain Rayleigh-wave dispersion curves from the stacked CCF for each station pair. To cover the longer-period band of the dispersion curves, we adopt the two-station method to compute both the group and phase velocities of surface waves. A new surface wave tomography based on ambient seismic noise and the traditional two-station technique has thus been achieved in this study. Raypaths that travel through the Central basin show higher velocities, consistent with thin oceanic crust. On the other hand, the lower velocities between Taiwan and northern Luzon, Philippines, are mainly due to a thick accretionary prism above the Manila trench.
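The preprocessing chain described above, 1-bit normalization of daily records followed by cross-correlation and stacking, can be sketched in a few lines. This is a minimal illustration of the standard technique, not the authors' actual processing code:

```python
import numpy as np

def one_bit(trace):
    """1-bit normalization: keep only the sign of each sample, suppressing
    earthquakes, instrument glitches and strong local noise bursts."""
    return np.sign(trace)

def daily_ccf(a, b):
    """Cross-correlation (all lags) of two 1-bit-normalized daily records."""
    return np.correlate(one_bit(a), one_bit(b), mode="full")

def stacked_ccf(days_a, days_b):
    """Linear stack of daily CCFs; coherent ambient-noise energy builds up
    the empirical Green's function between the station pair."""
    return np.mean([daily_ccf(a, b) for a, b in zip(days_a, days_b)], axis=0)
```

Group-velocity dispersion is then measured from the stacked CCF, e.g. by frequency-time analysis of the symmetric (causal plus acausal) part.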
Towards Improved Considerations of Risk in Seismic Design (Plinius Medal Lecture)
NASA Astrophysics Data System (ADS)
Sullivan, T. J.
2012-04-01
The aftermath of recent earthquakes is a reminder that seismic risk is a very relevant issue for our communities. Implicit within the seismic design standards currently in place around the world is that minimum acceptable levels of seismic risk will be ensured through design in accordance with the codes. All the same, none of the design standards specify what the minimum acceptable level of seismic risk actually is. Instead, a series of deterministic limit states are set which engineers then demonstrate are satisfied for their structure, typically through the use of elastic dynamic analyses adjusted to account for non-linear response using a set of empirical correction factors. From the early nineties the seismic engineering community has begun to recognise numerous fundamental shortcomings with such seismic design procedures in modern codes. Deficiencies include the use of elastic dynamic analysis for the prediction of inelastic force distributions, the assignment of uniform behaviour factors for structural typologies irrespective of the structural proportions and expected deformation demands, and the assumption that hysteretic properties of a structure do not affect the seismic displacement demands, amongst other things. In light of this a number of possibilities have emerged for improved control of risk through seismic design, with several innovative displacement-based seismic design methods now well developed. For a specific seismic design intensity, such methods provide a more rational means of controlling the response of a structure to satisfy performance limit states. While the development of such methodologies does mark a significant step forward for the control of seismic risk, they do not, on their own, identify the seismic risk of a newly designed structure. In the U.S. a rather elaborate performance-based earthquake engineering (PBEE) framework is under development, with the aim of providing seismic loss estimates for new buildings. 
The PBEE framework consists of the following four main analysis stages: (i) probabilistic seismic hazard analysis to give the mean occurrence rate of earthquake events having an intensity greater than a threshold value, (ii) structural analysis to estimate the global structural response, given a certain value of seismic intensity, (iii) damage analysis, in which fragility functions are used to express the probability that a building component exceeds a damage state, as a function of the global structural response, (iv) loss analysis, in which the overall performance is assessed based on the damage state of all components. This final step gives estimates of the mean annual frequency with which various repair cost levels (or other decision variables) are exceeded. The realisation of this framework does suggest that risk-based seismic design is now possible. However, comparing current code approaches with the proposed PBEE framework, it becomes apparent that mainstream consulting engineers would have to go through a massive learning curve in order to apply the new procedures in practice. With this in mind, it is proposed that simplified loss-based seismic design procedures are a logical means of helping the engineering profession transition from what are largely deterministic seismic design procedures in current codes, to more rational risk-based seismic design methodologies. Examples are provided to illustrate the likely benefits of adopting loss-based seismic design approaches in practice.
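For a single component and damage state, the four-stage PBEE chain above collapses into a discrete sum over intensity bins: the occurrence rate of each intensity bin (from the hazard curve) times the fragility-derived damage probability times the repair cost. The following is a schematic sketch under that simplification; the names and discretization are ours, not those of any specific PBEE implementation:

```python
import numpy as np

def expected_annual_loss(exceedance_rates, p_damage_given_im, repair_cost):
    """Discretized PBEE chain for one component and one damage state:
    (i)  hazard: annual exceedance rates at increasing intensity levels,
    (ii-iii) fragility: P(damage | intensity) at the same levels,
    (iv) loss: repair cost if the damage state is reached.
    EAL = sum over intensity bins of (bin occurrence rate)
          * (mid-bin damage probability) * cost."""
    rates = np.asarray(exceedance_rates, dtype=float)
    p = np.asarray(p_damage_given_im, dtype=float)
    occ = -np.diff(rates)                 # occurrence rate within each bin
    p_mid = 0.5 * (p[:-1] + p[1:])        # damage probability at bin centers
    return float(np.sum(occ * p_mid) * repair_cost)
```

Summing such terms over all components and damage states yields the building-level loss estimates that the framework targets.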
Structural vibration passive control and economic analysis of a high-rise building in Beijing
NASA Astrophysics Data System (ADS)
Chen, Yongqi; Cao, Tiezhu; Ma, Liangzhe; Luo, Chaoying
2009-12-01
Performance analysis of the Pangu Plaza under earthquake and wind loads is described in this paper. The plaza is a 39-story steel high-rise building, 191 m high, located in Beijing close to the 2008 Olympic main stadium. It has both fluid viscous dampers (FVDs) and buckling-restrained braces, also known as unbonded braces (BRB or UBB), installed. An iterative procedure was adopted in its design and analysis for optimization. Results from the seismic response analysis in the horizontal and vertical directions show that the FVDs are highly effective in reducing the response of both the main structure and the secondary system. A comparative analysis of structural seismic performance and economic impact was conducted using traditional methods, i.e., increased size of steel columns and beams and/or use of an increased number of seismic braces, versus using FVDs. Both the structural response and economic analyses show that using FVDs to absorb seismic energy not only satisfies the Chinese seismic design code for a “rare” earthquake, but is also the most economical way to improve seismic performance, both for the one-time direct investment and for long-term maintenance.
Application of USNRC NUREG/CR-6661 and draft DG-1108 to evolutionary and advanced reactor designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang 'Apollo', Chen
2006-07-01
For the seismic design of evolutionary and advanced nuclear reactor power plants, there are definite financial advantages in the application of USNRC NUREG/CR-6661 and draft Regulatory Guide DG-1108. NUREG/CR-6661, 'Benchmark Program for the Evaluation of Methods to Analyze Non-Classically Damped Coupled Systems', was prepared by Brookhaven National Laboratory (BNL) for the USNRC, and Draft Regulatory Guide DG-1108 is the proposed revision to the current Regulatory Guide (RG) 1.92, Revision 1, 'Combining Modal Responses and Spatial Components in Seismic Response Analysis'. The draft Regulatory Guide DG-1108 is available at http://members.cox.net/apolloconsulting, which also provides a link to the USNRC ADAMS site to search for NUREG/CR-6661 in text or image file format. The draft Regulatory Guide DG-1108 removes unnecessary conservatism in the modal combinations for closely spaced modes in seismic response spectrum analysis. Its application will be very helpful in coupled seismic analysis of structures and heavy equipment to reduce seismic responses, and in piping system seismic design. In the NUREG/CR-6661 benchmark program, which investigated coupled seismic analysis of structures and equipment or piping systems with different damping values, three of the four participants applied the complex mode solution method to handle different damping values for structures, equipment, and piping systems. The fourth participant applied the classical normal mode method with equivalent weighted damping values to handle differences in structural, equipment, and piping system damping values. Coupled analysis will reduce the equipment responses when the equipment or piping system and the structure are in or close to resonance. However, this reduction in responses occurs only if the realistic DG-1108 modal response combination method is applied, because closely spaced modes will be produced when the structure and equipment or piping systems are in or close to resonance.
Otherwise, the conservatism in the current Regulatory Guide 1.92, Revision 1, will overshadow the advantage of coupled analysis. All four participants applied the realistic modal combination method of DG-1108. Consequently, more realistic and reduced responses were obtained. (authors)
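The closely-spaced-modes issue at the heart of these combination rules can be illustrated with the Complete Quadratic Combination (CQC) rule, whose cross-modal correlation coefficients approach 1 as modal frequencies approach each other. The sketch below uses the standard equal-damping CQC coefficients as an illustration; the exact combination rules in RG 1.92 / DG-1108 differ in detail:

```python
import numpy as np

def cqc(peak_responses, freqs, damping):
    """Complete Quadratic Combination of peak modal responses R_i:
    R = sqrt( sum_ij rho_ij R_i R_j ), with the equal-damping correlation
    coefficient rho_ij = 8 z^2 (1+r) r^1.5 / ((1-r^2)^2 + 4 z^2 r (1+r)^2),
    where r = w_j / w_i. For well-separated modes rho_ij -> 0 and CQC
    reduces to SRSS; for closely spaced modes rho_ij -> 1 and the peak
    responses add nearly algebraically."""
    R = np.asarray(peak_responses, dtype=float)
    w = 2.0 * np.pi * np.asarray(freqs, dtype=float)
    z = float(damping)
    r = w[None, :] / w[:, None]  # matrix of frequency ratios
    rho = (8.0 * z**2 * (1.0 + r) * r**1.5) / (
        (1.0 - r**2) ** 2 + 4.0 * z**2 * r * (1.0 + r) ** 2
    )
    return float(np.sqrt(R @ rho @ R))
```

For two unit modal responses at 1 Hz and 10 Hz with 5% damping the result is essentially the SRSS value of sqrt(2), whereas at 1 Hz and 1.01 Hz it approaches 2, which is why the treatment of closely spaced modes dominates the conservatism of a combination rule.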
A seismic reflection velocity study of a Mississippian mud-mound in the Illinois basin
NASA Astrophysics Data System (ADS)
Ranaweera, Chamila Kumari
Two mud-mounds have been reported in the Ullin limestone near, but not in, the Aden oil field in Hamilton County, Illinois. One mud-mound is in the Broughton oil field of Hamilton County 25 miles to the south of Aden. The second mud-mound is in the Johnsonville oil field in Wayne County 20 miles to the north of Aden. Seismic reflection profiles were shot in 2012 adjacent to the Aden oil field to evaluate the oil prospects and to investigate the possibility of detecting Mississippian mud-mounds near the Aden field. A feature on one of the seismic profiles was interpreted to be a mud-mound or carbonate buildup. A well drilled at the location of this interpreted structure provided digital geophysical logs and geological logs used to refine the interpretation of the seismic profiles. Geological data from the new well at Aden, in the form of drill cuttings, have been used to essentially confirm the existence of a mud-mound in the Ullin limestone at a depth of 4300 feet. Geophysical well logs from the new well near Aden were used to create 1-D computer models and synthetic seismograms for comparison to the seismic data. The reflection seismic method is widely used to aid interpreting subsurface geology. Processing seismic data is an important step in the method as a properly processed seismic section can give a better image of the subsurface geology whereas a poorly processed section could mislead the interpretation. Seismic reflections will be more accurately depicted with careful determination of seismic velocities and by carefully choosing the processing steps and parameters. Various data processing steps have been applied and parameters refined to produce improved stacked seismic records. The resulting seismic records from the Aden field area indicate a seismic response similar to what is expected from a carbonate mud-mound. One-dimensional synthetic seismograms were created using the available sonic and density logs from the well drilled near the Aden seismic lines. 
The 1-D synthetics were used by Cory Cantrell of Royal Drilling and Producing Company to identify various reflections on the seismic records. The seismic data were compared with the modeled synthetic seismograms to identify what appears to be a carbonate mud-mound within the Aden study area. No mud-mounds have been previously found in the Aden oil field. Average and interval velocities obtained from the geophysical logs of the wells drilled in the Aden area were compared with the same type of well velocities from the Broughton known mud-mound area to assess the significance of velocity variations related to the unknown mud-mound in the Aden study area. The results of the velocity study show similar trends in the wells from both areas, with velocities higher at the bottom of the wells. Another approach compared the root-mean-square velocities calculated from the sonic log of the Aden-area well with the stacking velocities obtained from the seismic data adjacent to the well.
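A 1-D synthetic seismogram of the kind described, built from sonic and density logs, is conventionally computed by converting the logs to an impedance-derived reflectivity series and convolving it with a wavelet. The sketch below (Ricker wavelet, reflectivity left in sample index rather than two-way time, for simplicity) illustrates the idea and is not the actual processing flow used in the study:

```python
import numpy as np

def reflectivity(velocity, density):
    """Normal-incidence reflection coefficients from acoustic impedance
    Z = velocity * density: rc = (Z2 - Z1) / (Z2 + Z1) at each interface."""
    z = np.asarray(velocity, dtype=float) * np.asarray(density, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker(f0, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f0 (Hz)."""
    t = np.arange(-length / 2.0, length / 2.0, dt)
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(velocity, density, f0=30.0, dt=0.002):
    """1-D synthetic: convolve the reflectivity series with a Ricker wavelet
    (assumes the log is longer than the wavelet)."""
    return np.convolve(reflectivity(velocity, density), ricker(f0, dt), mode="same")
```

Such a trace is then compared with the recorded seismic data at the well location to tie reflections to stratigraphic horizons.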
Quantifying the similarity of seismic polarizations
NASA Astrophysics Data System (ADS)
Jones, Joshua P.; Eaton, David W.; Caffagni, Enrico
2016-02-01
Assessing the similarities of seismic attributes can help identify tremor, low signal-to-noise (S/N) signals and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values or known seismic sources. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in S/N ratio. Measuring polarization similarity allows easy identification of site noise and sensor misalignment and can help identify coherent noise and emergent or low S/N phase arrivals. Dissimilar azimuths during phase arrivals indicate misaligned horizontal components, dissimilar incidence angles during phase arrivals indicate misaligned vertical components and dissimilar linear polarization may indicate a secondary noise source. Using records of the Mw = 8.3 Sea of Okhotsk earthquake, from Canadian National Seismic Network broad-band sensors in British Columbia and Yukon Territory, Canada, and a vertical borehole array at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Time-frequency polarization similarities of borehole data suggest that a coherent noise source may have persisted above 8 Hz several months after peak resource extraction from a `flowback' type hydraulic fracture.
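As an illustration of the histogram-based comparison described above, one standard similarity measure from the image-processing literature is histogram intersection. The paper's exact weighting scheme and distance metric may differ, so treat the following as a hedged sketch (function names and the S/N-based weighting are our assumptions):

```python
import numpy as np

def polarization_histogram(attribute, weights, bins, value_range):
    """Weighted, normalized histogram of a polarization attribute (e.g.
    azimuth in degrees) over a short time window; the weights can encode
    signal-to-noise ratio so noisy samples contribute less."""
    h, _ = np.histogram(attribute, bins=bins, range=value_range, weights=weights)
    s = h.sum()
    return h / s if s > 0 else h

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for normalized histograms: 1 when identical,
    0 when the histograms occupy disjoint bins."""
    return float(np.minimum(h1, h2).sum())
```

Comparing such windowed histograms across stations then flags dissimilar azimuths or incidence angles, e.g. from misaligned components or a secondary noise source.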
NASA Technical Reports Server (NTRS)
Cousineau, R. D.; Crook, R., Jr.; Leeds, D. J.
1985-01-01
This report discusses a geological and seismological investigation of the NASA Ames-Dryden Flight Research Facility site at Edwards, California. Results are presented as seismic design criteria, with design values of the pertinent ground motion parameters, probability of recurrence, and recommended analogous time-history accelerograms with their corresponding spectra. The recommendations apply specifically to the Dryden site and should not be extrapolated to other sites with varying foundation and geologic conditions or different seismic environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toprak, A. Emre; Guelay, F. Guelten; Ruge, Peter
2008-07-08
Determination of the seismic performance of existing buildings has become one of the key topics in structural analysis after recent earthquakes (i.e. the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995 and the Northridge earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, selecting an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building that was exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, and it does not have any significant structural irregularities. The rectangular plan dimensions are 16.40 m × 7.80 m = 127.92 m², with five spans in the x direction and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake, and a retrofitting process was suggested by the authorities, adding shear walls to the system. The computations show that linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar collapse performance levels for the critical storey of the structure.
The computed base shear value according to Eurocode 8 is much higher than that required by the Turkish Earthquake Code, although the selected ground conditions represent the same characteristics. The main reason is that the ordinate of the horizontal elastic response spectrum in Eurocode 8 is increased by the soil factor. In the TEC'07 force-based linear assessment, the seismic demands at cross-sections are checked against residual moment capacities, whereas for Eurocode safety verifications the chord rotations of primary ductile elements must be checked. On the other hand, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.
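The soil-factor amplification discussed above appears directly in the Eurocode 8 Type 1 horizontal elastic spectrum. Below is a minimal sketch of that spectral shape; the ground-type parameters used as defaults are textbook EC8 Type 1 values for ground type C, the 0.28 g peak ground acceleration is taken from the abstract, and this is not the authors' code:

```python
def ec8_elastic_spectrum(T, a_g, S=1.15, T_B=0.20, T_C=0.60, T_D=2.00, eta=1.0):
    """Eurocode 8 Type 1 horizontal elastic response spectrum Se(T), in the
    same units as a_g; S is the soil factor, eta the damping correction."""
    if T <= T_B:
        return a_g * S * (1.0 + (T / T_B) * (eta * 2.5 - 1.0))
    if T <= T_C:
        return a_g * S * eta * 2.5          # constant-acceleration plateau
    if T <= T_D:
        return a_g * S * eta * 2.5 * T_C / T
    return a_g * S * eta * 2.5 * T_C * T_D / T ** 2

# With S > 1 the whole spectrum, including the plateau, scales up, which is
# why the Eurocode base shear exceeds the TEC'07 value for the same record.
a_g = 0.28 * 9.81                            # m/s^2, from the abstract
plateau = ec8_elastic_spectrum(0.4, a_g)     # plateau ordinate for ground type C
```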
NASA Astrophysics Data System (ADS)
Sollberger, David; Schmelzbach, Cedric; Robertsson, Johan O. A.; Greenhalgh, Stewart A.; Nakamura, Yosio; Khan, Amir
2016-10-01
Enigmatic lunar seismograms recorded during the Apollo 17 mission in 1972 have so far precluded the identification of shear-wave arrivals and hence the construction of a comprehensive elastic model of the shallow lunar subsurface. Here, for the first time, we extract shear-wave information from the Apollo active seismic data using a novel waveform analysis technique based on spatial seismic wavefield gradients. The star-like recording geometry of the active seismic experiment lends itself surprisingly well to computing spatial wavefield gradients and rotational ground motion as a function of time. These observables, which are new to seismic exploration in general, allowed us to identify shear waves in the complex lunar seismograms and to derive a new model of seismic compressional and shear-wave velocities in the shallow lunar crust, critical to understanding its lithology and constitution, and its impact on other geophysical investigations of the Moon's deep interior.
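The wavefield-gradient observables described above can be illustrated with a plane-fit estimator: displacements recorded across a small array are fit with u = a + b·x + c·y, and the horizontal gradients of the two horizontal components give the rotation about the vertical axis. This is a generic sketch of the gradient-estimation idea, not the authors' algorithm:

```python
import numpy as np

def horizontal_gradients(x, y, u):
    """Least-squares fit u = a + b*x + c*y to one displacement component
    recorded at array stations (x, y); returns (du/dx, du/dy)."""
    G = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(G, u, rcond=None)
    return coef[1], coef[2]

def vertical_rotation(x, y, ux, uy):
    """Rotation about the vertical axis from horizontal gradients:
    omega_z = 0.5 * (duy/dx - dux/dy)."""
    duy_dx, _ = horizontal_gradients(x, y, uy)
    _, dux_dy = horizontal_gradients(x, y, ux)
    return 0.5 * (duy_dx - dux_dy)
```

Applied sample-by-sample to array seismograms, this yields gradient and rotation time series of the kind exploited in the study.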
Seismic activity monitoring in the Izvorul Muntelui dam region
NASA Astrophysics Data System (ADS)
Borleanu, Felix; Placinta, Anca Otilia; Popa, Mihaela; Moldovan, Iren Adelin; Popescu, Emilia
2016-04-01
Earthquake occurrences near artificial water reservoirs are caused by stress variations due to the weight of the water, weakening of fractures or faults, and increasing pore pressure in crustal rocks. In the present study we aim to investigate how the Izvorul Muntelui dam, located in the Eastern Carpathians, influences local seismicity. For this purpose we selected, from the seismic bulletins computed at the National Data Center of the National Institute for Earth Physics, Romania, crustal events that occurred between 984 and 2015 within a range of 0.3 deg around the artificial lake. Subsequently, to improve the seismic monitoring of the region, we applied a cross-correlation detector to the continuous recordings of the Bicaz (BIZ) seismic station. Besides the tectonic events, we detected sources within this region that periodically generate artificial events. We could not establish a direct correlation between the water-level variations and the natural seismicity of the investigated area.
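A cross-correlation detector of the kind applied to the Bicaz recordings slides a template waveform along the continuous data and flags samples where the normalized correlation exceeds a threshold. A minimal single-channel sketch (an operational detector would add multi-channel stacking, peak grouping and a noise-adaptive threshold):

```python
import numpy as np

def cc_detect(data, template, threshold=0.8):
    """Normalized cross-correlation of a waveform template with continuous
    data; returns the correlation series and the sample indices where it
    meets the threshold."""
    n = len(template)
    tn = (template - template.mean()) / (n * template.std())
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot(tn, w - w.mean()) / s
    return cc, np.flatnonzero(cc >= threshold)
```

The normalization keeps the correlation in [-1, 1], so a fixed threshold is meaningful regardless of event amplitude.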
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows separating source and path effects. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods to estimate local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search compared to conventional linear methods. Using velocity records of the VEOX Network, collected from August 2007 to March 2009, the source, path and site parameters corresponding to the amplitude spectra of the S wave of the velocity records are estimated. We can establish that the parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work by several authors.
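The global optimizers named above share a simple core. As an illustration only (not the authors' implementation), a generic simulated-annealing loop that could drive a spectral-misfit function looks like:

```python
import math
import random

def simulated_annealing(misfit, x0, step, n_iter=5000, t0=1.0, seed=1):
    """Generic simulated annealing: perturb the parameter vector, accept
    uphill moves with Boltzmann probability, cool geometrically, and track
    the best model seen. Illustrative settings, not tuned for any data."""
    rng = random.Random(seed)
    x, fx = list(x0), misfit(x0)
    best, fbest = list(x), fx
    T = t0
    for _ in range(n_iter):
        cand = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        T *= 0.999
    return best, fbest
```

In a site-response setting, `misfit` would measure the distance between observed and modeled S-wave amplitude spectra over all stations and events simultaneously.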
A Parametric Study of Nonlinear Seismic Response Analysis of Transmission Line Structures
Wang, Yanming; Yi, Zhenhua
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled, based on a real project, by cable elements that account for the nonlinearity of the cables. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time-history analysis method. The results show that multicomponent seismic excitations should be considered, but that the correlations among multicomponent ground motions can be neglected. The wave-passage effect has a significant influence on the responses of the cables. The degree of coherency loss has little influence on the response of the cables, but the responses are affected significantly by the presence of coherency loss. The responses of the cables change little as the difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should be considered in the seismic design of transmission line structures. PMID:25133215
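The wave-passage effect enters such analyses as a deterministic time delay d/v_app between supports, where d is the separation along the propagation direction and v_app the apparent wave velocity. A sketch of applying those delays to a base record (coherency loss and site effects, also studied in the paper, would require additional filtering and are omitted here):

```python
import numpy as np

def delayed_motions(accel, dt, distances, v_app):
    """Support motions obtained by applying wave-passage delays
    tau_i = d_i / v_app to a single base acceleration record sampled at dt;
    samples before the wave arrives are zero."""
    t = np.arange(len(accel)) * dt
    return np.array([np.interp(t - d / v_app, t, accel, left=0.0)
                     for d in distances])
```

Lower apparent velocities produce larger inter-support delays, which is why the wave-passage effect matters for long-span, multi-support structures like transmission lines.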
Analysis of the Earthquake Impact towards water-based fire extinguishing system
NASA Astrophysics Data System (ADS)
Lee, J.; Hur, M.; Lee, K.
2015-09-01
Fire-extinguishing systems installed in buildings are subject to separate performance requirements when an earthquake occurs: they must keep functioning as extinguishing systems before the building collapses during the earthquake. In particular, automatic sprinkler systems must remain watertight, with no damage to piping, even after a massive earthquake. In this study, shaking-table experiments were carried out to assess the earthquake impact on water-based fire-extinguishing piping installed in a building. Seismic construction measures were applied to the test structure for the water-based extinguishing system step by step, and the earthquake response of the extinguishing piping was measured in each configuration. The magnitude of the acceleration applied by the table vibration and the resulting displacements were measured and compared with the response data of the piping, from which the need for seismic strengthening of the extinguishing water piping was analyzed. Seismic design categories (SDC) are defined for four groups of building structures according to the seismic criteria (KBC 2009), designed according to the importance group and the earthquake seismic intensity. The seismic analysis indicates that, in the event of a real earthquake, the current fire-fighting facilities in buildings of seismic design categories A and B already provide the required seismic performance. For buildings of seismic design categories C and D, a seismic retrofit design at the level required to preserve the extinguishing function of the installed systems is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zucca, J J; Walter, W R; Rodgers, A J
2008-11-19
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system.
In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags contemporary research. Current monitoring practice is to use relatively simple Earth models that generally afford analytical prediction of seismic observables (see Examples of Current Monitoring Practice below). Empirical relationships or corrections to predictions are often used to account for unmodeled phenomena, such as the generation of S-waves from explosions or the effect of 3-dimensional Earth structure on wave propagation. This approach produces fast and accurate predictions in areas where empirical observations are available. However, accuracy may diminish away from empirical data. Further, much of the physics is wrapped into an empirical relationship or correction, which limits the ability to fully understand the physical processes underlying the seismic observation. Every generation of seismology researchers works toward quantitative results, with leaders who are active at or near the forefront of what has been computationally possible. While recognizing that only a 3-dimensional model can capture the full physics of seismic wave generation and propagation in the Earth, computational seismology has, until recently, been limited to simplifying model parameterizations (e.g. 1D Earth models) that lead to efficient algorithms. What is different today is the fact that the largest and fastest machines are at last capable of evaluating the effects of generalized 3D Earth structure, at levels of detail that improve significantly over past efforts, with potentially wide application. Advances in numerical methods to compute travel times and complete seismograms for 3D models are enabling new ways to interpret available data. 
This includes algorithms such as the Fast Marching Method (Rawlinson and Sambridge, 2004) for travel time calculations and full waveform methods such as the spectral element method (SEM; Komatitsch et al., 2002, Tromp et al., 2005), higher order Galerkin methods (Kaser and Dumbser, 2006; Dumbser and Kaser, 2006) and advances in more traditional Cartesian finite difference methods (e.g. Pitarka, 1999; Nilsson et al., 2007). The ability to compute seismic observables using a 3D model is only half of the challenge; models must be developed that accurately represent true Earth structure. Indeed, advances in seismic imaging have followed improvements in 3D computing capability (e.g. Tromp et al., 2005; Rawlinson and Urvoy, 2006). Advances in seismic imaging methods have been fueled in part by theoretical developments and the introduction of novel approaches for combining different seismological observables, both of which can increase the sensitivity of observations to Earth structure. Examples of such developments are finite-frequency sensitivity kernels for body-wave tomography (e.g. Marquering et al., 1998; Montelli et al., 2004) and joint inversion of receiver functions and surface wave group velocities (e.g. Julia et al., 2000).
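Of the travel-time algorithms cited, the Fast Marching Method is compact enough to sketch. Below is a textbook first-order 2-D eikonal solver (|grad T| = 1/speed) using the standard upwind quadratic update; it is illustrative only and far simpler than production 3-D implementations:

```python
import heapq
import numpy as np

def fast_marching(speed, h, src):
    """First-order fast marching on a 2-D grid with spacing h;
    src = (i, j) is the source node. Returns travel times T."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    done = np.zeros((ny, nx), bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            i2, j2 = i + di, j + dj
            if not (0 <= i2 < ny and 0 <= j2 < nx) or done[i2, j2]:
                continue
            # smallest known neighbour value along each grid axis
            tx = min(T[i2, j2 - 1] if j2 > 0 else np.inf,
                     T[i2, j2 + 1] if j2 < nx - 1 else np.inf)
            ty = min(T[i2 - 1, j2] if i2 > 0 else np.inf,
                     T[i2 + 1, j2] if i2 < ny - 1 else np.inf)
            a, b = sorted((tx, ty))
            f = h / speed[i2, j2]
            if b - a >= f:                 # one-sided (upwind) update
                t_new = a + f
            else:                          # two-sided quadratic update
                t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
            if t_new < T[i2, j2]:
                T[i2, j2] = t_new
                heapq.heappush(heap, (t_new, (i2, j2)))
    return T
```

In a homogeneous medium the computed times match the true distance along the grid axes and overestimate slightly along diagonals, the well-known first-order source-singularity error.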
An automatic tsunami warning system: TREMORS application in Europe
NASA Astrophysics Data System (ADS)
Reymond, D.; Robert, S.; Thomas, Y.; Schindelé, F.
1996-03-01
An integrated system named TREMORS (Tsunami Risk Evaluation through seismic Moment of a Real-time System) has been installed at the EVORA station in Portugal, a country that has been affected by historical tsunamis. The system is based on a three-component long-period seismic station linked to an IBM-compatible PC running specific software. The goals of the system are the following: detect earthquakes, locate them, compute their seismic moment, and issue a seismic warning. The warnings are based on the seismic moment estimation, and all processing is performed automatically. The aim of this study is to check the quality of the estimation of the main parameters of interest for tsunami warning: the location, which depends on azimuth and distance, and finally the seismic moment, M0, which controls the earthquake size. The sine qua non condition for obtaining an automatic location is that the three main seismic phases P, S and R be visible. This study gives satisfying results (automatic analysis): errors of ±5° in azimuth and epicentral distance, and a standard deviation of less than a factor of 2 for the seismic moment M0.
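Two of the quantities such a system estimates have simple closed forms worth recording: epicentral distance from the S-minus-P arrival-time difference under a uniform-velocity assumption, and moment magnitude from the scalar seismic moment. The crustal velocities below are illustrative defaults, not the values used by TREMORS:

```python
import math

def epicentral_distance_km(ts_minus_tp, vp=6.0, vs=3.5):
    """Distance from the S-P time (s), assuming straight rays and uniform
    P and S velocities (km/s): d/vs - d/vp = dt  =>  d = dt*vp*vs/(vp-vs)."""
    return ts_minus_tp * vp * vs / (vp - vs)

def moment_magnitude(m0):
    """Moment magnitude from scalar seismic moment M0 in N*m
    (IASPEI standard): Mw = (log10 M0 - 9.1) / 1.5."""
    return (math.log10(m0) - 9.1) / 1.5
```

With the default velocities each second of S-P time corresponds to 8.4 km of epicentral distance.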
Scenarios for Evolving Seismic Crises: Possible Communication Strategies
NASA Astrophysics Data System (ADS)
Steacy, S.
2015-12-01
Recent advances in operational earthquake forecasting mean that we are very close to being able to confidently compute changes in earthquake probability as seismic crises develop. For instance, we now have statistical models such as ETAS and STEP which demonstrate considerable skill in forecasting earthquake rates, and recent advances in Coulomb-based models are also showing much promise. Communicating changes in earthquake probability is likely to be very difficult, however, as the absolute probability of a damaging event is likely to remain quite small despite a significant increase in the relative value. Here, we use a hybrid Coulomb/statistical model to compute probability changes for a series of earthquake scenarios in New Zealand. We discuss the strengths and limitations of the forecasts and suggest a number of possible mechanisms that might be used to communicate results in an actual developing seismic crisis.
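The ETAS model mentioned above expresses the earthquake rate as a background term plus Omori-law aftershock contributions from every past event, scaled by magnitude. A minimal sketch of the conditional intensity with illustrative (not fitted) parameters:

```python
import math

def etas_rate(t, events, mu=0.05, K=0.02, c=0.01, p=1.1, alpha=1.0, m0=3.0):
    """ETAS conditional intensity at time t (days) given past (t_i, m_i):
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p.
    All parameter values here are illustrative, not fitted to any catalog."""
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(alpha * (mi - m0)) / (t - ti + c) ** p
    return rate
```

The rate spikes immediately after a large event and decays as a power law, which is exactly why relative probability gains can be large while absolute probabilities of damaging events stay small.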
Seismic low-frequency-based calculation of reservoir fluid mobility and its applications
NASA Astrophysics Data System (ADS)
Chen, Xue-Hua; He, Zhen-Hua; Zhu, Si-Xin; Liu, Wei; Zhong, Wen-Li
2012-06-01
The low-frequency content of seismic signals contains information related to reservoir fluid mobility. Based on the asymptotic analysis theory of frequency-dependent reflectivity from a fluid-saturated poroelastic medium, we derive a computational implementation of reservoir fluid mobility and present the determination of the optimal frequency in the implementation. We then calculate the reservoir fluid mobility using the optimal-frequency instantaneous spectra at the low-frequency end of the seismic spectrum. The methodology is applied to synthetic seismic data from a permeable gas-bearing reservoir model and to real land and marine seismic data. The results demonstrate that the fluid mobility performs very well in imaging gas reservoirs. It is feasible to detect the location and spatial distribution of gas reservoirs and to reduce the non-uniqueness and uncertainty in fluid identification.
New approach to detect seismic surface waves in 1Hz-sampled GPS time series
Houlié, N.; Occhipinti, G.; Blanchard, T.; Shapiro, N.; Lognonné, P.; Murakami, M.
2011-01-01
Recently, co-seismic source characterization based on GPS measurements has been carried out in the near- and far-field with remarkable results. However, the accuracy of the ground displacement measurement inferred from GPS phase residuals still depends on the distribution of satellites in the sky. We test here a method, based on double-difference (DD) computations of the Line of Sight (LOS), that allows detection of 3D co-seismic ground shaking. The DD method is quasi-analytically free of most of the intrinsic errors affecting GPS measurements. The seismic waves presented in this study produced DD amplitudes 4 and 7 times stronger than the background noise. The method is benchmarked using the GEONET GPS stations that recorded the Hokkaido earthquake (2003 September 25, Mw = 8.3). PMID:22355563
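The double-difference construction is the key cancellation: differencing between receivers removes satellite clock errors, and differencing between satellites removes receiver clock errors, leaving the differential ground-motion signal. A schematic version (real processing works on carrier-phase observables with geometry and atmospheric terms; only the bookkeeping is shown here):

```python
def double_difference(phase, sta_a, sta_b, sat_i, sat_j):
    """Between-receiver, between-satellite double difference of phase/LOS
    series; `phase` maps (station, satellite) -> list of samples.
    Receiver and satellite clock terms cancel identically."""
    a_i, a_j = phase[(sta_a, sat_i)], phase[(sta_a, sat_j)]
    b_i, b_j = phase[(sta_b, sat_i)], phase[(sta_b, sat_j)]
    return [(ai - bi) - (aj - bj)
            for ai, bi, aj, bj in zip(a_i, b_i, a_j, b_j)]
```

In the synthetic check below, clock errors are added to every pair and a ground-motion signal to a single station-satellite pair; the DD series recovers the signal exactly (projection differences between satellite lines of sight are ignored in this toy setup).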
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoehler, M; McCallen, D; Noble, C
The analysis, and subsequent retrofit, of concrete arch bridges during recent years has relied heavily on the use of computational simulation. For seismic analysis in particular, computer simulation, typically utilizing linear approximations of structural behavior, has become standard practice. This report presents the results of a comprehensive study of the significance of model sophistication (i.e. linear vs. nonlinear) and pertinent modeling assumptions on the dynamic response of concrete arch bridges. The study uses the Bixby Creek Bridge, located in California, as a case study. In addition to presenting general recommendations for analysis of this class of structures, this report provides an independent evaluation of the proposed seismic retrofit for the Bixby Creek Bridge. Results from the study clearly illustrate a reduction of displacement drifts and redistribution of member forces brought on by the inclusion of material nonlinearity. The analyses demonstrate that accurate modeling of expansion joints, for the Bixby Creek Bridge in particular, is critical to achieve representative modal and transient behavior. The inclusion of near-field displacement pulses in ground motion records was shown to significantly increase demand on the relatively softer, longer period Bixby Creek Bridge arch. Stiffer, shorter period arches, however, are more likely susceptible to variable support motions arising from the canyon topography typical for this class of bridges.
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
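Fault roughness of the kind used in such simulations is commonly generated as a self-affine random profile whose power spectrum follows P(k) ∝ k^-(1+2H), with H the Hurst exponent. A 1-D spectral-synthesis sketch with illustrative parameters (not the authors' specific roughness model):

```python
import numpy as np

def rough_profile(n, dx, hurst=0.8, rms=1.0, seed=0):
    """Self-affine 1-D roughness profile by spectral synthesis: amplitude
    spectrum ~ k**-(0.5 + H) (i.e. power ~ k**-(1 + 2H)), random phases,
    zero mean, scaled to a target RMS deviation."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, dx)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(0.5 + hurst))      # leave DC at zero => zero mean
    phase = rng.uniform(0.0, 2.0 * np.pi, len(k))
    z = np.fft.irfft(amp * np.exp(1j * phase), n)
    return z * (rms / z.std())
```

Smaller Hurst exponents put relatively more power at short wavelengths, which in dynamic-rupture modelling translates into stronger rupture incoherence and more high-frequency radiation.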
Archaeological Graves Revealing By Means of Seismic-electric Effect
NASA Astrophysics Data System (ADS)
Boulytchov, A.
The seismic-electric effect was applied in the field to detect subsurface archaeological cultural objects. The source of seismic waves was repeated blows of a heavy hammer or powerful signals of a magnetostrictive installation; the main frequency used was 500 Hz. Having passed through a soil layer and reached the boundary between the upper clayey-sand sediments and the archaeological object, the seismic wave caused electromagnetic fields on both boundaries, which in general is due to dipole charge separation owing to an imbalance of streaming currents induced by the seismic wave on opposite sides of a boundary interface. According to the theoretical work of Pride, an electromagnetic field appears on a boundary between two layers with different physical properties during seismic wave propagation. Electric responses of the electromagnetic fields were measured at the surface by a pair of grounded dipole antennas or by one pivot and a long wire antenna acting as a capacitive pickup. The arrival times of the first series of responses correspond to the travel time of the seismic wave from the source to the boundary between the soil and clayey-sand layers. The arrival times of the second series of responses correspond to the travel time of the seismic wave from the source to the boundary of the clayey-sand layer with the archaeological object. The depths successfully investigated with the method were between 0.5 and 10 m. A similar electromagnetic field was also revealed on another type of geological structure by Mikhailov et al. in Massachusetts, but their signals registered from the two boundaries were faint and not evident compared with ours, which turned out to be clear and distinct. These seismic-electric field experiments were provided successfully for the first time on archaeological objects.
Natural regeneration on seismic lines influences movement behaviour of wolves and grizzly bears.
Finnegan, Laura; Pigeon, Karine E; Cranston, Jerome; Hebblewhite, Mark; Musiani, Marco; Neufeld, Lalenia; Schmiegelow, Fiona; Duval, Julie; Stenhouse, Gordon B
2018-01-01
Across the boreal forest of Canada, habitat disturbance is the ultimate cause of caribou (Rangifer tarandus caribou) declines. Habitat restoration is a focus of caribou recovery efforts, with the goal of finding ways to reduce predator use of disturbances, and caribou-predator encounters. One of the most pervasive disturbances within caribou ranges in Alberta, Canada is seismic lines cleared for energy exploration. Seismic lines facilitate predator movement, and although vegetation on some seismic lines is regenerating, it remains unknown whether vegetation regrowth is sufficient to alter predator response. We used Light Detection and Ranging (LiDAR) data, and GPS locations, to understand how vegetation and other attributes of seismic lines influence movements of two predators, wolves (Canis lupus) and grizzly bears (Ursus arctos). During winter, wolves moved towards seismic lines regardless of vegetation height, while during spring wolves moved towards seismic lines with higher vegetation. During summer, wolves moved towards seismic lines with lower vegetation and also moved faster near seismic lines with vegetation <0.7 m. Seismic lines with lower vegetation height were preferred by grizzly bears during spring and summer, but there was no relationship between vegetation height and grizzly bear movement rates. These results suggest that wolves use seismic lines for travel during summer, but during winter wolf movements relative to seismic lines could be influenced by factors in addition to movement efficiency, potentially enhanced access to areas frequented by ungulate prey. Grizzly bears may be using seismic lines for movement, but could also be using seismic lines as a source of vegetative food or ungulate prey. To reduce wolf movement rate, restoration could focus on seismic lines with vegetation <1 m in height.
However, our results revealed that seismic lines continue to influence wolf movement behaviour decades after they were built, and even at later stages of regeneration. Therefore, it remains unknown at what stage of natural regeneration, if any, wolves cease to respond to seismic lines. To reduce wolf response to seismic lines, active restoration tactics like blocking seismic lines and tree planting, along with management of alternate prey, could be evaluated.
Seismic Propagation in the Kuriles/Kamchatka Region
1980-07-25
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazari, Siamak; Daley, Thomas M.
2013-02-07
This study was done to assess the repeatability and uncertainty of the time-lapse VSP response to CO2 injection in the Frio formation near Houston, Texas. A workflow was built to assess the effect of time-lapse injected CO2 in two Frio brine reservoir intervals, the 'C' sand (Frio1) and the 'Blue' sand (Frio2). The time-lapse seismic amplitude variations with sensor depth for both reservoirs Frio1 and Frio2 were computed by subtracting the seismic response of the base survey from each of the two monitor seismic surveys. Source site 1 has been considered one of the best sites for evaluating the time-lapse response after injection. For site 1, the computed time-lapse NRMS levels after processing were compared to the estimated time-lapse NRMS level before processing for different control reflectors, and for the brine aquifers Frio1 and Frio2, to quantify the detectability of amplitude differences. Three survey scenarios were considered: the base survey, performed before injection; monitor1, performed after the first injection operation; and monitor2, after the second injection. The first comparison scenario was base-monitor1, the second was base-monitor2, and the third was monitor1-monitor2. We considered three 'control' reflections above the Frio to assist removal of overburden changes, and concluded that the third control reflector (CR3) is the most favorable for the first scenario in terms of NRMS response, and the first control reflector (CR1) is the most favorable for the second and third scenarios. The NRMS parameter is shown to be a useful measure to assess the effect of processing on time-lapse data. The overall NRMS for the Frio VSP data set was found to be in the range of 30% to 80% following basic processing. This could be considered an estimated baseline in assessing the utility of VSP for CO2 monitoring.
This study shows that the CO2 injection in brine reservoir Frio1 (the 'C' sand unit) does induce a relative change in amplitude response, and an amplitude change has also been detected for Frio2 (the 'Blue' sand unit), but in both cases the uncertainty, as measured by NRMS, indicates the reservoir changes are, at best, only slightly above the noise level, and often below the noise level of the overall data set.
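The NRMS repeatability measure used throughout this study has a standard definition: 200 times the RMS of the trace difference divided by the sum of the individual RMS values, so identical traces give 0, polarity-reversed traces give 200, and uncorrelated traces of equal energy give about 141. A direct implementation:

```python
import numpy as np

def nrms(base, monitor):
    """NRMS repeatability (percent) between base and monitor traces:
    200 * rms(base - monitor) / (rms(base) + rms(monitor))."""
    a = np.asarray(base, float)
    b = np.asarray(monitor, float)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))
```

On this scale the 30-80% range reported for the Frio VSP data sits well between perfect repeatability and uncorrelated noise.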
NASA Astrophysics Data System (ADS)
Heckels, R. EG; Savage, M. K.; Townend, J.
2018-05-01
Quantifying seismic velocity changes following large earthquakes can provide insights into fault healing and reloading processes. This study presents temporal velocity changes detected following the 2010 September Mw 7.1 Darfield event in Canterbury, New Zealand. We use continuous waveform data from several temporary seismic networks lying on and surrounding the Greendale Fault, with a maximum interstation distance of 156 km. Nine-component, day-long Green's functions were computed for frequencies between 0.1 and 1.0 Hz for continuous seismic records from immediately after the 2010 September 04 earthquake until 2011 January 10. Using the moving-window cross-spectral method, seismic velocity changes were calculated. Over the study period, an increase in seismic velocity of 0.14 ± 0.04 per cent was determined near the Greendale Fault, providing a new constraint on post-seismic relaxation rates in the region. A depth analysis further showed that velocity changes were confined to the uppermost 5 km of the subsurface. We attribute the observed changes to post-seismic relaxation via crack healing of the Greendale Fault and throughout the surrounding region.
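Velocity changes of this kind are measured by comparing a reference and a current correlation function. The study uses the moving-window cross-spectral method; the simpler trace-stretching method sketched below (a stand-in, not the authors' code) illustrates the same dt/t = -dv/v principle by warping the reference trace with trial velocity perturbations and keeping the best-correlating one:

```python
import numpy as np

def stretching_dvv(ref, cur, t, trials=None):
    """Grid-search estimate of a homogeneous velocity change dv/v.
    Warping ref to ref(t*(1+eps)) mimics a medium whose velocity rose by
    eps (all arrivals earlier by a factor 1/(1+eps)); the eps that best
    matches the current trace is the dv/v estimate."""
    if trials is None:
        trials = np.linspace(-0.01, 0.01, 401)
    best_cc, best_eps = -np.inf, 0.0
    for eps in trials:
        warped = np.interp(t * (1.0 + eps), t, ref)
        cc = np.corrcoef(warped, cur)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return best_eps, best_cc
```

Changes at the 0.1 percent level, as reported here, are resolvable because the time shift grows linearly with lag time in the coda.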
Bi-directional vibration control of offshore wind turbines using a 3D pendulum tuned mass damper
NASA Astrophysics Data System (ADS)
Sun, C.; Jahangiri, V.
2018-05-01
Offshore wind turbines suffer from excessive bi-directional vibrations due to wind-wave misalignment and vortex-induced vibrations. However, most existing research focuses on unidirectional vibration attenuation, which is inadequate for real applications. The present paper proposes a three-dimensional pendulum tuned mass damper (3d-PTMD) to mitigate the tower and nacelle dynamic response in the fore-aft and side-side directions. An analytical model of the wind turbine coupled with the 3d-PTMD is established wherein the interaction between the blades, the tower and the 3d-PTMD is modeled. Aerodynamic loading is computed using the Blade Element Momentum method, where Prandtl's tip loss factor and the Glauert correction are considered. The JONSWAP spectrum is adopted to generate wave data. Wave loading is computed using Morison's equation in combination with strip theory. Via a numerical search approach, the design formula of the 3d-PTMD is obtained and examined on a National Renewable Energy Lab (NREL) monopile 5 MW baseline wind turbine model under misaligned wind, wave and seismic loading. Dual linear tuned mass dampers (TMDs) deployed in the fore-aft and side-side directions are used for comparison. It is found that the 3d-PTMD with a mass ratio of 2% can improve the mitigation of the root-mean-square and peak response by around 10% compared with the dual linear TMDs in controlling the bi-directional vibration of offshore wind turbines under misaligned wind, wave and seismic loading.
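The JONSWAP spectrum used for the wave loading has a standard closed form: a Pierson-Moskowitz shape multiplied by a peak-enhancement factor gamma^r. A sketch with common textbook constants (alpha = 0.0081, gamma = 3.3; site-specific fits would use different values):

```python
import math

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """One-sided JONSWAP wave elevation spectrum S(f) [m^2 s] at frequency
    f (Hz) with peak frequency fp; alpha is the Phillips-type constant and
    gamma the peak-enhancement factor (illustrative defaults)."""
    sigma = 0.07 if f <= fp else 0.09
    r = math.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    pm = (alpha * g ** 2 * (2.0 * math.pi) ** -4 * f ** -5
          * math.exp(-1.25 * (fp / f) ** 4))   # Pierson-Moskowitz shape
    return pm * gamma ** r
```

Sampling this spectrum and superposing random-phase harmonics yields the irregular wave elevation records fed to Morison-type load models.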
NASA Astrophysics Data System (ADS)
Kwak, Sangmin; Song, Seok Goo; Kim, Geunyoung; Cho, Chang Soo; Shin, Jin Soo
2017-10-01
Using recordings of a mine collapse event (Mw 4.2) in South Korea in January 2015, we demonstrated that the phase and amplitude information of impulse response functions (IRFs) can be effectively retrieved using seismic interferometry. This event is equivalent to a single downward force at shallow depth. Using quantitative metrics, we compared three different seismic interferometry techniques—deconvolution, coherency, and cross correlation—to extract the IRFs between two distant stations with ambient seismic noise data. The azimuthal dependency of the source distribution of the ambient noise was also evaluated. We found that deconvolution is the best method for extracting IRFs from ambient seismic noise within the period band of 2-10 s. The coherency method is also effective if appropriate spectral normalization or whitening schemes are applied during the data processing.
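The three interferometry estimators compared above differ only in how the cross-spectrum is normalized. A minimal frequency-domain sketch (with an illustrative water-level regularization `eps`, not the study's processing parameters) shows that deconvolution recovers the interstation traveltime of a delayed copy of a signal:

```python
import numpy as np

def interferometry(u1, u2, method="deconvolution", eps=1e-3):
    """Impulse response estimate between two records; the three methods
    differ only in the normalization of the cross-spectrum."""
    U1, U2 = np.fft.rfft(u1), np.fft.rfft(u2)
    X = np.conj(U1) * U2
    if method == "cross-correlation":
        S = X
    elif method == "deconvolution":
        S = X / (np.abs(U1) ** 2 + eps * np.max(np.abs(U1)) ** 2)
    else:  # coherency
        S = X / (np.abs(U1) * np.abs(U2) + eps * np.max(np.abs(U1) * np.abs(U2)))
    return np.fft.irfft(S, n=len(u1))

rng = np.random.default_rng(0)
u1 = rng.standard_normal(4096)
u2 = np.roll(u1, 37)        # station 2 sees the same field 37 samples later
peak = int(np.argmax(interferometry(u1, u2, "deconvolution")))  # 37
```

In real ambient-noise processing the estimates are stacked over many time windows; this single-window case only illustrates the normalizations.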
Improving Seismic Data Accessibility and Performance Using HDF Containers
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Wang, J.; Yang, R.
2017-12-01
The performance of computational geophysical data processing and forward modelling relies on both computation and data. Significant efforts on developing new data formats and libraries have been made by the community, such as IRIS/PASSCAL and ASDF for data, and programs and utilities such as ObsPy and SPECFEM. The National Computational Infrastructure hosts a nationally significant geophysical data collection that is co-located with a high performance computing facility, providing an opportunity to investigate how to improve the data formats from both a data management and a performance point of view. This paper investigates how to enhance data usability from several perspectives: 1) propose a convention for the seismic (both active and passive) community to improve data accessibility and interoperability; 2) recommend the convention to be used in the HDF container when data are made available in PH5 or ASDF formats; 3) provide tools to convert between various seismic data formats; 4) provide performance benchmark cases using the ObsPy library and SPECFEM3D to demonstrate how different data organizations, in terms of chunking size and compression, impact performance, by comparing new data formats such as PH5 and ASDF to traditional formats such as SEGY, SEED and SAC. In this work we apply our knowledge and experience of data standards and conventions, such as CF and ACDD from the climate community, to the seismology community. The generic global attributes widely used in the climate community are combined with the existing conventions in the seismology community, such as CMT, QuakeML, StationXML and the SEGY header convention. We also extend the convention by including provenance and benchmarking records so that the user can learn the footprint of the data together with its baseline performance. In practice we convert example wide-angle reflection seismic data from SEGY to PH5 or ASDF using the ObsPy and pyasdf libraries.
It quantitatively demonstrates how the accessibility can be improved if the seismic data are stored in the HDF container.
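A minimal sketch of the kind of chunked, compressed HDF5 layout discussed above, using h5py; the dataset name, chunk shape and attribute are illustrative, not the PH5 or ASDF conventions themselves. Storing one trace per chunk lets a reader pull a single waveform without decompressing the whole file.

```python
import os
import tempfile

import h5py
import numpy as np

# illustrative layout: (traces, samples), one trace per chunk, gzip-compressed
data = np.random.default_rng(1).standard_normal((64, 10000)).astype("float32")
path = os.path.join(tempfile.mkdtemp(), "waveforms.h5")

with h5py.File(path, "w") as f:
    dset = f.create_dataset("waveforms", data=data,
                            chunks=(1, 10000),       # chunk = one trace
                            compression="gzip", compression_opts=4)
    dset.attrs["sampling_rate_hz"] = 100.0           # convention-style metadata

with h5py.File(path, "r") as f:
    trace = f["waveforms"][7]   # decompresses a single chunk, not the file
```

The chunk shape is the tuning knob the benchmarks above vary: a chunk aligned with the access pattern (here, whole traces) minimizes read amplification.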
Monitoring Instrument Performance in Regional Broadband Seismic Network Using Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Ye, F.; Lyu, S.; Lin, J.
2017-12-01
In the past ten years, the number of seismic stations has increased significantly, and regional seismic networks with advanced technology have been gradually developed all over the world. The resulting broadband data help to improve seismological research. It is important to monitor the performance of broadband instruments in a new network over a long period of time to ensure the accuracy of seismic records. Here, we propose a method that uses ambient noise data in the period range 5-25 s to monitor instrument performance and check data quality in situ. The method is based on an analysis of amplitude and phase index parameters calculated from pairwise cross-correlations among three stations, which provides multiple references for reliable error estimates. Index parameters calculated daily during a two-year observation period are evaluated to identify stations with instrument response errors in near real time. During data processing, initial instrument responses are used in place of the currently available instrument responses to simulate instrument response errors, which are then used to verify our results. We also examine the feasibility of the method using data from stations selected from USArray at different locations and analyze possible instrumental errors resulting in time shifts, which are likewise used to verify the method. Additionally, we show that instrument response errors arising from pole-zero variations can, in applications that monitor temporal variations in crustal properties, produce apparent velocity perturbations larger than the standard deviation. The results indicate that monitoring seismic instrument performance helps eliminate data pollution before analysis begins.
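The three-station redundancy idea can be sketched with synthetics: a timing or response error introduced at one station shifts its two cross-correlation delays by the same amount while the third pair is unaffected, which isolates the faulty instrument. The delays and error size below are illustrative, not values from the study.

```python
import numpy as np

def cc_delay(a, b, fs):
    """Delay of trace b relative to trace a from the cross-correlation peak."""
    cc = np.correlate(b - b.mean(), a - a.mean(), "full")
    return (np.argmax(cc) - (len(a) - 1)) / fs

fs = 100.0
rng = np.random.default_rng(2)
src = np.convolve(rng.standard_normal(8192), np.hanning(15), "same")
A, B, C = src, np.roll(src, 50), np.roll(src, 90)   # healthy network
A_bad = np.roll(A, 12)      # 0.12 s timing/response error at station A

d_ab, d_ac, d_bc = cc_delay(A, B, fs), cc_delay(A, C, fs), cc_delay(B, C, fs)
d_ab2, d_ac2 = cc_delay(A_bad, B, fs), cc_delay(A_bad, C, fs)
# A's error moves both the A-B and A-C delays by -0.12 s; B-C is untouched,
# so comparing the three pairs points at station A.
```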
Using Building Seismic Strong-Motion Data to Quantify Urban Blast Pressure Fields
NASA Astrophysics Data System (ADS)
Massari, A.; Kohler, M. D.; Heaton, T. H.; Kanamori, H.; Hauksson, E.; Clayton, R. W.; Guy, R.; Bunn, J.; Chandy, M.
2015-12-01
The use of building vibrations to measure blast wave propagation in a city is examined in this case study. The Exxon Mobil Corp. oil refinery in Torrance, California experienced an explosion on February 18, 2015, causing ground shaking equivalent to a magnitude 1.9 earthquake. The impulse response for the source was computed from Southern California Seismic Network data for a multi-orthogonal force system with a value of 2×10⁵ kN vertically downward. The pressure wave excited by the explosion traveled through the city of Los Angeles and was detected by a dense accelerometer array in a 52-story building in downtown Los Angeles, 22.8 km from the explosion. The array is part of the Community Seismic Network (CSN) and consists of three-component class-C MEMS sensors located on each floor of the building. The detection was verified by the nearly simultaneous arrival times of acceleration pulses on multiple floors of the building, corresponding to an average wave speed near the speed of sound in air. The pressure wave peak magnitude from the air blast was determined using accelerometer data collected on every floor of the building coupled with the elastic response of the structure as a whole. Making use of high-fidelity finite element modeling of the building, validated by previous low-level seismicity and ambient noise data, a procedure is outlined for pressure wave detection and quantification in well-instrumented buildings. This case study for a 52-story building instrumented by the CSN acts as a proxy for blast wave quantification in dense urban environments. This type of information can be used to understand the flow of blast waves through a cityscape as well as to enhance procedures for estimating blast source magnitude. Better understanding of the propagation of pressure waves in urban environments will lead to the development of improved countermeasures in those environments.
NASA Astrophysics Data System (ADS)
Keifer, I. S.; Dueker, K. G.
2016-12-01
In an effort to characterize critical zone development in varying regions, seismologists conduct seismic surveys to assist in the estimation of critical zone properties, e.g., porosity and regolith thickness. A limitation of traditional critical zone seismology is that data are normally collected along lines to generate two-dimensional transects of subsurface seismic velocity, even though the critical zone structure is 3D. Hence, we deployed six 2D seismic arrays in southeastern Wyoming to record ambient seismic fields so that 3D shear velocity models could be produced. The arrays were made up of nominally 400 seismic stations arranged in a 200-meter square grid layout. Each array produced a half-terabyte data volume, so a premium was placed on computational efficiency throughout this study to handle the roughly 65 billion samples recorded by each array. The ambient fields were cross-correlated on the Yellowstone supercomputer using the pSIN code (Chen et al., 2016), which decreased correlation run times by a factor of 300 with respect to workstation computers. Group delay times extracted from the cross-correlations using 8 Hz frequency bands from 10 Hz to 100 Hz show frequency dispersion at sites with shallow regolith underlain by granite bedrock. Dimensionally, the group velocity map inversion is overdetermined, even after extensive culling of spurious group delay times. Model resolution matrices for our six arrays show values > 0.7 for most of the model domain, approaching unity at its center; we are therefore confident that we have an adequate number of rays covering the array space and should experience minimal smearing of the resulting model when the inverse solution is applied to the data. After inverting for the group velocity maps, a second inversion of the group velocity maps is performed for the 3D shear velocity model. This inversion is underdetermined, and a second-order Tikhonov regularization is used to obtain stable inverse images.
Results will be presented.
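The second-order Tikhonov regularization mentioned above can be sketched as an augmented least-squares problem; the toy matrix sizes below are illustrative, not those of the Wyoming arrays.

```python
import numpy as np

def tikhonov2(G, d, lam):
    """Minimize ||G m - d||^2 + lam^2 ||L m||^2 with L the second-difference
    (roughening) operator, solved as one augmented least-squares system."""
    n = G.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]      # discrete second derivative
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(n - 2)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# toy underdetermined problem: 8 data, 20 model cells, smooth target
rng = np.random.default_rng(3)
G = rng.standard_normal((8, 20))
m_true = np.sin(np.linspace(0.0, np.pi, 20))
d = G @ m_true
m = tikhonov2(G, d, lam=1.0)   # smooth model that still fits the data
```

Penalizing the second difference selects the smoothest model among the many that fit an underdetermined data set, which is what stabilizes the shear-velocity inversion.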
Big Data and High-Performance Computing in Global Seismology
NASA Astrophysics Data System (ADS)
Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen
2014-05-01
Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.).
We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data formats for seismograms and for the outputs of our 3D solvers (i.e., meshes, kernels, seismic models, etc.) based on ORNL's ADIOS libraries. We will discuss our global adjoint tomography workflow on HPC systems as well as the current status of our global inversions.
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
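The FDTD propagation engine at the core of such a simulation can be illustrated with a minimal 1-D staggered-grid acoustic scheme. The real capability is 3-D and far more elaborate; the grid size, source and air-like material values below are illustrative assumptions.

```python
import numpy as np

def fdtd_1d(nx=300, nt=600, dx=1.0, c=340.0, rho=1.2):
    """1-D staggered-grid acoustic FDTD: leapfrog updates of pressure p and
    particle velocity v, rigid ends, Gaussian pressure source."""
    dt = 0.5 * dx / c                 # CFL-stable time step
    K = rho * c * c                   # bulk modulus
    p, v = np.zeros(nx), np.zeros(nx + 1)
    rec = np.zeros(nt)
    src, rcv = nx // 4, 3 * nx // 4   # source and receiver 150 m apart
    for it in range(nt):
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])   # velocity update
        p -= dt * K / dx * (v[1:] - v[:-1])             # pressure update
        p[src] += np.exp(-((it - 40) / 10.0) ** 2)      # Gaussian source
        rec[it] = p[rcv]
    return rec

rec = fdtd_1d()
# 150 m at 340 m/s with dt = 0.5/340 s is 300 steps; the source peaks at step 40
peak_step = int(np.argmax(np.abs(rec)))
```

The recorded pulse arrives near step 340, i.e., the expected traveltime plus the source delay, which is the basic correctness check for such schemes.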
Evaluating Seismic Site Effects at Cultural Heritage Sites in the Mediterranean Area
NASA Astrophysics Data System (ADS)
Imposa, S.; D'Amico, S.; Panzera, F.; Lombardo, G.; Grassi, S.; Betti, M.; Muscat, R.
2017-12-01
The present study concerns integrated geophysical surveys and numerical simulations aimed at evaluating the seismic vulnerability of cultural heritage sites. Non-invasive analyses targeted at characterizing local site effects as well as the dynamic properties of the structures were performed. Data were collected at several locations in the Maltese Archipelago (central Mediterranean) and in some historical buildings located in Catania (Sicily). In particular, passive seismic techniques and H/V data were used to derive 1D velocity models and amplification functions. The dynamic properties of a building are usually described through its natural frequency and damping ratio. The latter is important in seismic design since it allows one to evaluate the ability of a structure to dissipate vibration energy during an earthquake. The fundamental frequency of each investigated structure was obtained using ambient vibrations recorded by two or more sensors monitoring the motion at different locations in the building. Accordingly, the fundamental periods of several Maltese watchtowers and some historical buildings of Catania were obtained by computing the ratio between the amplitudes of the Fourier spectra of the horizontal (longitudinal and transverse) components recorded on the top and ground floors. Using the ANSYS code, a modal analysis was performed to evaluate the first 50 vibration modes, with the aim of checking the activation of the modal masses and assessing the seismic vulnerability of the towers. The STRATA code was instead adopted for the Catania heritage buildings, using as reference earthquakes moderate to strong shocks that struck south-eastern Sicily. In most of the investigated buildings it was not possible to identify a single natural frequency; rather, several oscillation modes were observed. These results appear linked to the structural complexity of the edifices, their irregular plan shapes and the presence of adjacent structures.
The H/V measurements outside the buildings were used to determine the predominant frequencies of the soil and to highlight potential site-to-structure resonance. These findings provide useful clues for further engineering investigations aimed at reducing seismic risk, highlighting how structural complexity and the local seismic response play an important role in building damage.
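The top-to-ground spectral-ratio estimate of a building's fundamental frequency can be sketched with a synthetic: a two-pole resonator stands in for the building (an assumption; the actual structures are far more complex), and the peak of the smoothed spectral ratio recovers the assumed 2 Hz mode.

```python
import numpy as np

fs, f0, r = 100.0, 2.0, 0.98           # assume a 2 Hz fundamental mode
rng = np.random.default_rng(4)
ground = rng.standard_normal(2 ** 14)  # ambient excitation at the base

# two-pole resonator as a stand-in "building" transfer function
top = np.zeros_like(ground)
w0 = 2.0 * np.pi * f0 / fs
for n in range(2, ground.size):
    top[n] = 2 * r * np.cos(w0) * top[n - 1] - r * r * top[n - 2] + ground[n]

# ratio of Fourier amplitude spectra, top floor over ground floor
ratio = np.abs(np.fft.rfft(top)) / (np.abs(np.fft.rfft(ground)) + 1e-12)
freqs = np.fft.rfftfreq(ground.size, 1.0 / fs)
kern = np.hanning(31); kern /= kern.sum()   # smooth before picking the peak
f_est = freqs[np.argmax(np.convolve(ratio, kern, "same"))]
```

With real records the ratio is noisier and several modal peaks may appear, which is exactly the multi-mode behaviour reported above.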
Non-Seismology Seismology: Using QuakeCatchers to Analyze the Frequency of Bridge Vibrations
NASA Astrophysics Data System (ADS)
Courtier, A. M.; Constantin, C.; Wilson, C. F.
2013-12-01
We conducted an experiment to test the feasibility of measuring seismic waves generated by traffic near James Madison University. We used QuakeCatcher seismometers (originally designed for passive seismic measurement) to measure vibrations associated with traffic on a wooden bridge as well as a nearby concrete bridge. This experiment was a signal processing exercise for a student research project and did not draw any conclusions regarding bridge safety or security. The experiment consisted of two temporary measurement stations comprised of a laptop computer and a QuakeCatcher - a small seismometer that plugs directly into the laptop via a USB cable. The QuakeCatcher was taped to the ground at the edge of the bridge to achieve good coupling, and vibrational events were triggered repeatedly with a control vehicle to accumulate a consistent dataset of the bridge response. For the wooden bridge, the resulting 'seismograms' were converted to Seismic Analysis Code (SAC) format and analyzed in MATLAB. The concrete bridge did not generate vibrations significant enough to trigger the recording mechanism on the QuakeCatchers. We will present an overview of the experimental design and frequency content of the traffic patterns, as well as a discussion of the instructional benefits of using the QuakeCatcher sensors in this non-traditional setting.
SEISRISK II; a computer program for seismic hazard estimation
Bender, Bernice; Perkins, D.M.
1982-01-01
The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability, which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
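Under the Poisson occurrence assumption used by SEISRISK II, the probability that a ground-motion level is exceeded at least once in an exposure time follows directly from the annual exceedance rate. A short sketch of that relationship (not code from SEISRISK II itself):

```python
import math

def prob_exceedance(rate_per_year, years):
    """Poisson occurrences: probability of at least one exceedance in `years`."""
    return 1.0 - math.exp(-rate_per_year * years)

def rate_for_probability(p, years):
    """Annual exceedance rate whose probability of exceedance in `years` is p."""
    return -math.log(1.0 - p) / years

# the familiar '10% in 50 years' design level
rate = rate_for_probability(0.10, 50.0)
return_period = 1.0 / rate      # about 475 years
```

Inverting the hazard curve at a target probability in this way is how a map value such as "10% in 50 years" maps to a return period.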
Viscoelastic Finite Difference Modeling Using Graphics Processing Units
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.
2014-12-01
Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology is likely to significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. This implementation is based on a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus.
Those tests indicate that the GPU memory size and the slow memory transfers are the limiting factors of our GPU implementation. These results show the benefits of using GPUs instead of CPUs for time-based finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door to new approaches in seismic inversion.
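The effect of the finite-difference order discussed above can be illustrated on a regular (non-staggered) grid: higher-order central-difference stencils cut the truncation error sharply for the same grid spacing. The stencil below is a textbook one, not taken from the paper's staggered-grid code.

```python
import numpy as np

def dcentral(f, x, h, order):
    """Central-difference first derivative, 2nd- or 4th-order accurate."""
    if order == 2:
        return (f(x + h) - f(x - h)) / (2.0 * h)
    if order == 4:
        return (8.0 * (f(x + h) - f(x - h))
                - (f(x + 2 * h) - f(x - 2 * h))) / (12.0 * h)
    raise ValueError("order must be 2 or 4")

h = 0.1
e2 = abs(dcentral(np.sin, 1.0, h, 2) - np.cos(1.0))   # O(h^2) error
e4 = abs(dcentral(np.sin, 1.0, h, 4) - np.cos(1.0))   # O(h^4) error
```

The higher-order stencil touches more neighbours per point, which is why the GPU speed-up reported above varies with the approximation order.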
Western Greenland Subglacial Hydrologic Modeling and Observables: Seismicity and GPS
NASA Astrophysics Data System (ADS)
Carmichael, J. D.; Joughin, I. R.
2010-12-01
I present a hydro-mechanical model of the Western Greenland ice sheet, with surface observables, for two modes of meltwater input. First, using input prescribed from distributed surface data, I bound the subglacial carrying capacity for both a distributed and a localized system in a typical summer. I provide observations of the ambient seismic response and its support for an established surface-to-bed connection. Second, I show that the ice sheet response to large impulsive hydraulic inputs (lake drainage events) should produce distinct seismic observables that depend upon the localization of the drainage systems. In the former case, the signal propagates as a diffusive wave, while in the channelized case the response is localized. I discuss how these results are consistent with previous reports (Das et al., 2008; Joughin et al., 2008) of melt-induced speedup along Greenland's western flank. Late-summer seismicity is shown for a four-receiver array deployed near a supraglacial lake at 68°44.379'N, 49°30.064'W; clusters of seismic activity are characterized by dominant shear-wave energy, consistent with basal sliding events.
Instantaneous Frequency Attribute Comparison
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.
2013-12-01
The instantaneous frequency attribute provides a different means of seismic interpretation for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al. (1979), entitled "Complex seismic trace analysis". Subsequently a vast literature has accumulated on the subject, which has been given an excellent review by Barnes (1992). In this research we will compare two different methods of computing the instantaneous frequency. The first method is based on the original idea of Taner et al. (1979) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method is based on the computation of the power centroid of the time-frequency spectrum, obtained using either the Gabor transform as computed by Margrave et al. (2011) or the Stockwell transform as described by Stockwell et al. (1996). We will apply both methods to exploration seismic data and the DPRK events recorded in 2006 and 2013. In applying the classical analytic signal technique, which is known to be unstable due to division by the square of the envelope, we will incorporate the stabilization and smoothing method proposed in the two papers of Fomel (2007). This method employs linear inverse theory regularization coupled with the application of an appropriate data smoother. The centroid method's application is straightforward and is based on the very complete theoretical analysis provided in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences are seen at the data edges. This is most likely due to the edge effects of the smoothing operator in the Fomel method, which is more computationally intensive when an optimal search of the regularization parameter is done. An advantage of the centroid method is the intrinsic smoothing of the data, which is inherent in the sliding-window application used in all short-time Fourier transform methods.
The Fomel technique has a larger CPU run-time, resulting from the necessary matrix inversion. Barnes, Arthur E. "The calculation of instantaneous frequency and instantaneous bandwidth." Geophysics 57.11 (1992): 1520-1524. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Fomel, Sergey. "Shaping regularization in geophysical-estimation problems." Geophysics 72.2 (2007): R29-R36. Margrave, Gary F., Michael P. Lamoureux, and David C. Henley. "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data." Geophysics 76.3 (2011): W15-W30. Stockwell, Robert Glenn, Lalu Mansinha, and R. P. Lowe. "Localization of the complex spectrum: the S transform." IEEE Transactions on Signal Processing 44.4 (1996): 998-1001. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
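The two estimators compared above can be sketched in a few lines: the analytic-signal phase derivative (computed here via a plain FFT-based Hilbert transform, without Fomel's regularization) and a short-time spectral power centroid, applied to a synthetic linear chirp. The window lengths and the chirp are illustrative.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (no stabilization or smoothing)."""
    N = len(x)
    h = np.zeros(N)
    h[0], h[1:N // 2], h[N // 2] = 1.0, 2.0, 1.0   # N assumed even
    return np.fft.ifft(np.fft.fft(x) * h)

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * (50.0 * t + 25.0 * t ** 2))   # chirp: 50 + 50 t Hz

# method 1: derivative of the instantaneous phase of the analytic signal
phase = np.unwrap(np.angle(analytic_signal(x)))
f_inst = np.gradient(phase) * fs / (2.0 * np.pi)

# method 2: power centroid of a short-time Fourier spectrum
def centroid_freq(x, fs, n=256):
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    w = np.hanning(n)
    cents = []
    for i in range(0, len(x) - n, n // 2):
        P = np.abs(np.fft.rfft(x[i:i + n] * w)) ** 2
        cents.append((freqs * P).sum() / P.sum())
    return np.array(cents)

cf = centroid_freq(x, fs)
```

Both track the linearly rising frequency; the centroid is smoother by construction, while the phase derivative has full time resolution but needs regularization on noisy data, matching the trade-off described above.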
Thompson, Eric M.; Carkin, Bradley A.; Baise, Laurie G.; Kayen, Robert E.
2014-01-01
The geotechnical properties of the soils in and around Boston, Massachusetts, have been extensively studied. This is partly due to the importance of the Boston Blue Clay and the extent of landfill in the Boston area. Although New England is not a region that is typically associated with seismic hazards, there have been several historical earthquakes that have caused significant ground shaking (for example, see Street and Lacroix, 1979; Ebel, 1996; Ebel, 2006). The possibility of strong ground shaking, along with heightened vulnerability from unreinforced masonry buildings, motivates further investigation of seismic hazards throughout New England. Important studies that are pertinent to seismic hazards in New England include source-parameter studies (Somerville and others, 1987; Boore and others, 2010), wave-propagation studies (Frankel, 1991; Viegas and others, 2010), empirical ground-motion prediction equations (GMPEs) for computing ground-motion intensity (Tavakoli and Pezeshk, 2005; Atkinson and Boore, 2006), site-response studies (Hayles and others, 2001; Ebel and Kim, 2006), and liquefaction studies (Brankman and Baise, 2008). The shear-wave velocity (VS) profiles collected for this report are pertinent to the GMPE, site-response, and liquefaction aspects of seismic hazards in the greater Boston area. Besides the application of these data for the Boston region, the data may be applicable throughout New England, through correlations with geologic units (similar to Ebel and Kim, 2006) or correlations with topographic slope (Wald and Allen, 2007), because few VS measurements are available in stable tectonic regions. Ebel and Hart (2001) used felt earthquake reports to infer amplification patterns throughout the greater Boston region and noted spatial correspondence with the dominant period and amplification factors obtained from ambient noise (horizontal-to-vertical ratios) by Kummer (1998).
Britton (2003) compiled geotechnical borings in the area and produced a microzonation map based on generalized velocity profiles, where the amplifications were computed using SHAKE (Schnabel and others, 1972), along with an assumed input ground motion. The velocities were constrained by only a few local measurements associated with the Central Artery/Tunnel project. The additional VS measurements presented in this report provide a number of benefits. First, these measurements provide improved spatial coverage. Second, the larger sample size provides better constraints on the mean and variance of the VS distribution for each layer, which may be paired with a three-dimensional (3D) model of the stratigraphy to generate one-dimensional (1D) profiles for use in a standard site-response analysis (for example, Britton, 2003). Third, the velocity profiles may also be used, along with a 3D model of the stratigraphy, as input into a 3D simulation of the ground motion to investigate the effects of basin-generated surface waves and the potential focusing of seismic waves. This report begins with a short review of the geology of the study area and the field methods that we used to estimate the velocity profiles. The raw data, processed data, and the interpreted VS profiles are given in appendix 1. Photographs and descriptions of the sites are provided in appendix 2.
Identifying High Potential Well Targets with 3D Seismic and Mineralogy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R. J.
2015-10-30
Seismic reflection is the primary tool used in petroleum exploration and production, but its use in geothermal exploration is less standard, in part due to cost but also due to the challenges in identifying the highly permeable zones essential for economic hydrothermal systems (e.g., Louie et al., 2011; Majer, 2003). Newer technology, such as wireless sensors and low-cost high performance computing, has helped reduce the cost and effort needed to conduct 3D surveys. The second difficulty, identifying permeable zones, has been less tractable so far. Here we report on the use of seismic attributes from a 3D seismic survey to identify and map permeable zones in a hydrothermal area.
Seismic Design of a Single Bored Tunnel: Longitudinal Deformations and Seismic Joints
NASA Astrophysics Data System (ADS)
Oh, J.; Moon, T.
2018-03-01
A large-diameter bored tunnel passing through rock and alluvial deposits and subjected to seismic loading is analyzed to estimate the longitudinal deformations and member forces on the segmental tunnel liner. The project site poses challenges including high hydrostatic pressure, a variable ground profile and high seismic loading. To ensure the safety of the segmental tunnel liner against the seismic demands, a performance-based two-level design earthquake approach, comprising a Functional Evaluation Earthquake and a Safety Evaluation Earthquake, has been adopted. The longitudinal tunnel and ground response seismic analyses are performed using a three-dimensional quasi-static model with linear elastic discrete beam elements representing the segmental liner and nonlinear elastic springs representing the ground. Three components (longitudinal, transverse and vertical) of free-field ground displacement time histories, evaluated from site response analyses considering wave passage effects, have been applied at the end supports of the strain-compatible ground springs. The results of the longitudinal seismic analyses suggest that a seismic joint with a design deflection capacity of 5-7.5 cm should be furnished, as a mitigation measure, at the transition zone between hard and soft ground conditions, where the maximum member forces on the segmental liner (i.e., axial and shear forces and bending moments) are induced. The paper illustrates how detailed numerical analyses can be practically applied to evaluate the axial and curvature deformations along the tunnel alignment under difficult ground conditions and to provide seismic joints at the proper locations to effectively reduce the seismic demands below the allowable levels.
Wave-propagation formulation of seismic response of multistory buildings
Safak, E.
1999-01-01
This paper presents a discrete-time wave-propagation method to calculate the seismic response of multistory buildings founded on layered soil media and subjected to vertically propagating shear waves. Buildings are modeled as an extension of the layered soil media by considering each story as another layer in the wave-propagation path. The seismic response is expressed in terms of wave travel times between the layers and wave reflection and transmission coefficients at the layer interfaces. The method accounts for the filtering effects of the concentrated foundation and floor masses. Compared with the commonly used vibration formulation, the wave-propagation formulation provides several advantages, including simplicity, improved accuracy, better representation of damping, the ability to incorporate the soil layers under the foundation, and better tools for identification and damage detection from seismic records. Examples are presented to show the versatility and superiority of the method.
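The reflection and transmission coefficients at each layer interface, central to this formulation, take a simple closed form for vertically propagating shear waves. A minimal sketch (the layer properties are assumed for illustration, not taken from the paper):

```python
def interface_coefficients(rho1, beta1, rho2, beta2):
    """Displacement reflection/transmission coefficients for a vertically
    propagating shear wave incident from medium 1 on medium 2,
    where Z = rho * beta is the shear impedance of each medium."""
    z1, z2 = rho1 * beta1, rho2 * beta2
    r = (z1 - z2) / (z1 + z2)
    t = 2.0 * z1 / (z1 + z2)
    return r, t

# Soft soil layer (1) over stiffer material (2), illustrative values:
r, t = interface_coefficients(rho1=1800.0, beta1=300.0, rho2=2400.0, beta2=900.0)

# Energy balance check: R^2 + (Z2/Z1) * T^2 must equal 1.
z1, z2 = 1800.0 * 300.0, 2400.0 * 900.0
balance = r ** 2 + (z2 / z1) * t ** 2
```

In the wave-propagation formulation, each story-to-story and soil-to-foundation interface contributes one such coefficient pair, and the building response is assembled from them together with the inter-layer travel times.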
Exploring Large-Scale Cross-Correlation for Teleseismic and Regional Seismic Event Characterization
NASA Astrophysics Data System (ADS)
Dodge, Doug; Walter, William; Myers, Steve; Ford, Sean; Harris, Dave; Ruppert, Stan; Buttler, Dave; Hauk, Terri
2013-04-01
The decrease in costs of both digital storage space and computation power invites new methods of seismic data processing. At Lawrence Livermore National Laboratory(LLNL) we operate a growing research database of seismic events and waveforms for nuclear explosion monitoring and other applications. Currently the LLNL database contains several million events associated with tens of millions of waveforms at thousands of stations. We are making use of this database to explore the power of seismic waveform correlation to quantify signal similarities, to discover new events not in catalogs, and to more accurately locate events and identify source types. Building on the very efficient correlation methodologies of Harris and Dodge (2011) we computed the waveform correlation for event pairs in the LLNL database in two ways. First we performed entire waveform cross-correlation over seven distinct frequency bands. The correlation coefficient exceeds 0.6 for more than 40 million waveform pairs for several hundred thousand events at more than a thousand stations. These correlations reveal clusters of mining events and aftershock sequences, which can be used to readily identify and locate events. Second we determine relative pick times by correlating signals in time windows for distinct seismic phases. These correlated picks are then used to perform very high accuracy event relocations. We are examining the percentage of events that correlate as a function of magnitude and observing station distance in selected high seismicity regions. Combining these empirical results and those using synthetic data, we are working to quantify relationships between correlation and event pair separation (in epicenter and depth) as well as mechanism differences. Our exploration of these techniques on a large seismic database is in process and we will report on our findings in more detail at the meeting.
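The core measurement, the peak normalized cross-correlation between two waveforms and the lag at which it occurs (the basis for the relative picks), can be sketched with NumPy. The synthetic signals below are purely illustrative:

```python
import numpy as np

def peak_normalized_cc(x, y):
    """Peak normalized cross-correlation of two equal-length traces and
    the lag (in samples) at which it occurs."""
    x = x - x.mean()
    y = y - y.mean()
    cc = np.correlate(x, y, mode="full") / (np.linalg.norm(x) * np.linalg.norm(y))
    k = int(np.argmax(cc))
    return float(cc[k]), k - (len(y) - 1)

# A transient and a delayed, slightly noisy copy of it (illustrative):
t = np.linspace(0.0, 1.0, 500)
sig = np.exp(-40.0 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 25.0 * t)
rng = np.random.default_rng(0)
delayed = np.roll(sig, 7) + 0.05 * rng.standard_normal(t.size)

cc_max, lag = peak_normalized_cc(delayed, sig)  # lag recovers the 7-sample delay
```

Applied over millions of event pairs, a threshold on `cc_max` (0.6 in the study) flags similar-source pairs, while the lag provides the relative pick time for relocation.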
NASA Astrophysics Data System (ADS)
Wu, Gang; Wang, Kehai; Zhang, Panpan; Lu, Guanya
2018-01-01
Laminated elastomeric bearings have been widely used in small-to-medium-span highway bridges in China, where concrete shear keys are set transversely to prevent large girder displacements. To evaluate bridge seismic responses more accurately, proper analytical models of the bearings and shear keys should be developed. Based on a series of cyclic loading experiments and analyses, rational analytical models of laminated elastomeric bearings and shear keys that account for mechanical degradation were developed. The effect of the mechanical degradation was investigated by examining the transverse seismic response of a small-to-medium-span bridge under a wide range of peak ground accelerations (PGA). The damage mechanism for small-to-medium-span highway bridges was determined, which explains the seismic damage observed during earthquakes in recent years. The experimental results show that the mechanical properties of laminated elastomeric bearings degrade due to friction sliding, with the degree of degradation dependent on the influencing parameters. It can be concluded that the mechanical degradation of laminated elastomeric bearings and shear keys plays an important role in the seismic response of such bridges and should be included for a more precise evaluation of bridge seismic performance.
Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench
NASA Astrophysics Data System (ADS)
Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan
2016-04-01
Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with the help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space by smoothing the seismicity of the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well but is only mediocre at forecasting the spatial distribution. Conversely, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.
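SiS's first pillar, rate forecasting via the seismogenic index, has a compact closed form. A sketch with assumed parameter values (not those calibrated for Basel or Soultz):

```python
def expected_count(v_injected_m3, sigma_index, b_value, m_min):
    """Seismogenic-index (Shapiro-type) estimate of the number of induced
    events with magnitude >= m_min while injecting a cumulative volume V:
        log10 N = log10 V + Sigma - b * m_min
    combining the index Sigma with the Gutenberg-Richter b-value."""
    return v_injected_m3 * 10.0 ** (sigma_index - b_value * m_min)

# Assumed values for illustration only:
n_ge_1 = expected_count(v_injected_m3=1.0e4, sigma_index=-1.0, b_value=1.2, m_min=1.0)
n_ge_2 = expected_count(v_injected_m3=1.0e4, sigma_index=-1.0, b_value=1.2, m_min=2.0)
```

Note the Gutenberg-Richter scaling built into the formula: raising the magnitude threshold by one unit reduces the expected count by a factor of 10^b.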
Seismic response in archaeological areas: the case-histories of Rome
NASA Astrophysics Data System (ADS)
Donati, Stefano; Funiciello, Renato; Rovelli, Antonio
1999-03-01
Rome is affected by earthquakes associated with three different seismogenic districts: the Central Apennines area, the Colli Albani volcanic area and the Roman area. The major effects were due exclusively to Apennine seismicity and in some cases reached felt intensities up to VII-VIII (MCS scale). The predominant role in the damage distribution seems to be played by local geological conditions. The historical centre of the city is characterized by two geomorphological domains: the alluvial plain of the Tiber river and the topographic reliefs of the Roman Hills, where tradition places the site of the city's foundation. In particular, the right river side is characterized by the outcropping of the regional bedrock along the Monte Mario-Gianicolo ridge, while the eastern reliefs are the remnants of the Sabatini and Albani volcanic plateaus, deeply eroded by the Tiber river and its tributaries during the last glacial low-stand (Würm). These domains differ greatly in seismic response, owing to the high impedance contrast between the Holocene coarse deposits filling the Tiber Valley and the sedimentary and volcanic Plio-Pleistocene units. Seismic damage observed in 150 monuments of downtown Rome shows a significant concentration on recent alluvial deposits. This result is confirmed by the geographical distribution of conservation and retrofitting activities following the main earthquakes, mostly related to local geological conditions. The cases of Marcus Aurelius' Column and the Colosseum confirm the influence of the Holocene alluvial network on local seismic response. During 2500 years of history, the monuments of Rome have `memorized' the seismic effects of historical earthquakes. In some cases, the integration of historical and geological research with macroseismic observations may provide original and useful indications for seismologists to define the seismic response of the city.
Local site effects represent a serious threat for historical buildings in Rome and in other historical towns with similar cultural heritage and geological characteristics, such as those of the Mediterranean region, even in areas not affected by local seismic activity.
Seismic waveform sensitivity to global boundary topography
NASA Astrophysics Data System (ADS)
Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico
2012-09-01
We investigate the implications of lateral variations in the topography of global seismic discontinuities, in the framework of high-resolution forward modelling and seismic imaging. We run 3-D wave-propagation simulations accurate at periods of 10 s and longer, with Earth models including core-mantle boundary topography anomalies of ˜1000 km spatial wavelength and up to 10 km height. We obtain very different waveform signatures for PcP (reflected) and Pdiff (diffracted) phases, supporting the theoretical expectation that the latter are sensitive primarily to large-scale structure, whereas the former respond only to small-scale structure, where large and small are relative to the frequency. PcP at 10 s seems well suited to mapping such small-scale perturbations, whereas Pdiff at the same frequency carries signatures too faint to allow any tomographic reconstruction; only at higher frequencies does its signature become stronger. We present a new algorithm to compute sensitivity kernels relating seismic traveltimes (measured by cross-correlation of observed and theoretical seismograms) to the topography of seismic discontinuities at any depth in the Earth using full 3-D wave propagation. Calculation of accurate finite-frequency sensitivity kernels is notoriously expensive, but we reduce computational costs drastically by limiting ourselves to spherically symmetric reference models and exploiting the axial symmetry of the resulting propagating wavefield, which collapses to a 2-D numerical domain. We compute and analyse a suite of kernels for upper and lower mantle discontinuities that can be used for finite-frequency waveform inversion. The PcP and Pdiff sensitivity footprints are in good agreement with the results obtained by cross-correlating perturbed and unperturbed seismograms, validating our approach against full 3-D modelling to invert for such structures.
NASA Astrophysics Data System (ADS)
Cobden, L. J.
2017-12-01
Mineral physics provides the essential link between seismic observations of the Earth's interior, and laboratory (or computer-simulated) measurements of rock properties. In this presentation I will outline the procedure for quantitative conversion from thermochemical structure to seismic structure (and vice versa) using the latest datasets from seismology and mineralogy. I will show examples of how this method can allow us to infer major chemical and dynamic properties of the deep mantle. I will also indicate where uncertainties and limitations in the data require us to exercise caution, in order not to "over-interpret" seismic observations. Understanding and modelling these uncertainties serves as a useful guide for mineralogists to ascertain which mineral parameters are most useful in seismic interpretation, and enables seismologists to optimise their data assembly and inversions for quantitative interpretations.
Far-Field Effects of Large Earthquakes on South Florida's Confined Aquifer
NASA Astrophysics Data System (ADS)
Voss, N. K.; Wdowinski, S.
2012-12-01
The similarity between a seismometer record and a well hydraulic head record during the passage of a seismic wave has long been documented, even at large distances from earthquake epicenters. South Florida lacks a dense seismic array but does contain a comparably dense network of monitoring wells. The large spatial distribution of deep monitoring wells in South Florida provides an opportunity to study the variance of aquifer response to the passage of seismic waves. We conducted a preliminary study of hydraulic head data, provided by the South Florida Water Management District, from 9 deep wells in South Florida's confined Floridan Aquifer in response to 27 main shock events (January 2010-April 2012) with magnitude 6.9 or greater. Coseismic hydraulic head response was observed for 7 of the 27 events. In order to determine what governs aquifer response to seismic events, earthquake parameters were compared for the 7 positive events. Seismic energy density (SED), an empirical relationship between distance and magnitude, was also used to compare the relative energy of the events at each well site. SED is commonly used as a parameter for establishing thresholds for hydrologic events in the near and intermediate fields. Our analysis yielded a threshold SED for well response in South Florida of 8 x 10^-3 J m^-3, which is consistent with other studies. Deep earthquakes with SED above this threshold did not appear to trigger hydraulic head oscillations. The amplitude of hydraulic head oscillations had no discernible relationship to SED levels. Preliminary results indicate a need to modify the SED equation to better accommodate depth in order for it to be of use in the study of far-field hydrologic response.
We plan to conduct a more comprehensive study incorporating a larger subset (~60) of wells in South Florida in order to further examine the spatial variance of aquifer response to the passage of seismic waves, as well as to better constrain the relationship between earthquake depth and aquifer response.
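The SED parameter used here has a widely quoted empirical form (after Wang, 2007). The sketch below rearranges that published regression into an explicit formula and applies it to an illustrative event; the coefficients are quoted from the literature and should be verified before reuse:

```python
import math

def seismic_energy_density(magnitude, dist_km):
    """Empirical seismic energy density e (J/m^3) at distance r (km),
    obtained by rearranging Wang's (2007) regression
        log10 r = 0.48 M - 0.33 log10 e - 1.4
    into
        log10 e = 1.45 M - 3.03 log10 r - 4.24.
    Coefficients quoted from the literature; verify before reuse."""
    return 10.0 ** (1.45 * magnitude - 3.03 * math.log10(dist_km) - 4.24)

THRESHOLD = 8.0e-3  # J/m^3, the well-response threshold found in this study

# A great (M9) and a large (M7) earthquake, both ~2000 km from the wells:
sed_m9 = seismic_energy_density(9.0, 2000.0)
sed_m7 = seismic_energy_density(7.0, 2000.0)
```

At 2000 km only the M9 event exceeds the threshold, consistent with far-field well responses being limited to the very largest earthquakes.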
Computers at the Albuquerque Seismological Laboratory
Hoffman, J.
1979-01-01
The Worldwide Standardized Seismograph Network (WWSSN) is managed by the U.S. Geological Survey in Albuquerque, N. Mex. It consists of a global network of seismographs housed in seismic observatories throughout the world. Important recent additions to this network are the Seismic Research Observatories (SRO), which combine a borehole seismometer with a modern digital data recording system.
McNamara, Daniel E.; Stephenson, William J.; Odum, Jackson K.; Williams, Robert; Gee, Lind
2014-01-01
Earthquake damage is often increased due to local ground-motion amplification caused by soft soils, thick basin sediments, topographic effects, and liquefaction. A critical factor contributing to the assessment of seismic hazard is detailed information on local site response. In order to address and quantify the site response at seismograph stations in the eastern United States, we investigate the regional spatial variation of horizontal:vertical spectral ratios (HVSR) using ambient noise recorded at permanent regional and national network stations as well as temporary seismic stations deployed in order to record aftershocks of the 2011 Mineral, Virginia, earthquake. We compare the HVSR peak frequency to surface measurements of the shear-wave seismic velocity to 30 m depth (Vs30) at 21 seismograph stations in the eastern United States and find that HVSR peak frequency increases with increasing Vs30. We use this relationship to estimate the National Earthquake Hazards Reduction Program soil class at 218 ANSS (Advanced National Seismic System), GSN (Global Seismographic Network), and RSN (Regional Seismograph Networks) locations in the eastern United States, and suggest that this seismic station–based HVSR proxy could potentially be used to calibrate other site response characterization methods commonly used to estimate shaking hazard.
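A bare-bones HVSR computation can be sketched as follows; production processing would add tapering, window averaging, and spectral smoothing (e.g. Konno-Ohmachi), all omitted here, and the synthetic "site resonance" is an assumption for illustration:

```python
import numpy as np

def hvsr(north, east, vertical, fs, nfft=4096):
    """Horizontal-to-vertical spectral ratio from three-component recordings.
    Deliberately minimal: a single window, no tapering or smoothing."""
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    n_amp = np.abs(np.fft.rfft(north, nfft))
    e_amp = np.abs(np.fft.rfft(east, nfft))
    v_amp = np.abs(np.fft.rfft(vertical, nfft))
    h_amp = np.sqrt(0.5 * (n_amp ** 2 + e_amp ** 2))  # quadratic mean of horizontals
    return freqs, h_amp / np.maximum(v_amp, 1e-20)

# Synthetic ambient noise whose horizontals carry an extra 2 Hz site resonance:
fs = 100.0
t = np.arange(0.0, 120.0, 1.0 / fs)
rng = np.random.default_rng(1)
resonance = 5.0 * np.sin(2 * np.pi * 2.0 * t)
freqs, ratio = hvsr(rng.standard_normal(t.size) + resonance,
                    rng.standard_normal(t.size) + resonance,
                    rng.standard_normal(t.size), fs)
idx_2hz = int(np.argmin(np.abs(freqs - 2.0)))  # ratio peaks near the resonance
```

The frequency of the HVSR peak is the quantity the study relates to Vs30 and hence to NEHRP site class.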
Effect of Different Groundwater Levels on Seismic Dynamic Response and Failure Mode of Sandy Slope
Huang, Shuai; Lv, Yuejun; Peng, Yanju; Zhang, Lifang; Xiu, Liwei
2015-01-01
Heavy seismic damage tends to occur in slopes when groundwater is present. The main objectives of this paper are to determine the dynamic response and failure mode of a sandy slope subjected simultaneously to seismic forces and variable groundwater conditions. The paper applies the finite element method, a fast and efficient design tool in modern engineering analysis, to evaluate the dynamic response of the slope under these combined conditions. A shaking table test is conducted to analyze the failure mode and to verify the accuracy of the finite element results. The results show that the dynamic response values of the slope follow different variation rules under near-field and far-field earthquakes, and that the damage location and pattern of the slope differ under varying groundwater conditions. With no groundwater, failure starts at the top of the slope, which shows an obvious whipping effect under the earthquake. Under high groundwater levels, failure starts at the toe of the slope, and the top of the slope shows obvious seismic subsidence after the earthquake. Furthermore, the presence of groundwater has a certain damping effect. PMID:26560103
Seismic multiplet response triggered by melt at Blood Falls, Taylor Glacier, Antarctica
NASA Astrophysics Data System (ADS)
Carmichael, Joshua D.; Pettit, Erin C.; Hoffman, Matt; Fountain, Andrew; Hallet, Bernard
2012-09-01
Meltwater input often triggers a seismic response from glaciers and ice sheets. It is difficult, however, to measure melt production on glaciers directly, while subglacial water storage is not directly observable. Therefore, we document temporal changes in seismicity from a dry-based polar glacier (Taylor Glacier, Antarctica) during a melt season using a synthesis of seismic observation and melt modeling. We record icequakes using a dense six-receiver network of three-component geophones and compare this with melt input generated from a calibrated surface energy balance model. In the absence of modeled surface melt, we find that seismicity is well-described by a diurnal signal composed of microseismic events in lake and glacial ice. During melt events, the diurnal signal is suppressed and seismicity is instead characterized by large glacial icequakes. We perform network-based correlation and clustering analyses of seismic record sections and determine that 18% of melt-season icequakes are repetitive (multiplets). The epicentral locations for these multiplets suggest that they are triggered by meltwater produced near a brine seep known as Blood Falls. Our observations of the corresponding P-wave first motions are consistent with volumetric source mechanisms. We suggest that surface melt enables a persistent pathway through this cold ice to an englacial fracture system that is responsible for brine release episodes from the Blood Falls seep. The scalar moments for these events suggest that the volumetric increase at the source region can be explained by melt input.
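The correlation-and-clustering step that isolates multiplets can be mimicked in miniature: correlate all event pairs and single-link events whose peak normalized correlation exceeds a threshold. The waveforms and the 0.9 threshold below are assumptions for illustration, not the study's processing parameters:

```python
import numpy as np

def cluster_multiplets(waveforms, threshold=0.9):
    """Label events so that pairs whose peak normalized cross-correlation
    exceeds `threshold` share a cluster (single linkage via union-find)."""
    n = len(waveforms)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def peak_cc(x, y):
        x = x - x.mean()
        y = y - y.mean()
        cc = np.correlate(x, y, mode="full")
        return float(cc.max() / (np.linalg.norm(x) * np.linalg.norm(y)))

    for i in range(n):
        for j in range(i + 1, n):
            if peak_cc(waveforms[i], waveforms[j]) >= threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Two near-identical events (a multiplet) plus one unrelated event:
t = np.linspace(0.0, 1.0, 400)
w1 = np.sin(2 * np.pi * 12.0 * t) * np.exp(-5.0 * t)
w2 = np.roll(w1, 3)                                   # small relative delay
w3 = np.sin(2 * np.pi * 31.0 * t + 1.0) * np.exp(-2.0 * t)
labels = cluster_multiplets([w1, w2, w3])
```

The fraction of events falling into non-singleton clusters is the "repetitive" percentage the study reports (18% during the melt season).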
Earthquake chemical precursors in groundwater: a review
NASA Astrophysics Data System (ADS)
Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.
2018-03-01
We review changes in groundwater chemistry as precursory signs of earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra: it addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and on electrochemical processes at the rock-water interface.
Gardine, M.; West, M.; Werner, C.; Doukas, M.
2011-01-01
On September 17, 2006, Fourpeaked volcano had a widely observed phreatic eruption. At the time, Fourpeaked was an unmonitored volcano with no known Holocene activity, based on limited field work. Airborne gas sampling began within days of the eruption and a modest seismic network was installed in stages. Vigorous steaming continued for months; however, there were no further eruptions similar in scale to the September 17 event. The eruption was followed by several months of sustained seismicity punctuated by vigorous swarms, and SO2 emissions exceeding a thousand tons/day. Based on observations during and after the phreatic eruption, and assuming no recent pre-historical eruptive activity at Fourpeaked, we propose that the activity was caused by a minor injection of new magma at or near 5 km depth beneath Fourpeaked, which remained active over several months as this magma equilibrated into the crust. By early 2007, declining seismicity and SO2 emissions signaled the end of unrest. Because the Fourpeaked seismic network was installed in stages and the seismicity was punctuated by discrete swarms, we use Fourpeaked to illustrate quantitatively the efficacy and shortcomings of rapid-response seismic networks for tracking volcanic earthquakes.
NASA Astrophysics Data System (ADS)
Ghorbanirenani, Iman
This thesis presents two experimental programs, together with companion numerical studies, carried out on reinforced concrete shear walls: static tests and dynamic (shake table) tests. The first series of experiments comprised monotonic and cyclic quasi-static tests on ductile reinforced concrete shear wall specimens designed and detailed according to the seismic provisions of NBCC 2005 and the CSA-A23.3-04 standard. The tests were carried out on full-scale and 1:2.37 reduced-scale wall specimens to evaluate the seismic design provisions and similitude law and to determine the appropriate scaling factor for further studies such as dynamic tests. The second series of experiments were shake table tests conducted on two identical 1:2.33-scaled, 8-storey moderately ductile reinforced concrete shear wall specimens to investigate the effects of higher modes on the inelastic response of slender walls under the high-frequency ground motions expected in Eastern North America. The walls were designed and detailed according to the seismic provisions of NBCC 2005 and the CSA-A23.3-04 standard. The objectives were to validate and understand the inelastic response and the interaction of shear, flexure and axial loads in the plastic hinge zones of the walls considering higher mode effects, and to investigate the formation of a second hinge in the upper part of the wall due to higher mode responses. Second mode response significantly affected the response of the walls, causing inelastic flexural response to develop at the 6th level with approximately the same rotational ductility as that observed at the base. Dynamic amplification of the base shear forces was also observed in both walls. Numerical modeling of these two shake table tests was performed to evaluate the test results and validate current modeling approaches.
Nonlinear time history analyses were carried out with the reinforced concrete fibre element (OpenSees program) and finite element (VecTor2 program) methods, using the shake table feedback signals as input. Good agreement was generally obtained between numerical and experimental results. Both computer programs were able to predict the natural frequency of the walls in the undamaged and damaged conditions. Both modeling techniques could predict that the maximum bending moment at the base of the walls reached the actual wall moment capacity. The inelastic response and the dual plastic hinge behaviour of the walls could be adequately reproduced using the fibre element and finite element analysis programs. The fibre element method is a good alternative in terms of computing time: it produces reasonable results in comparison with the finite element method, although particular attention needs to be given to the selection of the damping ratios. The parametric analyses performed in this thesis showed that, for both models, adding a small amount of global viscous damping in combination with a refined reinforced concrete hysteretic model could better predict the seismic behaviour of the tested structures. For the VecTor2 program, a viscous damping of 1% led to reasonable results for the studied RC walls. For the OpenSees program, 2% damping resulted in a good match between test and predictions for the 100% EQ test on the initially undamaged wall. When the earthquake intensities were increased, the damping had to be reduced to between 1.5% and 1% to achieve good results for a damaged wall with elongated vibration periods. According to the experimental results and numerical analyses on reinforced concrete shear walls subjected to ground motions from Eastern North America earthquakes, there is a high possibility of a second plastic hinge forming in the upper part of walls, in addition to the one assumed in design at the base.
This second hinge could dissipate the earthquake energy more effectively and decrease the force demand on the wall. A dual plastic hinge design approach, in which the structure yields in the upper wall segment as well as at the base, could therefore be more appropriate. Preliminary design recommendations considering higher mode effects on dual hinge response and base shear forces for ductile slender shear walls are given in this thesis. (Abstract shortened by UMI.)
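The damping-ratio choices discussed above are typically realized as Rayleigh (mass- and stiffness-proportional) damping. A minimal sketch, with anchor frequencies that are assumed rather than the walls' measured values:

```python
import math

def rayleigh_coefficients(zeta, f1_hz, f2_hz):
    """Coefficients of C = a0*M + a1*K that give damping ratio `zeta`
    at both anchor frequencies f1_hz and f2_hz."""
    w1, w2 = 2.0 * math.pi * f1_hz, 2.0 * math.pi * f2_hz
    a0 = 2.0 * zeta * w1 * w2 / (w1 + w2)
    a1 = 2.0 * zeta / (w1 + w2)
    return a0, a1

def damping_at(a0, a1, f_hz):
    """Resulting modal damping ratio at an arbitrary frequency."""
    w = 2.0 * math.pi * f_hz
    return a0 / (2.0 * w) + a1 * w / 2.0

# 1% damping anchored at assumed first- and second-mode frequencies:
a0, a1 = rayleigh_coefficients(0.01, 1.2, 7.5)
```

Between the anchor frequencies the effective damping dips below the target, and above them it grows, which matters for structures like these walls whose response is dominated by higher modes.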
Application of seismic-refraction techniques to hydrologic studies
Haeni, F.P.
1986-01-01
During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations, and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high-seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for applying seismic-refraction methods. These methods allow the economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of this technique in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
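The classic two-layer interpretation formulas behind such surveys, e.g. mapping the water table from the intercept time of the refracted arrival and the crossover distance, can be sketched as follows; the velocities and intercept time are illustrative:

```python
import math

def depth_from_intercept(v1, v2, t_i):
    """Thickness z of a single horizontal layer (velocity v1) over a
    refractor (v2 > v1), from the intercept time t_i of the refracted
    arrival: t_i = 2 z sqrt(v2^2 - v1^2) / (v1 v2)."""
    return t_i * v1 * v2 / (2.0 * math.sqrt(v2 ** 2 - v1 ** 2))

def crossover_distance(v1, v2, z):
    """Distance beyond which the refracted wave arrives first:
       x_cross = 2 z sqrt((v2 + v1) / (v2 - v1))."""
    return 2.0 * z * math.sqrt((v2 + v1) / (v2 - v1))

# Unsaturated over saturated unconsolidated deposits (illustrative values):
v1, v2 = 500.0, 1500.0                        # m/s
z = depth_from_intercept(v1, v2, t_i=0.02)    # ~5.3 m to the water table
x_cross = crossover_distance(v1, v2, z)       # ~15 m
```

The crossover distance also sets the minimum geophone spread needed to observe the refracted branch at all, which is why velocity contrasts drive survey design.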
Application of seismic-refraction techniques to hydrologic studies
Haeni, F.P.
1988-01-01
During the past 30 years, seismic-refraction methods have been used extensively in petroleum, mineral, and engineering investigations and to some extent for hydrologic applications. Recent advances in equipment, sound sources, and computer interpretation techniques make seismic refraction a highly effective and economical means of obtaining subsurface data in hydrologic studies. Aquifers that can be defined by one or more high-seismic-velocity surfaces, such as (1) alluvial or glacial deposits in consolidated rock valleys, (2) limestone or sandstone underlain by metamorphic or igneous rock, or (3) saturated unconsolidated deposits overlain by unsaturated unconsolidated deposits, are ideally suited for seismic-refraction methods. These methods allow economical collection of subsurface data, provide the basis for more efficient collection of data by test drilling or aquifer tests, and result in improved hydrologic studies. This manual briefly reviews the basics of seismic-refraction theory and principles. It emphasizes the use of these techniques in hydrologic investigations and describes the planning, equipment, field procedures, and interpretation techniques needed for this type of study. Furthermore, examples of the use of seismic-refraction techniques in a wide variety of hydrologic studies are presented.
NASA Astrophysics Data System (ADS)
Kim, Jongchan; Archer, Rosalind
2017-04-01
In terms of energy development (oil, gas and geothermal fields) and environmental improvement (carbon dioxide sequestration), fluid injection into the subsurface has increased dramatically. As a side effect of these operations, the number of injection-induced seismic events has also risen significantly. It is known that the main causes of induced seismicity are changes in local shear and normal stresses as well as in pore pressure. This mechanism predominantly increases the probability of earthquake occurrence on permeable pre-existing fault zones. In this 2D fully coupled THM geothermal reservoir numerical simulation of injection-induced seismicity, we investigate the thermal, hydraulic and mechanical behavior of the fracture zone for a variety of 1) fault permeabilities, 2) injection rates and 3) injection temperatures, to identify the parameters contributing most to induced seismic activity. We also calculate the spatiotemporal variation of the Coulomb stress, a combination of shear stress, normal stress and pore pressure, and finally forecast the seismicity rate on the fault zone by computing the seismicity model of Dieterich (1994).
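The two quantities named at the end, the Coulomb stress change and the Dieterich (1994) rate response to it, have compact forms that can be sketched directly. The numbers below are illustrative, not from a calibrated reservoir model, and sign conventions for the normal stress vary between codes:

```python
import math

def coulomb_stress_change(d_tau, d_sigma_n, d_pore, mu=0.6):
    """Coulomb failure stress change on a fault plane,
       dCFS = d_tau + mu * (d_sigma_n + d_pore),
    with normal stress positive in tension, so unclamping and a pore
    pressure rise both promote failure. Check your sign convention."""
    return d_tau + mu * (d_sigma_n + d_pore)

def dieterich_rate(r_bg, d_cfs, a_sigma, t, t_a):
    """Dieterich (1994) seismicity rate after a stress step d_cfs at t = 0,
    relative to the background rate r_bg:
       R(t) = r / (1 + (exp(-dCFS / (A*sigma)) - 1) * exp(-t / t_a))."""
    return r_bg / (1.0 + (math.exp(-d_cfs / a_sigma) - 1.0) * math.exp(-t / t_a))

# Illustrative numbers (not from a calibrated model), stresses in Pa:
dcfs = coulomb_stress_change(d_tau=0.1e6, d_sigma_n=0.0, d_pore=0.2e6)
r_now = dieterich_rate(r_bg=1.0, d_cfs=dcfs, a_sigma=0.05e6, t=0.0, t_a=5.0)
r_late = dieterich_rate(r_bg=1.0, d_cfs=dcfs, a_sigma=0.05e6, t=100.0, t_a=5.0)
```

The rate jumps immediately after a positive stress step and relaxes back to the background rate over the aftershock duration `t_a`, which is the behavior the simulation uses to forecast seismicity on the fault zone.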
NASA Astrophysics Data System (ADS)
Tian, Jingjing
Low-rise woodframe buildings with disproportionately flexible ground stories represent a significant percentage of the building stock in seismically vulnerable communities in the Western United States. These structures have a readily identifiable structural weakness at the ground level due to an asymmetric distribution of large openings in the perimeter wall lines and a lack of interior partition walls, resulting in a soft-story condition that makes the structure highly susceptible to severe damage or collapse under design-level earthquakes. The conventional approach to retrofitting such structures is to increase the ground story stiffness. An alternative approach is to increase the energy dissipation capacity of the structure by incorporating supplemental energy dissipation devices (dampers), thereby relieving the energy dissipation demands on the framing system. Such a retrofit approach is consistent with a Performance-Based Seismic Retrofit (PBSR) philosophy through which multiple performance levels may be targeted. The effectiveness of such a retrofit is presented via examination of the seismic response of a full-scale four-story building tested on the outdoor shake table at NEES-UCSD and a full-scale three-story building tested using slow pseudo-dynamic hybrid testing at NEES-UB. In addition, a Direct Displacement Design (DDD) methodology was developed as an improvement over current DDD methods by considering torsion, with or without the implementation of damping devices, in an attempt to avoid the computational expense of nonlinear time-history analysis (NLTHA) and thus facilitate widespread application of PBSR in engineering practice.
Response of a Circular Tunnel Through Rock to a Harmonic Rayleigh Wave
NASA Astrophysics Data System (ADS)
Kung, Chien-Lun; Wang, Tai-Tien; Chen, Cheng-Hsun; Huang, Tsan-Hwei
2018-02-01
A factor combining tunnel depth and incident wavelength has been numerically shown to dominate the seismic response of a tunnel in rock subjected to harmonic P- and S-waves. This study applies the dynamic finite element method to investigate the seismic response of tunnels with shallow overburden. Seismically induced stress increments in the lining of a circular tunnel subjected to an incident harmonic R-wave are examined. The R-wave is specified using the dominant frequency of the acceleration history of the 1999 Chi-Chi earthquake, measured near a site where two case tunnels at particularly shallow depth were damaged. The analysis reveals that the normalized seismically induced axial, shear and flexural stress increments in the lining reach their respective peaks at a depth of h/λ = 0.15, where the ground motion generated by an incident R-wave has its maximum. The tunnel radius has a stronger effect on seismically induced stress increments than tunnel depth: a greater radius yields higher normalized axial stress increments and lower normalized shear and flexural stress increments. The inertia of the thin overburden layer above the tunnel impedes wave propagation and affects the motion of the ground around the tunnel. With an extremely shallow overburden, this effect can change the envelope of the normalized seismically induced stress increments from a symmetric four-petal pattern into a non-symmetric three-petal pattern. The simulated results may partially elucidate the spatial distributions of cracks observed in the linings of the case tunnels.
Classifying elephant behaviour through seismic vibrations.
Mortimer, Beth; Rees, William Lake; Koelemeijer, Paula; Nissen-Meyer, Tarje
2018-05-07
Seismic waves - vibrations within and along the Earth's surface - are ubiquitous sources of information. During propagation, physical factors can obscure information transfer via vibrations and influence propagation range [1]. Here, we explore how terrain type and background seismic noise influence the propagation of seismic vibrations generated by African elephants. In Kenya, we recorded the ground-based vibrations of different wild elephant behaviours, such as locomotion and infrasonic vocalisations [2], as well as natural and anthropogenic seismic noise. We employed techniques from seismology to transform the geophone recordings into source functions - the time-varying seismic signature generated at the source. We used computer modelling to constrain the propagation ranges of elephant seismic vibrations for different terrains and noise levels. Behaviours that generate a high force on a sandy terrain with low noise propagate the furthest, over the kilometre scale. Our modelling also predicts that specific elephant behaviours can be distinguished and monitored over a range of propagation distances and noise levels. We conclude that seismic cues have considerable potential for both behavioural classification and remote monitoring of wildlife. In particular, classifying the seismic signatures of specific behaviours of large mammals remotely in real time, such as elephant running, could inform on poaching threats. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Pattern Informatics Approach to Earthquake Forecasting in 3D
NASA Astrophysics Data System (ADS)
Toya, Y.; Tiampo, K. F.; Rundle, J. B.; Chen, C.; Li, H.; Klein, W.
2009-05-01
Natural seismicity is correlated across multiple spatial and temporal scales, but correlations in seismicity prior to a large earthquake tend to be locally subtle (e.g., seismic quiescence) and prominent at broad scales (e.g., seismic activation), resulting in local and regional seismicity patterns such as a Mogi donut. Recognizing that patterns in seismicity rate reflect the regional dynamics of the directly unobservable crustal stresses, the Pattern Informatics (PI) approach was introduced by Tiampo et al. (2002) [Europhys. Lett., 60(3), 481-487] and Rundle et al. (2002) [PNAS, 99, suppl. 1, 2514-2521]. In this study, we expand the PI approach to forecasting earthquakes into the third, or vertical, dimension, and illustrate the resulting improvement in forecasting performance through case studies of both natural and synthetic data. PI characterizes rapidly evolving spatio-temporal seismicity patterns as angular drifts of a unit state vector in a high-dimensional correlation space, and systematically identifies anomalous shifts in seismic activity with respect to the regional background. 3D PI analysis is particularly advantageous over 2D analysis in resolving vertically overlapping seismicity anomalies in highly complex tectonic environments. Case studies illustrate some important properties of the PI forecasting tool. [Submitted to Concurrency and Computation: Practice and Experience, Wiley, Special Issue: ACES2008.]
NASA Astrophysics Data System (ADS)
Liang, Li; Takaaki, Ohkubo; Guang-hui, Li
2018-03-01
In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement; however, there are few effective methods to evaluate its effect. Ambient vibration measurements were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectra, natural periods and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. Although the method can evaluate the effect of seismic retrofitting qualitatively, quantitative evaluation remains difficult at this stage.
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present the SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, high-performance numerical wave solvers that simulate seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speed and density, as well as 3D attenuation (Q) models, topography and fluid-solid coupling, are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Broad-band seismic analysis and modeling of the 2015 Taan Fjord, Alaska landslide using Instaseis
NASA Astrophysics Data System (ADS)
Gualtieri, Lucia; Ekström, Göran
2018-06-01
We carry out a broad-band analysis of the seismic signals generated by a massive landslide that occurred near Icy Bay (Alaska) on 2015 October 17. The event generated seismic signals recorded globally. Using Instaseis, a recently developed tool for rapid computation of complete broad-band synthetic seismograms, we simulate the seismic wave propagation between the event and five seismic stations located around the landslide. By modeling the broad-band seismograms in the period band 5-200 s, we reconstruct by inversion a time-varying point force to characterize the landslide time history. We compute the broad-band spectrum of the landslide force history and find that it has a corner period of about 100 s, corresponding to the duration of sliding. In contrast with standard earthquakes, the landslide force spectrum below the corner frequency varies as ω, while the spectral amplitudes at higher frequencies are proportional to ω^-2, similar to the rate of spectral decay seen in earthquakes. From the inverted force history and an estimate of the final run-out distance, we deduce the mass, the trajectory and the characteristics of the landslide dynamics associated with the centre of mass, such as acceleration, velocity, displacement and friction. Inferring an effective run-out distance of ˜900 m from a satellite image, we estimate a landslide mass of ˜150 million metric tons.
NASA Astrophysics Data System (ADS)
Duval, A.; Bertrand, E.; Régnier, J.; Grasso, E.; Gance, J.; Glinsky, N.; Semblat, J.
2009-12-01
One of the strongest historical earthquakes in metropolitan France occurred in 1909 in Provence, in the south of France. In the eighties, a scenario study predicted that a similar earthquake could cause more than the 46 deaths of 1909 and a tremendous economic cost, owing to increasing urbanisation in this area. The 1909 maximal intensity was estimated at IX, but many municipalities exhibited strong variations in damage distribution. In some of them, such as Rognes and Vernègues, the historical perched village suffered more damage than constructions built on the flat parts of the territory. As seismologists came to appreciate the importance of site effects in earthquakes, the 1909 damage distribution became the most famous French illustration of topographic site effects. Ray theory explains that relief can focus waves and amplify the seismic signal for specific wavelengths depending on the location on the slope, yet doubts remain about the real impact of topographic effects on the 1909 damage distribution. The damage pattern may also be related to the fact that the different types of building were not uniformly distributed across the territory, and/or that the old structures were more vulnerable than new ones. Finally, was the seismic signal really different along the relief during the 1909 earthquake? To address this question, several field campaigns were conducted in the village of Rognes. The first consisted of measuring microtremors at several points and computing H/V ratios (Nogoshi, 1970; Nakamura, 1989). The H/V curves on the flat part of the territory do not exhibit any clear peak, except for one site in the north where a high-frequency peak is likely related to a thin, superficial soft layer. In contrast, the H/V curves obtained on the top of the relief show a strong peak around 1 Hz. We then installed 9 seismic stations to record seismicity continuously at key points of the relief.
The seismicity rate is very low in this region, but two years of operation allow comparison of recordings of local, regional and teleseismic events in both the time and spectral domains. The first analysis confirms the importance of signal amplification at the top of the relief. Numerical simulations are now being conducted to confirm this amplification and the frequency range concerned, and to clarify the distribution of amplification along a 2D profile. These simulations are carried out in parallel with different techniques (finite elements, fast multipole method, discontinuous Galerkin finite elements). The numerical results, confirmed by the experimental ones, should later lead to canonical models of seismic response for different reliefs. They may be used directly to predict ground motion along hills, an essential task for seismic risk mitigation, particularly in the south of France.
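The H/V technique used in the first campaign can be sketched in a few lines: take the amplitude spectra of the two horizontal components, combine them, and divide by the vertical spectrum. A minimal illustration (single window with a taper only; the averaging over many windows and spectral smoothing used in practice are omitted):

```python
import numpy as np

def hv_ratio(ns, ew, z, fs):
    """Nakamura-style H/V spectral ratio from a three-component microtremor record.

    ns, ew, z : equal-length horizontal (N-S, E-W) and vertical traces
    fs        : sampling rate in Hz
    Returns (frequencies, H/V amplitude ratio); the DC bin is dropped.
    """
    n = len(z)
    taper = np.hanning(n)
    amp = lambda x: np.abs(np.fft.rfft(np.asarray(x, dtype=float) * taper))
    h = np.sqrt(0.5 * (amp(ns) ** 2 + amp(ew) ** 2))  # quadratic mean of horizontals
    v = amp(z)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[1:], h[1:] / v[1:]
```

A site resonance, such as the ~1 Hz peak observed on top of the relief, appears as a maximum of this ratio.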
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring of subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as petroleum reservoir exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, given the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually prohibitive. For shallow subsurface applications, characterization can be very complicated given the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when monitoring subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually; these images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose a stochastic framework that integrates the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly.
The approach can be combined with Bayesian updating: the updated 'posterior' pdf is treated as a memory function that stores all information to date about the distributions of soil/field attributes and properties; the memory function then serves as a new prior from which samples are generated for further updating when more geophysical data become available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably reduces the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the "memory function" applied in the Bayesian inversion.
Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike
2013-01-01
In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor microseismicity induced by fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days to months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach succeeds in improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and to a field surface passive seismic data set recorded at a geothermal site.
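The conventional trigger and the proposed cross-correlation extension can be sketched as follows. This is a schematic reimplementation from the abstract's description, not the authors' code; windowing details and the zero-lag-only correlation are simplifying assumptions:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Classic short-term / long-term average energy ratio for one trace.

    Uses cumulative sums so both averages are over trailing windows that end
    at the same sample; returns the ratio for the co-registered samples.
    """
    e = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    m = min(len(sta), len(lta))
    return sta[-m:] / (lta[-m:] + 1e-12)   # epsilon guards against division by zero

def crosscorr_similarity(ratios):
    """Mean zero-lag normalized cross-correlation of STA/LTA curves across traces.

    ratios: 2-D array (n_traces, n_samples). A high mean similarity suggests an
    event seen coherently across stations, in the spirit of the extended detector.
    """
    r = ratios - ratios.mean(axis=1, keepdims=True)
    r = r / (np.linalg.norm(r, axis=1, keepdims=True) + 1e-12)
    c = r @ r.T                             # pairwise correlation matrix
    iu = np.triu_indices(len(ratios), k=1)  # off-diagonal (distinct) pairs
    return c[iu].mean()
```

Comparing the similarity computed over all stations with that over a station subset is one way to separate regional from local events, as described above.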
Quantitative modeling of reservoir-triggered seismicity
NASA Astrophysics Data System (ADS)
Hainzl, S.; Catalli, F.; Dahm, T.; Heinicke, J.; Woith, H.
2017-12-01
Reservoir-triggered seismicity may occur as the crustal response to the stress caused by the weight of the impounded water volume (through poroelasticity) and by fluid diffusion. Several cases of high correlation have been found in the past decades. However, crustal stresses can be altered by many other processes, such as continuous tectonic stressing and coseismic stress changes. Because reservoir-triggered stresses decay quickly with distance, even tidal or rainfall-induced stresses can be of similar size at depth. To account for simultaneous stress sources in a physically meaningful way, we apply a seismicity model based on calculated stress changes in the crust and laboratory-derived friction laws. Based on the observed seismicity, the model parameters can be determined by the maximum likelihood method. The model leads to quantitative predictions of the variation of seismicity rate in space and time, which can be used for hypothesis testing and forecasting. For case studies in Talala (India), Val d'Agri (Italy) and Novy Kostel (Czech Republic), we compare predicted and observed seismicity, demonstrating the potential and limitations of the approach.
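One widely used closed form for such friction-law-based seismicity models (the abstract does not specify which variant is applied here) is the Dieterich (1994) rate response to a Coulomb stress step Δτ under otherwise constant stressing, R(t) = r / [1 + (e^(−Δτ/Aσ) − 1) e^(−t/t_a)]. A sketch with arbitrary example parameters:

```python
import math

def dieterich_rate(t, dtau, a_sigma, t_a, r=1.0):
    """Dieterich (1994) seismicity rate at time t after a stress step dtau.

    dtau   : Coulomb stress step (same units as a_sigma)
    a_sigma: constitutive parameter A*sigma controlling the size of the rate jump
    t_a    : aftershock relaxation time (rate decays back to r over ~t_a)
    r      : background seismicity rate
    """
    return r / (1.0 + (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a))

# Immediately after a positive step the rate jumps by a factor exp(dtau/a_sigma),
# then relaxes back toward the background rate r as t grows past t_a.
```

Summing the stress histories of reservoir load, diffusion, tides and tectonic loading before evaluating such a response is what makes the simultaneous-source treatment physically consistent.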
Comparing Low-Frequency Earthquakes During Triggered and Ambient Tremor in Taiwan
NASA Astrophysics Data System (ADS)
Alvarado Lara, F., Sr.; Ledezma, C., Sr.
2014-12-01
Revision of the Applicability of the NGA's in South America, Chile - Argentina.
NASA Astrophysics Data System (ADS)
Alvarado Lara, F., Sr.; Ledezma, C., Sr.
2015-12-01
In South America, the largest-magnitude seismic events originate in the subduction zone between the Nazca and South American plates, as opposed to crustal events. Crustal seismic events are important in areas very close to active faults; however, seismic hazard analyses incorporate crustal events only up to a maximum distance from the site under study. In order to use crustal events in a seismic hazard analysis, it is necessary to use attenuation relationships that represent the seismic behavior of the site under study. Unfortunately, in South America the amount of compiled crustal-event historical data is not yet sufficient to generate a firm regional attenuation relationship. In the absence of attenuation relationships for crustal earthquakes in the region, the conventional approach is to use attenuation relationships from other regions that have a large amount of compiled data and seismic conditions similar to the site under study. This practice permits seismic hazard analysis with a certain margin of accuracy. In South American engineering practice, next-generation attenuation relationships (NGA-W) are used, among other alternatives, to incorporate the effect of crustal events in seismic hazard analysis. In 2014, NGA-W Version 2 (NGA-W2) was released with a database containing information from Taiwan, Turkey, Iran, the USA, Mexico, Japan, and Alaska. This paper examines whether it is acceptable to use NGA-W2 in seismic hazard analysis in South America. To support the examination, response spectra computed in accordance with NGA-W2 are compared with actual response spectra of crustal events from Argentina. The seismic data were gathered from instruments installed in the cities of Santiago, Chile and Mendoza, Argentina.
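The response spectra compared in studies like this one are obtained by driving a family of damped single-degree-of-freedom oscillators with the recorded accelerogram and taking the peak response at each period. A self-contained sketch using Newmark average-acceleration integration (5% damping is the conventional choice; this is a generic illustration, not the NGA-W2 procedure):

```python
import numpy as np

def response_spectrum(accel, dt, periods, damping=0.05):
    """Pseudo-acceleration response spectrum of a ground acceleration record.

    Integrates a unit-mass damped SDOF oscillator for each period with the
    Newmark average-acceleration scheme (beta=1/4, gamma=1/2) and returns
    Sa = wn**2 * max|u| for each period.
    """
    accel = np.asarray(accel, dtype=float)
    sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        m, c, k = 1.0, 2.0 * damping * wn, wn ** 2
        kh = k + 2.0 * c / dt + 4.0 * m / dt ** 2     # effective stiffness
        u = v = 0.0
        p_prev = -m * accel[0]                        # effective earthquake force
        a = (p_prev - c * v - k * u) / m
        umax = 0.0
        for ag in accel[1:]:
            p = -m * ag
            dph = (p - p_prev) + (4.0 * m / dt + 2.0 * c) * v + 2.0 * m * a
            du = dph / kh
            v += 2.0 * du / dt - 2.0 * v
            u += du
            p_prev = p
            a = (p - c * v - k * u) / m               # re-enforce equilibrium
            umax = max(umax, abs(u))
        sa.append(wn ** 2 * umax)
    return np.array(sa)
```

Plotting Sa against period for a recorded event and for the NGA-W2 prediction at the same magnitude and distance is the comparison the paper describes.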
A computer program to trace seismic ray distribution in complex two-dimensional geological models
Yacoub, Nazieh K.; Scott, James H.
1970-01-01
A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, in which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program include problem identification, control parameters, model coordinates and elastic parameters for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries and computes the total travel time, total travel distance and other parameters for rays emerging at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in FORTRAN IV; a listing is shown in the Appendix, with example output from a CDC-6600 computer.
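The geometric core of such a ray tracer is Snell's law applied at each digitized boundary. The original program is FORTRAN IV; here is a modern-language sketch of the boundary step (transmitted angle only; the amplitude and energy partitioning the program performs is omitted):

```python
import math

def refract(theta_inc_deg, v1, v2):
    """Snell's law at an elastic boundary.

    theta_inc_deg: incidence angle measured from the boundary normal (degrees)
    v1, v2       : wave speeds on the incident and transmitted sides
    Returns the transmitted angle in degrees, or None beyond the critical
    angle (total reflection / head-wave regime).
    """
    s = (v2 / v1) * math.sin(math.radians(theta_inc_deg))
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))
```

For example, a ray incident at 30 degrees onto a boundary with a 2:1 velocity increase is refracted to 90 degrees, i.e. critically refracted along the interface, which is exactly the geometry seismic-refraction surveys exploit.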
Retrieval of P wave Basin Response from Autocorrelation of Seismic Noise-Jakarta, Indonesia
NASA Astrophysics Data System (ADS)
Saygin, E.; Cummins, P. R.; Lumley, D. E.
2016-12-01
Indonesia's capital city, Jakarta, is home to a very large (over 10 million) and vulnerable population. It lies close to known active faults and to the subducting Australian plate, whose megathrust is at about 300 km distance and whose intraslab seismicity extends to directly beneath the city. The city is also located in a basin filled with a thick layer of unconsolidated and poorly consolidated sediment, which increases the seismic hazard it faces. Information on the seismic velocity structure of the basin is therefore crucial for improving our knowledge of the seismic risk. We undertook a passive deployment of broadband seismographs throughout the city over a 3-month interval in 2013-2014, recording ambient seismic noise at over 90 sites for intervals of 1 month or more. Here we consider autocorrelations of the vertical component of the continuously recorded seismic wavefield across this dense network to image the shallow P wave velocity structure of Jakarta, Indonesia. Unlike the surface wave Green's functions used in ambient noise tomography, the vertical-component autocorrelograms are dominated by body wave energy that is potentially sensitive to sharp velocity contrasts, which makes them useful in seismic imaging. Results show autocorrelograms at different seismic stations with travel time variations that largely reflect changes in sediment thickness across the basin. We also confirm the validity of our interpretation of the observed autocorrelation waveforms by conducting 2D finite difference full waveform numerical modeling for randomly distributed seismic sources to retrieve the reflection response through autocorrelation.
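The autocorrelation processing described here can be sketched as: window the continuous vertical record, autocorrelate each window via its power spectrum, normalize, and stack. A schematic version (the study's actual preprocessing, filtering and normalization choices are not specified in the abstract and are assumptions here):

```python
import numpy as np

def noise_autocorrelogram(trace, fs, max_lag_s, win_s=60.0):
    """Stacked one-sided autocorrelation of a continuous vertical-component record.

    trace    : continuous vertical-component samples
    fs       : sampling rate (Hz)
    max_lag_s: maximum lag to keep (s); must not exceed win_s
    win_s    : window length (s); nwin should be even for a clean irfft
    Peaks at positive lags relate to two-way travel times of P reflections
    beneath the station, e.g. from the base of the sedimentary basin.
    """
    nwin = int(win_s * fs)
    nlag = int(max_lag_s * fs)
    stack = np.zeros(nlag)
    count = 0
    for i in range(0, len(trace) - nwin + 1, nwin):
        w = trace[i:i + nwin] * np.hanning(nwin)
        ac = np.fft.irfft(np.abs(np.fft.rfft(w)) ** 2)  # circular autocorrelation
        z0 = ac[0] if ac[0] > 0 else 1.0                 # zero-lag energy
        stack += ac[:nlag] / z0
        count += 1
    return stack / max(count, 1)
```

Comparing the lag of the dominant post-zero-lag peak between stations is what reveals the basin-thickness variations reported above.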
Dynamic response analysis of surrounding rock under the continuous blasting seismic wave
NASA Astrophysics Data System (ADS)
Gao, P. F.; Zong, Q.; Xu, Y.; Fu, J.
2017-10-01
Blasting vibration caused by blasting excavation can adversely affect the stability of the surrounding rock in underground engineering. A dynamic response analysis of the surrounding rock under continuous blasting seismic waves is therefore carried out to optimize blasting parameters and guide underground construction. Based on wavelet analysis, reconstructed signals for each frequency band are obtained by db8 wavelet decomposition. The differences in dynamic response at a given point to continuous blasting seismic waves from different blasting sources are discussed. The dynamic response characteristics show that the signal in the frequency band containing the natural frequency of the surrounding rock is amplified to a certain degree. Continuous blasting operations in a fixed space will change the internal structure of the surrounding rock; this may lower the natural frequency of the surrounding rock as a whole and is harmful to its stability.
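The study decomposes the blast records with a db8 wavelet, normally done with a standard wavelet toolkit. As a dependency-free illustration of the same idea (an orthogonal two-channel filter bank splitting a signal into low- and high-frequency bands), here is a one-level Haar transform, the simplest member of the Daubechies family:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar discrete wavelet transform.

    Splits a signal into approximation (low-frequency) and detail
    (high-frequency) coefficients; odd-length input is pad-extended.
    Multilevel db8 decomposition, as used in the study, repeats this kind
    of split on the approximation band with longer filters.
    """
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform (perfect reconstruction for even length)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x
```

Reconstructing a single band (zeroing the other before the inverse transform) yields the band-limited signals whose amplification near the rock's natural frequency the analysis examines.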
NASA Astrophysics Data System (ADS)
Wu, W.; Zhu, J. B.; Zhao, J.
2013-02-01
The purpose of this study is to further investigate the seismic response of a set of parallel rock fractures filled with viscoelastic materials, following the work of Zhu et al. Dry quartz sands are used to represent the viscoelastic materials. The split Hopkinson rock bar (SHRB) technique is modified to simulate 1-D P-wave propagation across the sand-filled parallel fractures. First, the displacement and stress discontinuity model (DSDM) describes the seismic response of a sand-filled single fracture. The modified recursive method (MRM) then predicts the seismic response of the sand-filled parallel fractures. The SHRB tests verify the theoretical predictions by DSDM for the sand-filled single fracture and by MRM for the sand-filled parallel fractures. The filling sands cause stress discontinuity across the fractures and promote displacement discontinuity. The wave transmission coefficient for the sand-filled parallel fractures depends on wave superposition between the fractures, similar to the effect of fracture spacing on the transmission coefficient for non-filled parallel fractures.
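For context, the classical displacement discontinuity (linear-slip) model that DSDM generalizes gives, for a P-wave normally incident on a single dry fracture between identical half-spaces, |T(ω)| = [1 + (ωZ/2κ)²]^(−1/2), with κ the fracture specific stiffness and Z = ρc the seismic impedance. A sketch of that baseline (the sand-filled DSDM/MRM expressions of the paper are more involved and are not reproduced here):

```python
import math

def transmission_linear_slip(freq_hz, kappa, impedance):
    """|T| across a single dry fracture in the displacement discontinuity model.

    kappa    : fracture specific stiffness (Pa/m)
    impedance: seismic impedance rho*c of the intact rock (Pa*s/m)
    """
    omega = 2.0 * math.pi * freq_hz
    return 1.0 / math.sqrt(1.0 + (omega * impedance / (2.0 * kappa)) ** 2)
```

Transmission falls with frequency and rises with fracture stiffness, reproducing the low-pass behavior of fractured rock that the filled-fracture experiments probe.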
NASA Astrophysics Data System (ADS)
Inbal, A.; Ampuero, J. P.; Avouac, J.; Lengliné, O.; Helmberger, D. V.
2012-12-01
The March 11, 2011 M9.0 Tohoku-Oki earthquake was recorded by dense seismological and geodetic networks deployed in Japan, as well as by a vast number of seismic stations worldwide. These observations allow us to study the properties of the subduction interface with unprecedented accuracy and resolution. Here we examine the spectral tails of the co- and post-seismic stages using local geodetic and seismological recordings. First, we study the details of high-frequency (HF) energy radiation during the rupture using strong-motion recordings. Second, we jointly invert 1 Hz GPS, ocean-bottom GPS and aftershock data for the spatio-temporal distribution of early afterslip. In order to constrain the spatial distribution of HF radiators, we model waveform envelopes recorded by KiK-net borehole accelerometers located in northeastern Japan. We compute theoretical envelopes for waves traveling in a heterogeneous scattering medium, and invert for the location and amplitude of energy radiators at frequencies ranging from 1 to 16 Hz. Because the inversion is extremely sensitive to the response of individual sites, we adopt an empirical approach and iteratively separate the source and site terms from the stacked spectra of numerous events recorded by the network. The output response functions for each site are used to stabilize the inversion. Preliminary results are consistent with far-field observations and suggest that the HF energy emitted during the M9.0 event originated at the down-dip limit of the rupture zone. We apply waveform cross-correlation to identify repeating events within the aftershock sequence, and locate them by match-filtering their waveforms with known templates. Many of these events occur on seismic asperities loaded by the surrounding creep. We jointly invert the slip histories on these fault patches and the available GPS data for the spatio-temporal distribution of afterslip during the first few hours following the mainshock.
We use the Principal Component Analysis Inversion Method to determine the time history of slip on the megathrust during seismic slip and aseismic afterslip. The eigenfunctions are constrained in an iterative process that incorporates the slip histories of seismic asperities. This approach allows documenting the seismic and aseismic phases in a self-consistent manner. The GPS-only inversion places most of the early afterslip east of the hypocenter up to the trench, an area that seemed to have undergone dynamic overshoot.
An Expedient but Fascinating Geophysical Chimera: The Pinyon Flat Seismic Strain Point Array
NASA Astrophysics Data System (ADS)
Langston, C. A.
2016-12-01
The combination of a borehole Gladwin Tensor Strain Meter (GTSM) and a co-located three-component broadband seismometer (BB) can theoretically be used to determine the propagation attributes of P-SV waves in vertically inhomogeneous media, such as horizontal phase velocity and azimuth of propagation, through application of wave gradiometry. A major requirement for this to be successful is to have well-calibrated strain and seismic sensors in order to rely on absolute wave amplitudes from both systems. A "point" seismic array is constructed using the PBO GTSM station B084 and co-located BB seismic stations from an open array experiment deployed by UCSD, as well as the PFO station at the Pinyon Flat facility. Site amplitude statics for all three ground motion components are found for the 14-element (13 PY stations + PFO), small-aperture seismic array using data from 47 teleseisms recorded from 2014 to the present. Precision of amplitude measurement at each site is better than 0.2% for vertical components, 0.5% for EW components, and 1% for NS components. Relative amplitudes among sites of the array are often better than 1%, attesting to the high quality of the instrumentation and installation. The wavefield and related horizontal strains are computed for the location of B084 using a second-order Taylor expansion of observed waveforms from moderate (~M4) regional events. The computed seismic array areal, differential, and shear strains show excellent correlation in both phase and amplitude with those recorded by B084 when using the calibration matrix previously determined using teleseismic strains from the entire ANZA seismic network. Use of the GTSM-BB "point" array significantly extends the bandwidth of gradiometry calculations over the small-aperture seismic array by nearly two orders of magnitude, from 0.5 Hz down to 0.01 Hz.
In principle, a seismic strain point array could be constructed from every PBO GTSM with a co-located seismometer to help serve earthquake early warning for large regional events on North America's west coast.
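The gradiometry step described above recovers spatial wave gradients (and hence strains) from amplitude samples at closely spaced sensors. A minimal sketch in Python, using a first-order (plane-fit) version of the Taylor expansion with a hypothetical station layout; the real processing uses a second-order expansion and calibrated absolute amplitudes:

```python
import numpy as np

def fit_gradients(coords, values):
    """Least-squares plane fit u(x, y) ~ u0 + (du/dx) x + (du/dy) y.

    A first-order version of the Taylor-expansion approach used in wave
    gradiometry: station coordinates (relative to a reference point) and
    simultaneous amplitude samples yield the field value and its
    horizontal gradients at the reference point.
    """
    coords = np.asarray(coords, float)
    G = np.column_stack([np.ones(len(coords)), coords[:, 0], coords[:, 1]])
    m, *_ = np.linalg.lstsq(G, np.asarray(values, float), rcond=None)
    return m  # (u0, du/dx, du/dy)

# Synthetic plane wave sampled by a small-aperture array (hypothetical layout)
rng = np.random.default_rng(0)
stations = rng.uniform(-100.0, 100.0, size=(14, 2))   # metres
kx, ky = 2e-3, 1e-3                                   # wavenumber components (rad/m)
u = np.sin(kx * stations[:, 0] + ky * stations[:, 1])

u0, dudx, dudy = fit_gradients(stations, u)
# for a small-aperture array the fitted gradients approach (kx, ky)
```

The ratio of temporal to spatial derivatives then gives the horizontal slowness, which is the quantity compared against the GTSM strains.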
NASA Astrophysics Data System (ADS)
Zhang, Yan; Fu, Li-Yun; Ma, Yuchuan; Hu, Junhua
2016-11-01
Zuojiazhuang and Baodi are two adjacent wells (~50 km apart) in northern China. The large 2008 Mw 7.9 Wenchuan and 2011 Mw 9.1 Tohoku earthquakes induced different co-seismic water-level responses in these far-field (>1000 km) wells. The co-seismic water-level changes in the Zuojiazhuang well exhibited large amplitudes (~2 m), whereas those in the Baodi well were small and unclear (~0.05 m). The mechanism behind the different co-seismic hydraulic responses of the two wells remains to be explained. In this study, we used the barometric responses in different frequency domains and the phase shifts and amplitude ratios of the tidal responses (M2 wave), together with the well logs, to explain this discrepancy. Our calculations show that the co-seismic phase shifts of the M2 wave decreased or remained unchanged in the Baodi well, quite unlike the Zuojiazhuang well and the commonly accepted behavior. According to the well logs, the lithology of the Baodi well is characterized by the presence of a significant amount of shale. The low porosity and permeability of the shale in the Baodi well could be the cause of the unchanged and decreased phase shifts and the tiny co-seismic water-level responses. In addition, shale is one of the causes of positive phase shifts and indicates a vertical water flow, which may be due to a semi-confined aquifer or the complex and anisotropic fracturing of shale.
Properties of the seismic nucleation phase
Beroza, G.C.; Ellsworth, W.L.
1996-01-01
Near-source observations show that earthquakes begin abruptly at the P-wave arrival, but that this beginning is weak, with a low moment rate relative to the rest of the main shock. We term this initial phase of low moment rate the seismic nucleation phase. We have observed the seismic nucleation phase for a set of 48 earthquakes ranging in magnitude from 1.1-8.1. The size and duration of the seismic nucleation phase scale with the total seismic moment of the earthquake, suggesting that the process responsible for the seismic nucleation phase carries information about the eventual size of the earthquake. The seismic nucleation phase is characteristically followed by quadratic growth in the moment rate, consistent with self-similar rupture at constant stress drop. In this paper we quantify the properties of the seismic nucleation phase and offer several possible explanations for it.
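The quadratic growth in moment rate noted above implies cubic growth of the cumulative moment, so rupture duration scales as the cube root of moment. A small arithmetic sketch of that scaling (the constant A is purely illustrative, not a value from the paper):

```python
# Self-similar rupture at constant stress drop implies a moment rate
# growing quadratically, dM/dt = A * t**2, so the cumulative moment is
# M0(T) = A * T**3 / 3 and duration scales as M0**(1/3).
A = 1.0e16              # N*m/s^3, hypothetical growth constant

def moment(T):
    """Cumulative seismic moment after rupture time T (seconds)."""
    return A * T ** 3 / 3.0

# doubling the duration multiplies the cumulative moment by 2**3 = 8
ratio = moment(2.0) / moment(1.0)
```

This cube-root duration scaling is the sense in which the nucleation phase "carries information about the eventual size of the earthquake."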
Characterizing 6 August 2007 Crandall Canyon mine collapse from ALOS PALSAR InSAR
Lu, Zhong; Wicks, Charles
2010-01-01
same as the moment of the collapse source, with each larger than the seismically computed moment. Our InSAR results, including the location of the event, the extent of the collapsed area, and constraints on the shearing component of the deformation source, all confirm and extend recent seismic studies of the 6 August 2007 event.
Possible artifacts in inferring seismic properties from X-ray data
NASA Astrophysics Data System (ADS)
Bosak, A.; Krisch, M.; Chumakov, A.; Abrikosov, I. A.; Dubrovinsky, L.
2016-11-01
We consider the experimental and computational artifacts relevant for the extraction of aggregate elastic properties of polycrystalline materials with particular emphasis on the derivation of seismic velocities. We use the case of iron as an example, and show that the improper use of definitions and neglecting the crystalline anisotropy can result in unexpectedly large errors up to a few percent.
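As a concrete illustration of how definition choices propagate into derived velocities, here is a hedged sketch of Voigt-Reuss-Hill averaging for a cubic crystal. The single-crystal constants are approximate ambient-condition values for bcc iron (illustrative only, not the high-pressure data discussed in the paper):

```python
import math

# Voigt and Reuss bounds on the aggregate shear modulus of a cubic
# crystal, from single-crystal stiffnesses c11, c12, c44; the bulk
# modulus is the same for both bounds in the cubic case.
c11, c12, c44 = 230.0e9, 135.0e9, 117.0e9   # Pa, approximate bcc iron
rho = 7874.0                                # kg/m^3

K = (c11 + 2.0 * c12) / 3.0
G_voigt = (c11 - c12 + 3.0 * c44) / 5.0
G_reuss = 5.0 / (4.0 / (c11 - c12) + 3.0 / c44)
G_vrh = 0.5 * (G_voigt + G_reuss)           # Hill average

def v_p(K, G, rho):
    """Isotropic P-wave velocity from bulk and shear moduli."""
    return math.sqrt((K + 4.0 * G / 3.0) / rho)

# choosing the Voigt or Reuss bound instead of a proper average shifts
# the derived vP by a few percent, the size of error the paper warns about
vp_spread = (v_p(K, G_voigt, rho) - v_p(K, G_reuss, rho)) / v_p(K, G_vrh, rho)
```

For these illustrative constants the Voigt-Reuss spread in vP is of order 3-4 percent, i.e. comparable to the "unexpectedly large errors" quoted above.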
NASA Astrophysics Data System (ADS)
Beucler, E.; Haugmard, M.; Mocquet, A.
2016-12-01
The most widely used inversion schemes for locating earthquakes are based on iterative linearized least-squares algorithms and on a priori knowledge of the propagation medium. When only a small number of observations is available, as for moderate events, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous sampling, using Markov chains, generates models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method reduces the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
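A toy version of the Markov-chain Monte Carlo sampling described above, reduced to a homogeneous 1-D medium with a hypothetical four-station geometry; the real scheme additionally samples layered velocity structure, but the Metropolis mechanics are the same:

```python
import numpy as np

# Joint sampling of source position x, origin time t0 and medium
# velocity v from noisy arrival times (all numbers hypothetical).
rng = np.random.default_rng(1)
x_true, t0_true, v_true = 12.0, 3.0, 5.0        # km, s, km/s
stations = np.array([0.0, 20.0, 35.0, 50.0])    # km
sigma = 0.05                                    # picking error (s)
t_obs = t0_true + np.abs(stations - x_true) / v_true
t_obs = t_obs + rng.normal(0.0, sigma, size=stations.size)

def log_like(x, t0, v):
    t_pred = t0 + np.abs(stations - x) / v
    return -0.5 * np.sum((t_obs - t_pred) ** 2) / sigma ** 2

# Metropolis random walk with broad uniform priors
m = np.array([25.0, 0.0, 4.0])                  # deliberately poor start
ll = log_like(*m)
chain = []
for _ in range(40000):
    prop = m + rng.normal(0.0, [0.3, 0.05, 0.05])
    if 0 < prop[0] < 50 and -5 < prop[1] < 10 and 2 < prop[2] < 8:
        ll_p = log_like(*prop)
        if np.log(rng.uniform()) < ll_p - ll:   # accept/reject
            m, ll = prop, ll_p
    chain.append(m.copy())
chain = np.array(chain[10000:])                 # discard burn-in
x_est, t0_est, v_est = chain.mean(axis=0)
```

The posterior spread of the chain, not just its mean, is the point of the method: it exposes the trade-offs between origin time, depth and structure that a linearized inversion hides.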
Monitoring the englacial fracture state using virtual-reflector seismology
NASA Astrophysics Data System (ADS)
Lindner, F.; Weemstra, C.; Walter, F.; Hadziioannou, C.
2017-12-01
Fracturing and changes in the englacial macroscopic water content change the elastic bulk properties of ice bodies. Small seismic velocity variations, resulting from such changes, can be measured using a technique called coda-wave interferometry. Here, coda refers to the later-arriving, multiply scattered waves. Often, this technique is applied to so-called virtual-source responses, which can be obtained using seismic interferometry (a simple crosscorrelation process). Compared to other media (e.g., the Earth's crust), however, ice bodies exhibit relatively little scattering. This complicates the application of coda-wave interferometry to the retrieved virtual-source responses. In this work, we therefore investigate the applicability of coda-wave interferometry to virtual-source responses obtained using two alternative seismic interferometric techniques, namely, seismic interferometry by multidimensional deconvolution (SI by MDD), and virtual-reflector seismology (VRS). To that end, we use synthetic data, as well as active-source glacier data acquired on Glacier de la Plaine Morte, Switzerland. Both SI by MDD and VRS allow the retrieval of more accurate virtual-source responses. In particular, the dependence of the retrieved virtual-source responses on the illumination pattern is reduced. We find that this results in more accurate glacial phase-velocity estimates. In addition, VRS introduces virtual reflections from a receiver contour (partly) enclosing the medium of interest. By acting as a sort of virtual reverberation, the coda resulting from the application of VRS significantly increases seismic monitoring capabilities, in particular in cases where natural scattering coda is not available.
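Coda-wave interferometry is commonly implemented with the stretching method: a homogeneous relative velocity change dv/v stretches the coda in time, and one grid-searches the stretch factor that best aligns a current trace with a reference trace. A synthetic sketch (all parameters illustrative, not the glacier data):

```python
import numpy as np

# Build a synthetic "coda" as a decaying sum of random sinusoids, then
# impose a 0.4 % velocity increase, which compresses the coda in time.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 20.0, 4001)
freqs = rng.uniform(1.0, 5.0, 40)
phases = rng.uniform(0.0, 2 * np.pi, 40)
coda = np.exp(-0.15 * t) * sum(np.sin(2 * np.pi * f * t + p)
                               for f, p in zip(freqs, phases))

true_dvv = 0.004
perturbed = np.interp(t * (1 + true_dvv), t, coda)

# Grid search: the stretch maximizing the correlation coefficient
# between the stretched reference and the perturbed trace estimates dv/v.
grid = np.linspace(-0.01, 0.01, 201)
cc = [np.corrcoef(np.interp(t * (1 + e), t, coda), perturbed)[0, 1]
      for e in grid]
dvv_est = grid[int(np.argmax(cc))]
```

The later the coda window, the longer the accumulated path length, which is why multiply scattered (or, with VRS, virtually reverberated) energy makes such small dv/v values measurable.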
NASA Astrophysics Data System (ADS)
Baddari, Kamel; Bellalem, Fouzi; Baddari, Ibtihel; Makdeche, Said
2016-10-01
Statistical tests have been used to adjust the Zemmouri seismic data using a distribution function. The Pareto law has been used and the probabilities of various expected earthquakes were computed. A mathematical expression giving the quantiles was established. The limiting law of extreme values confirmed the accuracy of the adjustment method. Using the moment magnitude scale, a probabilistic model was built to predict the occurrence of strong earthquakes. The seismic structure has been characterized by the slope of the recurrence plot γ, the fractal dimension D, the concentration parameter Ksr, and the Hurst exponents Hr and Ht. The values of D, γ, Ksr, Hr, and Ht diminished many months before the principal seismic shock (M = 6.9) of the studied seismoactive zone occurred. Three stages of deformation of the geophysical medium are manifested in the variation of the coefficient G% of the clustering of minor seismic events.
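The slope of the recurrence plot is commonly estimated by maximum likelihood. As an illustration only, here is the standard Aki estimator applied to a synthetic exponential catalogue (not the Zemmouri data; the b-value of 1.0 is an arbitrary choice):

```python
import numpy as np

# Gutenberg-Richter magnitudes above a completeness threshold m_min
# follow an exponential law with rate beta = b * ln(10); Aki's (1965)
# maximum-likelihood b-value is log10(e) / (mean(M) - m_min).
rng = np.random.default_rng(3)
b_true, m_min = 1.0, 2.0
beta = b_true * np.log(10.0)
mags = m_min + rng.exponential(1.0 / beta, size=5000)

b_est = np.log10(np.e) / (mags.mean() - m_min)

# the fitted slope gives exceedance probabilities, e.g. P(M >= 6.9)
p_exceed = 10.0 ** (-b_est * (6.9 - m_min))
```

A drop in this slope before a mainshock, as reported above, corresponds to a relative enrichment in larger events.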
2005-05-01
Seismic Structural Considerations for the Stem and Base of Retaining Walls ... as represented by response spectra are determined. Several modes of vibration are considered. The number of modes included in the analysis is that ... response spectrum-modal analysis procedure. Especially important is the number of excursions beyond acceptable displacement. As with the response
NASA Astrophysics Data System (ADS)
Ren, Z.; Zhang, Z.; Zhang, H.; Zheng, W.; Zhang, P. Z.
2017-12-01
The widely held understanding that reverse-faulting earthquakes play an important role in building mountains has been challenged by recent studies suggesting that co-seismic landslides of the 2008 Mw 7.9 Wenchuan earthquake led to a net co-seismic lowering of surface height. We use precise estimates of co-seismic landslide volumes to calculate the long-term isostatic response to landsliding during the 2008 Wenchuan earthquake. The total isostatic rebound volume is 2.0 km3; this volume is largely insensitive to the effective elastic thickness Te, although the spatial distribution of the rebound does depend on Te. The total co-seismic mass change could be 1.8 km3. The maximum isostatic response due to the Wenchuan earthquake may have been as high as 0.9 meters in the highest Pengguan massif of the central Longmen Shan. We also find that the average net uplift is 0.16 meters within the total landslide region due to the Wenchuan earthquake. Our findings suggest that the local topographic evolution of the middle Longmen Shan region is closely related to repeated tectonic events such as the 2008 Wenchuan earthquake.
Computational sciences in the upstream oil and gas industry
Halsey, Thomas C.
2016-01-01
The predominant technical challenge of the upstream oil and gas industry has always been the fundamental uncertainty of the subsurface from which it produces hydrocarbon fluids. The subsurface can be detected remotely by, for example, seismic waves, or it can be penetrated and studied in the extremely limited vicinity of wells. Inevitably, a great deal of uncertainty remains. Computational sciences have been a key avenue to reduce and manage this uncertainty. In this review, we discuss at a relatively non-technical level the current state of three applications of computational sciences in the industry. The first of these is seismic imaging, which is currently being revolutionized by the emergence of full wavefield inversion, enabled by algorithmic advances and petascale computing. The second is reservoir simulation, also being advanced through the use of modern highly parallel computing architectures. Finally, we comment on the role of data analytics in the upstream industry. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597785
NASA Astrophysics Data System (ADS)
Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees
2010-05-01
Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there were a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by crosscorrelation of responses. Recently, an alternative implementation was proposed as SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both source-sampling and source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields.
We address specific problems for teleseismic wavefields, such as long and complicated source wavelets, source-side reverberations and illumination gaps. We exemplify the feasibility of SI by MDD on synthetic data, based on field data from the Laramie and the POLARIS-MIT array. van Groenestijn, G.J.A. & Verschuur, D.J., 2009. Estimation of primaries by sparse inversion from passive seismic data, Expanded abstracts, 1597-1601, SEG. van der Neut, J.R, Ruigrok, E.N., Draganov, D.S., & Wapenaar, K., 2010. Retrieving the earth's reflection response by multi-dimensional deconvolution of ambient seismic noise, Extended abstracts, submitted, EAGE. Wapenaar, K., van der Neut, J., & Ruigrok, E.N., 2008. Passive seismic interferometry by multidimensional deconvolution, Geophysics, 75, A51-A56.
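The conventional crosscorrelation form of SI mentioned above can be illustrated in one dimension: correlating two receivers' recordings of the same transient arrival yields a peak at the inter-receiver traveltime, as if one receiver were a virtual source. Geometry and wavelet below are hypothetical:

```python
import numpy as np

# Receiver A records a transient arrival; receiver B records the same
# arrival 0.8 s later. Their crosscorrelation peaks at the +0.8 s lag,
# i.e. the traveltime of a virtual source placed at receiver A.
dt, n = 0.01, 2048
t = np.arange(n) * dt
wavelet = np.exp(-((t - 2.0) / 0.05) ** 2)      # arrival at receiver A
delay = 0.8                                     # inter-receiver traveltime (s)
rec_a = wavelet
rec_b = np.interp(t - delay, t, wavelet)        # same arrival, delayed

xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = (np.arange(xcorr.size) - (n - 1)) * dt
lag_peak = lags[int(np.argmax(xcorr))]
```

MDD replaces this correlation by a deconvolution that also removes the source wavelet and irregular illumination, which is exactly what the long, complicated teleseismic wavelets discussed above call for.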
Seismoelectric ground response to local and regional earthquakes
NASA Astrophysics Data System (ADS)
Dzieran, Laura; Rabbel, Wolfgang; Thorwart, Martin; Ritter, Oliver
2017-04-01
During earthquakes, magnetotelluric stations occasionally record electric and magnetic signals similar to seismograms. The major part of these magnetic signals is induced by the seismic movement of the magnetometers (induction coils) in the static magnetic field. In contrast, the electric field signals are caused by the seismoelectric effect. Based on more than 600 earthquakes from Chile, Costa Rica and Europe, we established a logarithmic magnitude-distance relationship describing the magnitude threshold to be exceeded for observing seismoelectric (SE) signals with standard magnetotelluric (MT) recording units at a given hypocentral distance r and for noise levels of less than 3 μV/m. The log(r) term results from the geometric spreading of the radiated seismic waves. A comparison of SE signals at different hypocentral distances shows that observability is not only influenced by the amplitude of the incoming seismic wave. It also depends on the geological structure underneath the station, which causes a unique frequency-dependent SE response. To quantify these site effects we computed spectral seismoelectric transfer functions representing the ratios of the spectral amplitudes of SE records and acceleration seismograms (SESRs). Some stations show constant SESRs in the major frequency range, while others show a decrease with increasing frequency. Based on current Biot-type seismoelectric theory, constant SESRs can be explained by coseismic SE waves alone. The observed SESR amplitudes at some sites are indeed consistent with theoretical expectations for electrically highly resistive soils or rocks, in agreement with the local geology of the investigated areas. The frequency dependence of SESRs observed at other locations can be explained if the incident SE waves consist not only of coseismic arrivals but also of a significant contribution from SE interface response waves, which are generated at electrical or mechanical boundaries.
Therefore, frequency-dependent SESRs can be regarded as an expression of a seismoelectric site effect, which depends strongly on the hydraulic and lithologic conditions underneath the recording station.
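An SESR-style transfer function is, at its core, a ratio of amplitude spectra. A hedged synthetic sketch: if the electric record were a purely scaled copy of the acceleration (coseismic coupling only), the ratio would be flat at the coupling constant. The scale value and noise level below are hypothetical, not measured couplings:

```python
import numpy as np

# Synthetic acceleration record and a seismoelectric record that is a
# scaled copy plus instrument noise; their spectral ratio (the SESR
# analogue) is then approximately constant across frequency.
rng = np.random.default_rng(4)
fs, n = 100.0, 4096
accel = rng.normal(size=n)                  # stand-in acceleration trace
scale = 3e-6                                # hypothetical coupling, (V/m)/(m/s^2)
electric = scale * accel + 1e-9 * rng.normal(size=n)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
ratio = np.abs(np.fft.rfft(electric)) / np.abs(np.fft.rfft(accel))
band = (freqs > 1.0) & (freqs < 20.0)
sesr = ratio[band].mean()                   # flat ratio -> coseismic-only response
```

A frequency-dependent ratio, by contrast, signals the additional interface-response contribution described above.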
Microtremors for seismic response assessments of important modern and historical structures of Crete
NASA Astrophysics Data System (ADS)
Margarita, Moisidi; Filippos, Vallianatos
2017-12-01
Strengthening seismic risk resilience undertaken by the civil protection authorities is an important issue in line with the guidelines given by the Sendai Framework, the 2013 European Union Civil Protection legislation and the global 2030 Agenda for sustainable development. Moreover, in recent years it has been emphasized that site effect specifications are important for the seismic hazard assessment of modern, historical and monumental structures. This study assesses the frequencies of vibration of historical, monumental and modern structures in the cities of Chania, Rethymno and Heraklion in Crete using ambient noise recordings processed through the Horizontal-to-Vertical spectral ratio, and examines potential soil-structure interaction phenomena. Examples of the seismic response of high-rise structures such as a church bell tower and the lighthouses in Chania are presented.
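A minimal sketch of the H/V processing chain, using synthetic noise with an artificial 2 Hz horizontal resonance; the window and smoothing choices are arbitrary illustrations, not the authors' processing parameters:

```python
import numpy as np

# Three-component ambient noise: the horizontals carry a coherent 2 Hz
# resonance that the vertical lacks, so the smoothed H/V spectral ratio
# peaks near the fundamental frequency.
rng = np.random.default_rng(5)
fs, n = 100.0, 2 ** 14
t = np.arange(n) / fs
vert = rng.normal(size=n)
resonance = 3.0 * np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
horiz_ns = rng.normal(size=n) + resonance
horiz_ew = rng.normal(size=n) + resonance

freqs = np.fft.rfftfreq(n, d=1.0 / fs)

def amp(x):
    """Amplitude spectrum with crude boxcar smoothing."""
    s = np.abs(np.fft.rfft(x))
    kernel = np.ones(32) / 32.0
    return np.convolve(s, kernel, mode="same")

hv = np.sqrt(amp(horiz_ns) * amp(horiz_ew)) / amp(vert)
band = (freqs > 0.5) & (freqs < 10.0)
f0 = freqs[band][int(np.argmax(hv[band]))]   # estimated fundamental frequency
```

Comparing the peak frequency of a building's H/V curve with that of the underlying soil is how potential soil-structure resonance is flagged.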
Data Quality Control of the French Permanent Broadband Network in the RESIF Framework
NASA Astrophysics Data System (ADS)
Grunberg, Marc; Lambotte, Sophie; Engels, Fabien; Dretzen, Remi; Hernandez, Alain
2014-05-01
In the framework of the RESIF (Réseau Sismologique et géodésique Français) project, a new information system is being set up to improve the management and distribution of high-quality data from the different elements of RESIF and the associated networks. Within this information system, EOST (in Strasbourg) is in charge of collecting real-time permanent broadband seismic waveforms and performing quality control on these data. The real-time and validated data sets are pushed to the French National Distribution Center (ISTerre/Grenoble) to make them publicly available. Furthermore, EOST hosts the BCSF-ReNaSS, in charge of the French metropolitan seismic bulletin, which allows it to benefit from high-end quality control based on the national and worldwide seismicity. Here we present first the real-time seismic data flow from the stations of the French National Broadband Network to EOST, and then the data quality control procedures that were recently installed, including some new developments. The data quality control consists in applying a variety of subprocesses to check the consistency of the whole system and process, from the stations to the data center. This allows us to verify that instruments and data transmission are operating correctly. Moreover, analysis of the ambient noise helps to characterize the intrinsic seismic quality of the stations and to identify other kinds of disturbances. The deployed quality control consists of a pipeline that starts with low-level procedures: checking the real-time miniSEED data files (file naming convention, data integrity), checking for inconsistencies between waveforms and metadata (channel name, sample rate, etc.), and computing waveform statistics (data availability, gaps/overlaps, mean, rms, timing quality, spikes).
It is followed by high-level procedures such as power spectral density (PSD) computation, STA/LTA computation to be correlated with the seismicity, phase picking, and checks of station magnitude discrepancies. The results of the quality control are visualized through a web interface. The latter gathers data from different information systems to provide a global view of recent events that could impact the data (such as interventions on site or seismic events). This work is still an ongoing project. We intend to add more sophisticated procedures to enhance our data quality control. Among them, we will deploy a seismic moment tensor inversion tool for amplitude, time and polarity control, and a noise correlation procedure for time-drift detection.
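The STA/LTA check listed above can be sketched as a sliding-window energy ratio; the window lengths and threshold below are typical but arbitrary choices, not RESIF's operational settings:

```python
import numpy as np

def sta_lta(x, fs, sta_win=1.0, lta_win=30.0):
    """Classic STA/LTA: ratio of short-term to long-term average energy.

    Both averages are computed with cumulative sums and aligned so each
    window ends at the current sample; high ratios flag arrivals.
    """
    e = x.astype(float) ** 2
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    m = min(sta.size, lta.size)
    return sta[sta.size - m:] / np.maximum(lta[lta.size - m:], 1e-20)

# Synthetic trace: background noise with a 2 s burst at t = 60 s
fs = 100.0
rng = np.random.default_rng(6)
trace = rng.normal(0.0, 1.0, int(120 * fs))
trace[int(60 * fs):int(62 * fs)] += 8.0 * rng.normal(size=int(2 * fs))

ratio = sta_lta(trace, fs)
triggered = ratio.max() > 5.0               # detection threshold
```

Correlating such triggers with the catalogued seismicity, as the abstract describes, separates genuine detections from station problems.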
Migration of scattered teleseismic body waves
NASA Astrophysics Data System (ADS)
Bostock, M. G.; Rondenay, S.
1999-06-01
The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
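The Karhunen-Loeve separation step can be sketched with an SVD: in a record section dominated by a common direct arrival, the first principal component approximates the source time function, and subtracting its contribution leaves an estimate of the scattered field. The synthetic section below has no moveout and is for illustration only; the actual procedure first applies the inverse free-surface transfer operator:

```python
import numpy as np

# Record section = (station-dependent amplitude) x (common source pulse)
# plus weak noise standing in for the scattered field.
rng = np.random.default_rng(7)
n_sta, n_t = 30, 1024
t = np.arange(n_t)
source = np.exp(-((t - 300) / 20.0) ** 2)           # common incident wave
section = np.outer(rng.uniform(0.8, 1.2, n_sta), source)
section += 0.05 * rng.normal(size=(n_sta, n_t))

u, s, vt = np.linalg.svd(section, full_matrices=False)
source_est = vt[0] * np.sign(vt[0][300])            # first principal component
scattered = section - np.outer(u[:, 0] * s[0], vt[0])

corr = np.corrcoef(source_est, source)[0, 1]        # quality of source estimate
```

Deconvolving this source estimate from the scattered field is what reduces the data to the scattering impulse response used in the migration.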
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
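The particle-swarm core of MLPSO (single level, without the FGA forward solver) can be sketched on a standard multimodal test function; the inertia and acceleration parameters are conventional choices, not the paper's settings:

```python
import numpy as np

# Basic PSO on the Rastrigin function, a stand-in for a non-convex
# waveform misfit: particles track personal and global bests, which
# lets the swarm escape local minima that trap gradient descent.
rng = np.random.default_rng(8)

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x),
                                     axis=-1)

n_part, dim, iters = 40, 2, 300
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social weights
pos = rng.uniform(-5.12, 5.12, (n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), rastrigin(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_part, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = rastrigin(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

best_misfit = rastrigin(gbest)                 # small value near the global minimum
```

In the multi-level variant, cheap low-fidelity FGA evaluations steer most particles, with high-fidelity solves reserved for the most promising candidates.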
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. With a little extra computational cost, Kirchhoff-type migration can produce multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computational cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
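The upwind finite-difference traveltime step can be sketched with a first-order fast-sweeping eikonal solver. Uniform velocity is used here so the result can be checked against straight-ray times; the grid, velocity and sweep count are illustrative, not the thesis' implementation:

```python
import numpy as np

def fast_sweep(vel, h, src):
    """First-arrival traveltimes from |grad T| = 1/v by upwind
    Gauss-Seidel sweeps over alternating grid orderings."""
    ny, nx = vel.shape
    T = np.full((ny, nx), 1e10)
    T[src] = 0.0
    for _ in range(4):                          # a few full sweep cycles
        for iy in list(range(ny)) + list(range(ny - 1, -1, -1)):
            for ix in list(range(nx)) + list(range(nx - 1, -1, -1)):
                a = min(T[max(iy - 1, 0), ix], T[min(iy + 1, ny - 1), ix])
                b = min(T[iy, max(ix - 1, 0)], T[iy, min(ix + 1, nx - 1)])
                f = h / vel[iy, ix]
                if abs(a - b) >= f:             # one-sided (upwind) update
                    t_new = min(a, b) + f
                else:                           # two-sided quadratic update
                    t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                if (iy, ix) != src:
                    T[iy, ix] = min(T[iy, ix], t_new)
    return T

vel = np.full((41, 41), 2.0)                    # km/s, uniform for checking
T = fast_sweep(vel, h=0.1, src=(20, 20))
# along a grid axis: 2.0 km at 2.0 km/s should take ~1.0 s
```

Because the scheme keeps only the causally smallest value at each node, the output is inherently single-valued, the first-arrival property highlighted in the abstract.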
Coseismic Excitation of the Earth's Polar Motion
NASA Technical Reports Server (NTRS)
Chao, B. F.; Gross, R. S.
2000-01-01
Apart from the "shaking" near the epicenter that is the earthquake, a seismic event creates a permanent field of dislocation in the entire Earth. This redistribution of mass changes (slightly) the Earth's inertia tensor, and the Earth's rotation changes in accordance with the conservation of angular momentum. Similar to this seismic excitation of Earth rotation variations, the same mass redistribution causes (slight) changes in the Earth's gravitational field, expressible in terms of changes in the Stokes coefficients of its harmonic expansion. In this paper, we give a historical background of the subject and discuss the related physics; we then compute the geodynamic effects caused by earthquakes based on a normal-mode summation scheme. The effects are computed using the centroid moment tensor (CMT) solutions for 15,814 major earthquakes from January 1977 through February 1999, as provided in the Harvard CMT catalog. The computational results further strengthen these findings and conclusions: (i) the strong tendency for earthquakes to make the Earth rounder and more compact (however slightly) continues; (ii) so does the trend in the seismic "nudging" of the rotation pole toward the general direction of approximately 140°E, roughly opposite to that of the observed polar drift, but two orders of magnitude smaller in drift speed.
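The angular-momentum argument can be made concrete for the axial component: a change in the Earth's polar moment of inertia changes the length of day in direct proportion. The earthquake-induced dI33 below is a hypothetical order-of-magnitude value, not a result from the paper:

```python
# Conservation of angular momentum (axial component): if the polar
# moment of inertia I33 changes by dI33, the length of day changes by
# dLOD = LOD * dI33 / I33.
I33 = 8.04e37            # kg m^2, Earth's polar moment of inertia
lod = 86400.0            # s, nominal length of day
dI33 = -1.7e27           # kg m^2, hypothetical co-seismic decrease

dlod = lod * dI33 / I33  # about -1.8 microseconds: the day shortens
```

A negative dI33 (mass moved closer to the rotation axis) shortens the day, which is the "rounder and more compact" tendency quantified above.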
NASA Astrophysics Data System (ADS)
Cruz-Atienza, V. M.; Tago, J.; Villafuerte, C. D.; Chaljub, E.; Sanabria-Gómez, J. D.
2017-12-01
Built on top of ancient lake deposits, Mexico City experiences some of the largest seismic site effects in the world. The M7.1 intermediate-depth earthquake of September 19, 2017 (S19) collapsed 43 one- to ten-story buildings in the city close to the western edge of the lake-bed sediments, on top of the geotechnically-known transition zone. In this work we explore the physical reasons explaining such a damage pattern and the long-lasting strong motions well documented from past events, by means of new observations and high-performance computational modeling. Besides the extreme amplification of seismic waves, the duration of intense ground motion in the lake-bed zone lasts more than three times that recorded on hard rock a few kilometers away. Different mechanisms contribute to the long-lasting motions, such as the regional dispersion and multiple scattering of the incoming wavefield all the way from the source. However, recent beamforming observations on hard rock suggest that the duration of the incoming field is significantly shorter than the strong shaking in the lake-bed zone. We show that despite the highly dissipative shallow deposits, seismic energy can propagate long distances in the deep structure of the valley, also promoting a large elongation of motion. Our simulations reveal that the seismic response of the basin is dominated by surface-wave overtones, and that this mechanism increases the duration of ground motion to up to 280% and 500% of the incoming wavefield duration at 0.5 and 0.3 Hz, respectively. Furthermore, our results indicate that the damage pattern of the S19 earthquake is most likely due to the propagation of the fundamental mode in the transition zone of the basin. These conclusions contradict what has been previously stated by observational and modeling investigations, in which the basin itself has been discarded as a preponderant factor promoting long and devastating shaking in Mexico City. Reference: Cruz-Atienza, V. M., J. Tago, J. D.
Sanabria-Gómez, E. Chaljub, V. Etienne, J. Virieux and L. Quintanar. Long Duration of Ground Motion in the Paradigmatic Valley of Mexico. Nature - Scientific Reports, 6, 38807; doi:10.1038/srep38807, 2016.
NASA Astrophysics Data System (ADS)
Lee, William H. K.; Engdahl, E. Robert
2015-02-01
Moment magnitude (MW) determinations from the online GCMT Catalogue of seismic moment tensor solutions (GCMT Catalog, 2011) have provided the bulk of MW values in the ISC-GEM Global Instrumental Reference Earthquake Catalogue (1900-2009) for almost all moderate-to-large earthquakes occurring after 1975. This paper describes an effort to determine MW of large earthquakes that occurred prior to the start of the digital seismograph era, based on credible assessments of thousands of seismic moment (M0) values published in the scientific literature by hundreds of individual authors. MW computed from the published M0 values (for a time period more than twice that of the digital era) are preferable to proxy MW values, especially for earthquakes with MW greater than about 8.5, for which MS is known to be underestimated or "saturated". After examining 1,123 papers, we compile a database of seismic moments and related information for 1,003 earthquakes with published M0 values, of which 967 were included in the ISC-GEM Catalogue. The remaining 36 earthquakes were not included in the Catalogue due to difficulties in their relocation because of inadequate arrival time information. However, 5 of these earthquakes with bibliographic M0 (and thus MW) are included in the Catalogue's Appendix. A search for reliable seismic moments was not successful for earthquakes prior to 1904. For each of the 967 earthquakes a "preferred" seismic moment value (if there is more than one) was selected and its uncertainty was estimated according to the data and method used. We used the IASPEI formula (IASPEI, 2005) to compute direct moment magnitudes (MW[M0]) based on the seismic moments (M0), and assigned their errors based on the uncertainties of M0. 
From 1900 to 1979, there are 129 great or near great earthquakes (MW ⩾ 7.75) - the bibliographic search provided direct MW values for 86 of these events (or 67%), the GCMT Catalog provided direct MW values for 8 events (or 6%), and the remaining 35 (or 27%) earthquakes have empirically determined proxy MW estimates. An electronic supplementary file is included with this paper in order to provide our M0/MW catalogue of earthquakes (1904-1978) from the published literature, and a reference list of the 1,123 papers that we examined.
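As a minimal sketch of the conversion used above (the IASPEI, 2005 relation, assuming M0 is supplied in newton-metres):

```python
import math

def mw_from_m0(m0_nm):
    """Moment magnitude from scalar seismic moment (N·m): Mw = (2/3)(log10 M0 - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)
```

For example, a published moment of 2 x 10^23 N·m maps to Mw ≈ 9.5, well above the range where MS saturates.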
NASA Astrophysics Data System (ADS)
Lane, R. J. L.
2015-12-01
At Geoscience Australia, we are upgrading our gravity and magnetic modeling tools to provide new insights into the composition, properties, and structure of the subsurface. The scale of the investigations varies from the size of tectonic plates to the size of a mineral prospect. To accurately model potential field data at all of these scales, we require modeling software that can operate in both spherical and Cartesian coordinate frameworks. The models are in the form of a mesh, with spherical prismatic (tesseroid) elements for spherical coordinate models of large volumes, and rectangular prisms for smaller volumes evaluated in a Cartesian coordinate framework. The software can compute the forward response of supplied rock property models and can perform inversions using constraints that vary from weak generic smoothness through to very specific reference models compiled from various types of "hard facts" (i.e., surface mapping, drilling information, crustal seismic interpretations). To operate efficiently, the software is being specifically developed to make use of the resources of the National Computational Infrastructure (NCI) at the Australian National University (ANU). The development of these tools is being carried out in collaboration with researchers from the Colorado School of Mines (CSM) and the China University of Geosciences (CUG) and is at the stage of advanced testing. The creation of individual 3D geological models will provide immediate insights. Users will also be able to combine models, either by stitching them together or by nesting smaller and more detailed models within a larger model. Comparison of the potential field response of a composite model with the observed fields will give users a sense of how comprehensively these models account for the observations. Users will also be able to model the residual fields (i.e., the observed minus calculated response) to discover features that are not represented in the input composite model.
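The forward response of such a mesh can be illustrated, in grossly simplified form, by treating each cell as a point mass; a production tesseroid/prism code uses analytic or quadrature formulas instead. The function name and units here are illustrative assumptions:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_masses(cells, obs):
    """Vertical gravity at obs = (x, y, z) from a list of cells, each
    (x, y, z, mass_kg), with z positive downward. Point-mass approximation:
    each cell contributes G*m*dz/r^3 toward +z."""
    gz = 0.0
    for cx, cy, cz, m in cells:
        dx, dy, dz = cx - obs[0], cy - obs[1], cz - obs[2]
        r = (dx * dx + dy * dy + dz * dz) ** 0.5
        gz += G * m * dz / r ** 3
    return gz
```

A single cell of 10^12 kg centered 1 km directly below the observation point gives G*m/r^2 ≈ 6.7 x 10^-5 m/s^2, a quick sanity check for any forward engine.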
Performance analysis of wireless sensor networks in geophysical sensing applications
NASA Astrophysics Data System (ADS)
Uligere Narasimhamurthy, Adithya
Performance is an important criterion to consider before switching from a wired network to a wireless sensing network. Performance is especially important in geophysical sensing, where the quality of the sensing system is measured by the precision of the acquired signal. Can a wireless sensing network maintain the same reliability and quality metrics that a wired system provides? Our work focuses on evaluating the wireless GeoMote sensor motes that were developed by previous computer science graduate students at Mines. Specifically, we conducted a set of experiments, namely WalkAway and Linear Array experiments, to characterize the performance of the wireless motes. The motes were also equipped with the Sticking Heartbeat Aperture Resynchronization Protocol (SHARP), a time synchronization protocol developed by a previous computer science graduate student at Mines. This protocol should automatically synchronize the motes' internal clocks and reduce time synchronization errors. We also collected passive data to evaluate the response of GeoMotes to various frequency components associated with the seismic waves. With the data collected from these experiments, we evaluated the performance of the SHARP protocol and compared the performance of our GeoMote wireless system against the industry-standard wired seismograph system (Geometrics Geode). Using arrival time analysis and seismic velocity calculations, we set out to answer the following question: can our wireless sensing system (GeoMotes) perform similarly to a traditional wired system in a realistic scenario?
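For a linear array, the arrival-time analysis mentioned above reduces to fitting arrival time against source-receiver offset; the inverse of the fitted slope is the apparent velocity. A minimal least-squares sketch (the function name is mine, not from the thesis):

```python
def apparent_velocity(offsets_m, arrivals_s):
    """Fit t = t0 + p*x by least squares and return 1/p, the apparent
    velocity in m/s (p is the slowness along the array)."""
    n = len(offsets_m)
    mx = sum(offsets_m) / n
    mt = sum(arrivals_s) / n
    sxx = sum((x - mx) ** 2 for x in offsets_m)
    sxt = sum((x - mx) * (t - mt) for x, t in zip(offsets_m, arrivals_s))
    slowness = sxt / sxx
    return 1.0 / slowness
```

Comparing this estimate between the wireless and wired recordings of the same shot is one direct way to quantify timing errors such as those SHARP is meant to remove.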
SEISMIC RESPONSE OF DAM WITH SOIL-STRUCTURE INTERACTION.
Bycroft, G.N.; Mork, P.N.
1987-01-01
An analytical solution to the response of a long trapezoidal-section dam on a foundation consisting of an elastic half-space and subjected to simulated earthquake motion is developed. An optimum seismic design is achieved when the cross section of the dam is triangular. The effect of soil-structure interaction is to lower the strain occurring in the dam.
Multicomponent ensemble models to forecast induced seismicity
NASA Astrophysics Data System (ADS)
Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.
2018-01-01
In recent years, human-induced seismicity has become a more and more relevant topic due to its economic and social implications. Several models and approaches have been developed to explain underlying physical processes or forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3.
We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
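The Bayesian weighting and the resulting occurrence probability can be sketched as below, under an assumed Poisson event count. This is a simplification of the paper's framework; all names and the likelihood weighting scheme are illustrative:

```python
import math

def ensemble_rate(rates, log_likelihoods):
    """Mix per-model forecast rates with weights proportional to
    exp(log-likelihood) on past data. The max log-likelihood is subtracted
    first for numerical stability before exponentiating."""
    m = max(log_likelihoods)
    w = [math.exp(ll - m) for ll in log_likelihoods]
    s = sum(w)
    return sum((wi / s) * ri for wi, ri in zip(w, rates))

def prob_at_least_one(rate):
    """Poisson probability of at least one event (e.g. M >= 3) in the window."""
    return 1.0 - math.exp(-rate)
```

With equal likelihoods the ensemble reduces to a plain average; a better-calibrated model pulls the mixed rate toward its own forecast.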
The effect of material heterogeneities in long term multiscale seismic cycle simulations
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Richards-Dinger, K. B.; Dieterich, J. H.
2016-12-01
A fundamental part of the simulation of earthquake cycles in large-scale multicycle earthquake simulators is the pre-computation of elastostatic Green's functions collected into the stiffness matrix (K). The stiffness matrices are typically based on the elastostatic solutions of Okada (1992), Gimbutas et al. (2012), or similar. While these analytic solutions are computationally very fast, they are limited to modeling a homogeneous isotropic half-space. It is thus unknown how such simulations may be affected by the material heterogeneity characterizing the earth medium. We are currently working on the estimation of the effects of heterogeneous material properties in the earthquake simulator RSQSim (Richards-Dinger and Dieterich, 2012). In order to do that, we are calculating elastostatic solutions in a heterogeneous medium using the Finite Element (FE) method instead of any of the analytical solutions. The investigated region is a 400 x 400 km area centered on the Anza zone in southern California. The fault system geometry is based on that of the UCERF3 deformation models in the area of interest, which we then implement in a finite element mesh using Trelis 15. The heterogeneous elastic structure is based on available tomographic data (seismic wavespeeds and density) for the region (SCEC CVM and Allam et al., 2014). For computation of the Green's functions we are using the open source FE code Defmod (https://bitbucket.org/stali/defmod/wiki/Home) to calculate the elastostatic solutions due to unit slip on each patch. Earthquake slip on the fault plane is implemented through linear constraint equations (Ali et al., 2014; Kyriakopoulos et al., 2013; Aagaard et al., 2015), specifically through the adjunction of Lagrange multipliers. The elementary responses are collected into the "heterogeneous" stiffness matrix Khet and used in RSQSim instead of the ones generated with Okada.
Finally, we compare the RSQSim results based on the "heterogeneous" Khet with results from Khom (stiffness matrix generated from the same mesh as Khet but using homogeneous material properties). The estimation of the effect of heterogeneous material properties in the seismic cycles simulated by RSQSim is a needed experiment that will allow us to evaluate the impact of heterogeneities in earthquake simulators.
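Whichever stiffness matrix is used (Khet or Khom), the core elastostatic step in a simulator of this kind is a matrix-vector product mapping slip to stress change on each fault patch. A minimal sketch:

```python
def stress_change(K, slip):
    """Shear-stress change on each patch: dtau_i = sum_j K[i][j] * slip[j],
    where K[i][j] is the stress at patch i due to unit slip on patch j
    (the pre-computed elastostatic Green's function)."""
    return [sum(kij * sj for kij, sj in zip(row, slip)) for row in K]
```

Replacing the entries of K, computed once per mesh, is the only change needed to swap a homogeneous half-space for a heterogeneous FE medium.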
Analysis of the Pre-stack Split-Step Migration Operator Using Ritz Values
NASA Astrophysics Data System (ADS)
Kaplan, S. T.; Sacchi, M. D.
2009-05-01
The Born approximation for the acoustic wave-field is often used as a basis for developing algorithms in seismic imaging (migration). The approximation is linear and, as such, can be written as a matrix-vector multiplication (Am=d). In the seismic imaging problem, d is seismic data (the recorded wave-field), and we aim to find the seismic reflectivity m (a representation of earth structure and properties) so that Am=d is satisfied. This is the often studied inverse problem of seismic migration, where given A and d, we solve for m. This can be done in a least-squares sense, so that the equation of interest is AHAm = AHd. Hence, the solution m is largely dependent on the properties of AHA. The imaging Jacobian J provides an approximation to AHA, so that J-1AHA is, in a broad sense, better behaved than AHA. We attempt to quantify this last statement by providing an analysis of AHA and J-1AHA using their Ritz values, for the particular case where A is built using a pre-stack split-step migration algorithm. Typically, one might try to analyze the behaviour of these matrices using their eigenvalue spectra. The difficulty in the analysis of AHA and J-1AHA lies in their size. For example, a subset of the relatively small Marmousi data set makes AHA a complex-valued matrix with, roughly, dimensions of 45 million by 45 million (requiring, in single precision, about 16 petabytes of computer memory). In short, the size of the matrix makes its eigenvalues difficult to compute. Instead, we compute the leading principal submatrices of similar tridiagonal matrices, Bk = VkH AHA Vk and Ck = UkH J-1AHA Uk. These can be constructed using, for example, the Lanczos decomposition. Up to some value of k it is feasible to compute the eigenvalues of Bk and Ck which, in turn, are the Ritz values of, respectively, AHA and J-1AHA, and may allow us to make quantitative statements about their behaviours.
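A toy illustration of the Lanczos route to Ritz values, run on a small dense symmetric matrix standing in for AHA. Full reorthogonalization is used here for stability, a luxury the 45-million-row problem could not afford; at k equal to the matrix dimension the Ritz values recover the full spectrum:

```python
import numpy as np

def lanczos_ritz(A, k, seed=0):
    """k-step Lanczos tridiagonalization of symmetric A with full
    reorthogonalization; returns the Ritz values, i.e. the eigenvalues
    of the k-by-k tridiagonal T = V^T A V."""
    n = A.shape[0]
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    v = np.random.default_rng(seed).standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        # full reorthogonalization against all previous Lanczos vectors
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)
```

For k much smaller than n, the extreme Ritz values already approximate the extreme eigenvalues, which is what makes the approach feasible on operators known only through matrix-vector products.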
Data and Workflow Management Challenges in Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Lei, W.; Ruan, Y.; Smith, J. A.; Modrak, R. T.; Orsvuran, R.; Krischer, L.; Chen, Y.; Balasubramanian, V.; Hill, J.; Turilli, M.; Bozdag, E.; Lefebvre, M. P.; Jha, S.; Tromp, J.
2017-12-01
It is crucial to take the complete physics of wave propagation into account in seismic tomography to further improve the resolution of tomographic images. The adjoint method is an efficient way of incorporating 3D wave simulations in seismic tomography. However, global adjoint tomography is computationally expensive, requiring thousands of wavefield simulations and massive data processing. Through our collaboration with the Oak Ridge National Laboratory (ORNL) computing group and an allocation on Titan, ORNL's GPU-accelerated supercomputer, we are now performing our global inversions by assimilating waveform data from over 1,000 earthquakes. The first challenge we encountered is dealing with the sheer amount of seismic data. Data processing based on conventional data formats and processing tools (such as SAC), which are not designed for parallel systems, became our major bottleneck. To facilitate the data processing procedures, we designed the Adaptive Seismic Data Format (ASDF) and developed a set of Python-based processing tools to replace legacy FORTRAN-based software. These tools greatly enhance reproducibility and accountability while taking full advantage of highly parallel systems, showing superior scaling on modern computational platforms. The second challenge is that the data processing workflow contains more than 10 sub-procedures, making it delicate to handle and prone to human mistakes. To reduce human intervention as much as possible, we are developing a framework specifically designed for seismic inversion based on state-of-the-art workflow management research, specifically the Ensemble Toolkit (EnTK), in collaboration with the RADICAL team from Rutgers University. Using the initial developments of the EnTK, we are able to utilize the full computing power of the data processing cluster RHEA at ORNL while keeping human interaction to a minimum and greatly reducing the data processing time.
Thanks to all these improvements, we are now able to perform iterations fast enough on a dataset of more than 1,000 earthquakes. Starting from model GLAD-M15 (Bozdag et al., 2016), an elastic 3D model with a transversely isotropic upper mantle, we have successfully performed 5 iterations. Our goal is to finish 10 iterations, i.e., to generate GLAD-M25, by the end of this year.
NASA Astrophysics Data System (ADS)
Martin, Alexandre; Torrent, Marc; Caracas, Razvan
2015-03-01
A formulation of the response of a system to strain and electric field perturbations in pseudopotential-based density functional perturbation theory (DFPT) has been proposed by D. R. Hamann and co-workers. It uses an elegant formalism based on the expression of the DFT total energy in reduced coordinates, the key quantity being the metric tensor and its first and second derivatives. We propose to extend this formulation to the Projector Augmented-Wave (PAW) approach. In this context, we express the full elastic tensor, including the clamped-atom tensor, the atomic-relaxation contributions (internal stresses) and the response to an electric field change (piezoelectric tensor and effective charges). With this we are able to compute the elastic tensor for all materials (metals and insulators) within a fully analytical formulation. The comparison with finite-difference calculations on simple systems shows excellent agreement. This formalism has been implemented in the plane-wave-based DFT code ABINIT. We apply it to the computation of elastic properties and seismic-wave velocities of iron with impurity elements. By analogy with the materials contained in meteorites, the tested impurities are light elements (H, O, C, S, Si).
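For an isotropic aggregate, the seismic-wave velocities follow directly from the bulk and shear moduli extracted from the computed elastic tensor. A minimal sketch with illustrative numbers, not the paper's results:

```python
def seismic_velocities(K, G, rho):
    """Isotropic P- and S-wave speeds (m/s) from bulk modulus K and shear
    modulus G (both in Pa) and density rho (kg/m^3):
    Vp = sqrt((K + 4G/3)/rho), Vs = sqrt(G/rho)."""
    vp = ((K + 4.0 * G / 3.0) / rho) ** 0.5
    vs = (G / rho) ** 0.5
    return vp, vs
```

Impurity elements shift K, G and rho, and thus Vp and Vs, which is how the elastic calculation connects to seismological observables for planetary cores.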
Development of Maximum Considered Earthquake Ground Motion Maps
Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.
2000-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally, the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.
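The design procedure can be sketched as follows: the mapped MCE spectral accelerations at 0.2 s (SS) and 1.0 s (S1) are scaled by site coefficients and by 2/3, then assembled into a piecewise design spectrum. The shape below follows the NEHRP/ASCE family of provisions in broad outline; the site coefficients Fa and Fv would come from site-class tables and default to 1 here as an assumption:

```python
def design_spectrum(T, Ss, S1, Fa=1.0, Fv=1.0):
    """Design spectral acceleration Sa(T) in g from mapped MCE values.
    Sds = (2/3)*Fa*Ss and Sd1 = (2/3)*Fv*S1 define the plateau and the
    1/T branch; Ts = Sd1/Sds and T0 = 0.2*Ts bound the plateau."""
    Sds = (2.0 / 3.0) * Fa * Ss
    Sd1 = (2.0 / 3.0) * Fv * S1
    Ts = Sd1 / Sds
    T0 = 0.2 * Ts
    if T < T0:
        return Sds * (0.4 + 0.6 * T / T0)  # linear ramp from 0.4*Sds
    if T <= Ts:
        return Sds                          # constant-acceleration plateau
    return Sd1 / T                          # constant-velocity branch
```

A designer reads SS and S1 off the MCE maps, applies site coefficients, and evaluates Sa at the building's fundamental period.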
NASA Astrophysics Data System (ADS)
Massa, Marco; Barani, Simone; Lovati, Sara
2014-06-01
The paper presents an extensive review of topographic effects in seismology, taking into account 40 yr of scientific literature. An overview of topographic effects based on experimental observations and numerical modelling is presented with the aim of highlighting the meaning and causes of these phenomena, as well as possible correlations between site response (fundamental frequency, amplification level) and geometrical (width and shape ratio of a relief) parameters. After a thorough summary of topographic effects, the paper focuses on five Italian sites whose seismic response is potentially affected by local morphology, as already evidenced by previous studies. In this study, seismic data recorded at these sites are analysed by computing directional spectral ratios, both in terms of horizontal-to-vertical spectral ratios (HVSRs) and, wherever possible, in terms of standard spectral ratios (SSRs). The analysis leads to the conclusion that the wavefield tends to be polarized along a direction perpendicular to the main axis of a topographic irregularity, the direction along which ground motion amplification is maximum. The final section of the article compares and contrasts different spectral ratio techniques in order to examine their effectiveness and reliability in detecting topographic effects. The examples discussed in the paper show that site responses based on HVSR rather than SSR measurements could lead to misinterpretation of ground response results, both in the definition of the site fundamental frequency and in the amplification level. Results and findings of this work will be used as a starting point to discuss the influence of topographic effects on ground motion prediction equations and regulations for design. These topics will be discussed in the companion article.
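A bare-bones HVSR estimate at a single frequency can be sketched as below; real processing smooths the spectra, averages many windows, and scans all frequencies and directions:

```python
import numpy as np

def hvsr(h, v, fs, freq):
    """Horizontal-to-vertical spectral ratio at one frequency (Hz).
    h, v: equal-length horizontal and vertical traces sampled at fs (Hz).
    Uses the FFT amplitude at the bin closest to the requested frequency."""
    n = len(h)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(f - freq)))
    H = np.abs(np.fft.rfft(h))[k]
    V = np.abs(np.fft.rfft(v))[k]
    return H / V
```

Evaluating the ratio for horizontal components rotated through azimuth is how the directional polarization described above is detected.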
Multicomponent seismic loss estimation on the North Anatolian Fault Zone (Turkey)
NASA Astrophysics Data System (ADS)
Karimzadeh Naghshineh, S.; Askan, A.; Erberik, M. A.; Yakut, A.
2015-12-01
Seismic loss estimation is essential to incorporate the seismic risk of structures into an efficient decision-making framework. Evaluation of the seismic damage of structures requires a multidisciplinary approach including earthquake source characterization, seismological prediction of earthquake-induced ground motions, prediction of structural responses exposed to ground shaking, and finally estimation of the induced damage to structures. As the study region, Erzincan, a city in the eastern part of Turkey, is selected, located at the junction of three active strike-slip faults: the North Anatolian Fault, the Northeast Anatolian Fault and the Ovacık Fault. The Erzincan city center lies in a pull-apart basin underlain by soft sediments and has experienced devastating earthquakes such as the 27 December 1939 (Ms=8.0) and the 13 March 1992 (Mw=6.6) events, resulting in extensive physical as well as economic losses. These losses are attributed not only to the high seismicity of the area but also to the seismic vulnerability of the constructed environment. This study focuses on the seismic damage estimation of Erzincan using both regional seismicity and local building information. For this purpose, first, ground motion records are selected from a set of scenario events simulated with the stochastic finite-fault methodology using regional seismicity parameters. Then, the existing building stock is classified into specified groups represented by equivalent single-degree-of-freedom systems. Through these models, the inelastic dynamic structural responses are investigated with non-linear time history analyses. To assess the potential seismic damage in the study area, fragility curves for the classified structural types are derived. Finally, the estimated damage is compared with the observed damage during the 1992 Erzincan earthquake. The results show a reasonable match, indicating the efficiency of the ground motion simulations and building analyses.
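Fragility curves of the kind derived here are commonly parameterized as lognormal; a minimal sketch, where the median capacity theta and dispersion beta are illustrative parameters, not values from the study:

```python
import math

def fragility(im, theta, beta):
    """P(damage state reached or exceeded | intensity measure im), for a
    lognormal fragility curve: Phi(ln(im/theta)/beta), where theta is the
    median capacity and beta the logarithmic dispersion."""
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Convolving such curves with the simulated ground-motion intensities over the building inventory yields the expected damage distribution that is compared with the 1992 observations.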
A method to establish seismic noise baselines for automated station assessment
McNamara, D.E.; Hutt, C.R.; Gee, L.S.; Benz, H.M.; Buland, R.P.
2009-01-01
We present a method for quantifying station noise baselines and characterizing the spectral shape of out-of-nominal noise sources. Our intent is to automate this method in order to ensure that only the highest-quality data are used in rapid earthquake products at NEIC. In addition, the station noise baselines provide a valuable tool to support the quality control of GSN and ANSS backbone data and metadata. The procedures addressed here are currently in development at the NEIC, and work is underway to understand how quickly changes from nominal can be observed and used within the NEIC processing framework. The spectral methods and software used to compute station baselines and described herein (PQLX) can be useful to both permanent and portable seismic station operators. Applications include: general seismic station and data quality control (QC), evaluation of instrument responses, assessment of near real-time communication system performance, characterization of site cultural noise conditions, and evaluation of sensor vault design, as well as assessment of gross network capabilities (McNamara et al., 2005). Future PQLX development plans include incorporating station baselines for automated QC methods and automating station status report generation and notification based on user-defined QC parameters. The PQLX software is available through the USGS (http://earthquake.usgs.gov/research/software/pqlx.php) and IRIS (http://www.iris.edu/software/pqlx/).
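The baseline idea reduces to per-frequency percentiles over a stack of PSD estimates; spectra falling outside the envelope flag out-of-nominal noise. A minimal sketch, with the percentile choices as assumptions:

```python
import numpy as np

def noise_baseline(psds, lo=10, hi=90):
    """Per-frequency percentile envelope over many PSD segments.
    psds: 2-D array-like, rows = individual PSD estimates (e.g. hourly, in dB),
    columns = frequency bins. Returns (low, median, high) arrays."""
    p = np.asarray(psds, dtype=float)
    return (np.percentile(p, lo, axis=0),
            np.percentile(p, 50, axis=0),
            np.percentile(p, hi, axis=0))
```

A new PSD that sits well above the high envelope in some band points to a problem (mass off-center, telemetry noise, cultural noise) in that band.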
Seismograms live from around the world
Woodward, Robert L.; Shedlock, Kaye M.; Bolton, Harold F.
1999-01-01
You can view earthquakes as they happen! Seismograms from seismic stations around the world are broadcast live, via the Internet, and are updated every 30 minutes. With an Internet connection and a web browser, you can view current seismograms and earthquake locations on your own computer. With special software, also available via the Internet, you can obtain seismic data as it arrives from a global network of seismograph stations.
Measuring the size of an earthquake
Spence, William; Sipkin, Stuart A.; Choy, George L.
1989-01-01
Today, state-of-the-art seismic systems transmit data from the seismograph via telephone line and satellite directly to a central digital computer. A preliminary location, depth-of-focus, and magnitude can now be obtained within minutes of onset of an earthquake. The only limiting factor is how long the seismic waves take to travel from the epicenter to the stations--usually less than 10 minutes.
Seismic Barrier Protection of Critical Infrastructure from Earthquakes
2017-05-01
[…] structure composed of opposing boreholes or trenches to mitigate seismic waves from diffracting and traveling in the vertical plane. Computational […] dams, etc., pose significant risk to civilians while adding tremendous cost and recovery time to regain their functionality. Lower energy earthquakes […] the most destructive are surface waves (Rayleigh, Love, shear), which can travel great distances in the far field from the earthquake hypocenter and […]
NASA Astrophysics Data System (ADS)
Convers-Gomez, Carlos E.
The Vaca Muerta Formation in the Neuquen Basin has recently received a lot of attention from oil companies interested in developing its shale resources. Early identification of potential zones with possible good production is extremely important to optimize the return on capital investment. Developing a workflow in shale plays that associates an effective hydraulic fracture response with the presence of hydrocarbons is crucial for economic success. The vertical and lateral heterogeneity of rock properties is a critical factor that impacts production. The integration of 3D seismic and well data is necessary for predicting rock properties and identifying their distribution in the rock, which can then be combined with geomechanical properties to model the rock response favorable to hydraulic stimulation. This study includes a 3D seismic survey and six vertical wells with full log suites in each well. The well logs allowed for the computation of a pre-stack model-based inversion, which uses seismic data to estimate rock property volumes. An inverse relationship between P-impedance and Total Organic Content (TOC) was observed and quantified. Likewise, a direct relationship between P-impedance and volume of carbonate was observed. The volume of kerogen, type of clay, type of carbonate and fluid pressure all control the geomechanical properties of the formation when subject to hydraulic fracturing. Probabilistic Neural Networks were then used to predict the lateral and vertical heterogeneity of rock properties. TOC and volume of kerogen behaved as adequate indicators of possible zones with a high presence of hydrocarbons. Meanwhile, the volume of carbonate was a valid indicator of brittle versus ductile rock. The predicted density volume was used to estimate geomechanical properties (Young's Modulus and Poisson's Ratio) and to identify the zones that have a better response to hydraulic stimulation.
During the analysis of geomechanical properties, Young's Modulus was observed to have a direct relationship with the volume of carbonate and an inverse relationship with TOC, enabling the identification of brittle and ductile rock zones. The analysis detected zones with both a high presence of hydrocarbons and brittle rock. This information was integrated with the analysis of geomechanical properties, generating a model of the most probable zones of good production. This model will aid in the future exploration and development of the Vaca Muerta Formation.
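Model-based impedance inversion rests on the two-way relation between acoustic impedance and normal-incidence reflection coefficients; a minimal recursive sketch (not the commercial algorithm used in the study):

```python
def reflectivity(ip):
    """Reflection coefficients from an impedance trace:
    R[k] = (Ip[k+1] - Ip[k]) / (Ip[k+1] + Ip[k])."""
    return [(b - a) / (b + a) for a, b in zip(ip, ip[1:])]

def impedance_from_reflectivity(ip0, refl):
    """Recursive inverse: Ip[k+1] = Ip[k] * (1 + R[k]) / (1 - R[k]),
    starting from the impedance ip0 at the top of the trace."""
    ip = [ip0]
    for r in refl:
        ip.append(ip[-1] * (1.0 + r) / (1.0 - r))
    return ip
```

The pair is an exact round trip, which is why a low-frequency impedance trend from well logs plus seismic reflectivity can reconstruct absolute impedance, the quantity tied to TOC and carbonate volume above.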
Crowd-Sourcing Seismic Data: Lessons Learned from the Quake-Catcher Network
NASA Astrophysics Data System (ADS)
Cochran, E. S.; Sumy, D. F.; DeGroot, R. M.; Clayton, R. W.
2015-12-01
The Quake Catcher Network (QCN; qcn.caltech.edu) uses low cost micro-electro-mechanical system (MEMS) sensors hosted by volunteers to collect seismic data. Volunteers use accelerometers internal to laptop computers, phones, tablets or small (the size of a matchbox) MEMS sensors plugged into desktop computers using a USB connector to collect scientifically useful data. Data are collected and sent to a central server using the Berkeley Open Infrastructure for Network Computing (BOINC) distributed computing software. Since 2008, when the first citizen scientists joined the QCN project, sensors installed in museums, schools, offices, and residences have collected thousands of earthquake records. We present and describe the rapid installations of very dense sensor networks that have been undertaken following several large earthquakes including the 2010 M8.8 Maule Chile, the 2010 M7.1 Darfield, New Zealand, and the 2015 M7.8 Gorkha, Nepal earthquake. These large data sets allowed seismologists to develop new rapid earthquake detection capabilities and closely examine source, path, and site properties that impact ground shaking at a site. We show how QCN has engaged a wide sector of the public in scientific data collection, providing the public with insights into how seismic data are collected and used. Furthermore, we describe how students use data recorded by QCN sensors installed in their classrooms to explore and investigate earthquakes that they felt, as part of 'teachable moment' exercises.
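Rapid detection on dense, noisy MEMS data of this kind is often built on a short-term/long-term average (STA/LTA) trigger; a minimal sketch, with window lengths and threshold as illustrative choices:

```python
def sta_lta_trigger(x, nsta, nlta, threshold):
    """Return the first sample index where the ratio of the short-term
    average to the long-term average of |x| exceeds threshold, or -1 if it
    never triggers. Brute-force version; production codes use running sums."""
    a = [abs(v) for v in x]
    for i in range(nlta, len(a)):
        sta = sum(a[i - nsta:i]) / nsta
        lta = sum(a[i - nlta:i]) / nlta
        if lta > 0 and sta / lta > threshold:
            return i
    return -1
```

In a network setting, a detection is declared only when several nearby sensors trigger within a short time window, which suppresses the false alarms a single consumer-grade accelerometer would produce.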
The Quake-Catcher Network: An Innovative Community-Based Seismic Network
NASA Astrophysics Data System (ADS)
Saltzman, J.; Cochran, E. S.; Lawrence, J. F.; Christensen, C. M.
2009-12-01
The Quake-Catcher Network (QCN) is a volunteer computing seismic network that engages citizen scientists, teachers, and museums to participate in the detection of earthquakes. In less than two years, the network has grown to over 1000 participants globally and continues to expand. QCN utilizes Micro-Electro-Mechanical System (MEMS) accelerometers, in laptops and external to desktop computers, to detect moderate to large earthquakes. One goal of the network is to involve K-12 classrooms and museums by providing sensors and software to introduce participants to seismology and community-based scientific data collection. The Quake-Catcher Network provides a unique opportunity to engage participants directly in the scientific process, through hands-on activities that link activities and outcomes to their daily lives. Partnerships with teachers and museum staff are critical to growth of the Quake Catcher Network. Each participating institution receives a MEMS accelerometer to connect, via USB, to a computer that can be used for hands-on activities and to record earthquakes through a distributed computing system. We developed interactive software (QCNLive) that allows participants to view sensor readings in real time. Participants can also record earthquakes and download earthquake data that was collected by their sensor or other QCN sensors. The Quake-Catcher Network combines research and outreach to improve seismic networks and increase awareness and participation in science-based research in K-12 schools.
NASA Technical Reports Server (NTRS)
Noor, A. K. (Editor); Hayduk, R. J. (Editor)
1985-01-01
Among the topics discussed are developments in structural engineering hardware and software, computation for fracture mechanics, trends in numerical analysis and parallel algorithms, mechanics of materials, advances in finite element methods, composite materials and structures, determinations of random motion and dynamic response, optimization theory, automotive tire modeling methods and contact problems, the damping and control of aircraft structures, and advanced structural applications. Specific topics covered include structural design expert systems, the evaluation of finite element system architectures, systolic arrays for finite element analyses, nonlinear finite element computations, hierarchical boundary elements, adaptive substructuring techniques in elastoplastic finite element analyses, automatic tracking of crack propagation, a theory of rate-dependent plasticity, the torsional stability of nonlinear eccentric structures, a computation method for fluid-structure interaction, the seismic analysis of three-dimensional soil-structure interaction, a stress analysis for a composite sandwich panel, toughness criterion identification for unidirectional composite laminates, the modeling of submerged cable dynamics, and damping synthesis for flexible spacecraft structures.
Seismic Constraints on the Mantle Viscosity Structure beneath Antarctica
NASA Astrophysics Data System (ADS)
Wiens, Douglas; Heeszel, David; Aster, Richard; Nyblade, Andrew; Wilson, Terry
2015-04-01
Lateral variations in upper mantle viscosity structure can have first-order effects on glacial isostatic adjustment. These variations are expected to be particularly large for the Antarctic continent because of the stark geological contrast between ancient cratonic and recent tectonically active terrains in East and West Antarctica, respectively. A large misfit between observed and predicted GPS rates for West Antarctica probably results in part from the use of a laterally uniform viscosity structure. Although not linked by a simple relationship, mantle seismic velocities can provide important constraints on mantle viscosity structure, as they are both largely controlled by temperature and water content. Recent higher-resolution seismic models for the Antarctic mantle, derived from data acquired by new seismic stations deployed in the AGAP/GAMSEIS and ANET/POLENET projects, offer the opportunity to use the seismic velocity structure to place new constraints on the viscosity of the Antarctic upper mantle. We use an Antarctic shear wave velocity model derived from array analysis of Rayleigh wave phase velocities [Heeszel et al., in prep.] and examine a variety of methodologies for relating seismic, thermal and rheological parameters to compute a suite of viscosity models for the Antarctic mantle. A wide variety of viscosity structures can be derived using various assumptions, but they share several robust common elements. There is a viscosity contrast of at least two orders of magnitude between East and West Antarctica at depths of 80-250 km, reflecting the boundary between cold cratonic lithosphere in East Antarctica and warm upper mantle in West Antarctica. The region beneath the Ellsworth-Whitmore Mountains, extending to the Pensacola Mountains, shows viscosities intermediate between the extremes of East and West Antarctica. There are also significant variations among different parts of West Antarctica, with the lowest viscosity occurring beneath Marie Byrd Land (MBL).
The MBL Dome and adjacent coastal areas show extremely low viscosity (~10^18 Pa s) for most parameterizations, suggesting that low mantle viscosity may produce a very rapid response to ice mass loss in this region.
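The velocity-to-viscosity mapping described above admits many parameterizations; a minimal sketch of one such chain (a linearized velocity-to-temperature scaling followed by an Arrhenius law) is given below. All parameter values are illustrative assumptions, not those of the study.

```python
import math

# Illustrative parameter values (assumptions, not those of the study):
DLNVS_DT = -1.0e-4   # d(ln Vs)/dT sensitivity, 1/K
E_ACT = 400.0e3      # activation energy, J/mol
R_GAS = 8.314        # gas constant, J/(mol K)
T_REF = 1600.0       # reference mantle temperature, K
ETA_REF = 1.0e21     # reference viscosity at T_REF, Pa s

def temperature_from_dlnvs(dlnvs):
    """Linearized map from a fractional Vs anomaly to absolute temperature."""
    return T_REF + dlnvs / DLNVS_DT

def viscosity(temp):
    """Arrhenius temperature dependence of viscosity."""
    return ETA_REF * math.exp(E_ACT / R_GAS * (1.0 / temp - 1.0 / T_REF))

# A slow (-2%) West Antarctic anomaly vs a fast (+3%) East Antarctic craton:
eta_west = viscosity(temperature_from_dlnvs(-0.02))
eta_east = viscosity(temperature_from_dlnvs(+0.03))
contrast = eta_east / eta_west   # several orders of magnitude
```

Even this crude chain reproduces the qualitative result above: percent-level velocity anomalies translate into a multi-order-of-magnitude East-West viscosity contrast.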
Combining active and passive seismic methods for the characterization of urban sites in Cairo, Egypt
NASA Astrophysics Data System (ADS)
Adly, Ashraf; Poggi, Valerio; Fäh, Donat; Hassoup, Awad; Omran, Awad
2017-07-01
The geology at Kottamiya, Rehab City and Zahraa-Madinat-Nasr to the east of Cairo (Egypt) is composed of low-velocity sediments on top of a rigid rock basement. Such sediments include the loose sands of the Gebel Ahmar formation, marl and shales of the Maadi formation, in addition to sparse Quaternary soil covers. Due to the contrast in seismic impedance with the underlying bedrock, these soft sediments have the potential to considerably amplify ground motion during an earthquake. For the evaluation of site-specific seismic hazard, we computed the seismic site response in these areas by developing 1-D velocity models and derived average seismic velocities, including Vs30. To do that, we applied different active- and passive-source techniques, including the horizontal-to-vertical Fourier spectral ratio of ambient vibration recordings and multichannel analysis of artificially generated surface waves. A set of models representing the velocity structure of the site is then obtained by combined inversion of Rayleigh wave dispersion curves and ellipticity functions. While dispersion curves are used to constrain the uppermost low-velocity part of the soil profile, ellipticity helps in resolving the structure at the depth of the sediment-bedrock interface. From the retrieved velocity models, numerical ground-motion amplification is finally derived using the 1-D SH-wave transfer function. We account for uncertainty in amplification by using a statistical model that incorporates the misfit of all the inverted velocity profiles. The study reveals that the different sites experience an important frequency-dependent amplification, with the largest amplification occurring at the resonance frequencies of the sites. Amplification up to a factor of 5 is found, with some variability depending on the soil type (Vs30 ranges between 340 and 415 m s-1).
Moreover, amplification is expected in the frequency range that is important for buildings (0.8-10 Hz), which further confirms the need for microzonation analysis of the area. The obtained results will be used for the development of a new seismic hazard model.
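As a sketch of the final amplification step, the 1-D SH-wave transfer function for a single uniform soil layer over elastic bedrock can be written in closed form. The layer thickness, velocities and densities below are hypothetical stand-ins, not the Cairo site profiles.

```python
import math
import cmath

def sh_amplification(freq, vs_soil=300.0, h=30.0, rho_soil=1800.0,
                     vs_rock=1500.0, rho_rock=2400.0):
    """|surface / rock-outcrop| amplitude for vertically incident SH waves
    through one uniform soil layer over an elastic half-space."""
    k_h = 2.0 * math.pi * freq * h / vs_soil            # layer phase angle
    imp = (rho_soil * vs_soil) / (rho_rock * vs_rock)   # impedance ratio
    return 1.0 / abs(cmath.cos(k_h) + 1j * imp * cmath.sin(k_h))

f0 = 300.0 / (4.0 * 30.0)       # fundamental resonance Vs/(4h) = 2.5 Hz
peak = sh_amplification(f0)     # peak amplification = 1 / impedance ratio
```

At the fundamental resonance the amplification equals the inverse impedance ratio (about 6.7 for these assumed values), the same order as the factor-of-5 amplification reported above.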
Three-component borehole wall-locking seismic detector
Owen, Thomas E.
1994-01-01
A seismic detector for boreholes is described that has an accelerometer sensor block for sensing vibrations in geologic formations of the earth. The density of the seismic detector is approximately matched to the density of the formations in which the detector is utilized. A simple compass is used to orient the seismic detector. A large-surface-area shoe having a radius approximately equal to the radius of the borehole in which the seismic detector is located may be pushed against the side of the borehole by actuating cylinders contained in the seismic detector. Hydraulic drive for the cylinders is provided externally to the detector. By using the large-surface-area wall-locking shoe, the force holding the seismic detector in place is distributed over a larger area of the borehole wall, thereby eliminating concentrated stresses. Borehole wall-locking forces up to ten times the weight of the seismic detector can be applied, thereby ensuring a detection frequency response of up to 2,000 hertz using accelerometer sensors in a triaxial array within the seismic detector.
NASA Astrophysics Data System (ADS)
Norbeck, J. H.; Rubinstein, J. L.
2018-04-01
The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. We develop a reservoir model to calculate the hydrologic conditions associated with the activity of 902 saltwater disposal wells injecting into the Arbuckle aquifer. Estimates of basement fault stressing conditions inform a rate-and-state friction earthquake nucleation model to forecast the seismic response to injection. Our model replicates many salient features of the induced earthquake sequence, including the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. We present evidence for variable time lags between changes in injection and seismicity rates, consistent with the prediction from rate-and-state theory that seismicity rate transients occur over timescales inversely proportional to stressing rate. Given the efficacy of the hydromechanical model, as confirmed through a likelihood statistical test, the results of this study support broader integration of earthquake physics within seismic hazard analysis.
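The stated inverse proportionality between response timescale and stressing rate follows from Dieterich's (1994) rate-and-state seismicity model; a minimal sketch for a step change in shear stressing rate, with illustrative parameter values rather than those calibrated in the study:

```python
import math

# Illustrative parameters (assumptions, not the calibrated values):
A_SIGMA = 1.0e5      # A * sigma, Pa
TAU_BG = 1.0e3       # background stressing rate, Pa/yr
R_BG = 10.0          # background seismicity rate, events/yr

def seismicity_rate(tau_dot, t):
    """Dieterich (1994) closed-form seismicity rate after a step from
    TAU_BG to tau_dot at t = 0, under constant shear stressing."""
    ta = A_SIGMA / tau_dot          # decay timescale ~ 1 / stressing rate
    s = TAU_BG / tau_dot
    return R_BG / (s + (1.0 - s) * math.exp(-t / ta))

# Doubling the stressing rate halves the transient timescale:
ta_slow = A_SIGMA / TAU_BG          # 100 yr at the background rate
ta_fast = A_SIGMA / (2.0 * TAU_BG)  # 50 yr at twice the background rate
```

The steady-state rate scales with the stressing rate, and the time lag to reach it shrinks as injection-driven stressing accelerates, which is the behavior the study exploits.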
GISMO: A MATLAB toolbox for seismic research, monitoring, & education
NASA Astrophysics Data System (ADS)
Thompson, G.; Reyes, C. G.; Kempler, L. A.
2017-12-01
GISMO is an open-source MATLAB toolbox which provides an object-oriented framework to build workflows and applications that read, process, visualize and write seismic waveform, catalog and instrument response data. GISMO can retrieve data from a variety of sources (e.g. FDSN web services, Earthworm/Winston servers) and data formats (SAC, Seisan, etc.), and it can handle waveform data that crosses file boundaries. All of this alleviates one of the most time-consuming parts of developing one's own analysis codes. GISMO simplifies seismic data analysis by providing a common interface for your data, regardless of its source. Several common plots are built into GISMO, such as record section plots, spectrograms, depth-time sections, event counts per unit time, energy release per unit time, etc. Other visualizations include map views and cross-sections of hypocentral data. Several common processing methods are also included, such as an extensive set of tools for correlation analysis. Support is being added to interface GISMO with ObsPy. GISMO encourages community development of an integrated set of codes and accompanying documentation, eliminating the need for seismologists to "reinvent the wheel". Sharing code also enhances the consistency and repeatability of results. GISMO is hosted on GitHub with documentation both within the source code and in the project wiki. GISMO has been used at the University of South Florida and the University of Alaska Fairbanks in graduate-level courses including Seismic Data Analysis, Time Series Analysis and Computational Seismology. GISMO has also been tailored to interface with the common seismic monitoring software and data formats used by volcano observatories in the US and elsewhere. As an example, toolbox training was delivered to researchers at INETER (Nicaragua). Applications built on GISMO include IceWeb (e.g.
web-based spectrograms), which has been used by Alaska Volcano Observatory since 1998 and became the prototype for the USGS Pensive system.
Earthquake Hazard Mitigation Using a Systems Analysis Approach to Risk Assessment
NASA Astrophysics Data System (ADS)
Legg, M.; Eguchi, R. T.
2015-12-01
The earthquake hazard mitigation goal is to reduce losses due to severe natural events. The first step is to conduct a Seismic Risk Assessment consisting of 1) hazard estimation, 2) vulnerability analysis, and 3) exposure compilation. Seismic hazards include ground deformation, shaking, and inundation. The hazard estimation may be probabilistic or deterministic. Probabilistic Seismic Hazard Assessment (PSHA) is generally applied to site-specific risk assessments, but may involve large areas as in a National Seismic Hazard Mapping program. Deterministic hazard assessments are needed for geographically distributed exposure such as lifelines (infrastructure), but may also be important for large communities. Vulnerability evaluation includes quantification of the fragility of construction or components, including personnel. Exposure represents the existing or planned construction, facilities, infrastructure, and population in the affected area. Risk (expected loss) is the product of the quantified hazard, vulnerability (damage algorithm), and exposure, and may be used to prepare emergency response plans, retrofit existing construction, or guide community planning to avoid hazards. The risk estimate provides data needed to acquire earthquake insurance to assist with effective recovery following a severe event. Earthquake Scenarios used in Deterministic Risk Assessments provide detailed information on where hazards may be most severe, which system components are most susceptible to failure, and how the combined effects of a severe earthquake impact the whole system or community. Casualties (injuries and deaths) have been the primary factor in defining building codes for seismic-resistant construction. Economic losses may be equally significant factors that can influence proactive hazard mitigation. Large urban earthquakes may produce catastrophic losses due to a cascading of effects often missed in PSHA.
Economic collapse may ensue if damaged workplaces, disruption of utilities, and resultant loss of income produce widespread default on payments. With increased computational power and more complete inventories of exposure, Monte Carlo methods may provide more accurate estimation of severe losses and the opportunity to increase the resilience of vulnerable systems and communities.
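The risk product defined above (hazard x vulnerability x exposure) can be sketched for a hypothetical inventory; all probabilities, damage ratios and exposed values below are invented for illustration:

```python
# Hypothetical inventory: (label, annual event probability,
# damage ratio given the event, exposed value in dollars)
portfolio = [
    ("wood-frame homes",     0.002, 0.10, 5.0e9),
    ("unreinforced masonry", 0.002, 0.45, 1.2e9),
    ("lifeline pipelines",   0.002, 0.25, 0.8e9),
]

def annualized_loss(assets):
    """Expected annual loss: sum of hazard x vulnerability x exposure."""
    return sum(p * v * e for _, p, v, e in assets)

loss = annualized_loss(portfolio)   # ~2.5 million dollars per year
```

Even with identical hazard, the more fragile construction class dominates the expected loss, which is the kind of insight used to prioritize retrofits.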
Building Capacity for Earthquake Monitoring: Linking Regional Networks with the Global Community
NASA Astrophysics Data System (ADS)
Willemann, R. J.; Lerner-Lam, A.
2006-12-01
Installing or upgrading a seismic monitoring network is often among the mitigation efforts after earthquake disasters, and this is happening in response to the events both in Sumatra during December 2004 and in Pakistan during October 2005. These networks can yield improved hazard assessment, more resilient buildings where they are most needed, and emergency relief directed more quickly to the worst-hit areas after the next large earthquake. Several commercial organizations are well prepared for the fleeting opportunity to provide the instruments that comprise a seismic network, including sensors, data loggers, telemetry stations, and the computers and software required for the network center. But seismic monitoring requires more than hardware and software, no matter how advanced. A well-trained staff is required to select appropriate and mutually compatible components, install and maintain telemetered stations, manage and archive data, and perform the analyses that actually yield the intended benefits. Monitoring is more effective when network operators cooperate with a larger community through free and open exchange of data, sharing of information about working practices, and international collaboration in research. As an academic consortium, a facility operator and a founding member of the International Federation of Digital Seismographic Networks, IRIS has access to the broad range of expertise required to help design, install, and operate a seismic network and earthquake analysis center, and to provide the core training for the professional teams required to establish and maintain these facilities.
But delivering expertise quickly when and where it is unexpectedly in demand requires advance planning and coordination in order to respond to the needs of organizations that are building a seismic network, either with tight time constraints imposed by the budget cycles of aid agencies following a disastrous earthquake, or as part of more informed national programs for hazard assessment and mitigation.
NASA Astrophysics Data System (ADS)
Konstantaras, Anthony; Katsifarakis, Emmanouil; Artzouxaltzis, Xristos; Makris, John; Vallianatos, Filippos; Varley, Martin
2010-05-01
This paper is a preliminary investigation of the possible correlation of temporal and energy release patterns of seismic activity involving the preparation processes of consecutive sizeable seismic events [1,2]. The background idea is that during periods of low-level seismic activity, stress processes in the crust accumulate energy at the seismogenic area whilst larger seismic events act as a decongesting mechanism releasing considerable energy [3,4]. A dynamic algorithm is being developed aiming to identify and cluster pre- and post-seismic events relative to the main earthquake, following on research carried out by Zubkov [5] and Dobrovolsky [6,7]. This clustering technique, along with energy release equations dependent on the Richter scale [8,9], allows the amount of energy released by the seismic sequence to be estimated. The above approach is being implemented as a monitoring tool to investigate the behaviour of the underlying energy management system by introducing this information to various neural [10,11] and soft computing models [1,12,13,14]. The incorporation of intelligent systems aims towards the detection and simulation of the possible relationship between energy release patterns and time-intervals among consecutive sizeable earthquakes [1,15]. Anticipated successful training of the imported intelligent systems may result in a real-time, on-line processing methodology [1,16] capable of dynamically approximating the time-interval between the latest and the next forthcoming sizeable seismic event by monitoring the energy release process in a specific seismogenic area. Indexing terms: pattern recognition, long-term earthquake precursors, neural networks, soft computing, earthquake occurrence intervals References [1] Konstantaras A., Vallianatos F., Varley M.R. and Makris J. P.: ‘Soft computing modelling of seismicity in the southern Hellenic arc', IEEE Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [2] Eneva M.
and Ben-Zion Y.: ‘Techniques and parameters to analyze seismicity patterns associated with large earthquakes', Geophysics Res., vol. 102, pp. 17785-17795, 1997a [3] Habermann R. E.: ‘Precursory seismic quiescence: past, present and future', Pure Applied Geophysics, vol. 126, pp. 279-318, 1988 [4] Matthews M. V. and Reasenberg P. A.: ‘Statistical methods for investigating quiescence and other temporal seismicity patterns', Pure Applied Geophysics, vol. 126, pp. 357-372, 1988 [5] Zubkov S. I.: ‘The appearance times of earthquake precursors', Izv. Akad. Nauk SSSR Fiz. Zemli (Solid Earth), No. 5, pp. 87-91, 1987 [6] Dobrovolsky I. P., Zubkov S. I. and Miachkin V. I.: ‘Estimation of the size of earthquake preparation zones', Pageoph, vol. 117, pp. 1025-1044, 1979 [7] Dobrovolsky I. P., Gershenzon N. I. And Gokhberg M. B.: ‘Theory of electrokinetic effects occurring at the final stage in the preparation of a tectonic earthquake', Physics of the Earth and Planetary Interiors, vol. 57, pp. 144-156, 1989 [8] Richter C. F.: ‘Elementary Seismology', W.H.Freeman and Co., San Francisco, 1958 [9] Choy G. L. and Boatwright J. L.: ‘Global patterns of radiated seismic energy and apparent stress', Journal of Geophysical Research, vol. 84 (B5), pp. 2348-2350, 1995 [10] Haykin S.: ‘Neural Networks', 2nd Edition, Prentice Hall, 1999 [11] Jang J., Sun T. and Mizutany E.: ‘Neuro-fuzzy and soft computing', Prentice Hall, Upper Saddle River, NJ, 1997 [12] Konstantaras A., Varley M.R., Vallianatos F., Collins G. and Holifield P.: ‘Detection of weak seismo-electric signals upon the recordings of the electrotelluric field by means of neuron-fuzzy technology', IEEE Geoscience and Remote Sensing Letters, vol. 4 (1), 2007 [13] Konstantaras A., Varley M.R., Vallianatos F., Collins G. and Holifield P.: ‘Neuro-fuzzy prediction-based adaptive filtering applied to severely distorted magnetic field recordings', IEEE Geoscience and Remote Sensing Letters, vol. 
3 (4), 2006 [14] Maravelakis E., Bilalis N., Keith J. and Antoniadis A.: ‘Measuring and Benchmarking the Innovativeness of SME's: a three dimensional Fuzzy Logic Approach', Production Planning and Control Journal, vol. 17 (3), pp. 283-292, 2006 [15] Bodri B.: ‘A neural-network model for earthquake occurrence', Geodynamics, vol. 32, pp. 289-310, 2001 [16] Skounakis E., Karagiannis V. and Vlissidis A.: ‘A Versatile System for Real-time Analyzing and Testing Objects Quality', Proceedings-CD of the 4th International Conference on "New Horizons in Industry, Business and Education" (NHIBE 2005), Corfu, Greece, pp. 701-708, 2005
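The energy estimates referred to above rest on a magnitude-energy relation; a minimal sketch using the standard Gutenberg-Richter form log10 E = 1.5M + 4.8 (E in joules), which may differ in detail from the specific equations of [8,9]:

```python
def energy_joules(magnitude):
    """Gutenberg-Richter magnitude-energy relation, E in joules."""
    return 10.0 ** (1.5 * magnitude + 4.8)

def sequence_energy(magnitudes):
    """Cumulative energy released by a clustered seismic sequence."""
    return sum(energy_joules(m) for m in magnitudes)

# Many small events release far less energy than one sizeable event:
swarm = sequence_energy([3.0] * 100)   # one hundred M3 events
mainshock = energy_joules(5.0)         # a single M5
```

Each unit of magnitude corresponds to a factor of about 32 in energy, which is why the sizeable events dominate the energy budget that the clustering algorithm monitors.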
Seismic hazard map of North and Central America and the Caribbean
Shedlock, K.M.
1999-01-01
Minimization of the loss of life, property damage, and social and economic disruption due to earthquakes depends on reliable estimates of seismic hazard. National, state, and local governments, decision makers, engineers, planners, emergency response organizations, builders, universities, and the general public require seismic hazard estimates for land use planning, improved building design and construction (including adoption of building construction codes), emergency response preparedness plans, economic forecasts, housing and employment decisions, and many more types of risk mitigation. The seismic hazard map of North and Central America and the Caribbean is the concatenation of various national and regional maps, involving a suite of approaches. The combined maps and documentation provide a useful regional seismic hazard framework and serve as a resource for any national or regional agency for further detailed studies applicable to their needs. This seismic hazard map depicts Peak Ground Acceleration (PGA) with a 10% chance of exceedance in 50 years. PGA, a short-period ground motion parameter that is proportional to force, is the most commonly mapped ground motion parameter because current building codes that include seismic provisions specify the horizontal force a building should be able to withstand during an earthquake. This seismic hazard map of North and Central America and the Caribbean depicts the likely level of short-period ground motion from earthquakes in a fifty-year window. Short-period ground motions affect short-period structures (e.g., one-to-two story buildings). The highest seismic hazard values in the region generally occur in areas that have been, or are likely to be, the sites of the largest plate boundary earthquakes.
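The "10% chance of exceedance in 50 years" hazard level has a simple Poisson interpretation: if exceedances occur with mean return period T, then P = 1 - exp(-t/T), so T = -t / ln(1 - P), roughly 475 years for this map:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period under a Poisson occurrence model."""
    return -t_years / math.log(1.0 - p_exceed)

T475 = return_period(0.10, 50.0)   # the mapped hazard level, ~475 yr
```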
Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA
NASA Astrophysics Data System (ADS)
Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.
2017-12-01
Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimates, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in a 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates a realistic ω^-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture reproduces well the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics.
Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time histories for simulation-based PSHA.
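The ω^-2 spectral decay mentioned above can be illustrated with the classic Brune-type displacement spectrum u(f) = Ω0 / (1 + (f/fc)^2); the corner frequency and amplitude below are arbitrary:

```python
import math

def displacement_spectrum(f, omega0=1.0, fc=0.2):
    """Brune-type omega-squared displacement amplitude spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Well above the corner frequency fc the log-log slope approaches -2:
f1, f2 = 10.0, 20.0
slope = (math.log(displacement_spectrum(f2) / displacement_spectrum(f1))
         / math.log(f2 / f1))
```

Rupture incoherence on rough faults produces this high-frequency falloff naturally, which is the diagnostic the authors check against observations.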
Seismic performance assessment of base-isolated safety-related nuclear structures
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2010-01-01
Seismic or base isolation is a proven technology for reducing the effects of earthquake shaking on buildings, bridges and infrastructure. The benefit of base isolation has been presented in terms of reduced accelerations and drifts on superstructure components but never quantified in terms of either a percentage reduction in seismic loss (or percentage increase in safety) or the probability of an unacceptable performance. Herein, we quantify the benefits of base isolation in terms of increased safety (or smaller loss) by comparing the safety of a sample conventional and base-isolated nuclear power plant (NPP) located in the Eastern U.S. Scenario- and time-based assessments are performed using a new methodology. Three base isolation systems are considered, namely, (1) Friction Pendulum™ bearings, (2) lead-rubber bearings and (3) low-damping rubber bearings together with linear viscous dampers. Unacceptable performance is defined by the failure of key secondary systems because these systems represent much of the investment in a new-build power plant and ensure the safe operation of the plant. For the scenario-based assessments, the probability of unacceptable performance is computed for an earthquake with a magnitude of 5.3 at a distance of 7.5 km from the plant. For the time-based assessments, the annual frequency of unacceptable performance is computed considering all potential earthquakes that may occur. For both assessments, the implementation of base isolation reduces the probability of unacceptable performance by approximately four orders of magnitude for the same NPP superstructure and secondary systems. The increase in NPP construction cost associated with the installation of seismic isolators can be offset by substantially reducing the required seismic strength of secondary components and systems and potentially eliminating the need to seismically qualify many secondary components and systems. © 2010 John Wiley & Sons, Ltd.
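The time-based assessment described above amounts to convolving a fragility function with the site hazard curve; a sketch with a hypothetical hazard curve and lognormal fragility (the discretization and all numbers here are generic, not those of the paper):

```python
import math

# Hypothetical mean hazard curve: (PGA in g, annual frequency of exceedance)
hazard_curve = [(0.1, 1e-2), (0.3, 1e-3), (0.5, 1e-4),
                (0.8, 1e-5), (1.2, 1e-6)]

def p_failure(pga, median=1.0, beta=0.4):
    """Lognormal fragility curve with assumed median and dispersion."""
    return 0.5 * (1.0 + math.erf(math.log(pga / median)
                                 / (beta * math.sqrt(2.0))))

def annual_failure_freq(curve):
    """Discretized risk integral: sum P(F | IM) x frequency of each IM bin."""
    freq = 0.0
    for (im_lo, lam_lo), (im_hi, lam_hi) in zip(curve, curve[1:]):
        im_mid = 0.5 * (im_lo + im_hi)
        freq += p_failure(im_mid) * (lam_lo - lam_hi)
    return freq

lam_f = annual_failure_freq(hazard_curve)
```

Base isolation shifts the fragility median far to the right, which is how it drives the annual frequency of unacceptable performance down by orders of magnitude.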
NASA Astrophysics Data System (ADS)
Simutė, S.; Fichtner, A.
2015-12-01
We present a feasibility study for seismic source inversions using a 3-D velocity model for the Japanese Islands. The approach involves numerically calculating 3-D Green's tensors, which is made efficient by exploiting Green's reciprocity. The rationale for 3-D seismic source inversion has several aspects. For structurally complex regions, such as the Japan area, it is necessary to account for 3-D Earth heterogeneities to prevent unknown structure from polluting source solutions. In addition, earthquake source characterisation can serve as a means to delineate existing faults. Source parameters obtained for more realistic Earth models can then facilitate improvements in seismic tomography and early warning systems, which are particularly important for seismically active areas, such as Japan. We have created a database of numerically computed 3-D Green's reciprocals for a 40° × 40° × 600 km region around the Japanese Archipelago for >150 broadband stations. For this we used a regional 3-D velocity model, recently obtained from full waveform inversion. The model includes attenuation and radial anisotropy and explains seismic waveform data for periods between 10 and 80 s generally well. The aim is to perform source inversions using the database of 3-D Green's tensors. As preliminary steps, we present initial concepts to address issues that are at the basis of our approach. We first investigate to what extent Green's reciprocity works in a discrete domain. Given the substantial volume of computed Green's tensors, we address storage requirements and file formatting. We discuss the importance of the initial source model, as an intelligent choice can substantially reduce the search volume. Possibilities to perform a Bayesian inversion and ways to move to finite source inversion are also explored.
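The discrete-domain reciprocity question raised above can be illustrated on a toy system: for a symmetric system matrix (as in discretized elastodynamics), the discrete Green's matrix G = K^-1 is itself symmetric, so swapping source and receiver leaves the response unchanged. A minimal 2 x 2 sketch:

```python
# A tiny symmetric 2x2 "stiffness" matrix standing in for the
# discretized wave operator:
K = [[4.0, 1.0],
     [1.0, 3.0]]

det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
G = [[ K[1][1] / det, -K[0][1] / det],   # G = K^-1, the discrete
     [-K[1][0] / det,  K[0][0] / det]]   # Green's matrix

# Response at node 0 to a unit source at node 1 ...
u_01 = G[0][1]
# ... equals the response at node 1 to a unit source at node 0:
u_10 = G[1][0]
```

This symmetry is what lets the authors compute Green's tensors once per receiver (station) and reuse them for arbitrary source locations.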
Seismic Window Selection and Misfit Measurements for Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Lei, W.; Bozdag, E.; Lefebvre, M.; Podhorszki, N.; Smith, J. A.; Tromp, J.
2013-12-01
Global Adjoint Tomography requires fast parallel processing of large datasets. After obtaining the preprocessed observed and synthetic seismograms, we use the open-source software packages FLEXWIN (Maggi et al. 2007) to select time windows and MEASURE_ADJ to make measurements. These measurements define adjoint sources for data assimilation. Previous versions of these tools work on a pair of SAC files (observed and synthetic seismic data for the same component and station) and loop over all seismic records associated with one earthquake. Given the large number of stations and earthquakes, the frequent read and write operations create severe I/O bottlenecks on modern computing platforms. We present new versions of these tools utilizing a new seismic data format, namely the Adaptive Seismic Data Format (ASDF). This new format shows superior scalability for applications on high-performance computers and accommodates various types of data, including earthquake, industry and seismic interferometry datasets. ASDF also provides user-friendly APIs, which can be easily integrated into the adjoint tomography workflow and combined with other data processing tools. In addition to solving the I/O bottleneck, we are making several improvements to these tools. For example, FLEXWIN is tuned to select windows for different types of earthquakes. To capture their distinct features, we categorize earthquakes by their depths and frequency bands. Moreover, instead of only picking phases between the first P arrival and the surface-wave arrivals, our aim is to select and assimilate many other later prominent phases in adjoint tomography. For example, in the body-wave band (17-60 s), we include SKS, sSKS and their multiples, while in the surface-wave band (60-120 s) we incorporate major-arc surface waves.
NASA Astrophysics Data System (ADS)
Son, J.; Medina-Cetina, Z.
2017-12-01
We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from various site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock material profiles under the seafloor. The shallow sediment layers are naturally formed heterogeneous formations which may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose quasi-Newton and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite-difference method with absorbing boundary conditions, drives the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data as an offshore ground model containing shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of the stratigraphic velocity model reconstruction. We then report on the details of a field data implementation, which shows the complex geologic structures of the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion.
We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
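As a sketch of the stochastic branch of the comparison, simulated annealing with Metropolis acceptance can be demonstrated on a toy one-parameter "velocity" misfit; the real full-waveform misfit is far more expensive to evaluate, and all values below are hypothetical:

```python
import math
import random

random.seed(0)   # reproducible run

def misfit(v):
    """Toy data misfit with its minimum at v = 1500 (a hypothetical m/s)."""
    return (v - 1500.0) ** 2

def anneal(v0, t0=1.0e6, cooling=0.95, steps=500):
    v, t, best = v0, t0, v0
    for _ in range(steps):
        trial = v + random.uniform(-50.0, 50.0)        # perturb the model
        d = misfit(trial) - misfit(v)
        if d < 0.0 or random.random() < math.exp(-d / t):
            v = trial                                  # Metropolis accept
        if misfit(v) < misfit(best):
            best = v
        t *= cooling                                   # cool down
    return best

v_est = anneal(2000.0)   # wanders at high temperature, then descends
```

Unlike quasi-Newton descent, the occasional uphill acceptances at high temperature let the search escape local minima, the property that motivates the stochastic branch in full-waveform inversion.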
CISN ShakeAlert: Using early warnings for earthquakes in California
NASA Astrophysics Data System (ADS)
Vinci, M.; Hellweg, M.; Jones, L. M.; Khainovski, O.; Schwartz, K.; Lehrer, D.; Allen, R. M.; Neuhauser, D. S.
2009-12-01
Educated users who have developed response plans and procedures are just as important for an earthquake early warning (EEW) system as are the algorithms and computers that process the data and produce the warnings. In Japan, for example, the implementation of the EEW system which now provides advance alerts of ground shaking included intense outreach efforts to both institutional and individual recipients. Alerts are now used in automatic control systems that stop trains, place sensitive equipment in safe mode and isolate hazards while the public takes cover. In California, the California Integrated Seismic Network (CISN) is now developing and implementing components of a prototype system for EEW, ShakeAlert. As this processing system is developed, we invite a group of prospective users from critical industries and institutions throughout California to partner with us in developing useful ShakeAlert products and procedures. At the same time, we will support their efforts to determine and implement appropriate responses to an early warning of earthquake shaking. As a first step, in a collaboration with BART, we have developed a basic system allowing BART’s operation center to receive realtime ground shaking information from more than 150 seismic stations operating in the San Francisco Bay Area. BART engineers are implementing a display system for this information. Later phases will include the development of improved response procedures utilizing this information. We plan to continue this collaboration to include more sophisticated information from the prototype CISN ShakeAlert system.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing, and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe the Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system.
Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
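The task-farming pattern that such frameworks exploit can be sketched as follows. Here `trace_ray` and the local thread pool are hypothetical stand-ins for one self-contained processing task and the networked cluster; this is not NRM's or JPPF's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def trace_ray(task):
    """Hypothetical stand-in for one independent processing task
    (e.g. tracing a single source-receiver ray through a 3-D model)."""
    src, rcv = task
    return abs(rcv - src)  # placeholder travel-time computation

tasks = [(0.0, float(r)) for r in range(8)]

# Farm the independent tasks out to a worker pool; because each task is
# self-contained, they can run on any mix of cores or networked machines.
with ThreadPoolExecutor(max_workers=4) as pool:
    travel_times = list(pool.map(trace_ray, tasks))
```

The key property, as the abstract notes, is that the application be subdividable into independent tasks; the pool (or cluster) then scales throughput with the number of available cores.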
Seismic event classification system
Dowla, F.U.; Jarpe, S.P.; Maurer, W.
1994-12-13
In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.
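The preprocessing chain described above (time-frequency distribution, binarization, 2-D FFT magnitude) can be sketched as follows. The windowing scheme, threshold, and test signal are hypothetical simplifications of whatever the actual system used:

```python
import numpy as np

def shift_invariant_features(signal, n_win=8, threshold=0.5):
    """Build the shift-invariant representation described above:
    time-frequency distribution -> binary map -> magnitude of its 2-D FFT."""
    # Crude time-frequency distribution: magnitudes of short-window FFTs.
    frames = signal.reshape(-1, n_win)
    tfd = np.abs(np.fft.rfft(frames, axis=1))
    # Binarize relative to the distribution's own peak.
    binary = (tfd > threshold * tfd.max()).astype(float)
    # The 2-D FFT magnitude is invariant to circular shifts of the input.
    return np.abs(np.fft.fft2(binary))

sig = np.sin(2 * np.pi * 0.1 * np.arange(64))
feat = shift_invariant_features(sig)
# Shifting the signal by a whole window leaves the features unchanged,
# which is the property that makes the representation useful for clustering.
feat_shifted = shift_invariant_features(np.roll(sig, 8))
```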
Cloud Computing Services for Seismic Networks
NASA Astrophysics Data System (ADS)
Olson, Michael
This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN---the Community Seismic Network---which uses relatively low-cost sensors deployed by members of the community, and (2) SAF---the Situation Awareness Framework---which integrates data from multiple sources, including the CSN, CISN---the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California---and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.
Pollitz, F.F.
2002-01-01
I present a new algorithm for calculating seismic wave propagation through a three-dimensional heterogeneous medium using the framework of mode coupling theory, originally developed to perform very low frequency (f ≲ 0.01-0.05 Hz) seismic wavefield computation. It is a Green's function approach for multiple scattering within a defined volume and employs a truncated traveling wave basis set using the locked mode approximation. Interactions between incident and scattered wavefields are prescribed by mode coupling theory and account for the coupling among surface waves, body waves, and evanescent waves. The described algorithm is, in principle, applicable to global and regional wave propagation problems, but I focus on higher frequency (typically f ≲ 0.25 Hz) applications at regional and local distances, where the locked mode approximation is best utilized and which involve wavefields strongly shaped by propagation through a highly heterogeneous crust. Synthetic examples are shown for P-SV-wave propagation through a semi-ellipsoidal basin and SH-wave propagation through a fault zone.
Seismic event classification system
Dowla, Farid U.; Jarpe, Stephen P.; Maurer, William
1994-01-01
In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.
Necessary Conditions for Intraplate Seismic Zones in North America
NASA Astrophysics Data System (ADS)
Thomas, William A.; Powell, Christine A.
2017-12-01
The cause of intraplate seismic zones persists as an important scientific and societal question. Most intraplate earthquakes are concentrated in specific seismic zones along or adjacent to large-scale basement structures (e.g., rifts or sutures at ancient plate boundaries) within continental crust. The major intraplate seismic zones are limited to specific segments and are not distributed along the lengths of the ancient structures. We present a new hypothesis that major intraplate seismic zones are restricted to places where concentrated crustal deformation (CCD) is overprinted on large-scale basement structures. Examples where CCD affects the stability of specific parts of large-scale structures in response to present-day stress conditions include the most active seismic zones in central and eastern North America: Charlevoix, Eastern Tennessee, and New Madrid. Our hypothesis has important implications for the assessment of seismic hazards.
NASA Astrophysics Data System (ADS)
Huang, Duruo; Du, Wenqi; Zhu, Hong
2017-10-01
In performance-based seismic design, ground-motion time histories are needed for analyzing the dynamic responses of nonlinear structural systems. However, the number of ground-motion records at the design level is often limited. In order to analyze the seismic performance of structures, ground-motion time histories need to be either selected from recorded strong-motion databases or numerically simulated using stochastic approaches. In this paper, a detailed procedure for selecting proper acceleration time histories from the Next Generation Attenuation (NGA) database for several cities in Taiwan is presented. Target response spectra are initially determined based on a local ground-motion prediction equation under representative deterministic seismic hazard analyses. Then several suites of ground motions are selected for these cities using the Design Ground Motion Library (DGML), a recently proposed interactive ground-motion selection tool. The selected time histories are representative of the regional seismic hazard and should be beneficial to earthquake studies when comprehensive seismic hazard assessments and site investigations are unavailable. Note that this method is also applicable to site-specific motion selection with target spectra near the ground surface that account for the site effect.
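Spectrum-based record selection of this general kind can be sketched as follows: scale each candidate response spectrum to the target in log space and rank by residual misfit. This is a generic sketch, not DGML's actual algorithm, and all spectra below are hypothetical:

```python
import numpy as np

def select_records(target, candidates, n_select=2):
    """Rank candidate records by the misfit of their optimally scaled
    response spectra against a target spectrum."""
    errs = []
    for i, sa in enumerate(candidates):
        # Least-squares scale factor in log space (geometric-mean ratio).
        scale = np.exp(np.mean(np.log(target) - np.log(sa)))
        err = np.mean((np.log(scale * sa) - np.log(target)) ** 2)
        errs.append((err, i, scale))
    errs.sort()
    return [(i, scale) for _, i, scale in errs[:n_select]]

periods = np.array([0.2, 0.5, 1.0, 2.0])       # s
target = np.array([0.8, 0.6, 0.3, 0.15])       # hypothetical target Sa (g)
cands = [np.array([0.4, 0.3, 0.15, 0.075]),    # same shape, half amplitude
         np.array([0.8, 0.2, 0.3, 0.3]),       # poor spectral shape
         np.array([0.82, 0.61, 0.28, 0.16])]   # close match
picks = select_records(target, cands, n_select=2)
```

Scaling before comparing rewards records whose spectral shape matches the target, which matters more than raw amplitude since amplitude can be adjusted.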
Gas hydrate characterization from a 3D seismic dataset in the deepwater eastern Gulf of Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
McConnell, Daniel; Haneberg, William C.
Seismic stratigraphic features are delineated using principal component analysis of the band-limited data at potential gas hydrate sands, and compared and calibrated with spectral-decomposition thickness estimates to constrain thickness in the absence of well control. Layers in the abyssal fan sediments are thinner than can be resolved with 50 Hz seismic data and thus comprise composite thin-bed reflections. Amplitude-versus-frequency analysis is used to indicate gas and gas hydrate reflections. Synthetic seismic wedge models show that, with 50 Hz seismic data, a 40% hydrate saturation of a Plio-Pleistocene GoM sand in the hydrate stability zone with no subjacent gas can produce a phase change (negative to positive) with a strong correlation between amplitude and hydrate saturation. The synthetic seismic response is more complicated if the gas hydrate filled sediments overlie gassy sediments. Hydrate (or gas) saturation in thin beds enhances the amplitude response and can be used to estimate saturation. Gas hydrate saturation from rock physics, amplitude, and frequency analysis is compared to saturation derived from inversion at several interpreted gas hydrate accumulations in the eastern Gulf of Mexico.
NASA Astrophysics Data System (ADS)
Barberi, G.; Cammarata, L.; Cocina, O.; Maiolino, V.; Musumeci, C.; Privitera, E.
2003-04-01
Late on the night of October 26, 2002, a bilateral eruption started on both the eastern and southeastern flanks of Mt. Etna. The opening of the eruptive fracture system in the NE sector and the reactivation of the 2001 fracture system in the S sector were accompanied by a strong seismic swarm recorded between October 26 and 28 and by a sharp increase of volcanic tremor amplitude. After this initial phase, on October 29 another seismogenic zone became active in the SE sector of the volcano. At present (January 2003) the eruption is still evolving. During the whole period, a total of 862 earthquakes (Md ≥ 1) was recorded by the local permanent seismic network run by INGV - Sezione di Catania. The maximum magnitude observed was Md = 4.4. We focus our attention on 55 earthquakes with magnitude Md ≥ 3.0. The dataset consists of accurate digital pickings of P- and S-phases including first-motion polarities. Earthquakes were first located using a 1D velocity model (Hirn et al., 1991); events were then relocated using two different 3D velocity models (Aloisi et al., 2002; Patane et al., 2002). Results indicate that most earthquakes are located to the east of the Summit Craters and to the northeast of them. Fault plane solutions (FPS) show prevalent strike-slip rupture mechanisms. The suitable FPSs were used with Gephart and Forsyth's algorithm in order to evaluate the characteristics of the seismic stress field. Taking into account these preliminary results, we propose a kinematic model of the eastward movement of the eastern flank in response to intrusion processes in the central part of the volcano. References: Aloisi M., Cocina O., Neri G., Orecchio B., Privitera E. (2002). Seismic tomography of the crust underneath the Etna volcano, Sicily. Physics of the Earth and Planetary Interiors, 4154, pp. 1-17. Hirn A., Nercessian A., Sapin M., Ferrucci F., Wittlinger G. (1991). Seismic heterogeneity of Mt. Etna: structure and activity. Geophys. J. Int., 105, 139-153. Patane D., Chiarabba C., Cocina O., De Gori P., Moretti M., Boschi E. (2002). Tomographic images and 3D earthquake locations of the seismic swarm preceding the 2001 Mt. Etna eruption: Evidence for a dyke intrusion. Geophys. Res. Lett., 29, 10, 135-138.
Cluster Computing For Real Time Seismic Array Analysis.
NASA Astrophysics Data System (ADS)
Martini, M.; Giudicepietro, F.
A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying volcanic microtremor and long-period events, which are critical for obtaining information on the evolution of volcanic systems. For this reason, arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute, in near real time, the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of eight dual-processor Intel Pentium III PCs working at 550 MHz and has 4 gigabytes of RAM. It runs under the Linux operating system. The developed analysis software package is based on the MUltiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data over the Internet and graphical applications for the continuous display of the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and southeast flanks of the volcano. A real-time continuous acquisition system was simulated by a program that reads data from disk files and sends them to a remote host using Internet protocols.
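A minimal narrowband MUSIC estimator, written here in Python rather than the Fortran/MPI production code, illustrates the slowness estimation the cluster performs. The line-array geometry, frequency, and noise level below are hypothetical:

```python
import numpy as np

def music_spectrum(snapshots, positions, freq, slowness_grid, n_sources=1):
    """Narrowband MUSIC: project steering vectors onto the noise subspace
    of the sample covariance matrix and return the pseudo-spectrum."""
    n_sens = len(positions)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    noise = vecs[:, : n_sens - n_sources]    # noise subspace
    spec = []
    for s in slowness_grid:
        a = np.exp(-2j * np.pi * freq * s * positions)
        a /= np.linalg.norm(a)
        # Peaks where the steering vector is orthogonal to the noise subspace.
        spec.append(1.0 / np.linalg.norm(noise.conj().T @ a) ** 2)
    return np.array(spec)

# Synthetic plane wave crossing a 5-element line array (hypothetical geometry).
pos = np.arange(5) * 100.0                   # sensor coordinates (m)
freq, s_true = 2.0, 5e-4                     # Hz; slowness 5e-4 s/m (2 km/s)
t = np.arange(64) / 50.0                     # 64 snapshots at 50 Hz sampling
rng = np.random.default_rng(0)
snaps = (np.exp(2j * np.pi * freq * (t[None, :] - s_true * pos[:, None]))
         + 0.01 * (rng.standard_normal((5, 64)) + 1j * rng.standard_normal((5, 64))))
grid = np.linspace(1e-4, 1e-3, 91)
spec = music_spectrum(snaps, pos, freq, grid)
s_est = grid[np.argmax(spec)]                # slowness at the spectrum peak
```

The pseudo-spectrum peaks where the steering vector falls in the signal subspace, which is what lets MUSIC resolve slowness more sharply than plain beamforming.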
Seismic imaging: From classical to adjoint tomography
NASA Astrophysics Data System (ADS)
Liu, Q.; Gu, Y. J.
2012-09-01
Seismic tomography has been a vital tool in probing the Earth's internal structure and enhancing our knowledge of dynamical processes in the Earth's crust and mantle. While various tomographic techniques differ in data types utilized (e.g., body vs. surface waves), data sensitivity (ray vs. finite-frequency approximations), and choices of model parameterization and regularization, most global mantle tomographic models agree well at long wavelengths, owing to the presence and typical dimensions of cold subducted oceanic lithospheres and hot, ascending mantle plumes (e.g., in central Pacific and Africa). Structures at relatively small length scales remain controversial, though, as will be discussed in this paper, they are becoming increasingly resolvable with the fast expanding global and regional seismic networks and improved forward modeling and inversion techniques. This review paper aims to provide an overview of classical tomography methods, key debates pertaining to the resolution of mantle tomographic models, as well as to highlight recent theoretical and computational advances in forward-modeling methods that spearheaded the developments in accurate computation of sensitivity kernels and adjoint tomography. The first part of the paper is devoted to traditional traveltime and waveform tomography. While these approaches established a firm foundation for global and regional seismic tomography, data coverage and the use of approximate sensitivity kernels remained as key limiting factors in the resolution of the targeted structures. In comparison to classical tomography, adjoint tomography takes advantage of full 3D numerical simulations in forward modeling and, in many ways, revolutionizes the seismic imaging of heterogeneous structures with strong velocity contrasts. For this reason, this review provides details of the implementation, resolution and potential challenges of adjoint tomography. 
Further discussions of techniques that are presently popular in seismic array analysis, such as noise correlation functions, receiver functions, inverse scattering imaging, and the adaptation of adjoint tomography to these different datasets highlight the promising future of seismic tomography.
In-situ Planetary Subsurface Imaging System
NASA Astrophysics Data System (ADS)
Song, W.; Weber, R. C.; Dimech, J. L.; Kedar, S.; Neal, C. R.; Siegler, M.
2017-12-01
Geophysical and seismic instruments are considered the most effective tools for studying the detailed global structures of planetary interiors. A planet's interior bears the geochemical markers of its evolutionary history, as well as its present state of activity, which has direct implications to habitability. On Earth, subsurface imaging often involves massive data collection from hundreds to thousands of geophysical sensors (seismic, acoustic, etc) followed by transfer by hard links or wirelessly to a central location for post processing and computing, which will not be possible in planetary environments due to imposed mission constraints on mass, power, and bandwidth. Emerging opportunities for geophysical exploration of the solar system from Venus to the icy Ocean Worlds of Jupiter and Saturn dictate that subsurface imaging of the deep interior will require substantial data reduction and processing in-situ. The Real-time In-situ Subsurface Imaging (RISI) technology is a mesh network that senses and processes geophysical signals. Instead of data collection then post processing, the mesh network performs the distributed data processing and computing in-situ, and generates an evolving 3D subsurface image in real-time that can be transmitted under bandwidth and resource constraints. Seismic imaging algorithms (including traveltime tomography, ambient noise imaging, and microseismic imaging) have been successfully developed and validated using both synthetic and real-world terrestrial seismic data sets. The prototype hardware system has been implemented and can be extended as a general field instrumentation platform tailored specifically for a wide variety of planetary uses, including crustal mapping, ice and ocean structure, and geothermal systems. The team is applying the RISI technology to real off-world seismic datasets. 
For example, the Lunar Seismic Profiling Experiment (LSPE) deployed during the Apollo 17 Moon mission consisted of four geophone instruments spaced up to 100 meters apart, which in essence forms a small aperture seismic network. A pattern recognition technique based on Hidden Markov Models was able to characterize this dataset, and we are exploring how the RISI technology can be adapted for this dataset.
Geophysical Analysis of Major Geothermal Anomalies in Romania
NASA Astrophysics Data System (ADS)
Panea, Ionelia; Mocanu, Victor
2017-11-01
The Romanian segment of the Eastern Pannonian Basin and the Moesian Platform are known for their geothermal and hydrocarbon-bearing structures. We used seismic, gravity, and geothermal data to analyze the geothermal behavior in the Oradea and Timisoara areas, in the Romanian segment of the Eastern Pannonian Basin, and the Craiova-Bals-Optasi area, in the Moesian Platform. We processed 22 seismic reflection data sets recorded in the Oradea and Timisoara areas to obtain P-wave velocity distributions and time seismic sections. The P-wave velocity distributions correlate well with the structural trends observed along the seismic lines. We observed a good correlation between the crystalline basement highs seen on the time seismic sections and the high heat flow and gravity-anomaly values. For the Craiova-Bals-Optasi area, we computed a three-dimensional (3D) temperature model using calculated and measured temperature and geothermal gradient values in wells distributed irregularly across the territory. The high temperatures in the Craiova-Bals-Optasi area correlate very well with the uplifted basement blocks seen on the time seismic sections and with high gravity-anomaly values.
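One simple way to grid irregular well data for such a temperature model is inverse-distance weighting. The well coordinates and temperatures below are hypothetical, and the published model may well use a different interpolation scheme:

```python
import numpy as np

def idw_temperature(wells, query, power=2.0):
    """Inverse-distance-weighted interpolation of well temperatures
    onto an arbitrary (x, y, depth) point."""
    xyz = np.array([w[:3] for w in wells])
    temps = np.array([w[3] for w in wells])
    d = np.linalg.norm(xyz - np.asarray(query), axis=1)
    if d.min() < 1e-9:                 # query coincides with a well
        return float(temps[d.argmin()])
    w = 1.0 / d ** power               # closer wells get larger weights
    return float((w * temps).sum() / w.sum())

# Hypothetical wells: (x, y, depth in km, temperature in deg C)
wells = [(0.0, 0.0, 1.0, 40.0),
         (10.0, 0.0, 1.0, 60.0),
         (5.0, 8.0, 1.0, 50.0)]
t_mid = idw_temperature(wells, (5.0, 0.0, 1.0))
```

Evaluating this on a regular 3-D grid of query points yields a volume of temperatures of the kind the abstract describes.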
Kinematics of the New Madrid seismic zone, central United States, based on stepover models
Pratt, Thomas L.
2012-01-01
Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.
Fast 3D elastic micro-seismic source location using new GPU features
NASA Astrophysics Data System (ADS)
Xue, Qingfeng; Wang, Yibo; Chang, Xu
2016-12-01
In this paper, we describe new GPU features and their applications in passive seismics, specifically micro-seismic event location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Unlike traditional ray-based methods, wave equation methods, such as the one used in this paper, have a remarkable advantage in adapting to low signal-to-noise-ratio conditions and do not require manual data selection. However, because of their conspicuous computational cost, these methods are not widely used in industry. To make the method practical, we implement imaging-like wave equation micro-seismic location in 3D elastic media and use GPUs to accelerate the algorithm. We also introduce some new GPU features into the implementation to solve data transfer and GPU utilization problems. Numerical and field data experiments show that our method achieves a more than 30% performance improvement in the GPU implementation just by using these new features.
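As a ray-free, 1-D caricature of imaging-style source location (not the paper's elastic wave-equation GPU code), one can grid-search for the location that best stacks the recorded arrivals:

```python
import numpy as np

def locate_by_stacking(traces, dt, stations, grid, v):
    """Grid-search source location: sample each trace at the predicted
    travel time and pick the node with the largest stacked amplitude."""
    best, best_x = -np.inf, None
    for x in grid:
        stack = 0.0
        for trace, sx in zip(traces, stations):
            idx = int(round(abs(x - sx) / v / dt))  # predicted arrival sample
            if idx < len(trace):
                stack += trace[idx]
        if stack > best:
            best, best_x = stack, x
    return best_x

# Toy example: impulsive source at x = 300 m recorded by three stations
# in a constant-velocity medium (all values hypothetical).
v, dt = 2000.0, 0.001
stations = [0.0, 500.0, 1000.0]
src = 300.0
traces = []
for sx in stations:
    tr = np.zeros(1000)
    tr[int(round(abs(src - sx) / v / dt))] = 1.0    # spike at the arrival time
    traces.append(tr)
grid = np.arange(0.0, 1001.0, 10.0)
x_est = locate_by_stacking(traces, dt, stations, grid, v)
```

The stack peaks only where the predicted moveout matches all arrivals at once, which is the same coherence principle the full wave-equation imaging condition exploits, and it degrades gracefully as noise is added.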
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin
2015-02-01
Seismic isolation (SI) has the potential to drastically reduce the seismic response of structures, systems, or components (SSCs) and therefore the risk associated with large seismic events (a large seismic event could be defined as the design basis earthquake (DBE) and/or the beyond design basis earthquake (BDBE), depending on the site location). This corresponds to a potential increase in nuclear safety by minimizing the structural response and thus minimizing the risk of material release during large seismic events that have uncertainty associated with their magnitude and frequency. The national consensus standard, American Society of Civil Engineers (ASCE) Standard 4, Seismic Analysis of Safety-Related Nuclear Structures, recently incorporated language and commentary for seismically isolating a large light water reactor or similar large nuclear structure. Some potential benefits of SI are: 1) substantially decoupling the SSC from the earthquake hazard, thus decreasing the risk of material release during large earthquakes; 2) cost savings for the facility and/or equipment; and 3) applicability to both nuclear (current and next generation) and high-hazard non-nuclear facilities. Issue: To date, no one has evaluated how the benefit of seismic risk reduction reduces the cost to construct a nuclear facility. Objective: Use seismic probabilistic risk assessment (SPRA) to evaluate the reduction in seismic risk and estimate the potential cost savings of seismic isolation of a generic nuclear facility. This project would leverage ongoing Idaho National Laboratory (INL) activities that are developing advanced SPRA methods using Nonlinear Soil-Structure Interaction (NLSSI) analysis. Technical Approach: The proposed study is intended to obtain an estimate of the reduction in seismic risk and construction cost that might be achieved by seismically isolating a nuclear facility.
The nuclear facility is a representative pressurized water reactor nuclear power plant (NPP) structure. The study will consider a representative NPP reinforced concrete reactor building and a representative plant safety system, and will leverage existing research and development (R&D) activities at INL. Figure 1 (project activities) shows the proposed study steps, with the steps in blue representing activities already funded at INL and the steps in purple representing activities that would be funded under this proposal. The following results will be documented: 1) a comparison of seismic risk for the non-seismically isolated (non-SI) and seismically isolated (SI) NPP, and 2) an estimate of construction cost savings when implementing SI at the site of the generic NPP.
Supercomputing resources empowering superstack with interactive and integrated systems
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2012-09-01
This paper presents the results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, here termed geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments introduced with the processing of seismic data on mainframes, leading in recent years to high-end scientific computing applications. Several stacking algorithms are known, but for seismic data with a low signal-to-noise ratio, iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on high-performance computing resources, for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
Real-time seismic monitoring and functionality assessment of a building
Celebi, M.; ,
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
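The double-integration step can be sketched as follows. The mean-removal baseline correction and the harmonic test motion are deliberate simplifications of what a production monitoring system would apply:

```python
import numpy as np

def drift_ratio(acc_bottom, acc_top, dt, story_height):
    """Double-integrate story accelerations to displacements and form the
    peak interstory drift ratio. Mean removal stands in for the baseline
    correction a real-time system would apply."""
    def integrate_twice(a):
        a = a - a.mean()                    # crude detrend
        v = np.cumsum(a) * dt               # acceleration -> velocity
        v -= v.mean()
        return np.cumsum(v) * dt            # velocity -> displacement
    rel = integrate_twice(acc_top) - integrate_twice(acc_bottom)
    return np.abs(rel).max() / story_height

# Harmonic test motion: top of a 3 m story displaces 2 cm at 1 Hz.
dt = 0.01
t = np.arange(0, 10, dt)
w = 2 * np.pi * 1.0
d_amp = 0.02
acc_bottom = np.zeros_like(t)
acc_top = -d_amp * w**2 * np.sin(w * t)     # acceleration of d(t) = d_amp*sin(wt)
dr = drift_ratio(acc_bottom, acc_top, dt, story_height=3.0)
```

The recovered peak drift ratio is close to 0.02/3, and comparing such a value against pre-computed threshold drift ratios is exactly the assessment step the abstract describes.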
Deaggregation of Probabilistic Ground Motions in the Central and Eastern United States
Harmsen, S.; Perkins, D.; Frankel, A.
1999-01-01
Probabilistic seismic hazard analysis (PSHA) is a technique for estimating the annual rate of exceedance of a specified ground motion at a site due to known and suspected earthquake sources. The relative contributions of the various sources to the total seismic hazard are determined as a function of their occurrence rates and their ground-motion potential. The separation of the exceedance contributions into bins whose base dimensions are magnitude and distance is called deaggregation. We have deaggregated the hazard analyses for the new USGS national probabilistic ground-motion hazard maps (Frankel et al., 1996). For points on a 0.2° grid in the central and eastern United States (CEUS), we show color maps of the geographical variation of mean and modal magnitudes (M̄, M̂) and distances (D̄, D̂) for ground motions having a 2% chance of exceedance in 50 years. These maps are displayed for peak horizontal acceleration and for spectral response accelerations of 0.2, 0.3, and 1.0 sec. We tabulate M̄, D̄, M̂, and D̂ for 49 CEUS cities for 0.2- and 1.0-sec response. Thus, these maps and tables are PSHA-derived estimates of the potential earthquakes that dominate seismic hazard at short and intermediate periods in the CEUS. The contribution to hazard of the New Madrid and Charleston sources dominates over much of the CEUS: for 0.2-sec response, over 40% of the area; for 1.0-sec response, over 80% of the area. For 0.2-sec response, D̄ ranges from 20 to 200 km; for 1.0 sec, 30 to 600 km. For sites influenced by New Madrid or Charleston, D̄ is less than the distance to these sources, and M̄ is less than the characteristic magnitude of these sources, because averaging takes into account the effect of smaller-magnitude and closer sources. On the other hand, D̂ is directly the distance to New Madrid or Charleston, and M̂ for 0.2- and 1.0-sec response corresponds to the dominating source over much of the CEUS.
For some cities in the North Atlantic states, short-period seismic hazard is apt to be controlled by local seismicity, whereas intermediate period (1.0 sec) hazard is commonly controlled by regional seismicity, such as that of the Charlevoix seismic zone.
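The mean and modal deaggregation statistics discussed above can be reproduced on a small hypothetical magnitude-distance contribution table; note how averaging pulls the mean distance below the modal (dominating-source) distance, as the abstract explains:

```python
import numpy as np

# Hypothetical deaggregation table: fractional hazard contributions
# binned by magnitude (rows) and distance (columns).
mags = np.array([5.0, 6.0, 7.5])
dists = np.array([25.0, 100.0, 300.0])     # km
contrib = np.array([[0.05, 0.05, 0.00],
                    [0.10, 0.20, 0.10],
                    [0.05, 0.15, 0.30]])
contrib /= contrib.sum()                   # normalize to unit total

# Mean magnitude/distance: contribution-weighted averages over all bins.
m_bar = (contrib.sum(axis=1) * mags).sum()
d_bar = (contrib.sum(axis=0) * dists).sum()

# Modal values: the (M, D) bin with the largest single contribution.
i, j = np.unravel_index(np.argmax(contrib), contrib.shape)
m_hat, d_hat = mags[i], dists[j]
```

Here the modal pair picks out the distant dominating source, while the mean pair is dragged toward the smaller, closer contributors.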
NASA Astrophysics Data System (ADS)
Griebel, Matt; Buleri, Christine; Baylor, Andrew; Gunnels, Steve; Hull, Charlie; Palunas, Povilas; Phillips, Mark
2016-07-01
The Magellan Telescopes are twin 6.5 meter ground-based optical/near-IR telescopes operated by the Carnegie Institution for Science at the Las Campanas Observatory (LCO) in Chile. The primary mirrors are f/1.25 paraboloids made of borosilicate glass with a honeycomb structure. The secondary mirror provides both f/11 and f/5 focal lengths, with two Nasmyth, three auxiliary, and a Cassegrain port on the optical support structure (OSS). The telescopes have been in operation since 2000 and have experienced several small earthquakes with no damage. Measurements of the in situ response of the telescopes to seismic events showed significant dynamic amplification; however, the response of the telescopes to a survival-level earthquake, including component-level forces, displacements, accelerations, and stresses, was unknown. The telescopes are supported on hydrostatic bearings that can lift up under high seismic loading, causing a nonlinear response. For this reason, the response spectrum analysis typically performed for a survival-level seismic event is not sufficient to determine the true response of the structure. Therefore, a nonlinear transient finite element analysis (FEA) of the telescope structure was performed to assess high-risk areas and develop acceleration responses for future instrument design. Several configurations were considered, combining different installed components and altitude pointing directions. A description of the models, methodology, and results is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malme, C.I.; Miles, P.R.; Clark, C.W.
1984-08-01
The study supplements work performed during 1983 in the Monterey, California region in determining the degree of behavioral response of migrating gray whales to acoustic stimuli associated with oil and gas exploration and development activities. A computer-implemented trackline program analyzed the theodolite data for any possible changes in distance from shore, speed, linearity of track, orientation toward the sound source, and course heading of the whale group. A history of marine seismic exploration off California was compiled that showed no long-term relationship with growth rates in the gray whale population.
NASA Astrophysics Data System (ADS)
Abdel Raheem, Shehata E.; Ahmed, Mohamed M.; Alazrak, Tarek M. A.
2015-03-01
Soil conditions have a great deal to do with damage to structures during earthquakes; hence, investigating the energy-transfer mechanism from soil to buildings during earthquakes is critical for the seismic design of multi-story buildings and for upgrading existing structures, and the need for research into soil-structure interaction (SSI) problems is greater than ever. Recent studies show that the effects of SSI may be detrimental to the seismic response of structures, and neglecting SSI in analysis may lead to unconservative design. Despite this, the conventional design procedure usually assumes fixity at the base of the foundation, neglecting the flexibility of the foundation, the compressibility of the underlying soil and, consequently, the effect of foundation settlement on the redistribution of bending moment and shear force demands. SSI analysis of multi-story buildings is therefore the main focus of this research; the effects of SSI are analyzed for a typical multi-story building resting on a raft foundation. Three methods of analysis are used to evaluate the seismic demands of the target moment-resistant frame buildings: equivalent static load, response spectrum, and nonlinear time-history analysis with a suite of nine time-history records. A three-dimensional FE model is constructed to investigate the effects of different soil conditions and number of stories on the vibration characteristics and seismic response demands of building structures. Numerical results obtained using the SSI model with different soil conditions are compared to those obtained under the fixed-base modeling assumption. The peak responses of story shear, story moment, story displacement, story drift, moments at beam ends, and forces in inner columns are analyzed. The results of the different analysis approaches are used to evaluate the advantages, limitations, and ease of application of each approach for seismic analysis.
NASA Astrophysics Data System (ADS)
Goebel, T.; Aminzadeh, F.
2015-12-01
The seismogenic response to induced pressure changes provides insight into the proximity to failure of faults close to injection sites. Here, we examine possible seismicity rate changes in response to wastewater disposal and enhanced oil recovery operations in hydrocarbon basins in California and Oklahoma. We test whether a statistically significant rate increase exists within these areas and determine the corresponding timing and location based on nonparametric modeling of background seismicity rates. Annual injection volumes have increased monotonically since ~2001 in California and ~1998 in Oklahoma. While OK experienced a recent surge in seismic activity that exceeded the 95% confidence limit of a stationary Poisson process in ~2010, seismicity in CA showed no increase in background rates between 1980 and 2014. A systematic analysis of frequency-magnitude distributions (FMDs) of likely induced earthquakes in OK indicates that the FMDs are depleted in large-magnitude events. Seismicity in CA hydrocarbon basins, on the other hand, shows Gutenberg-Richter-type FMDs with b~1. Moreover, earthquakes and injection operations preferentially occur in distinct areas in CA, whereas in OK earthquakes occur closer to injection wells than expected for a random uniform process. To test whether injection operations may be responsible for the strongly different seismicity characteristics in CA and OK, we compare overall well density, wellhead pressures, peak and cumulative rates, as well as injection depths. We find that average injection rates, pressures and volumes are comparable between CA and OK, and that injection occurs on average 0.5 km deeper in CA than in OK. Thus, the operational parameters tested here cannot easily explain the vastly different seismogenic responses to injection operations in CA and OK, and may only be of secondary importance for the resulting earthquake activity.
The potential to induce earthquakes by fluid injection operations is likely controlled by the specific geologic setting and stress state on nearby faults.
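The "95% confidence limit of a stationary Poisson process" test mentioned above can be sketched as follows. The event counts and background rate below are hypothetical, and this one-sided exceedance test is only a minimal stand-in for the nonparametric background modeling the study describes:

```python
import math

def poisson_sf(n, lam):
    """P(N >= n) for a Poisson variate with mean lam (direct summation)."""
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n))
    return 1.0 - cdf

def rate_increase_significant(count, background_rate, years, alpha=0.05):
    """Flag a seismicity-rate surge: is the observed count improbable
    (p < alpha) under a stationary Poisson background model?"""
    return poisson_sf(count, background_rate * years) < alpha

# Hypothetical numbers: background of 2 events/yr, 12 events seen in 3 years
surge = rate_increase_significant(12, 2.0, 3.0)
```

With an expected count of 6, observing 12 events has p ≈ 0.02 under the stationary model, so the surge is flagged, whereas 8 events (p ≈ 0.26) would not be.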
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few usable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low-SNR regional or teleseismic events. As seismic signals are often localized in frequency and time, with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated on the pre-event noise and signal windows. The band-pass signals with high SNR are used to indicate the cutoff limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low-SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low-SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
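The band-wise SNR selection can be sketched as below. This is a simplified stand-in for the constant-Q subband decomposition the abstract describes (here plain FFT band masks rather than a true filter bank), with a synthetic trace in place of real data:

```python
import numpy as np

def band_snr_limits(trace, fs, noise_end, bands, snr_min=2.0):
    """Pick band-pass corner frequencies: keep the bands whose
    signal/noise power ratio exceeds snr_min, and return the lowest and
    highest corners kept. noise_end: sample index ending the pre-event window."""
    noise, signal = trace[:noise_end], trace[noise_end:]
    kept = []
    for f_lo, f_hi in bands:
        def band_power(x):
            spec = np.fft.rfft(x)
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            mask = (freqs >= f_lo) & (freqs < f_hi)
            # normalize by window length so the two windows are comparable
            return np.sum(np.abs(spec[mask]) ** 2) / len(x)
        if band_power(signal) / band_power(noise) >= snr_min:
            kept.append((f_lo, f_hi))
    if not kept:
        return None
    return kept[0][0], kept[-1][1]

# Hypothetical trace: white noise plus a 5 Hz arrival after the noise window
fs = 100.0
rng = np.random.default_rng(0)
t = np.arange(2000) / fs
x = 0.1 * rng.standard_normal(2000)
x[1000:] += np.sin(2 * np.pi * 5.0 * t[1000:])
bands = [(1, 2), (2, 4), (4, 8), (8, 16)]
lo, hi = band_snr_limits(x, fs, 1000, bands)
```

Only the band containing the 5 Hz arrival shows a strong SNR, so the returned corner frequencies bracket it.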
NASA Astrophysics Data System (ADS)
Sil, Arjun; Longmailai, Thaihamdau
2017-09-01
The lateral displacement of a Reinforced Concrete (RC) frame building during an earthquake has an important impact on structural stability and integrity. Seismic analysis and design of RC buildings demand particular care because of the structure's complex behavior: its performance depends on many influencing parameters and other inherent uncertainties. A reliability approach accounts for these factors and for the design uncertainties that influence the performance or response of the structure, so that the safety level or probability of failure can be ascertained. The present study aims to assess the reliability of the seismic performance of a four-storey residential RC building located in seismic Zone V, following the code provisions of Indian Standard IS: 1893-2002. The reliability assessment was performed by deriving, through regression, an explicit expression for the maximum lateral roof displacement as a failure function. A total of 319 four-storey RC buildings were analyzed by the linear static method using SAP2000, and the change in lateral roof displacement with variation of the parameters (column dimension, beam dimension, grade of concrete, floor height, and total weight of the structure) was observed. A generalized relation was established by regression and can be used to estimate the expected lateral displacement from those selected parameters. A comparison between the displacements obtained from analysis and those given by the derived equation shows that the proposed relation can be used directly to determine the expected maximum lateral displacement. The data obtained from the statistical computations were then used to obtain the probability of failure and the reliability.
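A regression-based failure function leads naturally to a Monte Carlo reliability estimate. In the sketch below both the regression coefficients and the parameter distributions are invented for illustration, not taken from the cited study:

```python
import numpy as np

# Hypothetical linear regression for roof displacement (mm); the coefficients
# are illustrative only, not those of the study.
def roof_displacement(col_dim, beam_dim, fck, height, weight):
    return (40.0 - 20.0 * col_dim - 10.0 * beam_dim
            - 0.2 * fck + 3.0 * height + 0.002 * weight)

def probability_of_failure(limit_mm=55.0, n=100_000, seed=1):
    """Monte Carlo reliability: sample the design parameters, evaluate the
    regression, and count exceedances of the displacement limit."""
    rng = np.random.default_rng(seed)
    d = roof_displacement(
        col_dim=rng.normal(0.45, 0.05, n),    # column size (m)
        beam_dim=rng.normal(0.35, 0.04, n),   # beam size (m)
        fck=rng.normal(25.0, 2.5, n),         # concrete grade (MPa)
        height=rng.normal(3.2, 0.1, n),       # storey height (m)
        weight=rng.normal(9000.0, 500.0, n),  # total weight (kN)
    )
    pf = float(np.mean(d > limit_mm))
    return pf, 1.0 - pf  # probability of failure, reliability

pf, reliability = probability_of_failure()
```

The failure probability here is small but nonzero; tightening the parameter scatter or the displacement limit moves it in the expected directions.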
High-resolution seismicity catalog of Italian peninsula in the period 1981-2015
NASA Astrophysics Data System (ADS)
Michele, M.; Latorre, D.; Castello, B.; Di Stefano, R.; Chiaraluce, L.
2017-12-01
In order to provide an updated reference catalog of Italian seismicity, the absolute locations of the last 35 years (1981-2015) of seismic activity were computed with a three-dimensional VP and VS velocity model covering the whole Italian territory. The NonLinLoc code (Lomax et al., 2000), which is based on a probabilistic approach, was used to provide a complete and robust description of the uncertainties associated with the locations, corresponding to the hypocentral solutions with the highest probability density. Moreover, the code, using a finite-difference approximation of the eikonal equation (Podvin and Lecomte, 1991), can handle strongly contrasted velocity models in the arrival-time computation. To optimize the earthquake locations, we included station corrections in the inverse problem. For each year, the number of available earthquakes depends on both the network detection capability and the occurrence of major seismic sequences. The starting earthquake catalog was based on 2.6 million P and 1.9 million S arrival-time picks for 278,607 selected earthquakes, each recorded by at least 3 stations of the Italian seismic network. Compared with previous catalogs, which consist of hypocentral locations retrieved with linearized location methods, the new catalog shows a marked improvement, as testified by the location parameters assessing the quality of the solutions (i.e., RMS, azimuthal gap, and formal errors on the horizontal and vertical components). In addition, we used the distance between the expected and the maximum-likelihood hypocenter locations to establish the unimodal (well-resolved location) or multimodal (poorly resolved location) character of the probability distribution. We used these parameters to classify the resulting locations into four classes (A, B, C and D) considering the simultaneous goodness of the above parameters.
The upper classes (A and B) include 65% of the relocated earthquakes, while the lowest class (D) includes only 7% of the seismicity. We present the new catalog, consisting of 272,847 events, showing some examples of earthquake locations related to background seismicity as well as to small-to-large seismic sequences that occurred in Italy over the last 35 years.
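The probabilistic-location idea behind NonLinLoc can be illustrated with a toy grid search: evaluate a Gaussian misfit likelihood over candidate hypocenters and keep the maximum-likelihood node. The homogeneous half-space, station geometry, and noise-free picks below are all made up; real codes use 3-D traveltime grids and report the full PDF, not just its peak:

```python
import numpy as np
from itertools import product

def locate(stations, picks, v, grid, sigma=0.1):
    """Toy probabilistic grid search: Gaussian arrival-time likelihood over
    candidate hypocenters in a homogeneous medium of velocity v; origin
    time is eliminated by demeaning observed and predicted times."""
    best, best_ll = None, -np.inf
    for x, y, z in grid:
        tt = np.array([np.linalg.norm(np.array([x, y, z]) - s) / v
                       for s in stations])
        r = (np.array(picks) - np.mean(picks)) - (tt - np.mean(tt))
        ll = -0.5 * np.sum((r / sigma) ** 2)   # log-likelihood up to a constant
        if ll > best_ll:
            best, best_ll = (x, y, z), ll
    return best

# Hypothetical network (km coordinates) and a synthetic event at (4, 6, 8)
stations = [np.array(s, float) for s in [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0)]]
true = np.array([4.0, 6.0, 8.0])
v = 5.0  # km/s
picks = [np.linalg.norm(true - s) / v for s in stations]
grid = list(product(np.arange(0, 11, 2.0), np.arange(0, 11, 2.0), np.arange(0, 11, 2.0)))
loc = locate(stations, picks, v, grid)
```

With noise-free picks the maximum-likelihood node coincides with the true hypocenter; adding pick noise spreads the PDF, which is exactly the uncertainty information the catalog's quality classes summarize.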
NASA Astrophysics Data System (ADS)
Havenith, Hans-Balder; Delvaux, Damien
2015-04-01
In the frame of the Belgian GeoRisCA multi-risk assessment project focused on the Kivu and Northern Tanganyika region, a seismic hazard map has been produced for this area. It is based on a recently re-compiled catalogue built from various local and global earthquake catalogues. The use of macroseismic epicenters determined from felt earthquakes allowed us to extend the time range back to the beginning of the 20th century, thus spanning about 100 years. The magnitudes have been homogenized to Mw and the coherence of the catalogue has been checked and validated. The seismo-tectonic zonation includes 10 seismic source areas defined on the basis of the regional geological structure, neotectonic fault systems, basin architecture and the distribution of earthquake epicenters. The seismic catalogue was filtered by removing obvious aftershocks, and Gutenberg-Richter laws were determined for each zone. On the basis of this seismo-tectonic information and the existing attenuation laws established by Twesigomwe (1997) and Mavonga et al. (2007) for this area, seismic hazard has been computed with the Crisis 2012 software (Ordaz et al., 2012). The outputs of this assessment clearly show higher PGA values (for a 475-year return period) along the Rift than the previous estimates by Twesigomwe (1997) and Mavonga (2007), even though the same attenuation laws were used. The main reason for these higher PGA values is likely the more detailed zonation of the Rift structure, marked by a strong gradient in seismicity from outside to inside the rift zone. Mavonga, T. (2007). An estimate of the attenuation relationship for the strong ground motion in the Kivu Province, Western Rift Valley of Africa. Physics of the Earth and Planetary Interiors 62, 13-21. Ordaz M., Martinelli F., Aguilar A., Arboleda J., Meletti C., D'Amico V. (2012). CRISIS 2012, Program for computing seismic hazard. Instituto de Ingeniería, Universidad Nacional Autónoma de México.
Twesigomwe, E. (1997). Probabilistic seismic hazard assessment of Uganda, Ph.D. Thesis, Dept. of Physics, Makerere University, Uganda.
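Fitting a Gutenberg-Richter law per source zone, as done above, usually reduces to estimating the b-value. A common estimator (not necessarily the one used in this study) is the Aki maximum-likelihood formula; the magnitudes below are invented:

```python
import math

def gr_b_value(mags, m_c):
    """Aki (1965) maximum-likelihood b-value from magnitudes at or above
    the completeness magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
    m = [x for x in mags if x >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - m_c)

# Hypothetical declustered zone catalogue, assumed complete above M 3.0;
# the M 2.5 event is discarded by the completeness cut
b = gr_b_value([3.0, 3.2, 3.4, 3.6, 3.8, 2.5], 3.0)
```

Declustering matters here: aftershocks bias the mean magnitude downward and hence the b-value upward, which is why the catalogue is filtered before the fit.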
Novel bio-inspired smart control for hazard mitigation of civil structures
NASA Astrophysics Data System (ADS)
Kim, Yeesock; Kim, Changwon; Langari, Reza
2010-11-01
In this paper, a new bio-inspired controller is proposed for vibration mitigation of smart structures subjected to ground disturbances (i.e. earthquakes). The control system is developed through the integration of a brain emotional learning (BEL) algorithm with a proportional-integral-derivative (PID) controller and a semiactive inversion (Inv) algorithm. The BEL algorithm is based on the neurologically inspired computational model of the amygdala and the orbitofrontal cortex. To demonstrate the effectiveness of the proposed hybrid BEL-PID-Inv control algorithm, a seismically excited building structure equipped with a magnetorheological (MR) damper is investigated. The performance of the proposed hybrid BEL-PID-Inv control algorithm is compared with that of passive, PID, linear quadratic Gaussian (LQG), and BEL control systems. In the simulation, the robustness of the hybrid BEL-PID-Inv control algorithm in the presence of modeling uncertainties as well as external disturbances is investigated. It is shown that the proposed hybrid BEL-PID-Inv control algorithm is effective in improving the dynamic responses of seismically excited building structure-MR damper systems.
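The PID component of the hybrid controller can be illustrated on a single-degree-of-freedom structure under ground excitation. Everything below (structure parameters, gains, excitation) is invented, and the BEL and inversion stages are omitted; this only shows the feedback-force idea the PID part contributes:

```python
import numpy as np

def simulate(kp=400.0, kd=40.0, ki=0.0, control=True):
    """1-DOF shear-frame analogue under ground acceleration, with a PID
    force on the relative displacement. Semi-implicit Euler integration."""
    m, c, k = 1.0, 0.5, 100.0      # mass, damping, stiffness (made up)
    dt, steps = 0.001, 5000
    x = v = integ = 0.0
    peak = 0.0
    for n in range(steps):
        t = n * dt
        ag = np.sin(10.0 * t) if t < 2.0 else 0.0   # resonant ground pulse
        integ += x * dt
        u = -(kp * x + kd * v + ki * integ) if control else 0.0
        a = (-c * v - k * x + u) / m - ag            # relative-motion EOM
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

peak_free = simulate(control=False)
peak_pid = simulate()
```

The derivative gain adds damping and the proportional gain stiffens the system, so the controlled peak displacement is far below the uncontrolled resonant response; in the paper this role is shared with the learning-based BEL stage.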
Ravazzoli, C L; Santos, J E; Carcione, J M
2003-04-01
We investigate the acoustic and mechanical properties of a reservoir sandstone saturated by two immiscible hydrocarbon fluids, under different saturations and pressure conditions. The modeling of static and dynamic deformation processes in porous rocks saturated by immiscible fluids depends on many parameters such as, for instance, porosity, permeability, pore fluid, fluid saturation, fluid pressures, capillary pressure, and effective stress. We use a formulation based on an extension of Biot's theory, which allows us to compute the coefficients of the stress-strain relations and the equations of motion in terms of the properties of the single phases at the in situ conditions. The dry-rock moduli are obtained from laboratory measurements for variable confining pressures. We obtain the bulk compressibilities, the effective pressure, and the ultrasonic phase velocities and quality factors for different saturations and pore-fluid pressures ranging from normal to abnormally high values. The objective is to relate the seismic and ultrasonic velocity and attenuation to the microstructural properties and pressure conditions of the reservoir. The problem has an application in the field of seismic exploration for predicting pore-fluid pressures and saturation regimes.
NASA Astrophysics Data System (ADS)
Schleicher, L.; Pratt, T. L.
2017-12-01
Underlying sediment can amplify ground motions during earthquakes, making site response estimates key components in seismic evaluations for building infrastructure. The horizontal-to-vertical spectral ratio (HVSR) method, using either earthquake signals or ambient noise as input, is an appealing method for estimating site response because it uses only a single seismic station rather than the two or more seismometers traditionally used to compute a horizontal sediment-to-bedrock spectral ratio (SBSR). A number of studies have had mixed results when comparing the accuracy of the HVSR and SBSR methods for identifying the frequencies and amplitudes of the primary resonance peaks. Many of these studies have been carried out in areas of complex geology, such as basins with structures that can introduce 3D effects. Here we assess the effectiveness of the HVSR method by comparison with the SBSR method and with modeled transfer functions in an area dominated by a flat, thin, unconsolidated sediment layer over bedrock, which should be an ideal setting for the HVSR method. In this preliminary study, we analyze teleseismic and regional earthquake recordings from a temporary seismometer array deployed throughout Washington, DC, which is underlain by a wedge of unconsolidated Atlantic Coastal Plain sedimentary strata ranging from 0 to 270 m in thickness. At most sites, we find a close match in the amplitudes and frequencies of large resonance peaks in horizontal ground motions at frequencies of 0.7 to 5 Hz in site response estimates using the HVSR and SBSR methods. Amplitudes of the HVSRs tend to be slightly lower than SBSRs at 3 Hz and below, but the amplitudes of the fundamental resonance peaks often match closely. The results suggest that the HVSR method could be a successful approach for computing site response estimates in areas of simple shallow geology consisting of thin sedimentary layers with a strong reflector at the underlying bedrock surface.
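A bare-bones HVSR computation looks like the following. Real workflows window, smooth, and average many records; here a single synthetic three-component trace with a horizontal-only 2 Hz resonance stands in for data:

```python
import numpy as np

def hvsr(north, east, vertical, fs):
    """Single-station horizontal-to-vertical spectral ratio: quadratic-mean
    horizontal amplitude spectrum divided by the vertical spectrum."""
    amp = lambda x: np.abs(np.fft.rfft(x * np.hanning(len(x))))
    h = np.sqrt(0.5 * (amp(north) ** 2 + amp(east) ** 2))
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    return freqs, h / amp(vertical)

# Hypothetical record: noise on all components, 2 Hz resonance on horizontals
fs, n = 100.0, 4096
t = np.arange(n) / fs
rng = np.random.default_rng(3)
z = rng.standard_normal(n)
ns = rng.standard_normal(n) + 5.0 * np.sin(2 * np.pi * 2.0 * t)
ew = rng.standard_normal(n) + 5.0 * np.sin(2 * np.pi * 2.0 * t + 0.7)
freqs, ratio = hvsr(ns, ew, z, fs)
band = (freqs > 1.9) & (freqs < 2.1)
```

The ratio peaks sharply near the 2 Hz resonance while staying near unity elsewhere, which is the signature used to read off the fundamental site frequency.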
NASA Astrophysics Data System (ADS)
Carcione, José M.; Poletto, Flavio; Farina, Biancamaria; Bellezza, Cinzia
2018-06-01
Seismic propagation in the upper part of the crust, where geothermal reservoirs are located, generally shows strong velocity dispersion and attenuation due to varying permeability and saturation conditions, and is affected by the brittleness and/or ductility of the rocks, including zones of partial melting. From the elastic-plastic standpoint, the seismic properties (seismic velocity, quality factor and density) depend on effective pressure and temperature. We describe the related effects with a Burgers mechanical element for the shear modulus of the dry-rock frame. The Arrhenius equation, combined with the octahedral stress criterion, defines the Burgers viscosity responsible for the brittle-ductile behaviour. The effects of permeability, partial saturation, varying porosity and mineral composition on the seismic properties are described by a generalization of the White mesoscopic-loss model to the case of a distribution of heterogeneities of those properties. The White model involves the wave-induced fluid-flow attenuation mechanism, by which seismic waves propagating through small-scale heterogeneities induce pressure gradients between regions of dissimilar properties, where part of the energy of the fast P-wave is converted to the slow P (Biot) wave. We consider a range of variations of the radius and size of the patches and thin layers, whose probability density function is defined by different distributions. The White models used here are those of spherical patches (for partial saturation) and thin layers (for permeability heterogeneities). The complex bulk modulus of the composite medium is obtained with the Voigt-Reuss-Hill average. Effective pressure effects are taken into account by using exponential functions. We then solve the 3D equation of motion in the space-time domain, approximating the White complex bulk modulus with that of a set of Zener elements connected in series.
The Burgers and generalized Zener models allow us to solve the equations with a direct grid method through the introduction of memory variables. The algorithm uses the Fourier pseudospectral method to compute the spatial derivatives. It is tested against an analytical solution obtained with the correspondence principle. We consider two main cases, namely the same rock frame (uniform porosity and permeability) saturated with water and a distribution of steam patches, and a water-saturated background medium with thin layers of dissimilar permeability. Our model indicates how seismic properties change with the geothermal reservoir temperature and pressure, showing that both seismic velocity and attenuation can be used as diagnostic tools to estimate the in situ conditions.
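The Voigt-Reuss-Hill average used for the composite bulk modulus is simple enough to show directly; the two-phase moduli and fractions below are illustrative values, not the paper's:

```python
def voigt_reuss_hill(moduli, fractions):
    """Voigt-Reuss-Hill estimate of the effective modulus of a composite:
    the arithmetic mean of the Voigt (upper) and Reuss (lower) bounds."""
    voigt = sum(f * m for f, m in zip(fractions, moduli))          # iso-strain
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))    # iso-stress
    return 0.5 * (voigt + reuss)

# Hypothetical two-phase medium: 70% quartz-like (37 GPa), 30% clay-like (21 GPa)
k_eff = voigt_reuss_hill([37.0, 21.0], [0.7, 0.3])
```

The estimate always lies between the Reuss and Voigt bounds; in the paper the same averaging is applied to the complex (frequency-dependent) moduli, so attenuation is averaged along with stiffness.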
Monitoring of seismic time-series with advanced parallel computational tools and complex networks
NASA Astrophysics Data System (ADS)
Kechaidou, M.; Sirakoulis, G. Ch.; Scordilis, E. M.
2012-04-01
Earthquakes have been a focus of human and research interest for centuries owing to their catastrophic effects on everyday life; they occur almost all over the world and exhibit behaviour that is hard to model or predict. On the other hand, their monitoring with more or less technologically updated instruments has been almost continuous, and thanks to this several mathematical models have been proposed to describe possible connections and patterns found in the resulting seismological time-series. Greece, one of the most seismically active territories on Earth, has detailed instrumental seismological data available from the beginning of the past century, providing researchers with valuable knowledge about seismicity levels across the country. Using powerful parallel computational tools, such as Cellular Automata, these data can be further analysed and, more importantly, modelled to reveal possible connections between different parameters of the seismic time-series under study. More specifically, Cellular Automata have proven very effective for modelling nonlinear complex systems, leading to several models proposed as analogues of earthquake fault dynamics. In this work, preliminary results of modelling seismic time-series with Cellular Automata, used to compose and develop the corresponding complex networks, are presented. The proposed methodology aims to reveal hidden relations in the examined time-series and to distinguish their intrinsic characteristics, in an effort to transform the time-series into complex networks and graphically represent their evolution in time-space.
Consequently, based on the presented results, the proposed model could eventually serve as an efficient and flexible computational tool for a generic understanding of possible triggering mechanisms, as derived from adequate monitoring and modelling of regional earthquake phenomena.
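One widely cited cellular-automaton analogue of fault dynamics (not necessarily the model used by these authors) is the Olami-Feder-Christensen stress-redistribution automaton, sketched below with made-up lattice parameters:

```python
import numpy as np

def ofc_step(stress, alpha=0.2, threshold=1.0):
    """One driving step of the Olami-Feder-Christensen automaton: load the
    lattice uniformly until the first site fails, then topple repeatedly,
    redistributing a fraction alpha of a failing site's stress to each of
    its 4 neighbours. Returns the relaxed lattice and the avalanche size."""
    stress = stress + (threshold - stress.max())  # uniform drive to failure
    size = 0
    while (over := np.argwhere(stress >= threshold)).size:
        for i, j in over:
            s = stress[i, j]
            stress[i, j] = 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha * s   # open (dissipative) boundaries
    return stress, size

rng = np.random.default_rng(7)
grid = rng.uniform(0, 0.9, size=(20, 20))
sizes = []
for _ in range(200):
    grid, s = ofc_step(grid)
    sizes.append(s)
```

The avalanche-size sequence plays the role of a synthetic earthquake catalogue; with alpha < 0.25 the model is non-conservative, which is what produces its characteristic broad size distributions.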
Development of Procedures for Computing Site Seismicity
1993-02-01
surface-wave magnitude when in the range of 5 to 7.5. REFERENCES: Ambraseys, N.N. (1970). "Some characteristic features of the Anatolian fault zone." Association of Engineering Geologists, Special Publication on geology, seismicity and environmental impact. Los Angeles, CA, University Publishers, 1973. [Table of fault recurrence intervals (yr), slip rates (cm/yr), and fault lengths (km) omitted.]
Fast kinematic ray tracing of first- and later-arriving global seismic phases
NASA Astrophysics Data System (ADS)
Bijwaard, Harmen; Spakman, Wim
1999-11-01
We have developed a ray tracing algorithm that traces first- and later-arriving global seismic phases precisely (traveltime errors of the order of 0.1 s) and with great computational efficiency (15 rays per second). To achieve this, we have extended and adapted two existing ray tracing techniques: a graph method and a perturbation method. The two resulting algorithms are able to trace (critically) refracted, (multiply) reflected, some diffracted (Pdiff), and (multiply) converted seismic phases in a 3-D spherical geometry, thus including the largest part of the seismic phases commonly observed on seismograms. We have tested and compared the two methods in 2-D and 3-D Cartesian and spherical models, for which both algorithms yielded precise paths and traveltimes. These tests indicate that only the perturbation method is computationally efficient enough to perform 3-D ray tracing on global data sets of several million phases. To demonstrate its potential for non-linear tomography, we have applied the ray perturbation algorithm to a data set of 7.6 million P and pP phases used by Bijwaard et al. (1998) for linearized tomography. This showed that the expected heterogeneity within the Earth's mantle leads to significant non-linear effects on traveltimes for 10 per cent of the applied phases.
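The essence of the graph method is a shortest-path search over a gridded slowness model. The toy version below uses Dijkstra's algorithm on a 2-D grid with an 8-connected stencil and a uniform test medium; production codes use finer stencils and spherical geometry:

```python
import heapq

def shortest_traveltime(slowness, src, rec):
    """Graph-method first-arrival traveltime on a 2-D grid (1 km spacing):
    Dijkstra over nodes, edge time = average slowness * edge length."""
    ny, nx = len(slowness), len(slowness[0])
    dist = {src: 0.0}
    visited = set()
    heap = [(0.0, src)]
    # 8-connected stencil; finer stencils reduce angular discretization error
    steps = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    while heap:
        t, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == rec:
            return t
        i, j = node
        for di, dj in steps:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                length = (di * di + dj * dj) ** 0.5
                nt = t + 0.5 * (slowness[i][j] + slowness[ni][nj]) * length
                if nt < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return None

# Hypothetical model: uniform 5 km/s medium (slowness 0.2 s/km), 10 km offset
s = [[0.2] * 11 for _ in range(11)]
t = shortest_traveltime(s, (0, 0), (0, 10))
```

In the uniform medium the first arrival is the straight ray, 10 km at 5 km/s, i.e. 2 s; the graph method recovers it exactly because the straight path lies on the stencil.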
Seismic Hazard and Ground Motion Characterization at the Itoiz Dam (Northern Spain)
NASA Astrophysics Data System (ADS)
Rivas-Medina, A.; Santoyo, M. A.; Luzón, F.; Benito, B.; Gaspar-Escribano, J. M.; García-Jerez, A.
2012-08-01
This paper presents a new hazard-consistent ground motion characterization of the Itoiz dam site, located in Northern Spain. Firstly, we propose a methodology with different levels of approximation to the expected ground motion at the dam site. Secondly, we apply this methodology taking into account the particular characteristics of the site and of the dam. Hazard calculations were performed following the Probabilistic Seismic Hazard Assessment method using a logic tree, which accounts for different seismic source zonings and different ground-motion attenuation relationships. The study was done in terms of peak ground acceleration and several spectral accelerations at periods coinciding with the fundamental vibration periods of the dam. In order to estimate these ground motions we consider two different dam conditions: when the dam is empty (T = 0.1 s) and when it is filled with water to its maximum capacity (T = 0.22 s). Additionally, the seismic hazard analysis is done for two return periods: 975 years, related to the project earthquake, and 4,975 years, identified with an extreme event. Soil conditions were also taken into account at the site of the dam. Through the proposed methodology we deal with different ways of characterizing ground motion at the study site. In a first step, we obtain the uniform hazard response spectra for the two return periods. In a second step, a disaggregation analysis is done in order to obtain the controlling earthquakes that can affect the dam. Subsequently, we characterize the ground motion at the dam site in terms of specific response spectra for target motions defined by the expected values SA(T) at T = 0.1 and 0.22 s for the return periods of 975 and 4,975 years, respectively. Finally, synthetic acceleration time histories for earthquake events matching the controlling parameters are generated using the discrete wave-number method and subsequently analyzed.
Because of the short distances between the controlling earthquakes and the dam site, we considered finite sources in these computations. We conclude that directivity effects should be taken into account as an important variable in this kind of study of ground motion characteristics.
NASA Astrophysics Data System (ADS)
Carmichael, J. D.; Joughin, I. R.; Behn, M. D.; Das, S. B.; Lizarralde, D.
2012-12-01
We present seismic observations assembled from more than three years of melt-season measurements collected near seasonally draining supraglacial lakes on the Greenland Ice Sheet (68.7311, -49.5925). On transient time scales (< 1 day), these data include a record of the seismic response coincident with at least three documented lake drainage events. During one particular event, drainage is preceded by two hours of impulsive high-energy seismic signals, followed by the onset of continuous broadband signals (2-50 Hz) that we interpret as surface-to-bed meltwater transfer. This drainage is followed by additional transient icequakes similar in timing and energy to the precursory activity. Over a seasonal time scale (> 1 month), our data record a transition in seismicity between two distinct modes, one characterized by relative quiescence and the other by uniform energy observed network-wide as a continuous, repetitive signal. The transition between modes is abrupt (~ 2 hours) and is observed using multiple seismic discriminants. We interpret this rapid transition as reflecting the evolution of the morphology of a basal drainage system as it responds to melt input. This interpretation is tested against additional geophysical observations, including temperature-based melt models, satellite imagery, and GPS measurements. Finally, we outline and advocate a routine for monitoring ice-sheet seismicity with a focus on distinguishing surface from basal sources.
Radiated Seismic Energy of Earthquakes in the South-Central Region of the Gulf of California, Mexico
NASA Astrophysics Data System (ADS)
Castro, Raúl R.; Mendoza-Camberos, Antonio; Pérez-Vertti, Arturo
2018-05-01
We estimated the radiated seismic energy (ES) of 65 earthquakes located in the south-central region of the Gulf of California. Most of these events occurred along active transform faults that define the Pacific-North America plate boundary and have magnitudes between M3.3 and M5.9. We corrected the spectral records for attenuation using nonparametric S-wave attenuation functions determined with the whole data set. The path effects were isolated from the seismic source using a spectral inversion. We computed the radiated seismic energy of the earthquakes by integrating the squared velocity source spectrum and estimated their apparent stresses. We found that most events have apparent stresses between 3 × 10-4 and 3 MPa. Model-independent estimates of the ratio between seismic energy and moment (ES/M0) indicate that this ratio is independent of earthquake size. We conclude that, in general, the apparent stress is low (σa < 3 MPa) in the south-central and southern Gulf of California.
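The integration step can be sketched as follows. The spectrum here is a hypothetical omega-squared (Brune) shape with unit long-period level, and the density, shear velocity, and radiation-pattern prefactors are deliberately collapsed into a single 4·pi·rho·beta factor, so this is a schematic of the procedure rather than the study's calibration:

```python
import math
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal integration (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def radiated_energy(freqs, vel_spec, rho=2700.0, beta=3500.0):
    """Es proportional to the integral of the squared velocity source
    spectrum; all constant prefactors are collapsed into 4*pi*rho*beta."""
    return 4.0 * math.pi * rho * beta * trapezoid(vel_spec ** 2, freqs)

def apparent_stress(es, m0, mu=3.0e10):
    """Apparent stress: sigma_a = mu * Es / M0."""
    return mu * es / m0

# Hypothetical Brune spectrum, unit long-period level, corner fc = 2 Hz
fc = 2.0
freqs = np.linspace(0.0, 400.0, 40001)
disp = 1.0 / (1.0 + (freqs / fc) ** 2)     # displacement spectrum shape
vel = 2.0 * math.pi * freqs * disp         # velocity spectrum
flux = trapezoid(vel ** 2, freqs)          # analytic value: pi**3 * fc**3
```

For this spectral shape the integral has the closed form pi³·fc³, which makes the numerical integration easy to check; the cubic dependence on corner frequency is why energy estimates are so sensitive to the usable bandwidth.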
Evaluation of seismic spatial interaction effects through an impact testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.D.; Driesen, G.E.
The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects are analytically difficult to assess. Analytical solutions to impact problems are conservative and available only for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto instrumented targets. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time-history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.
Procedures for Computing Site Seismicity
1994-02-01
Fourth World Conference on Earthquake Engineering, Santiago, Chile, 1969. Schnabel, P.B., J. Lysmer, and H.B. Seed (1972). SHAKE, a computer program for...This fault system is composed of the Elsinore and Whittier fault zones, Agua Caliente fault, and Earthquake Valley fault. Five recent earthquakes of
NASA Astrophysics Data System (ADS)
Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.
2017-12-01
The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, having hosted more than six M > 6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec seismic zone (WQSZ), where an MN 5.2 event was reported on May 17, 2013. A good understanding of seismicity and its relation to the St. Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimate of event source parameters using (1) hypoDD to relocate event hypocenters, (2) spectral analysis to derive corner frequency, magnitude, and hence static stress drop, and (3) first-arrival polarities to derive focal mechanism solutions of selected events. We use a combined dataset of 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN) and temporary deployments from the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P- and S-wave differential travel times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for all events. We choose the final corner frequency and moment values for each event using the median estimate across stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values, and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters and directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates.
To first order, source-dimension estimates from both methods generally agree. We observe higher corner frequencies and higher stress drops (ranging from 20 to 70 MPa), typical of intraplate seismicity in comparison with interplate seismicity. We follow similar approaches to study 25 MN 3+ events reported in the WQSZ using data recorded by the CNSN and USArray Transportable Array.
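The circular-rupture stress-drop calculation used in studies like this one can be sketched as follows, assuming a Brune-type source model; the k constant and shear-wave speed are illustrative assumptions, not the authors' exact values.

```python
import numpy as np

def brune_stress_drop(M0, fc, beta=3500.0, k=0.372):
    # Circular-rupture radius from corner frequency (Brune model):
    # r = k * beta / fc, then the Eshelby static stress drop
    # delta_sigma = 7 * M0 / (16 * r^3).  M0 in N*m, fc in Hz, beta in m/s.
    r = k * beta / fc
    return 7.0 * M0 / (16.0 * r ** 3)

def moment_magnitude(M0):
    # Hanks-Kanamori moment magnitude, with M0 in N*m
    return (2.0 / 3.0) * (np.log10(M0) - 9.1)
```

A hypothetical M0 = 1e15 N*m event with a 5 Hz corner frequency gives roughly 25 MPa, consistent with the 20-70 MPa range reported above.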
NASA Astrophysics Data System (ADS)
Herrmann, M.; Kraft, T.; Tormann, T.; Scarabello, L.; Wiemer, S.
2017-12-01
Induced seismicity at the site of the Basel Enhanced Geothermal System (EGS) decayed continuously for six years after injection was stopped in December 2006. Starting in May 2012, the Swiss Seismological Service detected a renewed increase of induced seismicity in the EGS reservoir to levels last seen in 2007, reaching magnitudes up to ML 2.0. Seismic monitoring at this EGS site has been running for more than ten years, but the details of the long-term behavior of its induced seismicity remained unexplored because a seismic event catalog that is consistent in detection sensitivity and magnitude estimation did not exist. We have created such a catalog by applying our matched-filter detector to the 11-year-long seismic recordings of a borehole station at 2.7 km depth. Based on 3'600 located earthquakes of the operator's borehole-network catalog, we selected about 2'500 reasonably dissimilar templates using waveform clustering. This large template set ensures adequate coverage of the diversity of event waveforms, which is due to the reservoir's highly complex fault system and the close observation distance. To cope with the increased computational demand of scanning 11 years of data with 2'500 templates, we parallelized our detector to run on a high-performance computer of the Swiss National Supercomputing Centre. We detect more than 200'000 events down to ML -2.5 during the six-day-long stimulation in December 2006 alone. Previously, only 13'000 detections found by an amplitude-threshold-based detector were known for this period. The high temporal and spatial resolution of this new catalog allows us to analyze the statistics of the induced Basel earthquakes in great detail.
We resolve spatio-temporal variations of the seismicity parameters (a- and b-values) that have not been identified before and derive the first high-resolution temporal evolution of the seismic hazard for the Basel EGS reservoir. In summer 2017, our detector monitored the 10-week pressure-reduction operation at the Basel-1 borehole, during which the well was periodically opened. The detections drove a traffic-light system based on magnitude thresholds and earthquake rates. For future EGS projects in Switzerland, our detector is planned to run in near real-time and provide the basis for an advanced traffic-light system.
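A matched-filter detector of the kind described above slides each template along the continuous data and declares a detection wherever the normalized cross-correlation exceeds a threshold. A minimal single-template sketch follows; the threshold and the brute-force loop are illustrative (production detectors use FFT-based correlation and thousands of templates in parallel).

```python
import numpy as np

def matched_filter(data, template, threshold=0.8):
    # Return sample indices where the normalized cross-correlation
    # between the template and a sliding data window exceeds threshold.
    n = len(template)
    t0 = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        s = w.std()
        if s == 0.0:          # skip flat (e.g. gap-filled) windows
            continue
        cc = np.sum(t0 * (w - w.mean())) / s
        if cc >= threshold:
            detections.append(i)
    return detections
```

When the data window is an exact (possibly scaled) copy of the template, the correlation coefficient is 1, so even small events buried in noise can be pulled out well below the network's amplitude-trigger threshold.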
New seismic study begins in Puerto Rico
Tarr, A.C.
1974-01-01
A new seismological project is now underway in Puerto Rico to provide information needed for accurate assessment of the island's seismic hazard. The project should also help to increase understanding of the tectonics and geologic evolution of the Caribbean region. The Puerto Rico Seismic Program is being conducted by the Geological Survey with support provided by the Puerto Rico Water Resources Authority, an agency responsible for generation and distribution of electric power throughout the Commonwealth. The Program will include the installation of a network of high-quality seismograph stations to monitor seismic activity on and around Puerto Rico. These stations will be distributed across the island to record the seismicity as uniformly as possible. The detection and accurate location of small earthquakes, as well as moderate-magnitude shocks, will aid in mapping active seismic zones and in compiling frequency-of-occurrence statistics, which ultimately will be useful in seismic risk zoning of the island.
Brenguier, F; Campillo, M; Takeda, T; Aoki, Y; Shapiro, N M; Briand, X; Emoto, K; Miyake, H
2014-07-04
Volcanic eruptions are caused by the release of pressure that has accumulated due to hot volcanic fluids at depth. Here, we show that the extent of the regions affected by pressurized fluids can be imaged through the measurement of their response to transient stress perturbations. We used records of seismic noise from the Japanese Hi-net seismic network to measure the crustal seismic velocity changes below volcanic regions caused by the 2011 moment magnitude (M(w)) 9.0 Tohoku-Oki earthquake. We interpret coseismic crustal seismic velocity reductions as related to the mechanical weakening of the pressurized crust by the dynamic stress associated with the seismic waves. We suggest, therefore, that mapping seismic velocity susceptibility to dynamic stress perturbations can be used for the imaging and characterization of volcanic systems. Copyright © 2014, American Association for the Advancement of Science.
Assessment of seismic hazard in the North Caucasus
NASA Astrophysics Data System (ADS)
Ulomov, V. I.; Danilova, T. I.; Medvedeva, N. S.; Polyakova, T. P.; Shumilina, L. S.
2007-07-01
The seismicity of the North Caucasus is the highest in the European part of Russia. The detection of potential seismic sources here and long-term prediction of earthquakes are extremely important for the assessment of seismic hazard and seismic risk in this densely populated and industrially developed region of the country. The seismogenic structures of the Iran-Caucasus-Anatolia and Central Asia regions, adjacent to European Russia, are the subjects of this study. These structures are responsible for the specific features of regional seismicity and for the geodynamic interaction with adjacent areas of the Scythian and Turan platforms. The most probable potential sources of earthquakes with magnitudes M = 7.0 ± 0.2 and 7.5 ± 0.2 in the North Caucasus are located. The possible macroseismic effect of one of them is assessed.
Analysis of longitudinal seismic response of bridge with magneto-rheological elastomeric bearings
NASA Astrophysics Data System (ADS)
Li, Rui; Li, Xi; Wu, Yueyuan; Chen, Shiwei; Wang, Xiaojie
2016-04-01
As the weakest part of the bridge system, a traditional bridge bearing is incapable of isolating impact loads such as earthquakes. A magneto-rheological elastomeric bearing (MRB) with adjustable stiffness and damping parameters is designed, tested and modeled. The developed Bouc-Wen model is adopted to represent the constitutive relation and force-displacement behavior of an MRB. Then, the lead rubber bearing (LRB), passive MRB and controllable MRB are modeled by the finite element method (FEM). Furthermore, two typical seismic waves are adopted as inputs for the bridge seismic-response isolation system. Experiments are carried out to investigate the response along the bridge with on-off controlled MRBs. The results show that the isolating performance of the MRB is similar to that of the traditional LRB, which ensures the fail-safe capability of a bridge with MRBs under seismic excitation. In addition, the controllable bridge with MRBs demonstrates superior isolation capacity and energy dissipation: it reduces the peak acceleration of the bridge beam by 33.3% and the bearing displacement by 34.1%. The shear force at the pier top is also alleviated.
Real-time Seismic Alert System of NIED
NASA Astrophysics Data System (ADS)
Horiuchi, S.; Fujinawa, Y.; Negishi, H.; Matsumoto, T.; Fujiwara, H.; Kunugi, T.; Hayashi, Y.
2001-12-01
An extensive nationwide seismic network has been constructed, composed of a high-sensitivity seismographic network, a broadband seismographic network, and a strong-motion seismographic network. Data from some 3,000 sites belonging to NIED, JMA, and universities are to be accumulated and distributed through NIED to scientists and engineers via the Internet under the coordination of the National Seismic Research Committee of MEXT. As a practical application of these data, we are now developing a real-time seismic alert information system for the purpose of providing short-term warning of imminent strong ground motions from major earthquakes, from several seconds to a few days in advance. The contents of the information are seismic focal parameters (several seconds), seismic fault plane solutions (some 10 seconds), and aftershock activity (several minutes to a few days). The fundamental fault parameters are used to build site-specific information for particular users to trigger automated and/or half-automated responses. The most important application is an immediate estimate of the expected shaking distribution and damage in a district, using a synthetic database and site effects, so that local governments can initiate proper hazard mitigation measures. Another application is the estimation of arrival time and shaking strength at any individual site so that human lives can be safeguarded. The system could also start automatic electrical isolation and protection of computer systems, hazardous systems, transportation systems and so on. The information is corrected successively as seismic ground motions are received at a larger number of sites, with the result that more accurate and more sophisticated earthquake information is transmitted to users. Besides the rapid determination of seismic parameters, one of the essential items in this alert system is the data transmission means.
The data transmission means must assure negligibly small delay and inexpensive cost, given the very small data quantity involved. For imminent-information transmission, a leased line is the most suitable because of its short time delay of less than 0.1 second without any interference from other sources, but it is very expensive for particular users given how infrequently hazardous earthquakes occur. Another means is modified packet-transfer communication, characterized by reasonable cost and a small time delay on the order of 1 second. For information transmission to several hundred thousand users, satellite data broadcast would be one practical solution: data are expected to arrive with a total delay of some 2 seconds, including a one-hop delay of some 0.5 second to the satellite. The system will begin trial operation in 2002 for evaluation of the whole chain, including rapid seismic parameter calculation, data transmission, automated processes, and particular safeguard actions for several chosen users.
NASA Astrophysics Data System (ADS)
De Siena, Luca; Rawlinson, Nicholas
2016-04-01
Non-standard seismic imaging (velocity, attenuation, and scattering tomography) of the North Sea basins using unexploited seismic intensities from previous passive and active surveys is key to better imaging and monitoring of subsurface fluids. These intensities provide unique solutions to the problem of locating and tracking gas/fluid movements in the crust and of depicting sub-basalt and sub-intrusive structures in volcanic reservoirs. The proposed techniques have been tested on volcanic islands (Deception Island) and have proved effective at monitoring fracture opening, imaging buried fluid-filled bodies, and tracking water/gas interfaces. These novel seismic attributes are modelled in space and time and connected with the lithology of the sampled medium, specifically density and permeability, with a novel computational code with strong commercial potential as a key output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DC Hartshorn, SP Reidel, AC Rohay
1998-10-23
Hanford Seismic Monitoring provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network (HSN) for the U.S. Department of Energy and its contractors. The staff also locates and identifies sources of seismic activity and monitors changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, Natural Phenomena Hazards assessments, and engineering design and construction. In addition, the seismic monitoring organization works with the Hanford Site Emergency Services Organization to provide assistance in the event of an earthquake on the Hanford Site. The HSN and the Eastern Washington Regional Network (EWRN) consist of 42 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Monitoring staff. The operational rate for the third quarter of FY 1998 was 99.99% for stations in the HSN and 99.95% for stations of the EWRN. For the third quarter of FY 1998, the acquisition computer triggered 133 times. Of these triggers, 11 were local earthquakes: 5 (45%) in the Columbia River Basalt Group, 2 (18%) in the pre-basalt sediments, and 4 (36%) in the crystalline basement. The geologic and tectonic environments where these earthquakes occurred are discussed in this report.
Extreme magnitude earthquakes and their economical impact: The Mexico City case
NASA Astrophysics Data System (ADS)
Chavez, M.; Mario, C.
2005-12-01
The consequences (estimated by the human and economic losses) of the recent occurrence worldwide of extreme-magnitude (for the region under consideration) earthquakes, such as the 19 September 1985 event in Mexico (Richter magnitude Ms 8.1, moment magnitude Mw 8.01) or the 26 December 2004 event in Indonesia (Ms 9.4, Mw 9.3), stress the importance of performing seismic hazard analyses that specifically incorporate this possibility. Herewith, we present and apply a methodology, based on plausible extreme seismic scenarios and the computation of their associated synthetic accelerograms, to estimate the seismic hazard on Mexico City (MC) stiff and compressible surficial soils. The uncertainties about the characteristics of the potential finite seismic sources, as well as those related to the dynamic properties of MC compressible soils, are taken into account. The economic consequences (i.e., the seismic risk = seismic hazard x economic cost) implicit in the seismic coefficients proposed in MC seismic codes before (1976) and after (2004) the 1985 earthquake are analyzed. Based on the latter and on an acceptable-risk criterion, a maximum seismic coefficient (MSC) of 1.4g (g = 9.81 m/s²) of the elastic acceleration design spectra (5 percent damping), which has a probability of exceedance of 2.4 × 10⁻⁴, seems appropriate for analyzing the seismic behavior of infrastructure located on MC compressible soils, if extreme Mw 8.5 subduction thrust earthquakes (similar to the one that occurred on 19 September 1985, with an observed, equivalent MSC of 1g) occur in the next 50 years.
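A quoted exceedance probability can be related to an occurrence rate and an exposure time through the usual Poisson assumption. A minimal sketch; reading the 2.4 × 10⁻⁴ figure as an annual rate is an illustrative assumption, not a claim about the paper's definition:

```python
import math

def prob_exceed(annual_rate, years):
    # Poisson probability of at least one exceedance in `years`,
    # given an annual exceedance rate: P = 1 - exp(-rate * years)
    return 1.0 - math.exp(-annual_rate * years)
```

Under that reading, the 50-year exceedance probability would be 1 - exp(-0.012), about 1.2%, and the corresponding return period about 4,200 years.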
Community Seismic Network (CSN)
NASA Astrophysics Data System (ADS)
Clayton, R. W.; Heaton, T. H.; Kohler, M. D.; Cheng, M.; Guy, R.; Chandy, M.; Krause, A.; Bunn, J.; Olson, M.; Faulkner, M.; Liu, A.; Strand, L.
2012-12-01
We report on developments in sensor connectivity, architecture, and data fusion algorithms executed in Cloud computing systems in the Community Seismic Network (CSN), a network of low-cost sensors housed in homes and offices by volunteers in the Pasadena, CA area. The network has over 200 sensors continuously reporting anomalies in local acceleration through the Internet to a Cloud computing service (the Google App Engine) that continually fuses sensor data to rapidly detect shaking from earthquakes. The Cloud computing system consists of data centers geographically distributed across the continent and is likely to be resilient even during earthquakes and other local disasters. The region of Southern California is partitioned in a multi-grid style into sets of telescoping cells called geocells. Data streams from sensors within a geocell are fused to detect anomalous shaking across the geocell. Spatio-temporal patterns across geocells are used to detect anomalies across regions. The challenge is to detect earthquakes rapidly with an extremely low false-positive rate. We report on two data fusion algorithms: one that tessellates the surface so as to fuse data from a large region around Pasadena, and another that uses a standard tessellation of equal-sized cells. Since September 2011, the network has successfully detected earthquakes of magnitude 2.5 or higher within 40 km of Pasadena. In addition to the standard USB device, which connects to the host's computer, we have developed a stand-alone sensor that connects directly to the Internet via Ethernet or Wi-Fi. This bypasses security concerns that some companies have with the USB-connected devices, and allows for 24/7 monitoring at sites that would otherwise shut down their computers after working hours. In buildings we use the sensors to model the behavior of the structures during weak events in order to understand how they will perform during strong events.
Visualization models of instrumented buildings ranging from five to 22 stories tall have been constructed using Google SketchUp. Ambient vibration records are used to identify the first set of horizontal vibrational modal frequencies of the buildings. These frequencies are used to compute the response on every floor of the building, given either observed data or scenario ground-motion input at the buildings' base.
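Computing the response of one identified building mode to ground-motion input reduces to integrating a damped single-degree-of-freedom oscillator. A minimal time-stepping sketch; the damping ratio and the semi-implicit Euler scheme are illustrative assumptions, not the authors' method:

```python
import math

def sdof_peak_response(ground_accel, dt, f_n, zeta=0.05):
    # Peak relative displacement of a damped single-degree-of-freedom
    # oscillator (one building mode, natural frequency f_n in Hz)
    # driven by a ground-acceleration time series, integrated with a
    # simple semi-implicit Euler scheme (dt must be << 1/f_n).
    wn = 2.0 * math.pi * f_n
    u = v = peak = 0.0
    for a_g in ground_accel:
        a = -a_g - 2.0 * zeta * wn * v - wn * wn * u
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak
```

Repeating this for each identified modal frequency and combining the modal contributions floor by floor is the essence of the response computation described above.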
Combined seismic plus live-load analysis of highway bridges.
DOT National Transportation Integrated Search
2011-10-01
"The combination of seismic and vehicle live loadings on bridges is an important design consideration. There are well-established design : provisions for how the individual loadings affect bridge response: structural components that carry vertical li...
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems - 1 (CTS-1) at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth order time domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
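SW4 itself is a fourth-order, 3-D code; the finite-difference idea it builds on can be illustrated with a second-order 1-D acoustic sketch. Grid, time step, and velocity values below are illustrative, and the CFL check shows why resolving higher frequencies (finer grids) forces smaller time steps and larger memory:

```python
import numpy as np

def fd_wave_1d(nx=201, nt=300, dx=10.0, dt=1.0e-3, c=3000.0):
    # Second-order 1-D acoustic finite-difference scheme with fixed
    # ends and an initial displacement pulse at the grid center.
    courant = c * dt / dx
    assert courant <= 1.0, "CFL stability condition violated"
    r2 = courant ** 2
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0            # initial pulse
    for _ in range(nt):
        u_next = np.zeros(nx)
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u
```

Halving dx to capture higher frequencies forces dt down as well to keep `courant <= 1`, which in 3-D multiplies both memory and run time, the pressure the abstract describes.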
NASA Astrophysics Data System (ADS)
Ruiz, M.; Galve, A.; Monfret, T.; Sapin, M.; Charvis, P.; Laigle, M.; Evain, M.; Hirn, A.; Flueh, E.; Gallart, J.; Diaz, J.; Lebrun, J. F.
2013-09-01
This work focuses on the analysis of a unique set of seismological data recorded by two temporary networks of seismometers deployed onshore and offshore in the Central Lesser Antilles Island Arc from Martinique to Guadeloupe islands. During the whole recording period, extending from January to the end of August 2007, more than 1300 local seismic events were detected in this area. A subset of 769 earthquakes was located precisely by using HypoEllipse. We also computed focal mechanisms using P-wave polarities of the best azimuthally constrained earthquakes. We detected earthquakes beneath the Caribbean forearc and in the Atlantic oceanic plate as well. At depth seismicity delineates the Wadati-Benioff Zone down to 170 km depth. The main seismic activity is concentrated in the lower crust and in the mantle wedge, close to the island arc beneath an inner forearc domain in comparison to an outer forearc domain where little seismicity is observed. We propose that the difference of the seismicity beneath the inner and the outer forearc is related to a difference of crustal structure between the inner forearc interpreted as a dense, thick and rigid crustal block and the lighter and more flexible outer forearc. Seismicity is enhanced beneath the inner forearc because it likely increases the vertical stress applied to the subducting plate.
Wood, W.T.; Hart, P.E.; Hutchinson, D.R.; Dutta, N.; Snyder, F.; Coffin, R.B.; Gettrust, J.F.
2008-01-01
To determine the impact of seeps and focused flow on the occurrence of shallow gas hydrates, several seafloor mounds in the Atwater Valley lease area of the Gulf of Mexico were surveyed with a wide range of seismic frequencies. Seismic data were acquired with a deep-towed, Helmholtz resonator source (220-820 Hz); a high-resolution, Generator-Injector air-gun (30-300 Hz); and an industrial air-gun array (10-130 Hz). Each showed a significantly different response in this weakly reflective, highly faulted area. Seismic modeling and observations of reversed-polarity reflections and small-scale diffractions are consistent with a model of methane transport dominated regionally by diffusion but punctuated by intense upward advection responsible for the bathymetric mounds, as well as likely advection along pervasive filamentous fractures away from the mounds.
Passive Seismic for Hydrocarbon Indicator : Between Expectation and Reality
NASA Astrophysics Data System (ADS)
Pandito, Riky H. B.
2018-03-01
Over the past 5-10 years, the passive seismic method has become increasingly popular in our country for finding hydrocarbons. Low cost, non-destructive acquisition, and easy mobilization are the main reasons for choosing the method, but some practitioners remain pessimistic about its results. Instrument specification, data condition, and processing method are several of the factors that influence the character and interpretation of passive seismic results. In 2010, one prospect in the East Java Basin was measured, comprising 112 objective points and several calibration points. The measurement results indicate a positive response. Furthermore, in 2013 exploration drilling was conducted on the prospect. A drill stem test showed 22 MMCFD in the objective zone, upper to late Oligocene. In 2015, remeasurement of the objective area showed responses consistent with the previous measurement. Passive seismic is a unique method that can give different results over dry, gas, and oil areas, in producing fields, and also in temporarily suspended areas with hydrocarbon content.
Source-independent full waveform inversion of seismic data
Lee, Ki Ha
2006-02-14
A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
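The normalization step described above, dividing each trace's spectrum by that of a reference trace so the common source term cancels, can be sketched as follows; the `eps` regularization is an illustrative assumption, not part of the disclosed algorithm:

```python
import numpy as np

def normalized_wavefield(traces, ref_index=0, eps=1e-12):
    # Fourier-transform each trace (one per row) and divide by the
    # spectrum of a reference trace: if every trace is the same source
    # wavelet convolved with a different Earth response, the source
    # spectrum cancels, leaving a source-independent, complex,
    # dimensionless wavefield.  eps guards against spectral notches.
    spectra = np.fft.rfft(np.asarray(traces, dtype=float), axis=1)
    return spectra / (spectra[ref_index] + eps)
```

The reference row normalizes to unity, and any overall source scaling or phase common to all traces drops out, which is why no source or source-coupling information is needed for the inversion.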
Miller, John J.; Agena, W.F.; Lee, M.W.; Zihlman, F.N.; Grow, J.A.; Taylor, D.J.; Killgore, Michele; Oliver, H.L.
2000-01-01
This CD-ROM contains stacked, migrated, 2-dimensional seismic reflection data and associated support information for 22 regional seismic lines (3,470 line-miles) recorded in the National Petroleum Reserve-Alaska (NPRA) from 1974 through 1981. Together, these lines constitute about one-quarter of the seismic data collected as part of the Federal Government's program to evaluate the petroleum potential of the Reserve. The regional lines, which form a grid covering the entire NPRA, were created by combining various individual lines recorded in different years using different recording parameters. These data were reprocessed by the USGS using modern, post-stack processing techniques, to create a data set suitable for interpretation on interactive seismic interpretation computer workstations. Reprocessing was done in support of ongoing petroleum resource studies by the USGS Energy Program. The CD-ROM contains the following files: 1) 22 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 22 lines in standard SEG-P1 format; 3) 22 small-scale graphic images of each seismic line in Adobe Acrobat PDF format; 4) a graphic image of the location map, generated from the navigation file, with hyperlinks to the graphic images of the seismic lines; 5) an ASCII text file with cross-reference information for relating the sequential trace numbers on each regional line to the line number and shotpoint number of the original component lines; and 6) an explanation of the processing used to create the final seismic sections (this document). The SEG-Y format seismic files and SEG-P1 format navigation file contain all the information necessary for loading the data onto a seismic interpretation workstation.
The Effect Analysis of Strain Rate on Power Transmission Tower-Line System under Seismic Excitation
Wang, Wenming
2014-01-01
The effect of strain rate on a power transmission tower-line system under seismic excitation is studied in this paper. A three-dimensional finite element model of a transmission tower-line system is created based on a real project. Using theoretical analysis and numerical simulation, incremental dynamic analysis of the power transmission tower-line system is conducted to investigate the effect of strain rate on the nonlinear responses of the transmission tower and line. The results show that considering strain rate generally decreases the maximum top displacements of the transmission tower but increases the maximum base shear forces, so it is necessary to account for the strain rate effect in the seismic analysis of the transmission tower. The effect of strain rate can be ignored in the seismic analysis of the conductors and ground lines, although the responses of the ground lines considering the strain rate effect are larger than those of the conductors. The results can provide a reference for the seismic design of transmission tower-line systems. PMID:25105157
Seismic Hazard Maps for the Maltese Archipelago: Preliminary Results
NASA Astrophysics Data System (ADS)
D'Amico, S.; Panzera, F.; Galea, P. M.
2013-12-01
The Maltese islands form an archipelago of three major islands lying in the Sicily channel, about 140 km south of Sicily and 300 km north of Libya. So far, very few investigations have been carried out on seismicity around the Maltese islands, and no seismic hazard maps for the archipelago are available. Assessing the seismic hazard of the region is currently of prime interest for the near-future development of industrial and touristic facilities as well as for urban expansion. A culture of seismic risk awareness has never really developed in the country, and the public perception is that the islands are relatively safe and that any earthquake phenomena are mild and infrequent. However, the archipelago has been struck by several moderate-to-large events. Although recent constructions of a certain structural and strategic importance have been built according to high engineering standards, the same probably cannot be said for all residential buildings, many higher than 3 storeys, which have mushroomed rapidly in recent years. Such buildings are mostly of unreinforced masonry, with heavy concrete floor slabs, which are known to be highly vulnerable to even moderate ground shaking. In this context, planning and design should clearly be based on national hazard maps; unfortunately, such maps are not available for the Maltese islands. In this paper we attempt a first, preliminary probabilistic seismic hazard assessment of the Maltese islands in terms of Peak Ground Acceleration (PGA) and Spectral Acceleration (SA) at different periods. Seismic hazard has been computed using the Esteva-Cornell (1968) approach, the most widely utilized probabilistic method. It is a zone-dependent approach: seismotectonic and geological data are coupled with earthquake catalogues to identify seismogenic zones within which earthquakes occur at certain rates.
The earthquake catalogues can therefore be reduced to the activity rate, the b-value of the Gutenberg-Richter relationship, and an estimate of the maximum magnitude. In this article we also define a new seismogenic zone in the central Mediterranean, never considered before. To determine the ground motion parameters associated with a specified probability of exceedance, the above statistical parameters are combined with ground motion prediction equations. Seismic hazard computations have been performed within the island boundaries. The preliminary maps of the PGA distribution on rock sites obtained for a 10% probability of exceedance show values ranging between 0.09 and 0.18 g, whereas SA at 0.2, 0.4 and 1.0 s shows values of about 0.21-0.40 g, 0.14-0.24 g and 0.05-0.08 g, respectively.
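The Esteva-Cornell recipe described above (Gutenberg-Richter rates combined with a ground motion prediction equation under a Poisson occurrence model) can be sketched for a single source zone as follows. The G-R constants, the toy GMPE and its sigma are illustrative assumptions for the sketch, not values from this study.

```python
import math

def exceedance_rate(pga_target, dist_km, a=4.0, b=1.0,
                    mmin=5.0, mmax=7.5, dm=0.1, sigma=0.6):
    """Annual rate of exceeding pga_target (g) at one site: a minimal
    Esteva-Cornell sketch for a single areal source at distance dist_km.
    The Gutenberg-Richter a, b values and the toy ground motion
    prediction equation below are illustrative, not calibrated."""
    def gr_rate(m):                       # annual rate of events with M >= m
        return 10.0 ** (a - b * m)
    def gmpe_ln_median(m):                # toy GMPE, median ln(PGA in g)
        return -3.5 + 0.9 * m - 1.2 * math.log(dist_km)
    rate = 0.0
    m = mmin
    while m < mmax:
        bin_rate = gr_rate(m) - gr_rate(m + dm)          # rate in [m, m+dm)
        z = (math.log(pga_target) - gmpe_ln_median(m + 0.5 * dm)) / sigma
        p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))   # lognormal scatter
        rate += bin_rate * p_exceed
        m += dm
    return rate

def prob_in_t_years(rate, t=50.0):
    """Poisson probability of at least one exceedance in t years."""
    return 1.0 - math.exp(-rate * t)
```

A full assessment sums such rates over all seismogenic zones and, as in this study, over the branches of a logic tree.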
NASA Astrophysics Data System (ADS)
Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.
2017-12-01
Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real time. This seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals, and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise, are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a high-performance computing centre, which makes it possible to explore years of continuous seismic data rapidly.
We present here the preliminary results of applying this processing chain to years of continuous seismic records from the Alaskan permanent seismic network and the Hi-Climb trans-Himalayan seismic network. The processing chain we developed also opens the possibility of near-real-time seismic detection of landslides, in association with automated remote-sensing detection from, for example, Sentinel-2 images.
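The identification stage can be sketched with scikit-learn's Random Forest on simple waveform features. The three features below (duration, dominant frequency, a spectral ratio) and their distributions are hypothetical stand-ins for illustration; the authors' actual feature set is richer and is computed from the detected signals themselves.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical feature vectors per detected event:
# [duration (s), dominant frequency (Hz), spectral ratio].
n = 200
landslides = np.column_stack([rng.normal(60, 10, n),   # long duration
                              rng.normal(3, 1, n),     # low frequency
                              rng.normal(0.8, 0.1, n)])
earthquakes = np.column_stack([rng.normal(15, 5, n),   # short duration
                               rng.normal(10, 2, n),   # higher frequency
                               rng.normal(0.3, 0.1, n)])
X = np.vstack([landslides, earthquakes])
y = np.array([1] * n + [0] * n)        # 1 = landslide, 0 = earthquake

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
# classify a new detection whose features resemble a landslide signal
print(clf.predict([[55.0, 2.5, 0.75]]))
```

In the real pipeline, every trigger from the spectral detector would be converted to such a feature vector and passed to the trained classifier.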
Seismic hazard map of the western hemisphere
Shedlock, K.M.; Tanner, J.G.
1999-01-01
Vulnerability to natural disasters increases with urbanization and development of associated support systems (reservoirs, power plants, etc.). Catastrophic earthquakes account for 60% of worldwide casualties associated with natural disasters. Economic damage from earthquakes is increasing, even in technologically advanced countries with some level of seismic zonation, as shown by the 1989 Loma Prieta, CA ($6 billion), 1994 Northridge, CA ($ 25 billion), and 1995 Kobe, Japan (> $ 100 billion) earthquakes. The growth of megacities in seismically active regions around the world often includes the construction of seismically unsafe buildings and infrastructures, due to an insufficient knowledge of existing seismic hazard. Minimization of the loss of life, property damage, and social and economic disruption due to earthquakes depends on reliable estimates of seismic hazard. National, state, and local governments, decision makers, engineers, planners, emergency response organizations, builders, universities, and the general public require seismic hazard estimates for land use planning, improved building design and construction (including adoption of building construction codes), emergency response preparedness plans, economic forecasts, housing and employment decisions, and many more types of risk mitigation. The seismic hazard map of the Americas is the concatenation of various national and regional maps, involving a suite of approaches. The combined maps and documentation provide a useful global seismic hazard framework and serve as a resource for any national or regional agency for further detailed studies applicable to their needs. This seismic hazard map depicts Peak Ground Acceleration (PGA) with a 10% chance of exceedance in 50 years for the western hemisphere. 
PGA, a short-period ground motion parameter that is proportional to force, is the most commonly mapped ground motion parameter because current building codes that include seismic provisions specify the horizontal force a building should be able to withstand during an earthquake. This seismic hazard map of the Americas depicts the likely level of short-period ground motion from earthquakes in a fifty-year window. Short-period ground motions affect short-period structures (e.g., one-to-two story buildings). The largest seismic hazard values in the western hemisphere generally occur in areas that have been, or are likely to be, the sites of the largest plate boundary earthquakes. Although the largest earthquakes ever recorded are the 1960 Chile and 1964 Alaska subduction zone earthquakes, the largest seismic hazard (PGA) value in the Americas is in southern California (U.S.), along the San Andreas fault.
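For reference, the "10% chance of exceedance in 50 years" convention used by this map corresponds, under the usual Poisson occurrence assumption, to the familiar roughly 475-year mean return period:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period implied by an exceedance probability
    p_exceed over t_years, assuming Poisson occurrence."""
    rate = -math.log(1.0 - p_exceed) / t_years   # annual exceedance rate
    return 1.0 / rate

# 10% in 50 years -> the familiar ~475-year return period
print(round(return_period(0.10, 50.0)))   # → 475
```

The same relation gives about a 2475-year return period for the 2%-in-50-years level used in some other hazard maps.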
NASA Astrophysics Data System (ADS)
Su, Chin-Kuo; Sung, Yu-Chi; Chang, Shuenn-Yih; Huang, Chao-Hsun
2007-09-01
Strong near-fault ground motion, usually caused by fault rupture and characterized by a pulse-like velocity waveform, often imparts dramatic instantaneous seismic energy (Jadhav and Jangid 2006). Some reinforced concrete (RC) bridge columns, even those built according to ductile design principles, were damaged in the 1999 Chi-Chi earthquake. It is therefore very important to evaluate the seismic response of an RC bridge column to improve its seismic design and prevent future damage. Nonlinear time history analysis using step-by-step integration is capable of tracing the dynamic response of a structure over the entire vibration period and can accommodate the pulse-like waveform. However, the accuracy of the numerical results is very sensitive to the modeling of the nonlinear load-deformation relationship of the structural member. FEMA 273 and ATC-40 provide the modeling parameters for structural nonlinear analyses of RC beams and RC columns. They use three parameters to define the plastic rotation angles and a residual strength ratio to describe the nonlinear load-deformation relationship of an RC member. Structural nonlinear analyses are performed based on these parameters. This method provides a convenient way to obtain the nonlinear seismic responses of RC structures. However, the accuracy of the numerical solutions might be further improved. For this purpose, results from a previous study on modeling the static pushover analyses of RC bridge columns (Sung et al. 2005) are adopted for the nonlinear time history analysis presented herein to evaluate the structural responses excited by near-fault ground motion. To ensure the reliability of this approach, the numerical results were compared to experimental results. The results confirm that the proposed approach is valid.
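The step-by-step integration idea can be illustrated on an elastic-perfectly-plastic single-degree-of-freedom oscillator driven by a one-cycle acceleration pulse, integrated with the explicit central-difference scheme. This is a generic stand-in for the study's member-level hinge models, and every parameter value below is illustrative.

```python
import math

def epp_response(m=1.0, k=400.0, zeta=0.05, fy=2.0,
                 dt=0.002, t_end=4.0, amp=4.0, tp=1.0):
    """Displacement history of an elastic-perfectly-plastic SDOF
    oscillator under a one-cycle sinusoidal ground acceleration pulse,
    integrated with the explicit central-difference scheme.
    All parameter values are illustrative (SI units)."""
    c = 2.0 * zeta * math.sqrt(k * m)      # viscous damping
    n = int(t_end / dt)
    u = [0.0] * (n + 1)
    up = 0.0                               # plastic displacement
    u_prev = 0.0                           # u one step back (at rest)
    a1 = m / dt**2 + c / (2 * dt)
    a2 = 2 * m / dt**2
    a3 = m / dt**2 - c / (2 * dt)
    for i in range(n):
        t = i * dt
        ag = amp * math.sin(2 * math.pi * t / tp) if t < tp else 0.0
        fs = k * (u[i] - up)               # restoring force, capped at yield
        if fs > fy:
            up = u[i] - fy / k; fs = fy
        elif fs < -fy:
            up = u[i] + fy / k; fs = -fy
        p = -m * ag                        # effective earthquake force
        u[i + 1] = (p - fs + a2 * u[i] - a3 * u_prev) / a1
        u_prev = u[i]
    return u

hist = epp_response()
print(max(abs(x) for x in hist))           # peak displacement demand
```

The time step (0.002 s) is far below the central-difference stability limit 2/omega = 0.1 s for this oscillator, and the yield cap is what makes the response history path-dependent, as in the hinge models discussed above.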
The behavioural response of migrating humpback whales to a full seismic airgun array.
Dunlop, Rebecca A; Noad, Michael J; McCauley, Robert D; Kniest, Eric; Slade, Robert; Paton, David; Cato, Douglas H
2017-12-20
Despite concerns about the effects of noise from seismic survey airguns on marine organisms, there remains uncertainty as to the biological significance of any response. This study quantifies and interprets the response of migrating humpback whales (Megaptera novaeangliae) to a 3130 in³ (51.3 L) commercial airgun array. We compare the behavioural responses to active trials (array operational; n = 34 whale groups) with responses to control trials (source vessel towing the array while silent; n = 33) and baseline studies of normal behaviour in the absence of the vessel (n = 85). No abnormal behaviours were recorded during the trials. However, in response to both the active seismic array and the controls, the whales displayed changes in behaviour. Changes in respiration rate were of a similar magnitude to the changes seen when baseline groups were joined by other animals, suggesting that any change in group energetics was within their behavioural repertoire. However, the reduced southward progression in response to the active treatments, for some cohorts, was below typical migratory speeds. This response was more likely to occur within 4 km of the array at received levels over 135 dB re 1 µPa²·s. © 2017 The Author(s).
Real-time Seismic Amplitude Measurement (RSAM): a volcano monitoring and prediction tool
Endo, E.T.; Murray, T.
1991-01-01
Seismicity is one of the most commonly monitored phenomena used to determine the state of a volcano and to predict volcanic eruptions. Although several real-time earthquake-detection and data acquisition systems exist, few continuously measure seismic amplitude in circumstances where individual events are difficult to recognize or where volcanic tremor is prevalent. Analog seismic records provide a quick visual overview of activity; however, continuous rapid quantitative analysis to define the intensity of seismic activity for the purpose of predicting volcanic eruptions is not always possible because of clipping that results from the limited dynamic range of analog recorders. At the Cascades Volcano Observatory, an inexpensive 8-bit analog-to-digital system controlled by a laptop computer is used to provide 1-min average-amplitude information from eight telemetered seismic stations. The absolute voltage level for each station is digitized, averaged, and appended in near real time to a data file on a multiuser computer system. Raw real-time seismic amplitude measurement (RSAM) data or transformed RSAM data are then plotted on a common time base with other available volcano-monitoring information such as tilt. Changes in earthquake activity associated with dome-building episodes, weather, and instrumental difficulties are recognized as distinct patterns in the RSAM data set. RSAM data for dome-building episodes gradually develop into exponential increases that terminate just before the time of magma extrusion. Mount St. Helens crater earthquakes show up as isolated spikes on amplitude plots for crater seismic stations but seldom for more distant stations. Weather-related noise shows up as low-level, long-term disturbances on all seismic stations, regardless of distance from the volcano. Implemented in mid-1985, the RSAM system has proved valuable in providing up-to-date information on seismic activity for three Mount St.
Helens eruptive episodes from 1985 to 1986 (May 1985, May 1986, and October 1986). Tiltmeter data, the only other telemetered geophysical information available for the three dome-building episodes, are compared to RSAM data to show that the increase in RSAM data was related to the transport of magma to the surface. Thus, if tiltmeter data are not available, RSAM data can be used to predict future magmatic eruptions at Mount St. Helens. We also recognize the limitations of RSAM data. Two phreatic or shallow phreatomagmatic explosions were not preceded by the increases in RSAM data or the changes in tilt associated with the three dome-building eruptions. © 1991 Springer-Verlag.
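The core RSAM computation is simply a windowed average of absolute amplitude, classically over one-minute windows; a sketch in NumPy (the operational system additionally handles telemetry, file appending, and plotting):

```python
import numpy as np

def rsam(samples, rate_hz, window_s=60.0):
    """Real-time Seismic Amplitude Measurement: average absolute
    amplitude over consecutive windows (classically 1 minute).

    samples: 1-D array of digitized ground-motion values (counts).
    rate_hz: sampling rate of the telemetered station.
    """
    n = int(window_s * rate_hz)
    usable = (len(samples) // n) * n            # drop any partial window
    windows = np.abs(samples[:usable]).reshape(-1, n)
    return windows.mean(axis=1)                 # one value per window
```

Averaging rather than peak-picking is what lets RSAM remain meaningful during continuous tremor, when individual events cannot be isolated.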
Interval-type and affine arithmetic-type techniques for handling uncertainty in expert systems
NASA Astrophysics Data System (ADS)
Ceberio, Martine; Kreinovich, Vladik; Chopra, Sanjeev; Longpre, Luc; Nguyen, Hung T.; Ludascher, Bertram; Baral, Chitta
2007-02-01
Expert knowledge consists of statements Sj (facts and rules). The facts and rules are often only true with some probability. For example, if we are interested in oil, we should look at seismic data. If in 90% of the cases, the seismic data were indeed helpful in locating oil, then we can say that if we are interested in oil, then with probability 90% it is helpful to look at the seismic data. In more formal terms, we can say that the implication "if oil then seismic" holds with probability 90%. Another example: a bank A trusts a client B, so if we trust the bank A, we should trust B too; if statistically this trust was justified in 99% of the cases, we can conclude that the corresponding implication holds with probability 99%. If a query Q is deducible from facts and rules, what is the resulting probability p(Q) in Q? We can describe the truth of Q as a propositional formula F in terms of Sj, i.e., as a combination of statements Sj linked by operators like &, [logical or], and [not sign]; computing p(Q) exactly is NP-hard, so heuristics are needed. Traditionally, expert systems use technique similar to straightforward interval computations: we parse F and replace each computation step with corresponding probability operation. Problem: at each step, we ignore the dependence between the intermediate results Fj; hence intervals are too wide. Example: the estimate for P(A[logical or][not sign]A) is not 1. Solution: similar to affine arithmetic, besides P(Fj), we also compute P(Fj&Fi) (or P(Fj1&...&Fjd)), and on each step, use all combinations of l such probabilities to get new estimates. Results: e.g., P(A[logical or][not sign]A) is estimated as 1.
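The over-widening that naive step-by-step propagation suffers can be made concrete with the Fréchet bounds, which are the best an interval-style method can do once the dependence between operands has been discarded; a small sketch:

```python
def or_bounds(pa, pb):
    """Fréchet bounds on P(A or B) when only P(A) and P(B) are known
    and their dependence is unknown, as in naive interval propagation."""
    return max(pa, pb), min(1.0, pa + pb)

def and_bounds(pa, pb):
    """Fréchet bounds on P(A and B) under unknown dependence."""
    return max(0.0, pa + pb - 1.0), min(pa, pb)

# Naive propagation of P(A or not-A) with P(A) = 0.9 yields the
# interval [0.9, 1.0] rather than the exact value 1: the dependence
# between A and not-A has been thrown away. Tracking joint terms such
# as P(Fj & Fi), as in the affine-arithmetic-style method, recovers it.
print(or_bounds(0.9, 0.1))   # (0.9, 1.0)
```

This is precisely the gap the article's affine-arithmetic-type technique closes by carrying pairwise (or d-wise) joint probabilities through each step.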
Rapid determination of the energy magnitude Me
NASA Astrophysics Data System (ADS)
di Giacomo, D.; Parolai, S.; Bormann, P.; Grosser, H.; Saul, J.; Wang, R.; Zschau, J.
2009-12-01
The magnitude of an earthquake is one of the most widely used parameters to evaluate its damage potential. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well-defined physical parameter of the seismic source, namely the radiated seismic energy Es (e.g. Bormann et al., 2002): Me = 2/3(log10 Es - 4.4). Me is more suitable than the moment magnitude Mw for describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. We analyse teleseismic broadband P-wave signals in the distance range 20°-98° to calculate Es. To correct for the frequency-dependent energy loss experienced by the P-waves along the propagation path, we use pre-calculated spectral amplitude decay functions for different frequencies obtained from numerical simulations of Green's functions (Wang, 1999) for the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions, the correction of the recorded P-wave velocity spectra for the various propagation effects is performed in a rapid and robust way, and the calculation of Es, and hence of Me, can be done at a single station. We show that our procedure is suitable for implementation in rapid response systems, since it could provide stable Me determinations within 10-15 minutes after the earthquake's origin time, even in the case of great earthquakes. We tested our procedure on a large dataset of about 770 globally distributed earthquakes in the Mw range 5.5-9.3 recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks.
Me and Mw express two different aspects of the seismic source, and a combined use of these two magnitude scales would allow a better assessment of the tsunami and shaking potential of an earthquake. Representative case studies will also be shown and discussed. References: Bormann, P., Baumbach, M., Bock, G., Grosser, H., Choy, G. L., and Boatwright, J. (2002). Seismic sources and source parameters, in IASPEI New Manual of Seismological Observatory Practice, P. Bormann (Editor), Vol. 1, GeoForschungsZentrum, Potsdam, Chapter 3, 1-94. Choy, G. L., and Kirby, S. (2004). Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones. Geophys. J. Int., 159, 991-1012. Kennett, B. L. N., Engdahl, E. R., and Buland, R. (1995). Constraints on seismic velocities in the Earth from traveltimes. Geophys. J. Int., 122, 108-124. Montagner, J.-P., and Kennett, B. L. N. (1996). How to reconcile body-wave and normal-mode reference Earth models? Geophys. J. Int., 125, 229-248. Wang, R. (1999). A simple orthonormalization method for stable and efficient computation of Green's functions. Bull. Seism. Soc. Am., 89(3), 733-741.
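The Me relation quoted above is a one-liner once Es is in hand; for example, a radiated energy of 10^15 J corresponds to Me of about 7.1:

```python
import math

def energy_magnitude(es_joules):
    """Energy magnitude from the radiated seismic energy Es (joules):
    Me = 2/3 (log10 Es - 4.4), the relation quoted above."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

print(round(energy_magnitude(1e15), 2))   # → 7.07
```

The hard part of the procedure is of course estimating Es itself from the propagation-corrected P-wave spectra; this final conversion is trivial by design so that it can run at a single station.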
Seismic hazard estimation of northern Iran using smoothed seismicity
NASA Astrophysics Data System (ADS)
Khoshnevis, Naeem; Taborda, Ricardo; Azizzadeh-Roodpish, Shima; Cramer, Chris H.
2017-07-01
This article presents a seismic hazard assessment for northern Iran, where a smoothed seismicity approach has been used in combination with an updated seismic catalog and a ground motion prediction equation recently found to yield good fit with data. We evaluate the hazard over a geographical area including the seismic zones of Azerbaijan, the Alborz Mountain Range, and Kopeh-Dagh, as well as parts of other neighboring seismic zones that fall within our region of interest. In the chosen approach, seismic events are not assigned to specific faults but assumed to be potential seismogenic sources distributed within regular grid cells. After performing the corresponding magnitude conversions, we decluster both historical and instrumental seismicity catalogs to obtain earthquake rates based on the number of events within each cell, and smooth the results to account for the uncertainty in the spatial distribution of future earthquakes. Seismicity parameters are computed for each seismic zone separately, and for the entire region of interest as a single uniform seismotectonic region. In the analysis, we consider uncertainties in the ground motion prediction equation, the seismicity parameters, and combine the resulting models using a logic tree. The results are presented in terms of expected peak ground acceleration (PGA) maps and hazard curves at selected locations, considering exceedance probabilities of 2 and 10% in 50 years for rock site conditions. According to our results, the highest levels of hazard are observed west of the North Tabriz and east of the North Alborz faults, where expected PGA values are between about 0.5 and 1 g for 10 and 2% probability of exceedance in 50 years, respectively. We analyze our results in light of similar estimates available in the literature and offer our perspective on the differences observed. 
We find our results to be helpful in understanding seismic hazard for northern Iran, but recognize that additional efforts are necessary to obtain more robust estimates at specific areas of interest and different site conditions.
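The smoothing step described above can be sketched as a Gaussian-weighted average of per-cell event counts, in the spirit of Frankel-type smoothed seismicity. The grid, cell size and correlation distance below are illustrative, and this brute-force version (fine for small grids) differs in normalization details from production implementations.

```python
import numpy as np

def smooth_counts(counts, cell_km, c_km=50.0):
    """Gaussian smoothing of gridded earthquake counts, in the spirit
    of Frankel-type smoothed seismicity. counts: 2-D array of events
    per cell; c_km: correlation distance. Brute force, for small grids;
    real implementations use FFTs or truncated kernels."""
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    smoothed = np.empty_like(counts, dtype=float)
    for i in range(ny):
        for j in range(nx):
            d2 = ((yy - i) ** 2 + (xx - j) ** 2) * cell_km ** 2
            w = np.exp(-d2 / c_km ** 2)          # Gaussian kernel
            smoothed[i, j] = (counts * w).sum() / w.sum()
    return smoothed
```

The smoothed rates, converted to Gutenberg-Richter activity per cell, are what then feed the hazard integration and the logic-tree combination described above.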
Systems for low frequency seismic and infrasound detection of geo-pressure transition zones
Shook, G. Michael; LeRoy, Samuel D.; Benzing, William M.
2007-10-16
Methods for determining the existence and characteristics of a gradational pressurized zone within a subterranean formation are disclosed. One embodiment involves employing an attenuation relationship between a seismic response signal and increasing wavelet wavelength, which relationship may be used to detect a gradational pressurized zone and/or determine characteristics thereof. In another embodiment, a method for analyzing data contained within a response signal for signal characteristics that may change in relation to the distance between an input signal source and the gradational pressurized zone is disclosed. In a further embodiment, the relationship between response signal wavelet frequency and comparative amplitude may be used to estimate an optimal wavelet wavelength or range of wavelengths used for data processing or input signal selection. Systems for seismic exploration and data analysis for practicing the above-mentioned method embodiments are also disclosed.
Origin of the pulse-like signature of shallow long-period volcano seismicity
Chouet, Bernard A.; Dawson, Phillip B.
2016-01-01
Short-duration, pulse-like long-period (LP) events are a characteristic type of seismicity accompanying eruptive activity at Mount Etna in Italy in 2004 and 2008 and at Turrialba Volcano in Costa Rica and Ubinas Volcano in Peru in 2009. We use the discrete wave number method to compute the free surface response in the near field of a rectangular tensile crack embedded in a homogeneous elastic half space and to gain insights into the origin of the LP pulses. Two source models are considered, including (1) a vertical fluid-driven crack and (2) a unilateral tensile rupture growing at a fixed sub-Rayleigh velocity with constant opening on a vertical crack. We apply cross correlation to the synthetics and data to demonstrate that a fluid-driven crack provides a natural explanation for these data with realistic source sizes and fluid properties. Our modeling points to shallow sources (<1 km depth), whose signatures are representative of the Rayleigh pulse sampled at epicentral distances >∼1 km. While a slow-rupture failure provides another potential model for these events, the synthetics and resulting fits to the data are not optimal in this model compared to a fluid-driven source. We infer that pulse-like LP signatures are parts of the continuum of responses produced by shallow fluid-driven sources in volcanoes.
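The scoring of synthetics against data can be sketched as a normalized cross-correlation maximized over lag; this is generic signal processing, not the authors' exact implementation.

```python
import numpy as np

def max_norm_xcorr(data, synth):
    """Maximum normalized cross-correlation between an observed
    waveform and a synthetic, over all lags: a standard way to score
    how well a candidate source model reproduces a recorded pulse."""
    d = data - data.mean()
    s = synth - synth.mean()
    denom = np.sqrt((d * d).sum() * (s * s).sum())
    cc = np.correlate(d, s, mode="full") / denom   # all relative lags
    return cc.max()
```

Running this for each candidate source (fluid-driven crack vs. slow tensile rupture) against every record gives exactly the kind of comparative fit measure used to prefer the fluid-driven model.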
Seismic Performance Evaluation of Reinforced Concrete Frames Subjected to Seismic Loads
NASA Astrophysics Data System (ADS)
Zameeruddin, Mohd.; Sangle, Keshav K.
2017-06-01
Ten storied-3 bays reinforced concrete bare frame designed for gravity loads following the guidelines of IS 456 and IS 13920 for ductility is subjected to seismic loads. The seismic demands on this building were calculated by following IS 1893 for response spectra of 5% damping (for hard soil type). Plastic hinges were assigned to the beam and column at both ends to represent the failure mode, when member yields. Non-linear static (pushover) analysis was performed to evaluate the performance of the building in reference to first (ATC 40), second (FEMA 356) and next-generation (FEMA 440) performance based seismic design procedures. Base shear against top displacement curve of structure, known as pushover curve was obtained for two actions of plastic hinge behavior, force-controlled (brittle) and deformation-controlled (ductile) actions. Lateral deformation corresponding to performance point proves the building capability to sustain a certain level of seismic loads. The failure is represented by a sequence of formation of plastic hinges. Deformation-controlled action of hinges showed that building behaves like strong-column-weak-beam mechanism, whereas force-controlled action showed formation of hinges in the column. The study aims to understand the first, second and next generation performance based design procedure in prediction of actual building responses and their conservatism into the acceptance criteria.
Seismic performance for vertical geometric irregularity frame structures
NASA Astrophysics Data System (ADS)
Ismail, R.; Mahmud, N. A.; Ishak, I. S.
2018-04-01
This research highlights the result of vertical geometric irregularity frame structures. The aid of finite element analysis software, LUSAS was used to analyse seismic performance by focusing particularly on type of irregular frame on the differences in height floors and continued in the middle of the building. Malaysia’s building structures were affected once the earthquake took place in the neighbouring country such as Indonesia (Sumatera Island). In Malaysia, concrete is widely used in building construction and limited tension resistance to prevent it. Analysing structural behavior with horizontal and vertical static load is commonly analyses by using the Plane Frame Analysis. The case study of this research is to determine the stress and displacement in the seismic response under this type of irregular frame structures. This study is based on seven-storey building of Clinical Training Centre located in Sungai Buloh, Selayang, Selangor. Since the largest earthquake occurs in Acheh, Indonesia on December 26, 2004, the data was recorded and used in conducting this research. The result of stress and displacement using IMPlus seismic analysis in LUSAS Modeller Software under the seismic response of a formwork frame system states that the building is safe to withstand the ground and in good condition under the variation of seismic performance.
NASA Astrophysics Data System (ADS)
Chaillat, S.; Bonnet, M.; Semblat, J.
2007-12-01
Seismic wave propagation and amplification in complex media is a major issue in the field of seismology. To compute seismic wave propagation in complex geological structures such as in alluvial basins, various numerical methods have been proposed. The main advantage of the Boundary Element Method (BEM) is that only the domain boundaries (and possibly interfaces) are discretized, leading to a reduction of the number of degrees of freedom. The main drawback of the standard BEM is that the governing matrix is full and non-symmetric, which gives rise to high computational and memory costs. In other areas where the BEM is used (electromagnetism, acoustics), considerable speedup of solution time and decrease of memory requirements have been achieved through the development, over the last decade, of the Fast Multipole Method (FMM). The goal of the FMM is to speed up the matrix-vector product computation needed at each iteration of the GMRES iterative solver. Moreover, the governing matrix is never explicitly formed, which leads to a storage requirement well below the memory necessary for holding the complete matrix. The FMM-accelerated BEM therefore achieves substantial savings in both CPU time and memory. In this work, the FMM is extended to 3-D frequency-domain elastodynamics and applied to the computation of seismic wave propagation in 3-D. The efficiency of the present FMM-BEM is demonstrated on seismology-oriented examples. First, the diffraction of a plane wave or a point source by a 3-D canyon is studied. The influence of the size of the meshed part of the free surface is studied, and computations are performed for non-dimensional frequencies higher than those considered in other studies (thanks to the use of the FM-BEM), with which comparisons are made whenever possible. The method is also applied to analyze the diffraction of a plane wave or a point source by a 3-D alluvial basin.
A parametric study is performed on the effect of the shape of the basin, and the interaction of the wavefield with the basin edges is analyzed.
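The structural idea described above (a Krylov solver that only ever needs matrix-vector products, so the governing matrix is never assembled) can be sketched with SciPy's GMRES and a matrix-free LinearOperator. The toy interaction kernel below merely stands in for the multipole-accelerated product; it is not an elastodynamic BEM operator.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200

def matvec(x):
    """Stand-in for an FMM-accelerated matrix-vector product: the
    governing matrix A = 4*I + B, with B[i, j] = 1/(1+|i-j|)/n as a
    toy 'interaction' kernel, is applied without ever being formed."""
    i = np.arange(n)
    out = 4.0 * x.copy()
    for j in range(n):
        out += x[j] / (1.0 + np.abs(i - j)) / n
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.ones(n)
x, info = gmres(A, b)          # Krylov iterations call matvec only
print(info)                    # 0 means converged
```

In the real FMM-BEM, each `matvec` call is the multipole-accelerated evaluation whose cost grows far more slowly than the dense product, which is where the CPU and memory savings come from.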
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano
Few automated data acquisition and processing systems operate on mainframes, some run on UNIX-based workstations, and others on personal computers equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and its real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station analysis), for signal detection, phase grouping, and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the telemetered analog seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded at Mt. Etna during the last eruption (July 2001) by the present network.
For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P waves and 65% of the S waves. The on-line application to the latter data set shows that automatic locations are affected by larger errors, owing to the preliminary setting of the configuration parameters in the program. Nevertheless, the automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. Further improvements of the PC-Seism software for on-line analysis are also discussed.
Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.
2013-12-01
We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. Raw satellite measurements are pre-cleaned by a Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimates of carrier-phase biases and ionosphere delay. These measurements are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point-position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests and can rebroadcast up to 24 hours of point positions, and of the seismic estimates derived from them, to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time-series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping.
We have also implemented continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green's functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and the seismic estimates based on them, including time-series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in Java, called 'GPS Cockpit', which is available for download.
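The geometry-free combination mentioned above works because the ionospheric delay is dispersive while geometry, clocks and troposphere are not: differencing dual-frequency observables cancels the non-dispersive terms. The following sketch illustrates only that combination for GPS L1/L2 pseudoranges; the variable names and the synthetic numbers are ours, and the full Kalman-filter editor of the abstract does much more (phase biases, cycle-slip screening).

```python
# Sketch of the geometry-free combination for dual-frequency GPS
# pseudoranges.  Differencing the two observables cancels the geometric
# range, clock errors and troposphere, isolating the (dispersive)
# ionospheric delay -- the property the stream editor exploits to
# estimate ionosphere and carrier-phase biases independently.
F1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def iono_delay_l1(p1, p2):
    """First-frequency ionospheric delay (m) from the geometry-free
    pseudorange combination: P2 - P1 = I1 * (F1**2 / F2**2 - 1)."""
    return (p2 - p1) / (F1**2 / F2**2 - 1.0)

# Synthetic example: 22,000 km range with 3 m of L1 ionospheric delay.
rho, i1 = 2.2e7, 3.0
p1 = rho + i1                     # L1 pseudorange
p2 = rho + i1 * F1**2 / F2**2     # L2 pseudorange (delay scales as f^-2)
print(round(iono_delay_l1(p1, p2), 6))  # 3.0
```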
RMT focal plane sensitivity to seismic network geometry and faulting style
Johnson, Kendra L.; Hayes, Gavin; Herrmann, Robert B.; Benz, Harley M.; McNamara, Daniel E.; Bergman, Eric A.
2016-01-01
Modern tectonic studies often use regional moment tensors (RMTs) to interpret the seismotectonic framework of an earthquake or earthquake sequence; however, despite extensive use, little existing work addresses RMT parameter uncertainty. Here, we quantify how network geometry and faulting style affect RMT sensitivity. We examine how data-model fits change with fault plane geometry (strike and dip) for varying station configurations. We calculate the relative data fit for incrementally varying geometries about a best-fitting solution, applying our workflow to real and synthetic seismograms for both real and hypothetical station distributions and earthquakes. Initially, we conduct purely observational tests, computing RMTs from synthetic seismograms for hypothetical earthquakes and a series of well-behaved network geometries. We then incorporate real data and station distributions from the International Maule Aftershock Deployment (IMAD), which recorded aftershocks of the 2010 MW 8.8 Maule earthquake, and a set of regional stations capturing the ongoing earthquake sequence in Oklahoma and southern Kansas. We consider RMTs computed under three scenarios: (1) real seismic records selected for high data quality; (2) synthetic seismic records with noise computed for the observed source-station pairings and (3) synthetic seismic records with noise computed for all possible station-source pairings. To assess RMT sensitivity for each test, we observe the ‘fit falloff’, which portrays how relative fit changes when strike or dip varies incrementally; we then derive the ranges of acceptable strikes and dips by identifying the span of solutions with relative fits larger than 90 per cent of the best fit. For the azimuthally incomplete IMAD network, Scenario 3 best constrains fault geometry, with average ranges of 45° and 31° for strike and dip, respectively. 
In Oklahoma, Scenario 3 best constrains fault dip with an average range of 46°; however, strike is best constrained by Scenario 1, with a range of 26°. We draw two main conclusions from this study. (1) Station distribution impacts our ability to constrain RMTs using waveform time-series; however, in some tectonic settings, faulting style also plays a significant role and (2) increasing station density and data quantity (both the number of stations and the number of individual channels) does not necessarily improve RMT constraint. These results may be useful when organizing future seismic deployments (e.g. by concentrating stations in alignment with anticipated nodal planes), and in computing RMTs, either by guiding a more rigorous data selection process for input data or informing variable weighting among the selected data (e.g. by eliminating the transverse component when strike-slip mechanisms are expected).
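The 'fit falloff' criterion above (the acceptable range of strikes or dips is the span of trial solutions whose relative fit exceeds 90 per cent of the best fit) can be sketched as follows. The falloff curve here is a hypothetical quadratic; in the study the fits come from the waveform inversion itself.

```python
import numpy as np

def acceptable_range(params, fits, threshold=0.9):
    """Width of the span of trial parameters whose relative fit exceeds
    `threshold` times the best fit (the 'fit falloff' criterion)."""
    fits = np.asarray(fits, dtype=float)
    rel = fits / fits.max()               # relative fit, best = 1.0
    ok = np.asarray(params)[rel >= threshold]
    return ok.max() - ok.min()

# Hypothetical falloff curve: fit decays quadratically away from a
# best-fitting strike of 40 degrees.
strikes = np.arange(0.0, 91.0, 1.0)
fits = 1.0 - ((strikes - 40.0) / 100.0) ** 2
print(acceptable_range(strikes, fits))    # 62.0 (span with >90% of best fit)
```

For a unimodal falloff curve this span is contiguous; for multimodal fits one would report the connected interval around the best solution instead.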
NASA Astrophysics Data System (ADS)
Zhao, L.; Chen, P.; Jordan, T. H.; Olsen, K. B.; Maechling, P.; Faerman, M.
2004-12-01
The Southern California Earthquake Center (SCEC) is developing a Community Modeling Environment (CME) to facilitate the computational pathways of physics-based seismic hazard analysis (Maechling et al., this meeting). Major goals are to facilitate the forward modeling of seismic wavefields in complex geologic environments, including the strong ground motions that cause earthquake damage, and the inversion of observed waveform data for improved models of Earth structure and fault rupture. Here we report on a unified approach to these coupled inverse problems that is based on the ability to generate and manipulate wavefields in densely gridded 3D Earth models. A main element of this approach is a database of receiver Green tensors (RGT) for the seismic stations, which comprises all of the spatial-temporal displacement fields produced by the three orthogonal unit impulsive point forces acting at each of the station locations. Once the RGT database is established, synthetic seismograms for any earthquake can be simply calculated by extracting a small, source-centered volume of the RGT from the database and applying the reciprocity principle. The partial derivatives needed for point- and finite-source inversions can be generated in the same way. Moreover, the RGT database can be employed in full-wave tomographic inversions launched from a 3D starting model, because the sensitivity (Fréchet) kernels for travel-time and amplitude anomalies observed at seismic stations in the database can be computed by convolving the earthquake-induced displacement field with the station RGTs. We illustrate all elements of this unified analysis with an RGT database for 33 stations of the California Integrated Seismic Network in and around the Los Angeles Basin, which we computed for the 3D SCEC Community Velocity Model (SCEC CVM3.0) using a fourth-order staggered-grid finite-difference code. 
For a spatial grid spacing of 200 m and a time resolution of 10 ms, the calculations took ~19,000 node-hours on the Linux cluster at USC's High-Performance Computing Center. The 33-station database with a volume of ~23.5 TB was archived in the SCEC digital library at the San Diego Supercomputer Center using the Storage Resource Broker (SRB). From a laptop, anyone with access to this SRB collection can compute synthetic seismograms for an arbitrary source in the CVM in a matter of minutes. Efficient approaches have been implemented to use this RGT database in the inversions of waveforms for centroid and finite moment tensors and tomographic inversions to improve the CVM. Our experience with these large problems suggests areas where the cyberinfrastructure currently available for geoscience computation needs to be improved.
NASA Astrophysics Data System (ADS)
Krueger, Hannah E.; Wirth, Erin A.
2017-10-01
The Cascadia subduction zone exhibits along-strike segmentation in structure, processes, and seismogenic behavior. While characterization of seismic anisotropy can constrain deformation processes at depth, the character of seismic anisotropy in Cascadia remains poorly understood. This is primarily due to a lack of seismicity in the subducting Juan de Fuca slab, which limits shear wave splitting and other seismological analyses that interrogate the fine-scale anisotropic structure of the crust and mantle wedge. We investigate lower crustal anisotropy and mantle wedge structure by computing P-to-S receiver functions at 12 broadband seismic stations along the Cascadia subduction zone. We observe P-to-SV converted energy consistent with previously estimated Moho depths. Several stations exhibit evidence of an "inverted Moho" (i.e., a downward velocity decrease across the crust-mantle boundary), indicative of a serpentinized mantle wedge. Stations with an underlying hydrated mantle wedge appear prevalent from northern Washington to central Oregon, but sparse in southern Oregon and northern California. Transverse component receiver functions are complex, suggesting anisotropic and/or dipping crustal structure. To constrain the orientation of crustal anisotropy we compute synthetic receiver functions using manual forward modeling. We determine that the lower crust shows variable orientations of anisotropy along-strike, with highly complex anisotropy in northern Cascadia, and generally NW-SE and NE-SW orientations of slow-axis anisotropy in central and southern Cascadia, respectively. The orientations of anisotropy from this work generally agree with those inferred from shear wave splitting of tremor studies at similar locations, lending confidence to this relatively new method of inferring seismic anisotropy from slow earthquakes.
Determination and uncertainty of moment tensors for microearthquakes at Okmok Volcano, Alaska
Pesicek, J.D.; Sileny, J.; Prejean, S.G.; Thurber, C.H.
2012-01-01
Efforts to determine general moment tensors (MTs) for microearthquakes in volcanic areas are often hampered by small seismic networks, which can lead to poorly constrained hypocentres and inadequate modelling of seismic velocity heterogeneity. In addition, noisy seismic signals can make it difficult to identify phase arrivals correctly for small magnitude events. However, small volcanic earthquakes can have source mechanisms that deviate from brittle double-couple shear failure due to magmatic and/or hydrothermal processes. Thus, determining reliable MTs in such conditions is a challenging but potentially rewarding pursuit. We pursued such a goal at Okmok Volcano, Alaska, which erupted in 1997 and, most recently, in 2008. The Alaska Volcano Observatory operates a seismic network of 12 stations at Okmok and routinely catalogues recorded seismicity. Using these data, we have determined general MTs for seven microearthquakes recorded between 2004 and 2007 by inverting peak amplitude measurements of P and S phases. We computed Green's functions using precisely relocated hypocentres and a 3-D velocity model. We thoroughly assessed the quality of the solutions by computing formal uncertainty estimates, conducting a variety of synthetic and sensitivity tests, and by comparing the MTs to solutions obtained using alternative methods. The results show that MTs are sensitive to station distribution and to errors in the data, velocity model and hypocentral parameters. Although each of the seven MTs contains a significant non-shear component, we judge several of the solutions to be unreliable. However, several reliable MTs are obtained for a group of previously identified repeating events, and are interpreted as compensated linear-vector dipole events.
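The inversion of peak amplitudes for a general MT is linear in the six independent moment-tensor components, a = Gm, so a least-squares solve also yields the formal covariance used for uncertainty estimates. The sketch below uses a random matrix in place of the Green's-function coefficients (which in the study come from the relocated hypocentres and the 3-D velocity model); all names and numbers are illustrative.

```python
import numpy as np

# Hypothetical linear amplitude inversion a = G m for the six independent
# moment-tensor components.  Each row of G would hold the Green's-function
# excitation coefficients for one station/phase; here G is random.
rng = np.random.default_rng(0)
n_obs = 24                       # e.g. P and S peak amplitudes at 12 stations
G = rng.normal(size=(n_obs, 6))  # Green's-function coefficients (assumed known)
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])  # hypothetical MT

a = G @ m_true + 0.01 * rng.normal(size=n_obs)       # noisy amplitudes

# Least-squares estimate and its formal covariance
m_est, _, _, _ = np.linalg.lstsq(G, a, rcond=None)
sigma2 = np.sum((a - G @ m_est) ** 2) / (n_obs - 6)  # residual variance
cov = sigma2 * np.linalg.inv(G.T @ G)                # formal uncertainty

print(np.allclose(m_est, m_true, atol=0.05))  # True: noise level is small
```

The diagonal of `cov` gives per-component variances; synthetic tests like the ones described in the abstract amount to repeating this solve with perturbed G (velocity model, hypocentre) and noise realizations.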
Pre-processing ambient noise cross-correlations with equalizing the covariance matrix eigenspectrum
NASA Astrophysics Data System (ADS)
Seydoux, Léonard; de Rosny, Julien; Shapiro, Nikolai M.
2017-09-01
Passive imaging techniques based on ambient seismic noise require a nearly isotropic distribution of the noise sources in order to ensure reliable traveltime measurements between seismic stations. However, real ambient seismic noise often only partially fulfils this condition. It is generated in preferential areas (in the deep ocean or near continental shores), and some highly coherent pulse-like signals may be present in the data, such as those generated by earthquakes. Several pre-processing techniques have been developed in order to attenuate the directional and deterministic behaviour of this real ambient noise. Most of them are applied to individual seismograms before the cross-correlation computation. The most widely used techniques are spectral whitening and temporal smoothing of the individual seismic traces. We here propose an additional pre-processing step to be used together with the classical ones, which is based on a spatial analysis of the seismic wavefield. We compute the cross-spectra between all available station pairs in the spectral domain, leading to the data covariance matrix. We apply a one-bit normalization to the covariance matrix eigenspectrum before extracting the cross-correlations in the time domain. The efficiency of the method is shown with several numerical tests. We apply the method to the data collected by the USArray when the M8.8 Maule earthquake occurred on 2010 February 27. The method shows a clear improvement over the classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable traveltime measurements to be performed even in the presence of the earthquake.
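A minimal NumPy sketch of the eigenspectrum equalization step, under the assumption that the covariance matrix at one frequency is averaged over a set of sub-windows; the function and parameter names are ours, not from the paper. With fewer sub-windows than stations the equalized matrix is a projector onto the occupied eigen-subspace, which downweights the dominant directional arrivals relative to the diffuse field.

```python
import numpy as np

def equalized_covariance(spectra, tol=1e-10):
    """One-bit normalization of the covariance-matrix eigenspectrum.

    `spectra`: (n_windows, n_stations) complex Fourier coefficients at one
    frequency.  The covariance matrix averaged over sub-windows is
    eigendecomposed and its non-zero eigenvalues are replaced by ones;
    equalized cross-spectra are then read off the result.
    """
    n_win = spectra.shape[0]
    C = spectra.conj().T @ spectra / n_win        # Hermitian covariance matrix
    lam, V = np.linalg.eigh(C)                    # eigen-spectrum of C
    lam_eq = (lam > tol * lam.max()).astype(float)  # one-bit eigenvalues
    return V @ np.diag(lam_eq) @ V.conj().T

# Toy example: 3 sub-windows, 5 stations -> a rank-3 equalized matrix.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))
C_eq = equalized_covariance(spectra)   # Hermitian, trace equals the rank (3)
```

In practice this would be done per frequency bin and the time-domain cross-correlations recovered by inverse Fourier transform of the off-diagonal entries.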
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasan, M.G.; Kot, C.A.; Mojtahed, M.
The paper describes the analytical modeling, calculations, and results of the posttest nonlinear simulation of high-level seismic testing of the VKL piping system at the HDR Test Facility in Germany. One of the objectives of the tests was to evaluate analytical methods for calculating the nonlinear response of realistic piping systems subjected to high-level seismic excitation that would induce significant plastic deformation. Two of the six different pipe-support configurations (ranging from a stiff system with struts and snubbers to a very flexible system with practically no seismic supports), subjected to simulated earthquakes, were tested at very high levels. The posttest nonlinear calculations cover the KWU configuration, a reasonably compliant system with only rigid struts. Responses for 800% safe-shutdown-earthquake loading were calculated using the NONPIPE code. The responses calculated with NONPIPE were found generally to have the same time trends as the measurements, but contained under-, over-, and correct estimates of peak values in almost equal proportions. The only exceptions were the peak strut forces, which were underestimated as a group. The scatter in the peak-value estimates of displacements and strut forces was smaller than that for the strains. The possible reasons for the differences and the effort on further analysis are discussed.
Nonlinear dynamic failure process of tunnel-fault system in response to strong seismic event
NASA Astrophysics Data System (ADS)
Yang, Zhihua; Lan, Hengxing; Zhang, Yongshuang; Gao, Xing; Li, Langping
2013-03-01
Strong earthquakes and faults have a significant effect on the stability of underground tunnel structures. This study used a 3-dimensional discrete element model and real records of ground motion in the Wenchuan earthquake to investigate the dynamic response of a tunnel-fault system. The typical tunnel-fault system was composed of one planned railway tunnel and one seismically active fault. The discrete numerical model was carefully calibrated by comparing field survey and numerical results of ground motion. It was then used to examine detailed quantitative information on the dynamic response characteristics of the tunnel-fault system, including stress distribution, strain, vibration velocity and the tunnel failure process. The intensive tunnel-fault interaction during seismic loading induces dramatic stress redistribution and stress concentration at the intersection of the tunnel and fault. The tunnel-fault system behavior is characterized by a complicated nonlinear dynamic failure process in response to a real strong seismic event. It can be qualitatively divided into 5 main stages in terms of its stress, strain and rupturing behaviors: (1) strain localization, (2) rupture initiation, (3) rupture acceleration, (4) spontaneous rupture growth and (5) stabilization. This study provides insight into the stability assessment of underground tunnel structures under the combined effect of strong earthquakes and faults.
A Study on Seismic Hazard Evaluation at the Nagaoka CO2 Storage Site, Japan
NASA Astrophysics Data System (ADS)
Horikawa, S.
2015-12-01
RITE carried out the first Japanese pilot-scale CO2 sequestration project from July 2003 to January 2005 in Nagaoka City. Supercritical CO2 was injected into an onshore saline aquifer at a depth of 1,100 m; a total of 10,400 tonnes of CO2 was injected. The 'Mid Niigata Prefecture Earthquake in 2004' (Mw 6.6) and 'The Niigataken Chuetsu-oki Earthquake in 2007' (Mw 6.6) occurred during the CO2 injection test and after its completion, respectively. Japan is one of the world's most earthquake-prone countries. This paper presents the results of a seismic response analysis and reports a seismic hazard evaluation of the reservoir and the caprock. In advance of the dynamic response analysis, the earthquake motion recorded at the ground surface was used, assuming a horizontally layered model, to derive the input wave at the basement layer with SHAKE (one-dimensional seismic response analysis). This wave was input into the analysis model, and the equation of motion was solved by direct integration with the Newmark beta method. For the seismic response analysis, the authors used the Multiple Yield Model (MYM; Iwata et al., 2013), which can also handle complicated geological structures. The stress-deformation behavior of the ground was represented by the Duncan-Chang constitutive model, extended with an unloading characteristic to account for confining-stress dependency, and implemented as a nonlinear cyclic model; the deformation characteristics depend on confining stress under cyclic loading and unloading, with the Mohr-Coulomb criterion adopted as the strength characteristic. The maximum dynamic shear strain in the caprock was about 1.1E-04 after the end of the earthquake. The dynamic safety factor of the caprock, initially 1.925, fell by 0.05 after the earthquake; that of the reservoir fell from 1.29 to 1.20.
According to CO2 migration monitoring by seismic cross-hole tomography, the CO2 has remained in the reservoir through the two earthquakes since injection, and no leakage has been detected to date. The seismic response simulation likewise indicates that the stability of the formation is not compromised after the earthquakes.
NASA Astrophysics Data System (ADS)
Julià, Jordi; Schimmel, Martin; Cedraz, Victória
2017-04-01
Reflected-wave interferometry relies on the recording of transient seismic signals from random wavefields located beneath recording stations. Under vertical incidence, the recordings contain the full transmission response, which includes the direct wave as well as multiple reverberations from seismic discontinuities located between the wavefields and the receiver. It has been shown that, under those assumptions, the reflection response of the medium can be recovered from the autocorrelation function (ACF) of the transmission response at a given receiver, as if the wavefields had originated themselves at the free surface. This passive approach to seismic reflection profiling has the obvious advantage of being low-cost and non-invasive when compared to its active-source counterpart, and it has been successfully utilized in other sedimentary basins worldwide. In this paper we evaluate the ability of the autocorrelation of ambient seismic noise recorded in the Parnaíba basin - a large Paleozoic basin in NE Brazil - to recover the reflection response of the basin. The dataset was acquired by the Universidade Federal do Rio Grande do Norte during 2015 and 2016 under the Parnaíba Basin Analysis Project (PBAP), a multi-disciplinary and multi-institutional effort funded by BP Energy do Brasil aimed at improving our current understanding of the architecture of this cratonic basin. The dataset consists of about 1 year of continuous ground motion data from 10 short-period, 3-component stations located in the central portion of the basin. The stations were co-located with an existing (active-source) seismic reflection profile that was shot in 2012, making a linear array of about 100 km in aperture and about 10 km inter-station spacing. To develop the autocorrelation at a given station we considered the vertical component of ground motion only, which should result in the P-wave response. 
The vertical recordings were first split into 10-min-long windows, demeaned, de-trended, re-sampled, and band-pass filtered between 8 and 16 Hz before autocorrelation, and then stacked with phase weighting to enhance the coherency of the retrieved signal. The ACFs show that coherent signal is recovered at lag times between 0.5 and 2 s, which we interpret as P- and S-wave energy reflected off the top of an intra-sedimentary discontinuity. Our results are consistent, to first order, with the previously developed active-source reflection response of the basin.
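A compact sketch of the window-and-stack processing described above, assuming NumPy/SciPy and the parameter values quoted in the text (10-min windows, 8-16 Hz band); the phase-weighted stack follows the Schimmel and Paulssen scheme, and all function names are ours.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, detrend, hilbert, fftconvolve

def window_acfs(trace, fs, win_s=600.0, fmin=8.0, fmax=16.0, max_lag_s=3.0):
    """Autocorrelate a continuous trace window by window: demean/de-trend,
    zero-phase band-pass, autocorrelation normalized at zero lag."""
    sos = butter(4, [fmin, fmax], btype='bandpass', fs=fs, output='sos')
    n = int(win_s * fs)
    nlag = int(max_lag_s * fs)
    acfs = []
    for k in range(len(trace) // n):
        w = detrend(trace[k * n:(k + 1) * n])    # removes mean and trend
        w = sosfiltfilt(sos, w)                  # zero-phase band-pass
        acf = fftconvolve(w, w[::-1], mode='full')[n - 1:n + nlag]
        acfs.append(acf / acf[0])                # normalize at zero lag
    return np.array(acfs)

def phase_weighted_stack(acfs, nu=2.0):
    """Linear stack weighted by inter-window instantaneous-phase coherency."""
    phase = np.exp(1j * np.angle(hilbert(acfs, axis=1)))
    coherency = np.abs(phase.mean(axis=0)) ** nu
    return acfs.mean(axis=0) * coherency

# Example on synthetic noise: two 10-min windows at 50 Hz sampling.
fs = 50.0
rng = np.random.default_rng(0)
trace = rng.normal(size=int(2 * 600 * fs))
acfs = window_acfs(trace, fs)
stack = phase_weighted_stack(acfs)
```

Resampling is omitted here; in practice the traces would be decimated to a common rate before windowing.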
The behavior of seasonal variations in induced seismicity in the Koyna-Warna region, western India
NASA Astrophysics Data System (ADS)
Smirnov, V. B.; Srinagesh, D.; Ponomarev, A. V.; Chadha, R.; Mikhailov, V. O.; Potanina, M. G.; Kartashov, I. M.; Stroganova, S. M.
2017-07-01
Based on the earthquake catalog data for the Koyna-Warna region of induced seismicity in western India, the seasonal variations in seismic activity associated with annual fluctuations in the reservoir water level are analyzed over the entire history of seismological observations in this region. Regularities in the temporal changes in the structure of the seasonal variations are revealed. The seasonal seismic activity is minimal in May-June, when the reservoir level is lowest. During the remainder of the year, the activity has three peaks: a fall peak in September, a winter peak in November-December, and a spring peak in February-March. The first peak, which falls in the phase when the water level reaches its seasonal maximum, is considered the immediate response of the fluid-saturated medium to the additional load of the reservoir water. The two subsequent maxima coincide with the declining phase of the reservoir level and are interpreted as the delayed response associated with changes in the properties of the medium due to water diffusion. It is shown that the intensities of the immediate and delayed responses to the seasonal water-level variations both vary with time, as does their ratio. The probable factors affecting the variations in the intensity of the seasonal components of the reservoir-induced seismicity are discussed.
Advances in Rotational Seismic Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierson, Robert; Laughlin, Darren; Brune, Robert
2016-10-19
Rotational motion is increasingly understood to be a significant part of seismic wave motion. Rotations can be important in earthquake strong motion and in induced seismicity monitoring. Rotational seismic data can also enable shear selectivity and improve wavefield sampling for vertical geophones in 3D surveys, among other applications. However, sensor technology has been a limiting factor to date. The US Department of Energy (DOE) and Applied Technology Associates (ATA) are funding a multi-year project, now entering Phase 2, to develop and deploy a new generation of rotational sensors for validation of rotational seismic applications. The initial focus is on induced seismicity monitoring, particularly for Enhanced Geothermal Systems (EGS) with fracturing. The sensors employ magnetohydrodynamic (MHD) principles with broadband response, improved noise floors, robustness, and repeatability. This paper presents a summary of Phase 1 results and Phase 2 status.
An Investigation of Seismicity for the West Sumatra Region Indonesia
NASA Astrophysics Data System (ADS)
Syafriani, S.
2018-04-01
The purpose of this research was to investigate the seismicity of the West Sumatra region within the coordinates 94° E - 104° E and 2° N - 4° S. The Gutenberg-Richter magnitude-frequency relation and seismic risk have been computed. Historical earthquake data from 1970 to 2017 with magnitudes greater than 4 were used. The study area was divided into 8 sub-regions based on seismotectonic characteristics, plate tectonics and geological models. The determination of seismotectonic characteristics was based on the level of seismic activity in a region (the a value) and the rock stress condition (the b value). A high a value is associated with high seismic activity, whereas a high b value is associated with low rock stress, and vice versa. Based on the calculations, the a and b values were obtained in the intervals 5.5-11.3 and 0.7-2, respectively. The highest b value was obtained in sub-region 5 (the Nias islands), while the lowest b value was obtained in sub-region 7 (the Mentawai islands). Sub-region 7, the Mentawai Islands, was therefore identified as a potential seismic risk area.
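The a and b values of the Gutenberg-Richter relation log10 N(m >= M) = a - bM can be estimated directly from a catalogue. The abstract does not state which estimator was used; the sketch below uses the standard Aki maximum-likelihood estimate for b, with a synthetic catalogue whose true b-value is 1.0.

```python
import numpy as np

def gutenberg_richter(mags, m_min):
    """Estimate Gutenberg-Richter a and b values for one sub-region.

    b is the Aki (1965) maximum-likelihood estimate; a then follows from
    the total count via log10 N(m >= m_min) = a - b * m_min.
    """
    m = np.asarray(mags)
    m = m[m >= m_min]                               # completeness cut
    b = np.log10(np.e) / (m.mean() - m_min)         # Aki ML estimator
    a = np.log10(len(m)) + b * m_min
    return a, b

# Synthetic catalogue: exponentially distributed magnitudes above
# magnitude 4 with a true b-value of 1.0 (beta = b * ln 10).
rng = np.random.default_rng(42)
mags = 4.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=20000)
a, b = gutenberg_richter(mags, m_min=4.0)
print(round(b, 1))  # 1.0 for this synthetic catalogue
```

With real catalogues the result is sensitive to the completeness magnitude m_min, which should be checked per sub-region before comparing b values.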
1D Seismic reflection technique to increase depth information in surface seismic investigations
NASA Astrophysics Data System (ADS)
Camilletti, Stefano; Fiera, Francesco; Umberto Pacini, Lando; Perini, Massimiliano; Prosperi, Andrea
2017-04-01
1D seismic methods, such as MASW, Re.Mi. and HVSR, have been extensively used over the past 20 years in engineering investigations, bedrock research, Vs profiling and, to some extent, hydrologic applications. Recent advances in equipment, sound sources and computer interpretation techniques make 1D seismic methods highly effective in shallow subsoil modeling. Classical 1D seismic surveys allow economical collection of subsurface data; however, they fail to return accurate information for depths greater than 50 meters. Using a particular acquisition technique, it is possible to collect data that can be quickly processed with the reflection technique in order to obtain more accurate velocity information at depth. Furthermore, the data processing returns a narrow stratigraphic section, alongside the 1D velocity model, in which lithological boundaries are represented. This work will show how to collect a single-CMP gather to determine: (1) the depth of bedrock; (2) gravel layers in clayey domains; (3) an accurate Vs profile. Seismic traces were processed by means of new software developed in collaboration with SARA electronic instruments S.r.l., Perugia, Italy. This software has the great advantage that it can be used directly in the field, reducing the time elapsing between acquisition and processing.
Seismic passive earth resistance using modified pseudo-dynamic method
NASA Astrophysics Data System (ADS)
Pain, Anindya; Choudhury, Deepankar; Bhattacharyya, S. K.
2017-04-01
In earthquake prone areas, understanding of the seismic passive earth resistance is very important for the design of different geotechnical earth retaining structures. In this study, the limit equilibrium method is used for estimation of critical seismic passive earth resistance for an inclined wall supporting horizontal cohesionless backfill. A composite failure surface is considered in the present analysis. Seismic forces are computed assuming the backfill soil as a viscoelastic material overlying a rigid stratum and the rigid stratum is subjected to a harmonic shaking. The present method satisfies the boundary conditions. The amplification of acceleration depends on the properties of the backfill soil and on the characteristics of the input motion. The acceleration distribution along the depth of the backfill is found to be nonlinear in nature. The present study shows that the horizontal and vertical acceleration distribution in the backfill soil is not always in-phase for the critical value of the seismic passive earth pressure coefficient. The effect of different parameters on the seismic passive earth pressure is studied in detail. A comparison of the present method with other theories is also presented, which shows the merits of the present study.
Rheological Models in the Time-Domain Modeling of Seismic Motion
NASA Astrophysics Data System (ADS)
Moczo, P.; Kristek, J.
2004-12-01
The time-domain stress-strain relation in a viscoelastic medium has the form of a convolution integral, which is numerically intractable. This was the reason for the oversimplified models of attenuation in time-domain modeling of seismic wave propagation and earthquake motion. In their pioneering work, Day and Minster (1984) showed how to convert the integral into a numerically tractable differential form in the case of a general viscoelastic modulus. In response to the work by Day and Minster, Emmerich and Korn (1987) suggested using the rheology of their generalized Maxwell body (GMB), while Carcione et al. (1988) suggested using the generalized Zener body (GZB). The viscoelastic moduli of both rheological models have the form of a rational function, and thus the differential form of the stress-strain relation is rather easy to obtain. After the papers by Emmerich and Korn and Carcione et al., numerical modelers opted for either the GMB or the GZB rheology and developed 'non-communicating' algorithms: in many subsequent papers, authors using the GMB never commented on the GZB rheology and the corresponding algorithms, while authors using the GZB never related their methods to the GMB rheology and algorithms. We analyze and compare both rheologies and the corresponding ways of incorporating realistic attenuation into time-domain computations. We then focus on the most recent staggered-grid finite-difference modeling, mainly on accounting for material heterogeneity in viscoelastic media and on the computational efficiency of the finite-difference algorithms.
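To make the rational-modulus point concrete, here is one common way (following the Emmerich and Korn form of the GMB; the relaxation frequencies and target Q are illustrative choices of ours) to fit the anelastic coefficients so that Q is approximately constant over a frequency band:

```python
import numpy as np

# Sketch: fit the anelastic coefficients Y_l of a generalized Maxwell body
# so that Q(omega) is approximately constant over a band.  The GMB modulus
# is the rational function  M(w) = M_U * (1 - sum_l Y_l * w_l / (w_l + i*w)),
# which is what makes the differential (memory-variable) form possible.
Q_target = 50.0
w_relax = 2 * np.pi * np.logspace(-1, 1, 4)   # relaxation frequencies (rad/s)
w_fit = 2 * np.pi * np.logspace(-1, 1, 40)    # band over which Q is matched

# Weak-attenuation linearization: 1/Q(w) ~ sum_l Y_l * w*w_l / (w_l^2 + w^2)
A = (w_fit[:, None] * w_relax[None, :]) / (w_relax[None, :] ** 2
                                           + w_fit[:, None] ** 2)
Y, *_ = np.linalg.lstsq(A, np.full_like(w_fit, 1.0 / Q_target), rcond=None)

# Check the resulting Q from the full rational modulus (M_U normalized to 1)
M = 1.0 - np.sum(Y[None, :] * w_relax[None, :]
                 / (w_relax[None, :] + 1j * w_fit[:, None]), axis=1)
Q = M.real / M.imag
print(Q.min() > 40.0 and Q.max() < 60.0)  # True: Q near 50 across the band
```

The GZB leads to a rational modulus of the same degree, which is one reason the two 'non-communicating' families of algorithms turn out to be closely related.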
NASA Astrophysics Data System (ADS)
Messaoudi, Akila; Laouami, Nasser; Mezouar, Nourredine
2017-07-01
During the May 21, 2003 Mw 6.8 Boumerdes earthquake, heavy damage was observed in the "Cité des 102 Logements", built on a hilltop in Corso: near the crest, a four-story RC building collapsed while others experienced severe structural damage, whereas far from the crest only slight damage was observed. In the present paper, we perform a 2D seismic analysis of the slope topography and investigate its effects on the response at the plateau, as well as the correlation with the observed damage distribution. A site-specific seismic scenario is used involving seismological, geological, and geotechnical data. A 2D finite element numerical seismic study of the idealized Corso site subjected to vertically propagating SV waves is carried out with the code FLUSH. The results highlight the main factors that explain the collapse of the blocks located 8-26 m from the crest. These are as follows: (i) a significant spatial variation of ground response along the plateau due to the topographic effect; (ii) a high loss of coherence associated with this spatial variation; (iii) seismic ground responses (PGA and response spectra) that reach their maxima there; and (iv) a fundamental frequency of the collapsed blocks that coincides with the frequency content of the topographic component. For distances far from the crest, where slight damage was observed, the topographic contribution is found to be negligible. On the basis of these results, it is important to take into account the topographic effect and the induced spatial variability in the seismic design of structures sited near the crest of a slope.
NASA Astrophysics Data System (ADS)
Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.
2015-12-01
Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimic the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
Seismic signal processing on heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
2015-04-01
The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches to seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, each node containing a mix of processing elements of different architectures, such as several sequential processor cores and one or a few graphical processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers offer the potential for manifold increases in application performance and for better energy efficiency; however, their much greater hardware complexity makes them much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed.
This poses new computational problems that require dedicated HPC solutions. The chosen application uses a wide range of common signal processing methods, including various IIR filter designs, amplitude and phase correlation, computation of the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific to seismology, such as rotation of seismic traces, are used. Efficient implementation of all these methods on GPU-accelerated systems presents several challenges. In particular, it requires a careful distribution of work between the sequential processors and the accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce the intensity of data input and output. In our contribution we will explain the software architecture as well as the principal engineering decisions used to address these challenges. We will also describe the programming model, based on C++ and CUDA, that we used to develop the software. Finally, we will demonstrate the performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
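As a minimal illustration of the core correlation step in such an ambient-noise workflow (a NumPy sketch, not the GPU-accelerated library described above; the traces and the 37-sample delay are synthetic):

```python
import numpy as np

def cross_correlate(a, b):
    """Frequency-domain cross-correlation of two equal-length traces.
    Returns lags (in samples) and the correlation; the peak lag gives the
    travel-time shift of the common signal between the two recordings."""
    n = len(a)
    nfft = 2 * n                                   # zero-pad for linear correlation
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)
    cc = np.fft.irfft(A * np.conj(B), nfft)        # c[k] = sum_n a[n+k] b[n]
    cc = np.concatenate([cc[-(n - 1):], cc[:n]])   # reorder to lags -(n-1)..(n-1)
    lags = np.arange(-(n - 1), n)
    return lags, cc

rng = np.random.default_rng(0)
src = rng.standard_normal(2048)      # common "ambient noise" wavefield
delay = 37                           # station B records the wavefront 37 samples late
sta_a = src
sta_b = np.roll(src, delay)
lags, cc = cross_correlate(sta_b, sta_a)
print(lags[np.argmax(cc)])  # → 37
```

Spectral whitening, filtering, and stacking over long time windows, which the application performs at scale, are omitted here.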
The Shock and Vibration Digest. Volume 18, Number 6
1986-06-01
Reed [124] reported a method for computing amplitudes of a ... linear, quadratic, or cubic. Bessel function solutions were obtained for a truncated pyramid ... 86-1198 A. Ragab, Chung C. Fu, Seismic Analysis of a Large LMFBR with Fluid-Structure Interactions, Computers & Structures, Cairo Univ., Giza, Egypt
Modeling Regional Seismic Waves
1992-06-29
the computation of the Green's functions is rather time consuming. They are computed for each of the fundamental faults, at I1(H) km intervals from 21... this record was very small. Station GEO displays similar behavior in that the overall features of the waveform are matched, but the fit in detail is not
Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing
NASA Astrophysics Data System (ADS)
Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.
2007-05-01
The rich oil reserves of the Gulf of Mexico are buried in deep and ultra-deep waters up to 30,000 feet below the surface. The Minerals Management Service (MMS), the federal agency in the U.S. Department of the Interior that manages the nation's oil, natural gas and other mineral resources on the outer continental shelf in federal offshore waters, estimates that the Gulf of Mexico holds 37 billion barrels of "undiscovered, conventionally recoverable" oil, which, at $50/barrel, would be worth approximately $1.85 trillion. These reserves are very difficult to find and reach because of the extreme depths. Technological advances in seismic imaging represent an opportunity to overcome this obstacle by providing more accurate models of the subsurface. Among these technological advances, Reverse Time Migration (RTM) yields the best possible images. RTM is based on the solution of the two-way acoustic wave equation. This technique relies on the velocity model to image turning waves. These turning waves are particularly important for unraveling subsalt reservoirs and delineating salt flanks, a natural trap for oil and gas. Because it relies on an accurate velocity model, RTM opens a new frontier in designing better velocity estimation algorithms. RTM has been widely recognized as the next chapter in seismic exploration, as it can overcome the limitations of current migration methods in imaging the complex geologic structures that exist in the Gulf of Mexico. The chief impediment to the large-scale, routine deployment of RTM has been a lack of sufficient computer power. RTM needs thirty times the computing power used in exploration today to be commercially viable and widely usable. Therefore, advancing seismic imaging to the next level of precision poses a multi-disciplinary challenge.
To overcome these challenges, the Kaleidoscope project, a partnership between Repsol YPF, Barcelona Supercomputing Center, 3DGeo Inc., and IBM brings together the necessary components of modeling, algorithms and the uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and to help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This unique integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux Clusters.
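A common way to form the RTM image once the two-way wave equation has been solved is the zero-lag cross-correlation of the forward-propagated source wavefield with the backward-propagated receiver wavefield (a standard textbook imaging condition; the abstract does not state which condition the Kaleidoscope code uses):

```python
import numpy as np

def imaging_condition(src_wf, rcv_wf):
    """Zero-lag cross-correlation imaging condition.
    src_wf, rcv_wf: arrays of shape (nt, nx, nz) holding the forward-propagated
    source wavefield and the backward-propagated receiver wavefield.
    Reflectors appear where the two fields coincide in space and time."""
    return np.sum(src_wf * rcv_wf, axis=0)

# Toy check: a 'reflector' where both wavefields are simultaneously non-zero.
nt, nx, nz = 50, 8, 8
src = np.zeros((nt, nx, nz))
rcv = np.zeros((nt, nx, nz))
src[20, 3, 5] = 1.0   # source energy reaches (3, 5) at time step 20
rcv[20, 3, 5] = 1.0   # time-reversed receiver field is there at the same step
img = imaging_condition(src, rcv)
print(img[3, 5])  # → 1.0
```

In production the wavefields are never held fully in memory; checkpointing or boundary reconstruction is used, which is part of why RTM is so compute-hungry.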
Rheologic effects of crystal preferred orientation in upper mantle flow near plate boundaries
NASA Astrophysics Data System (ADS)
Blackman, Donna; Castelnau, Olivier; Dawson, Paul; Boyce, Donald
2016-04-01
Observations of anisotropy provide insight into upper mantle processes. Flow-induced mineral alignment provides a link between mantle deformation patterns and seismic anisotropy. Our study focuses on the rheologic effects of crystal preferred orientation (CPO), which develops during mantle flow, in order to assess whether the corresponding anisotropic viscosity could significantly impact the pattern of flow. We employ a coupled nonlinear numerical method to link CPO and the flow model via a local viscosity tensor field that quantifies the stress/strain-rate response of a textured mineral aggregate. For a given flow field, the CPO is computed along streamlines using a self-consistent texture model and is then used to update the viscosity tensor field. The new viscosity tensor field defines the local properties for the next flow computation. This iteration produces a coupled nonlinear model for which seismic signatures can be predicted. Results thus far confirm that CPO can impact the flow pattern by altering rheology in directionally dependent ways, particularly in regions of high flow gradient. Multiple iterations run for an initial, linear stress/strain-rate case (power-law exponent n=1) converge to a flow field and CPO distribution that are modestly different from the reference, scalar-viscosity case. Upwelling rates directly below the spreading axis are slightly reduced and flow is focused somewhat toward the axis. Predicted seismic anisotropy differences are modest. P-wave anisotropy is a few percent greater in the flow 'corner', near the spreading axis, below the lithosphere and extending 40-100 km off axis. Predicted S-wave splitting differences would be below seafloor measurement limits. Calculations with a non-linear stress/strain-rate relation, which is more realistic for olivine, indicate that the effects are stronger than in the linear case.
For n=2-3, the distribution and strength of CPO for the first iteration are greater than for n=1, although the fast seismic axis directions are similar. The greatest differences in CPO for the nonlinear cases develop at the flow 'corner', at depths of 10-30 km and 20-100 km off-axis. J-index values up to 10% greater than in the linear case are predicted near the base of the lithosphere in that region. Viscosity tensor components are notably altered in the nonlinear cases. Iterations between the texture and flow calculations for the non-linear cases are underway this winter; results will be reported in the presentation.
NASA Astrophysics Data System (ADS)
Rodgers, A. J.; Pitarka, A.; Wagoner, J. L.; Helmberger, D. V.
2017-12-01
The FLASK underground nuclear explosion (UNE) was conducted in Area 2 of Yucca Flat at the Nevada Test Site on May 26, 1970. The yield was 105 kilotons (DOE/NV-209-Rev 16) and the working point was 529 m below the surface. This test was detonated in faulted Tertiary volcanic rocks of Yucca Flat. Coincidentally, the FLASK UNE ground zero (GZ) is close (< 600 m) to the U2ez hole where the Source Physics Experiment will be conducting Phase II of its chemical high-explosives test series at the so-called Dry Alluvium Geology (DAG) site. Ground motions from FLASK were recorded by twelve (12) three-component seismic stations in the near field, at ranges of 3-4 km. We digitized the paper records and used available metadata on peak particle velocity measurements made at the time to adjust the amplitudes. These waveforms show great variability in amplitude and waveform complexity with azimuth from the shot, likely due to structure along the propagation path, such as the geometry of the hard-rock/alluvium contact above the working point. Peak particle velocities at stations in the deeper alluvium to the north, east and south of GZ are larger than those to the west, where the basement rock is much shallower. Interestingly, the transverse components show a similar trend with azimuth. In fact, the transverse-component amplitudes are similar to those of the other components for many stations overlying deeper basement. In this study, we simulated the seismic response at the available near-field stations using the SW4 three-dimensional (3D) finite-difference code. SW4 can simulate seismic wave propagation in 3D inelastic earth structure, including surface topography. SW4 includes vertical mesh refinement, which greatly reduces the computational resources needed to run a specific problem. Simulations are performed on high-performance computers with grid spacing as small as 10 meters and resolution to 6 Hz.
We are testing various subsurface models to identify the role of 3D structure on path propagation effects from the source. We are also testing 3D models to constrain structure for the upcoming DAG experiments in 2018.
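The quoted numbers (10 m grid spacing, resolution to 6 Hz) are consistent with the usual points-per-wavelength rule of thumb for finite differences. A sketch, where the 500 m/s minimum velocity and 8 points per wavelength are illustrative assumptions rather than values from the study:

```python
def max_grid_spacing(v_min, f_max, ppw=8):
    """Largest grid spacing (m) that keeps `ppw` grid points in the shortest
    resolved wavelength: h <= v_min / (f_max * ppw).
    Rule of thumb only; SW4's actual accuracy criteria differ in detail."""
    return v_min / (f_max * ppw)

# E.g. 500 m/s near-surface material resolved to 6 Hz with 8 points/wavelength:
print(max_grid_spacing(500.0, 6.0))  # → ~10.4 m, consistent with ~10 m grids
```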
Converting Advances in Seismology into Earthquake Science
NASA Astrophysics Data System (ADS)
Hauksson, Egill; Shearer, Peter; Vidale, John
2004-01-01
Federal and state agencies and university groups all operate seismic networks in California. The U.S. Geological Survey (USGS) operates seismic networks in California in cooperation with the California Institute of Technology (Caltech) in southern California and the University of California (UC) at Berkeley in northern California. The California Geological Survey (CGS) and the USGS National Strong Motion Program (NSMP) operate dial-out strong motion instruments in the state, primarily to capture data from large earthquakes for earthquake engineering and, more recently, emergency response. The California Governor's Office of Emergency Services (OES) provides leadership for the most recent project, the California Integrated Seismic Network (CISN), to integrate all of the California efforts and to take advantage of the emergency response capabilities of the seismic networks. The core members of the CISN are Caltech, UC Berkeley, CGS, USGS Menlo Park, and USGS Pasadena (http://www.cisn.org). New seismic instrumentation is in place across southern California, and significant progress has been made in improving instrumentation in northern California. Since 2001, these new field instrumentation efforts, data sharing, and software development for real-time reporting and archiving have been coordinated through the CISN. The CISN is also the California region of the Advanced National Seismic System (ANSS). In addition, EarthScope deployments of USArray that will begin in early 2004 in California are coordinated with the CISN. The southern and northern California earthquake data centers (SCEDC and NCEDC) have new capabilities that enable seismologists to obtain large volumes of data with only modest effort.
Implementation of Information Technology in the Free Trade Era for Indonesia
1998-06-01
computer usage, had been organized before Thailand, Malaysia, and China. Also, use of computers for crude oil process applications, and marketing and... seismic computing in Pertamina had been installed and in operation ahead of Taiwan, Malaysia, and Brunei. There are many examples of computer usage at... such as: Malaysia, Thailand, USA, China, Germany, and many others. Although IT development is utilized in Indonesia's development program, it should
Effect of Damping and Yielding on the Seismic Response of 3D Steel Buildings with PMRF
Reyes-Salazar, Alfredo; Haldar, Achintya; Rodelo-López, Ramon Eduardo; Bojórquez, Eden
2014-01-01
The effect of viscous damping and yielding on the reduction of the seismic responses of steel buildings, modeled as three-dimensional (3D) complex multi-degree-of-freedom (MDOF) systems, is studied. The reduction produced by damping may be larger or smaller than that produced by yielding. This reduction can vary significantly from one structural idealization to another and is smaller for global than for local response parameters, depending in turn on the particular local response parameter considered. The uncertainty in the estimation is significantly larger for local response parameters and decreases as damping increases. The results show the limitations of the commonly used static equivalent lateral force procedure, in which local and global response parameters are reduced in the same proportion. It is concluded that estimating the effect of damping and yielding on the seismic response of steel buildings by using simplified models may be a very crude approximation. Moreover, the effect of yielding should be explicitly calculated by using complex 3D MDOF models instead of estimating it in terms of equivalent viscous damping. The findings of this paper are for the particular models used in the study. Much more research is needed to reach more general conclusions. PMID:25097892
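The effect of viscous damping discussed above can be illustrated on a far simpler system than the paper's 3D inelastic MDOF models: a linear single-degree-of-freedom oscillator integrated with the Newmark average-acceleration method (the excitation and all parameter values below are invented for illustration):

```python
import numpy as np

def sdof_newmark(ag, dt, freq_hz, zeta):
    """Linear SDOF relative-displacement response to ground acceleration ag,
    via the Newmark average-acceleration method (gamma=1/2, beta=1/4).
    A sketch only: the paper's models are 3D inelastic MDOF, not linear SDOF."""
    w = 2 * np.pi * freq_hz
    m, c, k = 1.0, 2 * zeta * w, w * w          # unit mass
    beta, gamma = 0.25, 0.5
    u = v = a = 0.0
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    out = []
    for agi in ag:                               # effective-load form (Chopra)
        p = -m * agi \
            + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a) \
            + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                   + dt * (gamma / (2 * beta) - 1) * a)
        un = p / keff
        vn = gamma / (beta * dt) * (un - u) + (1 - gamma / beta) * v \
             + dt * (1 - gamma / (2 * beta)) * a
        an = (un - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v, a = un, vn, an
        out.append(u)
    return np.array(out)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)  # toy near-resonant excitation
peak_2pct = np.abs(sdof_newmark(ag, dt, 1.0, 0.02)).max()
peak_10pct = np.abs(sdof_newmark(ag, dt, 1.0, 0.10)).max()
print(peak_10pct < peak_2pct)  # → True: higher damping, smaller peak response
```

For this linear toy the damping effect is uniform; the paper's point is precisely that in 3D inelastic MDOF models the reduction is not uniform across response parameters.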
Steep-dip seismic imaging of the shallow San Andreas Fault near Parkfield
Hole, J.A.; Catchings, R.D.; St. Clair, K.C.; Rymer, M.J.; Okaya, D.A.; Carney, B.J.
2001-01-01
Seismic reflection and refraction images illuminate the San Andreas Fault to a depth of 1 kilometer. The prestack depth-migrated reflection image contains near-vertical reflections aligned with the active fault trace. The fault is vertical in the upper 0.5 kilometer, then dips about 70° to the southwest to at least 1 kilometer subsurface. This dip reconciles the difference between the computed locations of earthquakes and the surface fault trace. The seismic velocity cross section shows strong lateral variations. Relatively low velocity (10 to 30%), high electrical conductivity, and low density indicate a 1-kilometer-wide vertical wedge of porous sediment or fractured rock immediately southwest of the active fault trace.
Influence of seismic anisotropy on the cross correlation tensor: numerical investigations
NASA Astrophysics Data System (ADS)
Saade, M.; Montagner, J. P.; Roux, P.; Cupillard, P.; Durand, S.; Brenguier, F.
2015-05-01
Temporal changes in seismic anisotropy can be interpreted as variations in the orientation of cracks in seismogenic zones, and thus as variations in the stress field. Such temporal changes have been observed in seismogenic zones before and after earthquakes, although they are still not well understood. In this study, we investigate, from a numerical point of view, the azimuthal polarization of surface waves in anisotropic media with respect to the orientation of anisotropy. This technique is based on observing the signature of anisotropy in the nine-component cross-correlation tensor (CCT) computed from seismic ambient noise recorded on pairs of three-component sensors. If noise sources are spatially distributed in a homogeneous medium, the CCT allows the reconstruction of the surface wave Green's tensor between the station pairs. In a homogeneous, isotropic medium, four off-diagonal terms of the surface wave Green's tensor are null, but this is not the case in an anisotropic medium. The technique is applied to three-component synthetic seismograms computed in a transversely isotropic medium with a horizontal symmetry axis, using a spectral element code. The CCT is computed between each pair of stations and then rotated to approximate the surface wave Green's tensor by minimizing the off-diagonal components. This procedure allows the calculation of the azimuthal variation of quasi-Rayleigh and quasi-Love waves. In an anisotropic medium, in some cases, the azimuth of seismic anisotropy can induce a large variation in the horizontal polarization of surface waves. This variation depends on the relative angle between a pair of stations and the direction of anisotropy, the amplitude of the anisotropy, the frequency band of the signal, and the depth of the anisotropic layer.
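The rotation step can be sketched with a toy 3x3 tensor: grid-search the horizontal rotation angle that minimizes the off-diagonal energy (the paper applies this to the full lag-dependent nine-component CCT; the tensor below is synthetic):

```python
import numpy as np

def rotation_z(theta):
    """Rotation by angle theta about the vertical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def best_rotation(cct, n=1800):
    """Grid-search the horizontal rotation (0..90 deg) minimizing the summed
    squared off-diagonal energy of a 3x3 correlation tensor."""
    off = np.ones((3, 3)) - np.eye(3)
    angles = np.linspace(0.0, np.pi / 2, n, endpoint=False)
    costs = [np.sum(((rotation_z(a).T @ cct @ rotation_z(a)) * off) ** 2)
             for a in angles]
    return angles[int(np.argmin(costs))]

# A synthetic tensor that is diagonal in a frame rotated 30 degrees about z:
theta_true = np.deg2rad(30.0)
D = np.diag([2.0, 1.0, 0.5])                       # stand-in "diagonal" tensor
cct = rotation_z(theta_true) @ D @ rotation_z(theta_true).T
theta_est = best_rotation(cct)
print(np.rad2deg(theta_est))  # → ~30 degrees
```

In the real procedure the cost is accumulated over lag and frequency, and the recovered angle is compared against the imposed anisotropy azimuth.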
SSI-bridge : soil bridge interaction during long-duration earthquake motions.
DOT National Transportation Integrated Search
2014-09-01
The seismic response of a complete soil-bridge system during shallow, crustal and subduction zone earthquakes is the topic of this report. Specifically, the effects of earthquake duration on the seismic performance of soil-bridge systems are examined...
Seismic performance of an I-girder to inverted-T bent cap connection.
DOT National Transportation Integrated Search
2011-09-01
This report presents the research conducted as part of an investigation for the California Department of Transportation (Caltrans) regarding the seismic response and overall moment capacity of precast I-girder to inverted-T bent cap bridge connection...
NASA Astrophysics Data System (ADS)
Bragato, P. L.
2017-03-01
According to the historical earthquake catalog of Italy, the country experienced a pulse of seismicity between the 17th century, when the rate of destructive events increased by more than 100%, and the 20th century, which was characterized by a symmetric decrease. In the present work, I performed a statistical analysis to verify the reliability of such a transient, considering different sources of bias and uncertainty, such as the completeness and declustering of the catalog, as well as errors in magnitude estimation. I also searched for confirmation external to the catalog by analyzing the correlation with volcanic activity. The similarity is high for the eruptive history of Vesuvius, which agrees both on the main rate changes of the 17th and 20th centuries and on minor variations in the intermediate period. Of general interest, beyond the specific case of Italy, the observed rate changes suggest the existence of large-scale crustal processes that take place within decades and last for centuries, responsible for the synchronous activation/deactivation of remote, loosely connected faults in different tectonic domains. Although their origin is still unexplained (I discuss a possible link with climate changes and the consequent variations of the sea level), their existence and persistence are critical for seismic hazard computation. In fact, they introduce a hardly predictable time variability that undermines any hypothesis of regularity of the earthquake cycle on individual faults and systems of interconnected faults.
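A bare-bones version of testing such a rate change (illustrative counts only; the paper's analysis additionally handles catalog completeness, declustering, and magnitude errors) conditions on the total count and compares the split between two equal-length windows against a binomial null:

```python
import math

def rate_change_zscore(n1, t1, n2, t2):
    """Approximate z-score for a change in Poisson rate between two windows.
    Conditional on n1 + n2, under H0 (constant rate) n1 ~ Binomial(n1 + n2, p)
    with p = t1 / (t1 + t2); a normal approximation gives the z-score."""
    n, p = n1 + n2, t1 / (t1 + t2)
    mean, sd = n * p, math.sqrt(n * p * (1 - p))
    return (n1 - mean) / sd

# Hypothetical counts: 20 events in a 100-yr window, then 45 in the next 100 yr,
# i.e. a > 100% rate increase like the one reported for the 17th century:
z = rate_change_zscore(20, 100.0, 45, 100.0)
print(round(z, 2))  # → -3.1, unlikely under a constant rate
```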
Applied Geophysics Opportunities in the Petroleum Industry
NASA Astrophysics Data System (ADS)
Olgaard, D. L.; Tikku, A.; Roberts, J. C.; Martinez, A.
2012-12-01
Meeting the increasing global demand for energy over the next several decades presents daunting challenges to engineers and scientists, including geoscientists of all disciplines. Many opportunities exist for geophysicists to find and produce oil and gas in a safe, environmentally responsible and affordable manner. Successful oil and gas exploration involves a 'Plates to Pores' approach that integrates multi-scale data from satellites, marine and land seismic and non-seismic field surveys, lab experiments, and even electron microscopy. The petroleum industry is at the forefront of using high performance computing to develop innovative methods to process and analyze large volumes of seismic data and to perform realistic numerical modeling, such as finite element fluid flow and rock deformation simulations. Challenging and rewarding jobs in exploration, production and research exist for students with BS/BA, MS and PhD degrees. Geophysics students interested in careers in the petroleum industry should have a broad foundation in science, math and fundamental geosciences at the BS/BA level, as well as mastery of the scientific method, usually gained through thesis work at the MS and PhD levels. Field geology or geophysics experience is also valuable. Other personal attributes typical of successful geoscientists in industry include a passion for solving complex geoscience problems, the flexibility to work on a variety of assignments throughout a career, and skills such as teamwork, communication, integration and leadership. In this presentation we will give examples of research, exploration and production opportunities for geophysicists in petroleum companies and compare and contrast careers in academia vs. industry.
NASA Astrophysics Data System (ADS)
Ren, Hengxin; Huang, Qinghua; Chen, Xiaofei
2018-03-01
We conduct numerical simulations and theoretical analyses to quantitatively study the amplitude decay characteristics of the evanescent electromagnetic (EM) waves, which have been neglected in previous studies of the seismoelectric conversion occurring at a porous-porous interface. Time-slice snapshots of seismic and EM wave-fields generated by a vertical single-force point source in a two-layer porous model show that evanescent EM waves can be induced at a porous-porous interface. The seismic and EM wave-fields computed for a receiver array located along a vertical line near the interface are investigated in detail. In addition to the direct and interface-response radiation EM waves, we identify three groups of coseismic EM fields and evanescent EM waves associated with the direct P, refracted SV-P and direct SV waves, respectively. Thereafter, we derive the mathematical expression of the amplitude decay factor of the evanescent EM waves. This mathematical expression is further validated by our numerical simulations. It turns out that the amplitude decay of the evanescent EM waves generated by seismoelectric conversion depends strongly on the horizontal wavenumber of the seismic waves. It is also found that the evanescent EM waves have higher detectability at lower frequencies. This work provides a better understanding of the EM wave-fields generated by seismoelectric conversion, which may help improve the interpretation of seismoelectric coupling phenomena associated with natural earthquakes or inspire new ideas for the application of the seismoelectric coupling effect.
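The frequency dependence of detectability can be illustrated with the textbook evanescent-wave decay factor (not the paper's derived expression for the porous case; the 2000 m/s phase velocity is an assumed value):

```python
import math

def evanescent_decay(kx, k_em, z):
    """Amplitude decay factor exp(-z * sqrt(kx^2 - k_em^2)) of an evanescent
    wave at distance z (m) from the interface, valid for kx > |k_em|.
    Textbook form; the paper derives its own factor for the porous case."""
    return math.exp(-z * math.sqrt(kx**2 - k_em**2))

c = 2000.0  # assumed seismic phase velocity, m/s (illustrative)
# At seismic frequencies the EM wavenumber is tiny compared with the seismic
# horizontal wavenumber kx = 2*pi*f / c, so lower frequency means slower decay:
low = evanescent_decay(2 * math.pi * 10.0 / c, 0.0, 100.0)   # 10 Hz
high = evanescent_decay(2 * math.pi * 50.0 / c, 0.0, 100.0)  # 50 Hz
print(low, high)  # the 10 Hz field survives 100 m far better than the 50 Hz field
```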
Probabilistic Tsunami Hazard Analysis
NASA Astrophysics Data System (ADS)
Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.
2006-12-01
The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort toward mitigating the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become standard practice in the evaluation and mitigation of seismic hazard to populations, in particular with respect to structures, infrastructure and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates the design of effective seismic-resistant buildings as well as the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunamis. There are great advantages in implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on traditional PSHA and is therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use full tsunami waveform computations in lieu of the attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip).
This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes thousands of earthquake scenarios. We have carried out preliminary tsunami hazard calculations for different return periods for western North America and Hawaii, based on thousands of earthquake scenarios around the Pacific rim and along the coast of North America. We will present tsunami hazard maps for several return periods and also discuss how to use these results for probabilistic inundation and runup mapping. Our knowledge of certain types of tsunami sources is very limited (e.g., submarine landslides), but a probabilistic framework for tsunami hazard evaluation can include even such sources and their uncertainties and present the overall hazard in a meaningful and consistent way.
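The Green's-function summation is linear, so synthesizing any scenario reduces to a slip-weighted sum of precomputed unit-slip subfault waveforms (a sketch with invented waveforms, not the study's database):

```python
import numpy as np

def tsunami_waveform(subfault_gfs, slips):
    """Slip-weighted sum of precomputed unit-slip subfault tsunami waveforms
    (Green's functions) at one coastal point.
    subfault_gfs: array (n_subfaults, nt); slips: (n_subfaults,) in meters."""
    return np.asarray(slips, dtype=float) @ np.asarray(subfault_gfs, dtype=float)

# Toy example: three subfaults with hypothetical unit-slip waveforms whose
# arrivals are staggered by rupture position:
t = np.linspace(0.0, 100.0, 201)
gfs = np.array([np.sin(2 * np.pi * (t - d) / 50.0) * (t >= d)
                for d in (0.0, 10.0, 20.0)])
scenario = tsunami_waveform(gfs, slips=[2.0, 1.0, 0.5])  # one slip distribution
print(scenario.shape)  # → (201,)
```

Because the sum is linear in slip, thousands of scenarios can be synthesized from the same stored Green's functions at negligible cost, which is what makes the probabilistic calculation tractable.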
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Chengping; Ammon, Charles J.; Maceira, Monica
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. Thanks to the development of powerful and accessible computer systems, however, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example visualizes a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided one has reasonable programming skills. Finally, utilizing familiar web browsing interfaces, these dynamic tools provide an effective and efficient approach to exploring large data sets and models.
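The three-component seismogram view mentioned above can be sketched in a few lines of Bokeh: each component is drawn as an offset line glyph, and the result is saved as a standalone interactive HTML page. The synthetic traces and file name here are illustrative assumptions, not the paper's data set or code:

```python
import numpy as np
from bokeh.plotting import figure, output_file, save

# Hypothetical three-component seismogram: damped sinusoids standing in
# for the Z, N, and E channels of a real recording.
t = np.linspace(0.0, 60.0, 1200)
components = {
    "Z": np.sin(2 * np.pi * 0.20 * t) * np.exp(-t / 30.0),
    "N": 0.6 * np.sin(2 * np.pi * 0.30 * t + 1.0) * np.exp(-t / 25.0),
    "E": 0.4 * np.sin(2 * np.pi * 0.25 * t + 2.0) * np.exp(-t / 20.0),
}

p = figure(title="Three-component seismogram (synthetic)",
           x_axis_label="Time (s)", y_axis_label="Amplitude + offset")
for offset, (name, trace) in enumerate(components.items()):
    # Vertical offsets keep the three traces visually separated.
    p.line(t, trace + 2.5 * offset, legend_label=name)

output_file("seismogram.html")
save(p)  # writes a self-contained, pannable/zoomable HTML page
```

Opening the generated file in any browser gives the pan, zoom, and hover interactivity that the paper argues makes browser-based tools attractive for large seismic data sets.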
2018-02-14
Surface uplift and time-dependent seismic hazard due to fluid injection in eastern Texas.
Shirzaei, Manoochehr; Ellsworth, William L; Tiampo, Kristy F; González, Pablo J; Manga, Michael
2016-09-23
Observations that unequivocally link seismicity and wastewater injection are scarce. Here we show that wastewater injection in eastern Texas causes uplift, detectable in radar interferometric data up to >8 kilometers from the wells. Using measurements of uplift, reported injection data, and a poroelastic model, we computed the crustal strain and pore pressure. We infer that an increase of >1 megapascal in pore pressure in rocks with low compressibility triggers earthquakes, including the moment magnitude 4.8 event that occurred on 17 May 2012, the largest earthquake recorded in eastern Texas. Seismic activity increased even while injection rates declined, owing to diffusion of pore pressure from earlier periods with higher injection rates. Induced seismicity potential is suppressed where tight confining formations prevent pore pressure from propagating into crystalline basement rocks. Copyright © 2016, American Association for the Advancement of Science.
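The key mechanism above, that seismicity can keep increasing after injection rates decline, follows from the diffusive transport of pore pressure: a pressure pulse takes time of order x²/D to reach a fault a distance x from the well. A minimal 1-D finite-difference sketch (all parameters are hypothetical round numbers, not the paper's calibrated poroelastic model) shows the pressure at a monitoring point peaking well after injection stops:

```python
import numpy as np

# 1-D pore-pressure diffusion: injection holds the pressure at x = 0,
# and we watch the pressure at a point some distance from the well.
D = 0.1                          # hydraulic diffusivity, m^2/s (hypothetical)
L, nx = 1000.0, 201
dx = L / (nx - 1)                # 5 m grid spacing
dt = 0.4 * dx**2 / D             # within the explicit-scheme stability limit
p = np.zeros(nx)

inject_steps, total_steps = 2000, 6000
monitor = nx // 4                # observation point ~250 m from the well
history = []
for step in range(total_steps):
    p[0] = 1.0 if step < inject_steps else 0.0   # injection on, then shut-in
    # Explicit update of the diffusion equation dp/dt = D d2p/dx2:
    p[1:-1] += D * dt / dx**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
    history.append(p[monitor])

# The pressure at the monitor peaks after injection has already stopped,
# because the shut-in signal also needs ~x^2/D to diffuse outward.
peak_step = int(np.argmax(history))
```

The diffusive lag is why induced-seismicity hazard can be "time-dependent" in the sense of the title: shutting in a well does not immediately relieve pressure on distant faults.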
Time-frequency domain SNR estimation and its application in seismic data processing
NASA Astrophysics Data System (ADS)
Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen
2014-08-01
Based on an approach that estimates the frequency domain signal-to-noise ratio (FSNR), we propose a method to evaluate the time-frequency domain signal-to-noise ratio (TFSNR). The method adopts the short-time Fourier transform (STFT) to estimate the instantaneous power spectra of signal and noise, and uses their ratio to compute the TFSNR. Unlike the FSNR, which describes the variation of SNR with frequency only, the TFSNR depicts the variation of SNR with both time and frequency, and thus better handles non-stationary seismic data. By considering the TFSNR, we develop methods to improve inverse Q filtering and high-frequency noise attenuation in seismic data processing. Inverse Q filtering that accounts for the TFSNR better avoids amplifying noise, and the high-frequency noise attenuation method, unlike other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples with synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
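The core computation, an STFT-based power-spectrum ratio, can be sketched with `scipy.signal.stft`. In this toy version the signal and noise are known separately so the ratio is exact; in practice the noise spectrum would have to be estimated, e.g. from a pre-event window (the trace, window length, and noise level below are illustrative assumptions):

```python
import numpy as np
from scipy.signal import stft

fs = 500.0                              # sampling rate, Hz (hypothetical)
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic non-stationary trace: a 30 Hz wavelet arriving near 1 s,
# embedded in stationary random noise.
signal = np.exp(-((t - 1.0) ** 2) / 0.01) * np.cos(2 * np.pi * 30.0 * t)
noise = 0.1 * rng.standard_normal(t.size)
trace = signal + noise

# Instantaneous power spectra via the short-time Fourier transform.
f, tau, S_sig = stft(signal, fs=fs, nperseg=128)
_, _, S_noise = stft(noise, fs=fs, nperseg=128)

# Time-frequency SNR: signal-to-noise power ratio in each (time, freq) cell.
eps = 1e-12                             # guards against division by zero
tfsnr = np.abs(S_sig) ** 2 / (np.abs(S_noise) ** 2 + eps)
```

Because `tfsnr` is indexed by both time and frequency, a de-noising or inverse Q filter can be made to act only where the ratio falls below a threshold, which is the explicit criterion the abstract refers to.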